
Are These Crime Drama Clues Fact or Fiction?

Steven Avery, featured in the Netflix documentary Making a Murderer, served 18 years in prison for rape before being exonerated by DNA in 2003. In 2007, he was convicted of murder, based partly on DNA evidence.

I’m often just as surprised by what forensic scientists can’t do as by what they can. In the Netflix documentary Making a Murderer, for instance, the question of whether police planted the main character’s blood at a crime scene comes down to whether or not the FBI can detect a common laboratory chemical called EDTA in a bloodstain.

On a TV crime show, this would be a snap. The test would take about five minutes and would involve inserting a swab into a magic detector box that beeps and spits out an analysis of every substance known to humankind.

In real life, there’s no commonly accepted test for EDTA in forensic labs even today, nine years after the FBI tested blood for the Steven Avery trial featured in Making a Murderer. In that case, the FBI resurrected a test it had last used in the 1995 O.J. Simpson trial and testified that the blood in question did not contain EDTA, and therefore was not planted using EDTA-preserved blood from an evidence vial. (Avery was convicted.)

Questions about the test’s power and reliability have dogged the case ever since. There’s even an in-depth Reddit thread where fans of the Netflix show are trying to sort out the science.

Having worked in chemistry labs, I was surprised at first that this analysis would be difficult or controversial. After all, a quick search of the scientific literature turns up methods for detecting low levels of EDTA in everything from natural waters to beverages.

Steven Avery’s attorneys Jerome Buting (shown) and Dean Strang struggled to dispute chemical evidence introduced mid-trial that undermined the idea that police had planted blood evidence.

But the key here is that we’re talking about forensic science, not beverage chemistry. A method tuned to fresh, well-controlled samples may falter on degraded crime-scene evidence. Was there really no EDTA in the blood swabbed from victim Teresa Halbach’s vehicle, or was the chemical simply too diluted or degraded to be detected with the FBI’s method? It would be hard to say without further experiments that replicate crime scene conditions, experiments that essentially put the test to the test.

The reality is that forensic science today is a strange mix of the high-tech and the outdated, so questions about evidence like those in Avery’s case are not uncommon. Methods that we take for granted, like measuring a particular chemical, or lifting a fingerprint off a gun and matching it to a suspect, can be difficult—and far from foolproof. On the other hand, some of the real science happening now sounds like something dreamed up by Hollywood script writers, such as new methods aiming to reconstruct what a person’s face looks like using only their DNA.

Making a Murderer, whether it sways your opinion on Steven Avery or not, has done a service by getting people interested in something as arcane as EDTA tests, and by showing why real-life crimes are not solved nearly so neatly as fictional ones.

I see the messiness of forensic science all the time, because I scan its journals and often come across new studies that make me think either “you mean we couldn’t already do that?” or “I had no idea that was possible.” I’ve gathered a few recent examples for a quiz.

How well can you separate CSI fact from fiction? Here are a few crime-solving scenarios I’ve cooked up; see if you can tell which use real methods based on new forensic research. You’ll find the answers below.

  1. A skeleton is found buried in a shallow grave. The body’s soft tissues have completely decomposed, so only the teeth and bones remain. A forensic anthropologist examines the bones and reports that they come from a female who was five foot six inches tall, and obese. Could she really tell the person was overweight?
  2. The body of a white male in his 50s turns up on a nature trail, scavenged by animals. The victim’s bones show a number of puncture wounds consistent with animal bites, but x-rays reveal fine lines of different density in the bone around some of the punctures. An expert says these lines show that the wounds were made about 10 years before death. Is it possible to tell the approximate age of these wounds from x-rays?
  3. A woman is found dead in her home, bludgeoned to death. A bloody frying pan lies on the floor next to her. Her husband is the main suspect. Fingerprints on the pan’s handle are too smudged to make a definitive ID, but an analyst says she can still rule out the husband: All of the fingerprints on the pan came from a woman, the expert says. Is it possible to tell if the fingerprints were from a male or female?
  4. A woman is sexually assaulted and identifies her male attacker in a lineup. The suspect’s DNA matches DNA found on her body. It looks like an easy case for the prosecutor—until the suspect reveals that he has an identical twin. Neither twin admits to the crime. Is it possible to tell which twin’s DNA was found at the crime scene?
  5. A witness sees a man in a stocking mask rob and shoot a man outside his home. A stocking is found near the house, and a hair-analysis expert testifies that 13 hairs in the mask are all human head hairs from an African-American. A microscopic analysis matches the characteristics of one hair to a particular African-American suspect. The prosecutor tells the jury that the chances are one in ten million that this could be someone else’s hair. Can hairs be matched to an individual this accurately?


Answers Below


  1. Yes. Biologists have long known that greater body mass changes the weight-bearing bones of the legs and spine, and a new study shows that even bones that aren’t supporting most of the body’s weight, such as arm bones, have greater bone mass and are stronger in obese people. So even in a skeleton missing its legs, our forensic anthropologist might be able to tell that the person was obese.
  2. No. This one is from an actual episode of Bones (The Secret in the Siege, Season 8, Episode 24, reviewed here by real-life bioarchaeologist Kristina Killgrove). In the episode, Dr. Temperance Brennan uses Harris lines to determine the age of bone injuries in two victims. Harris lines are real, but they form only in growing bones, so are useful only in determining childhood injuries or illness.
  3. Yes. A study published in November showed that the level of amino acids in sweat is about twice as high in women’s fingerprints as in men’s. Of course, as with all the new methods, this one could face challenges as evidence in a U.S. court of law, where the Daubert standard allows judges to decide whether scientific evidence is admissible based on factors including its degree of acceptance by the scientific community.
  4. Yes, if you do it right. Standard DNA tests don’t distinguish between twins, who are born with nearly identical DNA, but it’s possible to do a more sophisticated test to catch post-birth mutations and epigenetic differences, which you can think of as genetic “add-ons” that don’t affect the DNA sequence itself. One new test distinguishes between twins by looking for small differences in the melting temperature of their DNA that are caused by such epigenetic modifications.
  5. No. The field of hair analysis has come under heavy scrutiny, especially after a review by the U.S. Justice Department revealed major flaws in 257 out of 268 hair analyses from the FBI. The case described here is the real-life case of Santae Tribble, convicted in 1978 of murder. In 2012, DNA tests showed that none of the hairs matched Tribble—and one was from a dog.

Personhood Week: Conception Is a Process

Earlier this month voters in two U.S. states, Colorado and North Dakota, considered new laws that would bolster the legal rights of a fetus before birth. Neither of these ballot initiatives passed, but they’re part of a “personhood movement” that’s been gaining momentum among pro-life advocates since about 2008. Reading about this movement in the press (Vox has a great overview) has made me wonder about the slippery, contentious, and profound meaning of “personhood.”

The Wikipedia page for personhood gives this definition: “Personhood is the status of being a person.” Right-o.

The page for person isn’t much clearer: “A person is a being, such as a human, that has certain capacities or attributes constituting personhood, which in turn is defined differently by different authors in different disciplines, and by different cultures in different times and places.”

I’ve chosen five personhood perspectives to write about this week. Today’s installment is all about conception (another fuzzy concept). Tomorrow I’ll try to tackle the transition from child to adult. Wednesday I’ll ask whether dead bodies are people. Thursday goes to non-human animals, and Friday to neuroscientists who argue that “personhood” is a convenient, if illusory, construction of the human brain.

I’d love to hear about how you guys define personhood, and why. Feel free to leave comments on these posts, or jump in to the #whatisaperson conversation on Twitter.


I went to a Catholic high school, where I was taught in religion class that life begins at conception. I don’t remember my teacher getting into the biological details, but we all knew what she meant: Life begins at the moment that an earnest sperm finishes his treacherous swimming odyssey and hits that big, beautiful egg.

That’s what many Christians believe, and it’s also the fundamental idea behind the personhood movement. The website of Personhood USA, a nonprofit Christian ministry, highlights this quote by French geneticist Jérôme Lejeune: “After fertilization has taken place a new human being has come into being. It is no longer a matter of taste or opinion…it is plain experimental evidence. Each individual has a very neat beginning, at conception.”

That’s not a common belief among biologists, however. Scott Gilbert of Swarthmore calls the conception story a “founding myth,” like The Aeneid. As he jokes in a popular lecture, “We are not the progeny of some wimpy sperm — we are the progeny of heroes!”

In reality, conception — or more precisely, fertilization — is not a moment. It’s a process.

After the sperm DNA enters the egg, it takes at least 12 hours for it to find its way to the egg’s DNA. The sperm and egg chromosomes condense in a coordinated dance, with the help of lots of proteins called microtubules, eventually forming a zygote. But a true diploid nucleus — that is, one that contains a full set of chromosomes from each parent — does not exist until the zygote has split into two cells, about two days after the sperm first arrives.

So is that two-cell stage, then, at day two, when personhood begins?

It could be, if you define personhood on a purely genetic level. I have a hard time doing so, though, because of twins. Identical twins share exactly the same genome, but are obviously not the same person.

Based on this logic, some biologists push back the start of personhood to about 14 days after the sperm enters the egg, a stage called gastrulation. This is when the zygote transforms from one layer into three, with each layer destined to become different types of tissues. It’s only after this stage that you could look at a zygote and say definitively that it’s not going to split into identical twins (or triplets or even quadruplets).

Gastrulation occurs when a blastula, made up of one layer, folds inward and enlarges to create a gastrula. (Image via Wikipedia)

So is the 14th day of gestation, then, when personhood begins?

Some doctors would say no, you have to also consider the fetal brain. We define a person’s death, after all, as the loss of brain activity. So why wouldn’t we also define a person’s emergence based on brain activity? If you take this view, Gilbert notes, then you’ll push personhood to about the 28th week of gestation. That’s the earliest point when researchers (like this group) have been able to pick up telltale brain activity patterns in a developing fetus.

Most legal definitions of personhood in the United States also focus on this late stage of gestation. The famous Roe v. Wade case in 1973 made it illegal for states to ban abortions before the third trimester of pregnancy, which begins at 28 weeks. Subsequent rulings by the court got rid of this trimester notion, saying instead that abortions can’t happen after a fetus is “viable,” or able to live outside the womb, which can be as early as 22 or 23 weeks. (And in 2003, Congress banned a specific procedure called a partial-birth abortion, which happens between 15 and 26 weeks.)

So there you have it. From a biological perspective, neither conception nor personhood is easily defined. “I really can’t tell you when personhood begins,” Gilbert says in his lecture. “But I can say with absolute certainty that there’s no consensus among scientists.”

These definitions don’t necessarily get easier after birth, either. But we’ll get to that tomorrow.


Emotion Is Not the Enemy of Reason

This is a post about emotion, so — fair warning — I’m going to begin with an emotional story.

On April 9, 1994, in the middle of the night, 19-year-old Jennifer Collins went into labor. She was in her bedroom in an apartment shared with several roommates. She moved into her bathroom and stayed there until morning. At some point she sat down on the toilet, and at some point, she delivered. Around 9 a.m. she started screaming in pain, waking up her roommates. She asked them for a pair of scissors, which they passed her through a crack in the door. Some minutes later, Collins opened the door and collapsed. The roommates—who had no idea Collins had been pregnant, let alone what happened in that bloody bathroom—called 911. Paramedics came, and after some questioning, Collins told them about the pregnancy. They lifted the toilet lid, expecting to see the tiny remains of a miscarried fetus. Instead they saw a 7-pound baby girl, floating face down.

The State of Tennessee charged Collins with second-degree murder (which means that death was intentional but not premeditated). At trial, the defense claimed that Collins had passed out on the toilet during labor and not realized that the baby had drowned.

The prosecutors wanted to show the jury photos of the victim — bruised and bloody, with part of her umbilical cord still attached — that had been taken at the morgue. With the jury out of the courtroom, the judge heard arguments from both sides about the admissibility of the photos. At issue was Rule 403 of the Federal Rules of Evidence, which says that evidence may be excluded if it is unfairly prejudicial. Unfair prejudice, the rule states, means “an undue tendency to suggest decision on an improper basis, commonly, though not necessarily, an emotional one.” In other words, evidence is not supposed to turn up the jury’s emotional thermostat. The rule takes as a given that emotions interfere with rational decision-making.

This neat-and-tidy distinction between reason and emotion comes up all the time. (I even used it on this blog last week, in my post about juries and stress.) But it’s a false dichotomy. A large body of research in neuroscience and psychology has shown that emotions are not the enemy of reason, but rather a crucial part of it. This more nuanced understanding of reason and emotion is underscored in a riveting (no, really) legal study published earlier this year in the Arizona State Law Journal.

In the paper, legal scholars Susan Bandes and Jessica Salerno acknowledge that certain emotions — such as anger — can lead to prejudiced decisions and a feeling of certainty about them. But that’s not the case for all emotions. Sadness, for example, has been linked to more careful decision-making and less confidence in the resulting judgments. “The current broad-brush attitude toward emotion ought to shift to a more nuanced set of questions designed to determine which emotions, under which circumstances, enhance legal decision-making,” Bandes and Salerno write.

The idea that emotion impedes logic is pervasive and wrong. (Actually, it’s not even wrong.) Consider neuroscientist Antonio Damasio’s famous patient “Elliot,” a businessman who lost part of his brain’s frontal lobe during surgery to remove a tumor. After the surgery Elliot still had a very high IQ, but he was incapable of making decisions and was totally disengaged from the world. “I never saw a tinge of emotion in my many hours of conversation with him: no sadness, no impatience, no frustration,” Damasio wrote in Descartes’ Error. Elliot’s brain could no longer connect reason and emotion, leaving his marriage and professional life in ruin.

Damasio met Elliot in the 1980s. Since then many brain-imaging studies have revealed neural links between emotion and reason. It’s true, as I wrote about last week, that emotions can bias our thinking. What’s not true is that the best thinking comes from a lack of emotion. “Emotion helps us screen, organize and prioritize the information that bombards us,” Bandes and Salerno write. “It influences what information we find salient, relevant, convincing or memorable.”

So does it really make sense, then, to minimize all emotion in the courtroom? The question doesn’t have easy answers.

Consider those gruesome baby photos from the Collins case. Several years ago psychology researchers in Australia set up a mock trial experiment in which study volunteers were jury members. The fictional case was a man on trial for murdering his wife. Some mock jurors heard gruesome verbal descriptions of the murder, while others saw gruesome photographs. Jurors who heard the gruesome descriptions generally came to the same decision about the man’s guilt as those who heard non-gruesome descriptions. Not so for the photos. Jurors who saw gruesome pictures were more likely to feel angry toward the accused, more likely to rate the prosecution’s evidence as strong, and more likely to find the man guilty than were jurors who saw neutral photos or no photos.

In that study, photos were emotionally powerful and seemed to bias the jurors’ decisions in a certain direction. But is that necessarily a bad thing?

In a similar experiment, another research group tried to make some mock jurors feel sadness by telling them about trauma experienced by both the victim and the defendant. The jurors who felt sad were more likely than others to accurately spot inconsistencies in witness testimony, suggesting more careful decision-making.

These are just two studies, poking at just a couple of the many, many open questions regarding “emotional” evidence in court, Bandes and Salerno point out. For example, is a color photo more influential than black and white? What’s the difference between seeing one or two gory photos versus a series of many? What about the framing of the image’s content? And what about videos? Do three-dimensional animations of the crime scene (now somewhat common in trials) lead to bias by allowing jurors to picture themselves as the victim? “The legal system too often approaches these questions armed only with instinct and folk knowledge,” Bandes and Salerno write. What we need is more data.

In the meantime, though, let’s all ditch that vague notion that “emotion” is the enemy of reason. And let’s also remember that the level of emotion needed in a courtroom often depends on the legal question at hand. In death penalty cases, for example, juries often must decide whether a crime was “heinous” enough to warrant punishment by death. Heinous is a somewhat subjective term, and one that arguably could be — must be? — informed by feeling emotions.

Returning to the Collins case, at first the trial judge didn’t think the gruesome baby photos would add much to what the jury had heard in verbal testimony. There was no question that Collins had had a baby, that she knew it, and that the baby had died of drowning. The judge asked the medical examiner whether he thought the photos would add anything to his testimony. He replied that the only extra thing the pictures would depict was what the baby looked like, including her size. The judge decided that was an important addition: “I don’t have any concept what seven pounds and six ounces is as opposed to eight pounds and three ounces, I can’t picture that in my mind,” he said, “but when I look at these photographs and I see this is a seven pound, six ounce baby, I can tell more what a seven pound, six ounce baby … is.”

So the jury saw two of the autopsy photos, and ultimately found Collins guilty of murder. Several years later, however, an appeals court reversed her conviction because of the prejudicial autopsy photos.

“Murder is an absolutely reprehensible crime,” reads the opinion of the appeals court. “Yet our criminal justice system is designed to establish a forum for unimpaired reason, not emotional reaction. Evidence which only appeals to sympathies, conveys a sense of horror, or engenders an instinct to punish should be excluded.”


Why Jurors and Policemen Need Stress Relief

I’ll be sitting on a jury tomorrow for the first time. The logistics are annoying. I have to take an indefinite time off work, wait in long security lines at the courthouse, and deal with a constant stream of bureaucratic nonsense. But all that is dwarfed by excitement. And, OK, yes, some pride. My judgments will affect several lives in an immediate and concrete way. There’s a heaviness to that, a responsibility, that can’t be brushed aside.

My focus on jury duty may be why a new study on social judgments caught my eye. Whether part of a jury or not, we judge other people’s behaviors every day. If you’re walking down a city sidewalk and someone slams into you, you’re probably going to make a judgment about that behavior. If you’re driving down the highway and get stuck behind a slow car, you’re probably going to make a judgment about that driver’s behavior. If somebody leaves a meandering and inappropriate comment on your blog…

Since the 1960s psychology researchers have known that people tend to make social judgments with a consistent bias: We’re more likely to attribute someone’s behavior to inherent personality traits than to the particulars of the situation. The guy who bumps into me on the sidewalk did so because he’s a dumb jerk, not because he’s rushing to the hospital to see his sick child. The driver is slow because she’s a feeble old lady, not because her engine is stalling.

Those are flippant examples, but this bias, known as the ‘fundamental attribution error’ or FAE, can be pernicious. Consider a policeman who’s making a split-second decision about whether to shoot a suspect wearing a hoodie. Because of the FAE, he “might make a shoot decision based on stereotypical characteristics about that person, and fail to take into account the context,” says Jennifer Kubota, an assistant professor of psychology at the University of Chicago. But the suspect “could be wearing a hoodie just because it’s cold outside.”

We can overcome this bias, but it takes time and deliberate thought. Studies have shown that when people are distracted or under strong time pressure, they’re more likely to make the FAE.

In the new study, now in press in Biological Psychology, Kubota and her colleagues found another factor that pushes people toward the FAE: stress.

To create physiological stress, the researchers asked volunteers to plunge their forearms into ice water for three minutes. This so-called ‘cold-pressor task’ is known to spike levels of cortisol, a stress hormone.

After the stress exposure, volunteers read statements about a fictional character and saw a picture of the person’s face. They would get one sentence of behavioral information (“Jenny read a book in an hour”) and another sentence of situational information (“The book was a children’s book”). Then they gave two ratings: 1) the degree to which the behavior was caused by dispositional factors as opposed to situational ones, 2) how much they liked the fictional person.

As it turns out, compared with non-stressed participants, those who were exposed to stress (and showed increases in cortisol) were more likely to make dispositional attributions than situational ones. They also gave more negative evaluations of the fictional characters.
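To make “more likely to make dispositional attributions” concrete, here is a minimal sketch of the kind of group comparison such studies report: the difference in mean ratings between conditions, plus an effect size. All numbers below are invented for illustration; they are not the study’s data.

```python
# Hypothetical dispositional-attribution ratings (1-7 scale) for a stressed
# group and a non-stressed group. Invented numbers, not the study's data.
from statistics import mean, stdev

stressed =     [5.2, 5.8, 6.1, 4.9, 5.5, 6.0, 5.7, 5.4]
non_stressed = [4.1, 4.8, 5.0, 4.4, 4.6, 4.9, 4.3, 4.7]

# Raw difference between group means.
diff = mean(stressed) - mean(non_stressed)

# Cohen's d: the mean difference scaled by a pooled standard deviation,
# a common way to express how large a group difference is.
pooled_sd = ((stdev(stressed) ** 2 + stdev(non_stressed) ** 2) / 2) ** 0.5
cohens_d = diff / pooled_sd

print(f"mean difference: {diff:.2f}, Cohen's d: {cohens_d:.2f}")
```

A small raw difference can still be a dependable effect if the spread within each group is modest, which is exactly what an effect size captures; that is the sense in which a difference can be “small, but nevertheless notable.”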

“When we’re under stress we’re more likely to think that someone behaved the way they did because of something about their personality,” Kubota says. “And we’re ignoring all of these important situational and environmental factors that actually could have had a pretty big impact on why they did what they did.”

The differences between stressed and unstressed groups were small, but nevertheless notable, says Amy Arnsten, a professor of neurobiology at Yale who was not involved in the work. The cold-water stress, after all, is quite subtle compared with common real-world stressors such as sleep deprivation, divorce or financial woes.

The findings also “fit perfectly with what we already know” about stress and the brain, Arnsten says, a topic she has been studying for 30 years. In times of acute stress, our rational brain circuits (centered in the prefrontal cortex) rapidly shut down and our more primitive ones (based in the amygdala and basal ganglia) take over. “The automatic, unconscious circuits in your brain become in charge of decisions,” she says.

The same thing happens, it seems, when we’re making a social judgment. Last year a brain-imaging study reported that when people make judgments based on situational factors, they show more activity in their dorsolateral prefrontal cortex (DLPFC) than when they make judgments based on personality traits. Because stress is particularly damaging to circuits in the DLPFC, it makes sense that stress would make situational judgments more difficult and exacerbate the FAE.

“This has a lot of relevance to what’s going on right now with the police in places like Ferguson,” Arnsten says. “If the police are stressed, they’re going to be more likely to attribute bad things to people.” It may also come into play in conflict zones such as the Middle East and Ukraine, she adds. “People become primitive [and] seek revenge” against those they perceive as inherently “bad.” This bias makes them “unable to see the bigger situation and represent long-term solutions that would actually be more helpful.”

In a second experiment, Kubota’s team tried to replicate their findings using more realistic scenarios. The researchers shared 30 one-sentence stories about crime with 204 American volunteers recruited online through Amazon’s Mechanical Turk. The vignettes varied in the number of situational details. For example, the sentence “A woman stabs another woman to death after an argument” has less situational information than “A 13-year-old boy in the slums of Chicago robs an 87-year-old man of $2.27.”

For each sentence, volunteers rated how much they thought the behavior was caused by dispositional factors as opposed to situational ones, as well as whether they believed the behavior was criminal, how much they liked the offender, and how severe the offender’s punishment should be.

Consistent with the first experiment, this one found that the higher the level of (self-reported) current stress, the more likely the person was to attribute a criminal behavior to the offender’s disposition.
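The relationship reported here is essentially a correlation between two per-person numbers: a self-reported stress score and an average dispositional-attribution rating. A minimal sketch, again with invented numbers rather than the study’s data:

```python
# Toy illustration of correlating self-reported stress with dispositional
# attribution ratings. All numbers are invented for the sketch.
from statistics import mean

stress  = [1, 2, 2, 3, 4, 4, 5, 6, 6, 7]                      # self-reported stress
ratings = [3.1, 3.4, 3.2, 3.9, 4.2, 4.0, 4.6, 4.9, 5.1, 5.4]  # dispositional rating

def pearson_r(xs, ys):
    """Pearson correlation: covariance scaled by both standard deviations."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

r = pearson_r(stress, ratings)
print(f"Pearson r = {r:.2f}")  # positive r: more stress, more dispositional blame
```

A positive correlation like this says only that the two measures rise together across people; on its own it cannot say that stress causes the dispositional bias, which is why the ice-water manipulation in the first experiment matters.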

After talking through these findings, I told Kubota about my upcoming jury service and asked her what I could do, if anything, to combat the FAE. She gave two pieces of advice. “First, for jurors, there are a number of important ways to decrease your stress level,” she said, such as doing relaxation exercises or mindfulness training.

Second, regardless of stress level, the best way to combat the FAE “is to give yourself a bit more time,” she said. Take the time to think of the person you’re judging and the complexity of their unique situation. “Put yourself in their shoes.” I’ll do my best.


My DNA Made Me Do It? How Behavioral Genetics Is Influencing the Justice System

On December 14, 2012, 20-year-old Adam Lanza killed 20 children at a Connecticut elementary school, as well as 6 school staffers, his mother, and himself. Within two weeks, the Connecticut Medical Examiner commissioned a group of geneticists to screen Lanza’s DNA.

And for what, exactly? Who knows. There are any number of genetic variants the scientists could zero in on — variants that have been linked to a propensity for violence, aggression, psychopathy, or psychiatric disorders. One thing I’d bet on: The screen will find something. Each of us carries genetic mutations somewhere along our 3-billion-letter DNA code. Some mutations are benign, some are not; some have huge effects, others tiny. But there’s no way to know how (or whether) any of them affects behavior.

Another thing I’d bet on: The media (and the public) will use the results of that genetic screen to explain what Lanza did. We all want answers, and a genetic test seemingly provides a long string of them. Answers from science, no less. But, as was pointed out by many scientists and commentators at the time, searching for answers in Lanza’s DNA is futile. “There is no one-to-one relationship between genetics and mental health or between mental health and violence,” read an editorial in Nature. “Something as simple as a DNA sequence cannot explain anything as complex as behaviour.”

The Connecticut Medical Examiner is apparently the first to ever request a genetic screen of a dead murderer. It’s an odd move, and perhaps one that can be blamed on intense public scrutiny in the wake of the tragedy. But using genetics to inform criminal cases is not new or even all that rare. As I learned in a fascinating commentary published in today’s issue of Neuron, behavioral genetics has a long history in the American justice system.

The “feeble minded” Carrie Buck, who was forcibly sterilized by the Commonwealth of Virginia. Photo from Wikipedia.

The author of the commentary, Paul Appelbaum of Columbia University, cites, for example, the Buck v. Bell Supreme Court case from 1927. The court upheld a Virginia law authorizing mandatory sterilization of people who are intellectually disabled, or “feeble minded”, because they threaten the gene pool. I’m not exaggerating. “It is better for all the world if, instead of waiting to execute degenerate offspring for crime or to let them starve for their imbecility, society can prevent those who are manifestly unfit from continuing their kind,” wrote Justice Oliver Wendell Holmes in the majority opinion. (If you want to be depressed all day, go read the Wikipedia entry about the case.)

Explicit genetic testing entered the courts in the late 1960s, but this time it was on behalf of the accused. Lawyers representing men carrying an extra Y chromosome — known today as ‘XYY syndrome’ — argued that because this genetic condition was overrepresented in prisons, it must drive violent behaviors. But most courts, according to Appelbaum, weren’t sympathetic to this logic, and refused to allow the genetic information into evidence.

Most cases calling on behavioral genetics, like the XYY example, do so in an attempt to lessen the culpability of a defendant who committed a crime. This isn’t usually relevant when deciding the verdict of the case (except for the very rare instances in which a defendant is found not guilty by reason of insanity). But mitigating factors — such as child abuse, drug use, abnormal brain activity, or genetic disposition — can matter a great deal during sentencing proceedings, particularly if the death penalty is on the table. “Judges tend to be fairly permissive at death penalty hearings,” Appelbaum writes.

In 2011 Deborah Denno, a law professor at Fordham University, reported 33 recorded* instances of neuropsychiatric genetic evidence in criminal courts between 2007 and 2011. She had previously reported 44 instances between 1994 and 2007, suggesting that it’s becoming slightly more common. In almost every instance, genetic evidence was used as a mitigating factor in a death penalty case.

The genetic evidence in Denno’s reports tended to be fairly crude: a family history of a condition. But specific genetic tests are beginning to seep into court, too. In 2007, several psychiatrists and geneticists described their experiences presenting evidence at criminal trials related to two gene variants: a variant of monoamine oxidase A, which when mixed with child maltreatment increases the risk of violent behavior, and a variant of the serotonin transporter gene, which when mixed with multiple stressful life events ups the risk of serious depression and suicide. A couple of cases used these scientific links to argue that defendants didn’t have the mental ability to plan their crime in advance. But most of the time genetic evidence was used to mitigate sentences. In 2011, for example, an Italian court reduced a female defendant’s sentence from life in prison to 20 years based on genetic evidence and brain scans that supposedly proved “partial mental illness.”

None of these examples trouble me too much. The U.S. court allows “any aspect of character or record” to be used as a mitigating factor during sentencing, including a defendant’s age, stress level, childhood experiences, criminal history, employment history, and even military service. So why not genetic predisposition, too? It also seems that, so far at least, judges and juries are showing an adequate level of skepticism about this kind of evidence. In 2010, I wrote a story about serial killer Brian Dugan, whose lawyers tried to use brain scans to show that he was a psychopath and didn’t deserve the death penalty. The jury wasn’t swayed.

Most shocking, to me, is how genetic evidence might be used in the civil court system, at least according to Appelbaum. Last year in Canada, a tenant sued her landlord for a fire that, she claimed, caused several injuries that will prevent her from ever working again. The plaintiff had a family history of Huntington’s disease, and the court ordered her to have a blood test to screen for the mutant gene to help determine whether her injuries were the result of the fire or her DNA. She didn’t want to take the test, but if she didn’t she’d have to drop the lawsuit. Appelbaum envisions other possible scenarios in future civil cases:

Employers contesting work-related mental disability claims might… want to compel claimants to undergo genetic testing to prove that an underlying disorder was not responsible for their impairment. Divorcing couples in child-custody disputes, in which court-ordered psychological evaluations are routine, may want to add genetic testing for behavioral traits or neuropsychiatric disorders to the list of procedures that their estranged spouses must undergo to assess their fitness to parent a child. Plaintiffs seeking to establish that a defendant acted recklessly (e.g., in precipitating an auto accident) might attempt to seek data regarding the defendant’s genetic predisposition to impulsive behavior. With increasing utilization of next-generation sequencing in medical settings, and arguments being made for sequencing newborns at birth, adverse parties in civil litigation may not need to compel genetic testing but merely to seek access to existing data.

In these civil cases, which are not usually matters of life and death, I would imagine that the bar for scientific scrutiny would be set lower than in criminal cases. That’s troubling, and all the more reason that we need to better educate the public about what genes can and cannot tell us. As genetic testing continues to infiltrate our medical system, and now our justice system, too, perhaps this education will happen naturally. One can hope.

The Nature editorial regarding the Lanza testing was titled “No easy answer”, and that’s really the crux of all of this. When a person does something awful, we want to know why. But it may be an impossible question.

*Most of Denno’s cases came from appellate courts because usually lower courts don’t have written opinions. So that means her numbers are almost certainly underestimates.


How Many People Are Wrongly Convicted? Researchers Do the Math.

Is there a more tragic story than an innocent person going to prison? Tragic, and powerful. That’s why The Shawshank Redemption is one of the most beloved movies of our time. And why we’ve all heard this quote from an esoteric 18th-century English guy, William Blackstone: “It is better that ten guilty persons escape than that one innocent suffer.” And why real-life stories of the exonerated always make headlines. Here’s the first line of a Washington Post story about Glenn Ford, who was exonerated last month:

“My sons, when I left, was babies,” Louisiana’s longest-serving death row inmate told reporters after his release late Tuesday. “Now they’re grown men with babies.”

It hits you in the gut. You first think about this particular person, this man who lost his family, who spent decades in some awful cell believing he was going to be electrocuted. And then you think that other frightening thought, the bugaboo lurking behind all exoneration stories: How many other Glenn Fords are still behind bars? How many will die there? Just how often does our venerated justice system fail?

Rarely, at least according to U.S. Supreme Court Justice Antonin Scalia. In a 2006 opinion he cited an approximate error rate of 0.027 percent, based on back-of-the-envelope calculations by an Oregon district attorney in a fiery op-ed for the New York Times. The op-ed was in response to a report by Samuel Gross, a law professor at the University of Michigan, cataloguing 340 exonerations between 1989 and 2003. “Let’s give the professor the benefit of the doubt,” the op-ed read. “Let’s assume that he understated the number of innocents by roughly a factor of 10, that instead of 340 there were 4,000 people in prison who weren’t involved in the crime in any way. During that same 15 years, there were more than 15 million felony convictions across the country. That would make the error rate .027 percent — or, to put it another way, a success rate of 99.973 percent.”
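The op-ed’s back-of-the-envelope math is easy to reproduce. Here’s a quick sketch in Python using only the figures quoted above:

```python
# Reproducing the op-ed's arithmetic, as quoted above.
known_exonerations = 340
assumed_innocents = 4_000          # "benefit of the doubt": roughly 10x the known count
felony_convictions = 15_000_000    # felony convictions over the same 15 years

error_rate = assumed_innocents / felony_convictions
print(f"error rate:   {error_rate:.3%}")      # 0.027%
print(f"success rate: {1 - error_rate:.3%}")  # 99.973%
```

As Gross points out below, the arithmetic itself is fine; the flaw is in the denominator, which lumps in millions of convictions that no one ever scrutinizes.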

But that claim, Gross writes in today’s issue of the Proceedings of the National Academy of Sciences, “is silly.” Here’s the problem with its logic. The known exonerations were almost all murder and rape cases, which get much more post-conviction attention, whereas the total number of felonies also includes burglary, car theft, tax fraud, and drug possession. Some 95 percent of felony convictions are the result of plea bargains, with no formal evidence ever presented, and most never bother with an appeal.

There’s a more rigorous way to crunch the numbers, according to Gross’s new study. And that approach leads to a false conviction rate high enough to make me gasp — 4.1 percent.

To be more precise: Gross and his colleagues calculated a 4.1 percent error rate among people who are sentenced to death. This is a small subset (less than 0.1 percent) of the total number of prison sentences but, because of the stakes, these cases are scrutinized far more than most. For capital cases, Gross writes, “everyone from defense lawyers to innocence projects to governors and state and federal judges is likely to be particularly careful to avoid the execution of innocent defendants.”

Because of all of the resources spent on capital cases, the researchers reason, it’s likely that many (and perhaps a majority) of innocent defendants on death row will ultimately be exonerated. Calculating this rate would then give an approximate, albeit low estimate of the real false conviction rate.

Even with this small subset, though, getting a reasonable estimate is tricky, for two reasons. One is that the exoneration process takes time. To date 143 people on death row have been exonerated, and their time spent on death row ranged from 1 to 33 years, with an average of 10. That means there are some number of innocent people on death row today who have not yet been exonerated but will be in the future.

The other issue is that some number of innocent people on death row will have their sentence reduced to life in prison but will never be freed. (Why? Because once they’re off death row, their case is no longer under such intense scrutiny and exoneration is unlikely.)

Gross and his colleagues collected data on the 7,482 people who were sentenced to death between 1973 — the first year of modern death-penalty laws — and 2004. Of these, 117 were exonerated, or 1.6 percent. But among these, 107 were exonerated while they were still on death row, whereas only 10 were exonerated after their sentence had been reduced to life in prison.

This leads to a bizarre situation. If you’re on death row and your sentence is reduced to life in prison, you’re suddenly much less likely to be exonerated than someone who stays on death row.

To account for the unknown number of innocent defendants whose cases were “lost” to life in prison, Gross’s team borrowed a statistical approach from medical research called “survival analysis.” Researchers typically track how long it takes a group of people with some disease, say diabetes, to achieve a certain outcome, such as death. Survival curves for diabetics on some new drug treatment, say, can then be compared with those on no treatment.

In this study, the “disease” is being on death row, the “outcome” is exoneration, and the “treatment” is being moved to life in prison. (Just as a medical treatment is likely to lower a diabetic’s risk of death, being moved to life in prison lowers the risk of exoneration.) Using this mathematical approach*, the researchers were able to calculate the cumulative risk of exoneration for the “untreated” population — that is, people who were never removed from death row. Because these cases get the most intense scrutiny, the researchers say, this is the closest we can get to the true rate of death-row false convictions. And that rate is 4.1 percent.
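The survival-analysis idea can be sketched with a tiny Kaplan-Meier-style calculation. The records below are invented purely for illustration — they are not the study’s data, and the real analysis is far more elaborate — but they show the key move: treating commutation (and execution, and ongoing sentences) as censoring, so a defendant only counts while still under death-row scrutiny.

```python
# Toy Kaplan-Meier-style estimate of the cumulative risk of exoneration.
# Records are (years on death row, outcome). "exonerated" is the event;
# everything else is treated as censoring, mirroring the paper's logic
# that exoneration becomes unlikely once a case leaves death row.

def cumulative_exoneration_risk(records):
    """Kaplan-Meier estimate of P(exonerated by end of follow-up)."""
    at_risk = len(records)
    survival = 1.0                       # P(not yet exonerated)
    for _years, outcome in sorted(records):
        if outcome == "exonerated":
            survival *= 1 - 1 / at_risk  # event: shrink survival probability
        at_risk -= 1                     # event or censored: leaves risk set
    return 1 - survival                  # cumulative risk of exoneration

toy = [(3, "exonerated"), (5, "commuted"), (7, "exonerated"),
       (9, "executed"), (12, "still serving"), (15, "commuted")]
print(f"{cumulative_exoneration_risk(toy):.1%}")  # 37.5%
```

Note that the naive rate in this toy cohort would be 2 out of 6, or 33 percent; the censoring-aware estimate is higher, for the same reason Gross’s 4.1 percent exceeds the raw 1.6 percent exoneration rate.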

All of this assumes that the rate of innocent people being sentenced to death hasn’t changed over the past four decades. The authors say there’s no specific evidence to support a change, though it’s true that we’re sending far fewer people to death row than we used to. I thought that improvements in DNA technology might have lowered our error rate, but the researchers don’t think it makes much of a dent. Just 18 of the 142 exonerations since 1973 were thanks to DNA testing.

So then the next terrifying question is, geez, how many innocent people have actually been executed?

Fortunately it’s probably not many. Innocent defendants are far more likely to have their sentences changed to life in prison than to be executed. Still, with an error rate of 4 percent, the researchers write, “it is all but certain that several of the 1,320 defendants executed since 1977 were innocent.”

It’s impossible to say whether this 4.1 percent false conviction rate applies to defendants who never went to death row. But I’ll leave you with one last depressing thought. Of all of the people found guilty of capital murder, less than half actually get a death-penalty sentence. And when juries are determining whether to send a defendant to death row or to life in prison, surveys show that they tend to choose life sentences when they have “residual doubt” about the defendant’s guilt.

That means, then, that the rate of innocent defendants serving life in prison is higher than those on death row. “They are sentenced,” the authors write, “and then forgotten.”

*If you want to know about the particular equations involved in the survival analysis, check out Gross’s paper or the Wikipedia article on the method.

Update, 4:51pm: The text has been amended to correct a statistic about the number of death sentences. They make up 0.1 percent of the total number of prison sentences, not of the criminal population at large.