
Making Juries Better: Some Ideas from Neuroeconomics

By Virginia Hughes
September 26, 2013

We Americans love jury trials, in which an accused person is judged by a group of peers from the community. Every citizen, when called, must sit on a jury. For anyone who finds this civic duty a painful chore: Go watch 12 Angry Men, A Few Good Men, or any episode of Law & Order. You’ll feel all warm and fuzzy with the knowledge that, though juries don’t always make the right call, they’re our best hope for carrying out justice.

But…what if they aren’t? Juries are made of people. And people, as psychologists and social scientists have reported for decades, come into a decision with pre-existing biases. We tend to weigh evidence that confirms our bias more heavily than evidence that contradicts it.

Here’s a hypothetical (and pretty callous) example, which I plucked from one of those psych studies. Consider an elementary school teacher who is trying to suss out which of two new students, Mary and Bob, is smarter. The teacher may think of them as equally smart, at first. Then Mary gets a perfect score on a vocabulary quiz, say, leading the teacher to hypothesize that Mary is smarter. Sometime after that, Mary says something mildly clever. Objectively, that one utterance shouldn’t say much about Mary’s intelligence. But because of the earlier evidence from the quiz, the teacher is primed to see this new event in a more impressive light, bolstering the emerging theory that Mary is smarter than Bob. This goes on and on, until the teacher firmly believes in Mary’s genius.

Even more concerning than confirmation bias itself is the fact that the more bias we have, the more confident we are in our decision.

All of that research means, ironically, that if you start with a group of individuals who have differing beliefs, and present them all with the same evidence, they’re more likely to diverge, rather than converge, on a decision. “This polarization can be really bad,” says Isabelle Brocas, an economist at the University of Southern California.

Although the psychological literature is lousy with studies of confirmation bias, nobody really knows its root cause. In an intriguing new paper, Brocas and her colleague Juan Carrillo propose an explanation based on neuroscience. The biases of juries, they say, can be explained by the way that our neurons encode information from the outside world. Their model (and let’s be clear: it’s a mathematical model, rife with assumptions) points to several recommendations for making our justice system more just.


Where did we get the idea that juries make reasonable decisions, anyway? It came from the “Condorcet Jury Theorem,” proposed by the French philosopher Nicolas de Condorcet in 1785. His idea was simple. Assume that you have a group of people trying to decide between two outcomes, and that one of the outcomes will be unequivocally better for everybody in the group. This is the situation of the average jury trial. It would be better for everybody in the group to send a guilty person to jail, after all, and to let an innocent person go free. If every individual is more likely to choose the correct outcome than the incorrect one, Condorcet reasoned, then the probability of making the right decision increases with the size of the group.
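To put numbers on Condorcet’s reasoning, here is a minimal sketch (my own illustration, not something from Condorcet or from the new paper). It assumes each juror independently picks the right verdict with, say, a 60 percent probability, and computes the chance that a simple majority gets it right:

```python
# A minimal sketch of the Condorcet Jury Theorem. The individual accuracy
# (p = 0.6) and the jury sizes below are illustrative assumptions, not
# figures from Condorcet or from Brocas and Carrillo.
from math import comb

def majority_correct(n, p):
    """Probability that more than half of n independent jurors vote correctly."""
    k_min = n // 2 + 1  # smallest number of correct votes that forms a majority
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(k_min, n + 1))

for n in (1, 3, 7, 13, 25, 101):
    print(f"{n:>3} jurors: P(majority correct) = {majority_correct(n, 0.6):.3f}")
```

Run it and a lone juror is right 60 percent of the time, while a 101-person panel is right about 98 percent of the time, provided, crucially, that every juror really is independent and unbiased.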

Trouble is, that logic assumes that people act like perfectly rational statisticians, weighing every detail of every piece of evidence carefully and then adjusting their final decision accordingly. This is the traditional “Bayesian” view of decision-making in economics. But Brocas and Carrillo point out that this model doesn’t explain the confirmation bias and polarization we see so often when people make decisions.
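For reference, here is what that rational benchmark looks like in code: a rough sketch, with made-up numbers, of a juror updating a belief in guilt by Bayes’ rule, one piece of evidence at a time.

```python
# A hedged illustration of the "rational Bayesian" benchmark: the juror weighs
# each piece of evidence only by how much likelier it is under guilt than under
# innocence. Every probability below is invented for the sake of the example.

def bayes_update(prior, p_if_guilty, p_if_innocent):
    """Posterior probability of guilt after seeing one piece of evidence."""
    numerator = p_if_guilty * prior
    return numerator / (numerator + p_if_innocent * (1 - prior))

belief = 0.5  # an impartial starting point
# Each pair: P(evidence | guilty), P(evidence | innocent)
evidence = [(0.8, 0.3), (0.6, 0.5), (0.4, 0.7)]
for p_g, p_i in evidence:
    belief = bayes_update(belief, p_g, p_i)
    print(f"belief in guilt is now {belief:.2f}")
```

A telling property of this rule is that the order of the evidence doesn’t matter: shuffle the list above and the final number comes out the same. That is exactly what confirmation bias and polarization violate.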

Their alternative model is based on studies of how monkeys make decisions. For the past decade or so, neuroscientists have been recording from single brain cells in monkeys while the animals are in the process of making fast and simple perceptual decisions, such as choosing whether an object is moving to the left or to the right (in order to get a juice reward).

Many neurons in the brain’s outer layers fire in response to specific stimuli. Some cells, for example, fire when the monkey sees leftward motion, while others fire when it sees rightward motion. This neuronal signaling is noisy, though. Sometimes the neurons that prefer rightward motion will fire during leftward motion, and vice-versa. And yet the brain manages to juggle this noisy set of inputs and reach a decision in a short amount of time. The way the brain optimizes both speed and accuracy, these studies have shown, is by accumulating evidence and adjusting its expectations in real time. Say, for example, that the initial batch of signals is 70 percent left-loving neuronal firings and 30 percent right-loving neuronal firings. The brain will process this information and then re-adjust its expectations to favor the left hypothesis — just as the teacher used the quiz data to favor the hypothesis about Mary’s superior intelligence.
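To make that “adjusting expectations” concrete, here is a toy accumulator (my own sketch, not Brocas and Carrillo’s actual model) in which each new noisy signal counts a little extra when it agrees with the current leaning:

```python
# A toy evidence accumulator in the spirit of the description above (it is not
# Brocas and Carrillo's actual model). Noisy "left" (+1) and "right" (-1) signals
# are tallied, but a signal counts for a bit more when it agrees with the current
# leaning; the bias_gain knob is my own invention.
import random

random.seed(1)  # make the sketch reproducible

def accumulate(signals, bias_gain=0.5):
    """Return the final leaning (positive = left) after a stream of +1/-1 signals."""
    belief = 0.0
    for s in signals:
        weight = 1.0
        if belief * s > 0:       # the new signal agrees with the current leaning,
            weight += bias_gain  # so it gets over-weighted, a la confirmation bias
        belief += weight * s
    return belief

# 70 percent of firings favor "left", 30 percent favor "right"
stream = [1 if random.random() < 0.7 else -1 for _ in range(50)]
print("final leaning (positive means the 'left' hypothesis wins):", accumulate(stream))
```

Early signals tip the running belief, and once it tips, later signals on the same side get amplified. That feedback loop has the same flavor as the teacher slowly talking herself into Mary’s genius.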

Brocas and Carrillo say the same thing applies to decisions made by juries. Each person on a jury is trying to make a decision based on lots of evidence presented over a relatively brief period of time. Jurors come to the trial with pre-conceived ideas — about the validity of certain laws, say, or about the death penalty, or about people of certain races or ages or cultural backgrounds. Given the time constraints, those prior beliefs exert an undue influence on their decision.

The model is provocative, but I wonder how well it actually applies to these kinds of complex decisions. In the monkey work, the animals had to make their decisions on the order of seconds or minutes. But when you’re sitting on a jury, your brain can take in lots of evidence over a much longer timescale. Brocas agrees to some extent, saying that one way to get more impartial juries might be to allow jurors more time with each piece of evidence, or to allow them to watch tapings of the testimony.

I asked Josh Gold, a neuroscientist at the University of Pennsylvania who conducted some of the key monkey experiments, whether he thought his data applied to more complex and emotional decisions, such as deciding a peer’s guilt or innocence. He said, essentially, that no one could be sure yet, but that he would bet on it. “It probably didn’t have to be the case that simple perceptual decisions use the same mechanisms as more complex decisions — but the more we look, the more it seems that they do,” Gold said.

So if the model’s true, it has several interesting implications for real-world trials. The first is related to the order in which evidence is presented. Facts or testimony presented at the beginning of a trial will be weighed more strongly in the jurors’ minds than evidence presented at the end. (And for that matter, the authors say, cases that a judge presides over at the beginning of her career will have a strong influence on those later on.)
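To see why order matters under that kind of biased updating, here is a quick sketch (again my construction, not the paper’s) that feeds the exact same evidence to a juror-like accumulator in two different orders.

```python
# The same style of biased tallying as in the earlier sketch: +1 stands for an
# incriminating signal, -1 for an exculpatory one, and a signal that agrees with
# the current leaning is over-weighted. All of this is illustrative, not the
# authors' model.

def biased_total(signals, bias_gain=0.5):
    """Sum a stream of +1/-1 signals, over-weighting those that agree with the running total."""
    belief = 0.0
    for s in signals:
        weight = 1.0 + (bias_gain if belief * s > 0 else 0.0)
        belief += weight * s
    return belief

incriminating = [1] * 5
exculpatory = [-1] * 5

print("incriminating evidence first:", biased_total(incriminating + exculpatory))
print("exculpatory evidence first:  ", biased_total(exculpatory + incriminating))
```

With the incriminating testimony first, the tally ends on the guilty side; with the exculpatory testimony first, it ends on the innocent side, even though the evidence is identical.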

It also means that it would be better for everybody if jurors were chosen who didn’t have strong views to begin with. “If you want to have an impartial judgment, you need to have relatively impartial people,” Brocas says.

Jury selection — the process before the trial in which both lawyers have a chance to kick out certain jurors based on their backgrounds and preferences — might be one way to get impartial people. But Brocas notes that lawyers don’t necessarily want impartial people so much as they want people who will be sympathetic with their arguments. “Economists never get through jury selection,” she says, chuckling. “The lawyers think they’ll be too cold.”
