Will vs. Grace – are people honest because they resist temptation or because they don’t feel it?

By Ed Yong
July 14, 2009

In a world where the temptation to lie, deceive and cheat is both strong and profitable, what compels some people to choose the straight and narrow path? According to a new brain-scanning study, honest moral decisions depend more on the absence of temptation in the first place than on people wilfully resisting these lures.

Joshua Greene and Joseph Paxton at Harvard University came to this conclusion by using a technique called functional magnetic resonance imaging (fMRI) to study the brain activity of people who were given a chance to lie. The volunteers were trying to predict the outcomes of coin-flips for money, and they could walk away with more cash by lying about their accuracy.

The task allowed Greene and Paxton to test two competing (and wonderfully named) explanations for honest behaviour. The first – the “Will” hypothesis – suggests that we behave morally by exerting control over the desire to cheat. The second – the “Grace” hypothesis – says that honesty is more a passive process than an active one, fuelled by an absence of temptation rather than the presence of willpower. It follows on from a growing body of psychological studies, which suggest that much of our behaviour is governed by unconscious, automatic processes.

Many studies (and several awful popular science articles) have tried to cast brain-scanning technology in the role of a fancy lie detector, but in almost all of these cases, people are told to lie rather than choosing to do so spontaneously. Greene and Paxton were much more interested in what happens in a person’s brain when they make the choice to lie.

They recruited 35 people and asked them to predict the result of computerised coin-flips while sitting in an fMRI scanner. They were paid in proportion to their accuracy. In some ‘No-Opportunity trials’, they had to make their predictions beforehand, giving them no room for cheating. In other ‘Opportunity trials’, they simply had to say whether they had guessed correctly after the fact, opening the door to dishonesty.

To cover up the somewhat transparent nature of the experiment, Greene and Paxton fibbed themselves. They told the recruits that they were taking part in a study of psychic ability, where the idea was that people were more clairvoyant if their predictions were private and motivated by money. Under this ruse, the very nature of the “study” meant that people had the opportunity to lie, but were expected not to.

Based on their average results, the duo classified 14 of their would-be psychics as “dishonest”, for they achieved improbable levels of accuracy of 69% or more. Everyone else was placed in either an honest or an ambiguous group. Of course, these labels referred to their overall behaviour during the task rather than personality traits – the ‘dishonest’ people didn’t always behave that way and, in fact, that was critical to the first part of the experiment.
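To get a feel for why 69% is such a suspicious score, here is a minimal sketch of the kind of chance calculation involved. The trial count below is a made-up illustration rather than the figure from the paper, and the simple binomial tail calculation is my own shorthand, not necessarily the authors’ exact classification method:

```python
from math import comb

def tail_prob(n_trials, k_correct, p=0.5):
    """Probability of getting at least k_correct predictions right by pure guessing."""
    return sum(comb(n_trials, k) * p**k * (1 - p)**(n_trials - k)
               for k in range(k_correct, n_trials + 1))

# Hypothetical numbers for illustration only: 100 Opportunity trials,
# a fair coin, and the 69% accuracy cut-off mentioned above.
n = 100
threshold = round(0.69 * n)   # 69 correct answers out of 100
print(f"P(accuracy >= 69% by chance) = {tail_prob(n, threshold):.6f}")
```

With these made-up numbers, the chance of hitting 69% or better honestly is well under one in a thousand, which is why such scores flag likely cheating.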

[Figure: Honest_Dishonest.jpg]

Greene and Paxton found that the honest people had the same reaction times whether they won or lost money, and whether they had the opportunity to lie or not. These results support the Grace hypothesis for they suggest that honest people aren’t making any extra mental effort when they forgo the opportunity for cheating.

The brain-scanning data matched the pattern suggested by the reaction times. When interpreting the scans, Greene and Paxton focused on areas at the front of the brain that are associated with mental control, such as the anterior cingulate cortex (ACC), dorsolateral prefrontal cortex (DLPFC) and ventrolateral prefrontal cortex (VLPFC). These areas are active when, for example, we delay instant gratification to follow through on a plan, and Greene and Paxton refer to them as “the control network”.

In the control networks of honest volunteers, there were no significant differences in activity between trials where the do-gooders lost money because they had to (the No-Opportunity trials) and trials where they lost money because they gave an honest answer (the Opportunity trials). Even when Greene and Paxton fine-tuned their analysis of the brain scans to the highest possible resolution, they couldn’t detect any differences.

For people who mostly behave honestly, both the reaction-time tests and the brain scans strongly favour the Grace hypothesis – these people don’t seem to require any extra effort to resist the opportunity to lie. This isn’t due to ignorance, for every one of the 14 honest people said afterwards that they knew they could cheat if they wanted to. They’re not oblivious to the option – they just don’t need to work to ignore it.

The dishonest group showed different patterns. Their DLPFC was more active when they won money in the Opportunity trials (which they often did by lying) than in the No-Opportunity ones (where they always had to win honestly). This suggests that the choice to be dishonest is associated with brain activity in the DLPFC.

But as a whole, their control network was more active in the few Opportunity trials where they lost money due to an honest response than in No-Opportunity trials where they had no choice about losing money. This matches the results from the reaction time measurements, where they took much longer to respond in Opportunity trials where they could cheat but didn’t, than in No-Opportunity trials where their losses were forced. Both sets of results suggest that people who are usually willing to lie exert some extra mental control when they refrain from doing so.

Surely this is compatible with the Will hypothesis? Greene and Paxton argue otherwise – they describe these situations as “limited honesty” and say that “the Grace hypothesis applies only to honest decisions in individuals who consistently behaved honestly and not to decisions reflecting limited honesty.”

I’m not sure I really buy that distinction, but the duo offer up an interesting explanation for the fact that control network activity is associated with both limited honesty and the decision to lie. They suggest that this pattern reflects attempts by the dishonest players to resist temptation. The fact that they fail more often than they succeed explains why the network is active in trials where they end up lying as well as those few where they successfully resist.

To prove their point, Greene and Paxton created a mathematical model that predicted how many wins people would get in the Opportunity trials (a rough indicator of how often they lied) based on activity levels in 9 different control-network areas. The model’s predictions matched the real figures with almost 80% accuracy.
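To make that idea concrete, here is a minimal sketch of that sort of model run on synthetic data. Everything below – the fake activity levels, the ordinary-least-squares fit, the correlation score – is my own illustration of the general approach, not the authors’ actual model or data:

```python
import numpy as np

rng = np.random.default_rng(0)
n_subjects, n_regions = 35, 9

# Fake fMRI activity levels for 9 control-network regions in 35 subjects,
# plus a fake underlying relationship linking that activity to win rate.
activity = rng.normal(size=(n_subjects, n_regions))
true_weights = rng.normal(size=n_regions)
win_rate = 0.5 + 0.1 * (activity @ true_weights) + rng.normal(scale=0.05, size=n_subjects)

# Fit win_rate ~ activity levels with ordinary least squares (plus an intercept).
X = np.column_stack([activity, np.ones(n_subjects)])
weights, *_ = np.linalg.lstsq(X, win_rate, rcond=None)

# Compare predicted and observed win rates across subjects.
predicted = X @ weights
print(f"Correlation between predicted and observed win rates: "
      f"{np.corrcoef(predicted, win_rate)[0, 1]:.2f}")
```

The point of the sketch is simply that a handful of activity measurements per person can, in principle, be combined into a single prediction of how often that person claims a win – which is the logic behind the authors’ analysis.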

[Figure: Honest_prediction.jpg]

So, people who veer towards dishonesty try to resist it but fail more often than they succeed. People who are mostly honest don’t really need to try. This result is fairly counter-intuitive, for we tend to believe that honesty is an act of will overcoming temptation. In a survey done before any of the experiments, Greene and Paxton found that, given a choice, ordinary people believe that the Will hypothesis is the right one.

Obviously, the study has its limitations. Greene and Paxton couldn’t work out how many of their dishonest volunteers were aware of their deceit, whether the honest lot wilfully pushed aside temptation well before the brain-scanning commenced, what the motivations of either group were, or whether their degree of honesty in the experiment carries over into their normal lives. Nonetheless, it’s an intriguing start, and as the duo concludes: “The present findings do suggest, however, that some individuals can, at least temporarily, achieve a state of moral grace.”

PS On a final note, one of the stranger results amid the data is the fact that the VLPFC of honest players was more active during Opportunity trials where they had to accept money by admitting to choices they had made fairly than in No-Opportunity trials where their win was a given. Why would people need to exert more mental control to accept a just outcome? Perhaps this activity reflects their “pride or self-doubt upon accepting legitimately won rewards”?

Reference: PNAS  DOI: 10.1073/pnas.0900152106
