
Quick intuitive decisions foster more charity and cooperation than slow calculated ones

Our lives are governed by both fast and slow – by quick, intuitive decisions based on our gut feelings; and by deliberate, ponderous ones based on careful reflection. How do these varying speeds affect our choices? Consider the many situations where we must pit our own self-interest against the public good, from giving to charity to paying taxes. Are we naturally prone to selfishness, behaving altruistically only through slow acts of self-control? Or do we intuitively reveal our better angels, giving way to self-interest as we take time to think?

According to David Rand from Harvard University, it’s the latter. Through a series of experiments, he has found that, on average, people behave more selflessly if they make decisions quickly and intuitively. If they take time to weigh things up, cooperation gives way to selfishness. The title of his paper – “Spontaneous giving and calculated greed” – says it all.



The signature of the bluffing brain


The best poker players are masters of deception. They’re good at manipulating the actions of other players, while masking their own so that their lies become undetectable. But even the best deceivers have tells, and Meghana Bhatt from Baylor University has found some fascinating ones. By scanning the brains and studying the behaviour of volunteers playing a simple bargaining game, she has found different patterns of brain activity that correspond to different playing styles. These “neural signatures” separate the players who are adept at strategic deception from those who play more straightforwardly.



Pay it forward? Cooperative behaviour spreads through a group, but so does cheating

Ever wonder if acts of kindness or malice really do ripple outwards? If you give up a seat on a train to a stranger, do they go on to “pay it forward” to others? Likewise, if you steal someone’s seat, does the bad mood you engender topple onto other people like a row of malicious dominoes? We’d all probably assume that the answer to both questions is yes, and James Fowler and Nicholas Christakis think they have found experimental evidence for the contagious nature of both cooperation and cheating.

The duo analysed data from an earlier psychological experiment by Ernst Fehr and Simon Gächter, where groups of four volunteers had to decide how much money to put in a public pot. For every unit chipped in, each member of the group would get 0.4 back. So any donation represents a loss to the donor, who recoups only 0.4 of each unit they contribute, but a gain to the group as a whole, which collects 1.6. The best way for the group to benefit would be for everyone to put in all their money, but each individual player could do even better by putting in nothing and feeding off their peers’ generosity.
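
To make that arithmetic concrete, here is a minimal sketch of the payoff rule in Python (the four-player groups and the 0.4 multiplier come from the study described above; the 10-unit endowment and the example contributions are assumptions for illustration):

```python
def public_goods_payoffs(contributions, endowment=10, multiplier=0.4):
    """Each player keeps whatever they didn't contribute, plus a 0.4
    share of everything the whole group put into the pot."""
    pot_share = multiplier * sum(contributions)
    return [endowment - c + pot_share for c in contributions]

# A lone unit contributed returns only 0.4 to the donor (a 0.6 loss),
# but hands 0.4 to each of the other three players (a 1.6 gain overall).
print(public_goods_payoffs([1, 0, 0, 0]))      # [9.4, 10.4, 10.4, 10.4]
print(public_goods_payoffs([10, 10, 10, 10]))  # full cooperation: all get 16
print(public_goods_payoffs([0, 10, 10, 10]))   # the free-rider ends with 22
```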

This “public goods game” went on for six rounds. At the end of each one, the players were told what their comrades had done, although everyone’s identities were kept secret. The groups were shuffled between rounds so that no two players met more than once.

Fowler and Christakis found that the volunteers’ later moves were influenced by the behaviour of their fellow players. Each act of generosity by an individual swayed the other three players to give more money themselves, and each of them influenced the people they played with later. One act became three, which became nine. Likewise, players who experienced stingy strategies were more likely to be stingy themselves.

Even though the groups swapped every time, the contagious nature of generous or miserly actions carried on for at least three degrees of separation. You can see an example of one such cascade in the diagram below. Eleni contributes some money to the public pot and her fellow player, Lucas, benefits (one degree). In the next round, Lucas himself offers money for the good of the group, which benefits Erika (two degrees), who gives more when paired with Jay in her next game (three degrees). Meanwhile, the effects of Eleni’s initial charity continue to spread throughout the players as Lucas and Erika persist in their cooperation in later rounds.




Prejudice vs. biology – testosterone makes people more selfish, but only if they think it does

What do you think a group of women would do if they were given a dose of testosterone before playing a game? Our folk wisdom tells us that they would probably become more aggressive, selfish or antisocial. Well, that’s true… but only if they think they’ve been given testosterone.

If they don’t know whether they’ve been given testosterone or a placebo, the hormone actually has the opposite effect to the one most people would expect – it promotes fair play. The belligerent behaviour stereotypically linked to testosterone only surfaces if people think they’ve been given the hormone, whether they actually received it or a placebo. So strong are the negative connotations attached to testosterone that they can overwhelm, and even reverse, the hormone’s actual biological effects.

If ever a hormone was the subject of clichés and stereotypes, it is testosterone. In pop culture, it has become synonymous with masculinity, although women are subject to its influence too. Injections of testosterone can make lab rats more aggressive, and this link is widely applied to humans. The media portrays “testosterone-charged” people as sex-crazed and financially flippant, and the apparent link with violence is so pervasive that steroid use has even been offered as a legal defence in a US court.

Christoph Eisenegger from the University of Zurich tested this folk wisdom by enrolling 60 women in a double-blind randomised controlled trial. They were randomly given either a 0.5 milligram drop of testosterone or a placebo. He recruited only women because previous research shows exactly how much testosterone is needed to have an effect in women, and how long it takes to act. We don’t know that for men.

The women couldn’t have known which substance they were given, but Eisenegger asked them to guess anyway. Their answers confirmed that they couldn’t tell the difference between the two drops. But they would also confirm something more startling by the trial’s end.

Each woman was paired with a partner (from another group of 60) and played an “Ultimatum game” for a pot of ten Swiss francs. One woman, the “proposer”, decided how to allocate the money; her partner, the “responder”, could choose to accept or refuse the offer. If she accepted, the money was split as suggested; if she refused, both players went empty-handed. The fairest split would be an equal one, but from the responder’s point of view, any money would be better than nothing. The game rarely plays out like that, though – so disgusted are humans by unfairness that responders tend to reject low offers, sacrificing their own meagre gains to spite their proposers.
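
For readers who like the rules spelled out, this is a minimal sketch of the game’s payoffs (the ten-franc pot comes from the study; the example offers and responses are hypothetical):

```python
def ultimatum(pot, offer, accepted):
    """Payoffs (proposer, responder) for one Ultimatum game.
    A rejection wipes out the pot for both players."""
    if not accepted:
        return (0, 0)
    return (pot - offer, offer)

# Any accepted offer beats nothing, yet low offers are often rejected:
print(ultimatum(10, 5, accepted=True))   # fair split: (5, 5)
print(ultimatum(10, 2, accepted=False))  # spiteful rejection: (0, 0)
```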

Overall, Eisenegger found that women under the influence of testosterone actually offered more money to their partners than those who received the placebo. The effect was statistically significant and it’s exactly the opposite of the selfish, risk-taking, antagonistic behaviour that stereotypes would have us predict.

Those behaviours only surfaced if women thought they had been given testosterone. Those women made lower offers than their peers who believed they had tasted a placebo, regardless of which drop they had actually been given. The amazing thing is that this negative ‘imagined’ effect appeared to outweigh the positive ‘real’ one. On average, a drop of testosterone increased a proposer’s offer by 0.6 units, but belief in the hormone’s effects reduced the offer by 0.9 units.

The difference between these values is not statistically significant, so we can’t conclude that the negative effect outweighs the positive one, but the two are certainly comparable. Either way, it is a staggering result. It implies that the biological effect of a behaviour-altering hormone can be masked, if not reversed, by what we think it does. It’s somewhat similar to the nocebo effect, where people experience unwanted side effects from a drug because they believe that such effects will happen.



How can we explain these results? Certainly, Eisenegger accounted for the volunteers’ levels of testosterone before the experiment, as well as their levels of cortisol (a stress hormone), their mood and their feelings of anxiety, anger, calmness or wakefulness. None of these factors affected his results.

It’s possible that people who are naturally inclined towards selfish, aggressive or dominant behaviour would find it easier to rationalise their actions if they felt that they were under the spell of testosterone. However, these personality traits weren’t any more common among the recruits who thought they were given testosterone than those who thought they had a placebo.

Instead, Eisenegger suggests that testosterone’s negative stereotype provided some of the women with a licence to misbehave. Their beliefs relieved them from the responsibility of making socially acceptable offers because they thought they would be driven to make greedy ones.

At first, this work seems to contradict the results from earlier studies, which suggest that high testosterone levels are linked with risk-taking, selfishness and aggression. But these studies can’t tell us whether the former causes the latter. Indeed, another randomised trial that I’ve blogged about before found that doses of testosterone didn’t affect a woman’s selflessness, trust, trustworthiness, fairness or attitude to risk. This study also used an Ultimatum game but it only analysed the behaviour of the responder rather than the proposer.

The alternative hypothesis says that testosterone plays a much subtler role in shaping our social lives. When our social status is challenged, testosterone drives us to increase our standing; how we do that depends on the situation. Traders might take bigger financial risks, while prisoners might have a dust-up.  Eisenegger thinks that this is the right explanation, and his results support his view. In his experiment, women who received testosterone would be more inclined towards acts that boosted their social status, and the best way of doing that was to make a fair offer.

The message from this study is clear, and Eisenegger sums it up best himself:

“Whereas other animals may be predominantly under the influence of biological factors such as hormones, biology seems to exert less control over human behaviour. Our findings also teach an important methodological lesson for future studies: it is crucial to control for subjects’ beliefs because the [effect of a pure substance] may be otherwise under- or overestimated.”

Reference: Nature doi:10.1038/nature08711




How to lose friends and alienate people by disrupting the brain

Oscar Wilde once said, “One can survive everything nowadays, except death, and live down anything, except a good reputation.” All well and witty, but for those of us who aren’t Victorian cads, reputation matters. It’s the bedrock that our social lives are built upon and people go to great lengths to build and maintain a solid one. A new study shows that our ability to do this involves the right half of our brain, and particularly an area called the lateral prefrontal cortex (PFC).

Disrupting the neurons in this area hampers a person’s ability to build a reputation while playing psychological games. They can still act selflessly, and they still know what they would need to do in order to garner good repute. They just find it difficult to resist the temptation to cheat, even though they know it will cost them their status among other players. Most of us know from personal experience that knowing what’s best for us is very different to acting on it – this study shows that this distinction exists at a neurological level.

Daria Knoch and colleagues from the University of Basel focused on the PFC because it’s a key player in mental abilities that centre on self-control, including planning, decision-making and attention. These “executive processes” must surely play a key part in building a good reputation, for doing so typically involves a cost (such as time, effort or money) and a tradeoff between current and future benefits. For example, I might return a dropped wallet so that I’ll be seen in a good light, rather than pocket the cash and be done with it.

Other studies have compared neural activity in the PFC with people’s behaviour, but these brain scans can’t tell us whether the activity caused the behaviour or vice versa. To do that, Knoch decided to take the PFC out of the game entirely. She used a technique called transcranial magnetic stimulation (TMS), where rapidly changing magnetic fields induce weak electric currents in specific parts of the brain, suppressing the buzz of the local neurons.

After going through this treatment, 87 volunteers played a “trust game” in pairs. In each round, an investor decides how many points (out of 10) to donate to a trustee. These are quadrupled, and the trustee decides how many of these to give back.  Some games were played anonymously and the investors never knew about the trustees’ decisions. With the investor in the dark, the trustees had no strategic incentive to return any points at all, and doing so is a measure of their selflessness.
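
A minimal sketch of one anonymous round may help (the 10-point investment cap and the quadrupling come from the description above; the example returns are hypothetical):

```python
def trust_round(invested, returned, endowment=10, multiplier=4):
    """One round: the points invested are quadrupled for the trustee,
    who then chooses how many points to send back."""
    investor = endowment - invested + returned
    trustee = multiplier * invested - returned
    return investor, trustee

# With no reputation at stake, returning anything is pure selflessness:
print(trust_round(invested=10, returned=20))  # equalised payoffs: (20, 20)
print(trust_round(invested=10, returned=0))   # hoarding trustee: (0, 40)
```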


In other games, the trustee’s last three decisions were public knowledge, and that brought reputation into play. The trustee could achieve a good reputation by equalising the shares or paying back even more, or shatter their credibility by paying back nothing or very little. The latter option nets big rewards in the short term, but the trustees needed to override their immediate self-interest for bigger gains in the long term. And the greater the initial investment, the more self-control the trustees needed, for the amount they had to return was greater.

This worked in practice. If trustees always equalised their payoffs, they had a 71% chance of being trusted with the full 10-point investment; if they gave nothing back, this probability fell to 6%. In the long run, those who always cooperated until the last hurdle earned 43% more points than constant cheats. And trustees cared about their reputation – when the game was anonymous, they sent back around a quarter of their investment, but if their status was on the line, they gave back 44%.

TMS didn’t affect the trustees’ choices in the anonymous games, or in the reputational ones if investments were low. But when big points were on the table, things changed. Targeting their right lateral PFC significantly reduced their likelihood of paying back the investors to 30%, down from 41% for a fake round of TMS, or 48% for a burst directed to the left brain.

In fact, the trustees whose right brains were targeted with TMS behaved in exactly the same way regardless of whether the investors knew about their choices or not. Anonymous or transparent, it didn’t matter – even though their reputation was on the line, their behaviour didn’t change.


Knoch also found that the TMS didn’t affect the volunteers’ perceptions of fairness. They knew that hoarding large investments was unfair, and they knew that if they did so, the investors would probably give them fewer points in the future. They knew all of this – they just couldn’t put it into useful practice. They couldn’t pass up the short-term gains of having lots of points in favour of earning even more in the long term – a basic skill when it comes to building a reputation.

Of course, the lateral PFC is probably only part of the story. It’s fashionable to try and discover the brain region “responsible for” different abilities or behaviours, but the PFC is no more the brain’s “reputation centre” than a steering wheel is a car’s “driving centre” – clearly other parts like the wheels, axle and engine help too. Knoch, more so than many neuroscientists, is aware of this and says, “In highly complex processes such as reputation formation, brain areas do not act in isolation, but rather must work together as a network.” Her next goal is to investigate how different parts of the brain interact when reputation is on the line.

Reference: PNAS doi:10.1073/pnas.0911619106




Carrots trump sticks for fostering cooperation

When it comes to encouraging people to work together for the greater good, carrots work better than sticks. That’s the message from a new study showing that rewarding people for good behaviour is better at promoting cooperation than punishing them for offences.

David Rand from Harvard University asked teams of volunteers to play “public goods games”, where they could cheat or cooperate with each other for real money. After many rounds of play, the players were more likely to work together if they could reward each other for good behaviour or punish each other for offences. But of these two strategies, the carrot was better for the group than the stick, earning them far greater rewards.
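
The study’s exact reward and punishment parameters aren’t given above, but a common design makes the group-level difference clear: a reward transfers new value into the group, while a punishment destroys value. A sketch, assuming an illustrative pay-1-to-give-or-take-3 rule:

```python
def targeted_interaction(payoffs, actor, target, kind, cost=1, effect=3):
    """After a public goods round, `actor` pays `cost` either to hand
    `target` an extra `effect` points (reward) or to destroy `effect`
    of their points (punishment)."""
    payoffs = list(payoffs)
    payoffs[actor] -= cost
    payoffs[target] += effect if kind == "reward" else -effect
    return payoffs

round_payoffs = [16, 16, 16, 16]  # example payoffs after a cooperative round
print(sum(targeted_interaction(round_payoffs, 0, 1, "reward")))  # 66
print(sum(targeted_interaction(round_payoffs, 0, 1, "punish")))  # 60
```

Only the carrot grows the group’s total pie, which is one intuition for why rewarding groups could end up richer overall.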

Public goods games, albeit in a more complex form, are part and parcel of modern life. We play them when we decide to take personal responsibility for reducing carbon emissions, or rely on others to do so. We play them when we choose to do our share of the household chores, or when we rely on our housemates or partners to sort it out.

These sorts of games are useful for understanding our tendency to help unrelated strangers even if we stand to gain nothing in return. The big question is why such selflessness exists when altruists can be easily exploited by cheats and slackers, who reap common benefits without contributing anything of their own. How does selflessness persist in the face of such vulnerabilities?



Do testosterone and oestrogen affect our attitudes to fairness, trust, risk and altruism?

Some people go out of their way to help their peers, while others are more selfish. Some lend their trust easily, while others are more suspicious and distrustful. Some dive headlong into risky ventures; others shun risk like visiting in-laws. There’s every reason to believe that these differences in behaviour have biological roots, and some studies have suggested that they are influenced by sex hormones, like testosterone and oestrogen.

It’s an intriguing idea, not least because men and women have very different levels of these hormones. Could that explain differences in behaviour between the two sexes? Certainly, several studies have found links between people’s levels of sex hormones and their behaviour in psychological experiments. But to Niklas Zethraeus and colleagues from the Stockholm School of Economics, this evidence merely showed that the two things were connected in some way – it wasn’t strong enough to show that sex hormones directly influence behaviour.

To get a clearer answer, Zethraeus set up a clinical trial. He recruited 200 women between 50 and 65 years of age and randomly split them into three groups – one took oestrogen tablets, the second took testosterone tablets and the third simply took sugar pills.

After four weeks of tablets, the women took part in a suite of psychological games, where they had the chance to play for real money. The games were designed to test their selflessness, trust, trustworthiness, fairness and attitudes to risk. If sex hormones truly changed these behaviours, the three groups of women would have played the games differently. They didn’t.

Their hormone levels had changed appropriately. At the end of the four weeks, the group that dosed up on oestrogen had about 8 times more of it than they did at the start, but normal levels of testosterone. Likewise, the testosterone-takers had 4-6 times more testosterone and free testosterone (the “active” fraction that isn’t attached to any proteins) but normal levels of oestrogen. The sugar-takers weren’t any different. Despite these changes, the women didn’t play the four psychological games any differently.



Our moral thermostat – why being good can give people licence to misbehave

What happens when you remember a good deed, or think of yourself as a stand-up citizen? You might think that your shining self-image would reinforce the value of selflessness and make you more likely to behave morally in the future. But a new study disagrees.

Through three psychological experiments, Sonya Sachdeva from Northwestern University found that people who are primed to think well of themselves behave less altruistically than those whose moral identity is threatened. They donate less to charity and they become less likely to make decisions for the good of the environment.

Sachdeva suggests that the choice to behave morally is a balancing act between the desire to do good and the costs of doing so – be they time, effort or (in the case of giving to charities) actual financial costs. The point at which these balance is set by our own sense of self-worth. Tip the scales by threatening our saintly personas and we become more likely to behave selflessly to cleanse our tarnished self-image. Do the opposite, and our bolstered moral identity slackens our commitment, giving us licence to act immorally. Having established our persona as a do-gooder, we feel less impetus to bear the costs of future moral actions.

It’s a fascinating idea. It implies both that we have a sort of moral thermostat, and that it’s possible for us to feel “too moral”. Rather than a black-and-white world of heroes and villains, Sachdeva paints a picture of a world full of “saintly sinners and sinning saints”.



Globalisation increases cooperation at an international scale

I live in London. According to Google Analytics, 96% of this blog’s readers make their homes in a different city and 91% live in another country altogether. The fact that most of you are reading this post at all is a symptom of the globalised state of the 21st century.

Through telecommunications, the Internet, free trade, air travel and more, the world’s population is becoming increasingly connected and dependent on one another. And as this happens, the problems that face us as a species are becoming ever more apparent, from our relentless overuse of natural resources to the threat of climate change. But how will globalisation affect our ability to handle these problems? Will it see the cliquey side of human behaviour writ large, or the rise of cooperation on a global scale?

Opinions differ. Some say that globalisation makes the differences between ethnic or geographical groups even starker, strengthening the lines between them. This bleak viewpoint suggests that exposing people to an ever greater variety of world views only reinforces xenophobia. And indeed, recent decades have seen a surge in xenophobic political parties and states seeking independent status.

Others take a more optimistic stance, arguing that in a globalised world, people are more likely to find a sense of common belonging and concepts of ethnicity or nationhood become less relevant. After all, recent decades have also seen an increase in foreign aid to developing countries and human rights campaigns.

Nancy Buchan from the University of South Carolina has used a clever psychological game to show that the latter perspective is stronger. Her group recruited volunteers from six countries across five continents and asked them to play a game where they could cooperate with each other at local or global levels. She found that people who were more connected internationally, or who came from more globalised countries, were more likely to work together at a global level. Globalisation, it seems, breeds cosmopolitan attitudes, not insular ones.



Saucy study reveals a gene that affects aggression after provocation

People seem inordinately keen to cast nature and nurture as imagined adversaries, but this naive view glosses over the far more interesting interactions between the two. These interactions between genes and environment lie at the heart of a new study by Rose McDermott from Brown University, which elegantly fuses two of my favourite topics – genetic influences on behaviour and the psychology of punishment.

Regular readers may remember that I've written three previous pieces on punishment. Each was based on a study that used clever psychological games to investigate how people behave when they are given a choice to cooperate with, cheat, or punish their peers.

McDermott reasoned that the way people behave in these games might be influenced by the genes they carry and especially one called monoamine oxidase A (MAOA), which has been linked to aggressive behaviour. Her international team of scientists set out to investigate the effect that different versions of MAOA would have in a real situation, where people believe that they actually have the chance to hurt other people.

MAOA encodes a protein that helps to break down a variety of signalling chemicals in the brain, including dopamine and serotonin. It has been saddled with the tag of “warrior gene” because of its consistent link with aggressive behaviour. A single fault in the gene, which leads to a useless protein, was associated with a pattern of impulsive aggression and violent criminal behaviour among the men of a particular Dutch family. Removing the gene from mice makes them similarly aggressive.

These are all-or-nothing changes, but subtler variations exist. For example, there is a high-activity version of the gene (MAOA-H), which produces lots of enzyme and a low-activity version (MAOA-L), which produces very little. The two versions are separated by changes in the gene’s “promoter region”, which controls how strongly it is activated.

A few years ago, British scientists found that children who had been abused were less likely to develop antisocial problems if they carried the high-activity MAOA-H version than if they bore the low-activity MAOA-L one. An Italian group has since found the same thing. It is a truly fascinating result, for it tells us that the MAOA gene not only affects a person’s behaviour, but also their reactions to other people’s behaviour.

But both studies had a big flaw – they measured aggression by asking people to fill in a questionnaire. Essentially, they relied on people to accurately say how belligerent they are and we all know that many people like to talk big. McDermott wanted to look at actions not claims.

To that end, she recruited 78 male volunteers and sequenced their MAOA gene to see which version they carried (just over a quarter had the low-activity version). The volunteers played out a scenario where they believed that they could actually physically harm another person for taking money that they had earned. Their weapon of retribution? Spicy sauce.



Why punishment is worth it in the end

Is punishment a destructive force that breaks societies or part of the very glue that holds them together? Last year, I blogged about two studies that tried to answer this question using similar psychological games. In both, volunteers played with tokens that were eventually exchanged for money. They had the option to either cooperate with each other so that the group as a whole reaped the greatest benefits, or cheat and freeload off the efforts of their peers.

In both studies, giving the players the option to punish each other soon put an end to most cheating. Faced with the threat of retaliation, most players behaved themselves and levels of cooperation remained stable. But this collaboration came at a heavy cost – in both cases, players ended up poorer for it. Indeed, one of the papers was titled “Winners don’t punish”, and its authors concluded, “Winners do not use costly punishment, whereas losers punish and perish.”

But in both these cases, the experiments lasted no more than 10 ‘rounds’ in total, and to Simon Gächter, that was too short. He reasoned that more protracted games would more accurately reveal the legacy of punishment, and more closely reflect the pressures that social species might experience over evolutionary time spans. With a longer version of the games used in previous studies, he ably demonstrated that in the long run, if punishment is an option, both groups and individuals end up better off.

Together with colleagues from the University of Nottingham, Gächter recruited 207 people and watched as they played a “public goods game” in groups of three. All of them were told that their group would remain the same for the entire game, which could last for either ten rounds or fifty.



Why do people overbid in auctions?

The art of auctioning is an ancient one. The concept of competitively bidding for goods has lasted from Roman times, when spoils of war were divvied up around a planted spear, to the 21st century, when the spoils of the loft are sold through eBay. But despite society’s familiarity with the concept, people who take part in auctions still behave in a strange way – they tend to overbid, offering more money than they actually think an object is worth.
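
One way to see the trade-off behind overbidding is a toy first-price auction against a random rival: higher bids win more often but earn less per win. This simulation is purely illustrative – the uniformly random rival and the specific numbers are assumptions, not details from the study below:

```python
import random

def expected_profit(value, bid, trials=100_000):
    """Monte Carlo estimate: the rival's bid is uniform on [0, value];
    we win an item worth `value` at price `bid` whenever we outbid them."""
    wins = sum(bid > random.uniform(0, value) for _ in range(trials))
    return (value - bid) * wins / trials

for bid in (40, 50, 60, 80):  # we value the item at 100
    print(bid, round(expected_profit(100, bid), 1))
# Higher bids win more often, but expected profit peaks at a bid of 50;
# bidding 80 nearly guarantees a win yet earns far less on average.
```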

Some economists have suggested that people overbid because they are averse to risk. They would rather spend more money to be sure of a win than risk making a steal by gambling with a low bid. Others have suggested that it’s the element of competition that drives people to overbid – the joy of winning is what they’re after. Now, Mauricio Delgado and colleagues from Rutgers University have provided new evidence to show that neither theory is right.

With a combination of brain-scanning and psychological games, they have found that economists who suggested a social competition angle were moving along the right lines. But it’s not the joy of winning that’s important – it’s the fear of losing. People cough up too much because of simple social competition.

Delgado’s team (which included Elizabeth Phelps, whose work I have blogged about before) used a brain-scanning technique called functional magnetic resonance imaging (fMRI) to study the brains of 17 volunteers as they played two games – a two-player auction and a single-player lottery.




Children learn to share by age 7-8

Yesterday, I wrote about selfless capuchin monkeys, who find personal reward in the act of giving to other monkeys. The results seemed to demonstrate that monkeys are sensitive to the welfare of their peers, and will make choices that benefit others without any material gain for themselves. Today, another study looks at the same processes in a very different sort of cheeky monkey – human children.

Humans are notable among animals for our vast capacity for cooperation and empathy. Our concern about the experiences of other people, and our natural aversion to unfair play, are the bedrocks on which our societies and moral codes are built. But are we born with this penchant for equality or does it develop as we grow up?

To find out, Ernst Fehr from the University of Zurich played a series of three decision-making games with 229 children between the ages of 3 and 8. The study used similar methods to those employed by Frans de Waal in his experiments on capuchins, but with some notable differences. For a start, the choices were anonymous. In each trial, a child had to decide between two ways of distributing sweets between themselves and a second child, who was only ever represented by a photo.



Why cooperation is hard for people with borderline personality disorder

Social lives are delicate things. We’ve all had situations where friendships and relationships have been dented and broken, and we’re reasonably skilled at repairing the damage. This ability to keep our social ties from snapping relies on being able to read other people, and on knowing a thing or two about what’s normal in human society.

For instance, we appreciate that cheating fosters ill-will, while generosity can engender trust. So cheaters might try to win back their companions with generous gestures. These little exchanges are the glue that binds groups of people into happy and cooperative wholes. Now, a new study uses psychological games and brain scans to show what happens when they go amiss.

Brooks King-Casas at Baylor College of Medicine used a simple game to compare the social skills of healthy volunteers with those of people diagnosed with a psychiatric condition called borderline personality disorder (BPD). People with BPD show erratic mood swings and find it difficult to trust and understand the motives of others. As a result, their personal relationships with friends, colleagues and partners are often fraught.

So it was in the games. Each one was played by two players, an investor and a trustee, over the course of 10 rounds. The investor was given a princely sum of $20 and could split as much of it as they liked with the trustee. This investment was tripled, and the trustee could then decide how much to return to the investor. Trust and cooperation are essential if both players are to benefit. The investor can make the most money by trusting the trustee with a sizeable share, on the assumption that some of it will find its way back. If the trustee violates that agreement, they are likely to get smaller investments in future rounds.
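
A quick sketch of the arithmetic shows why trust pays in this game (the $20 endowment and the tripling come from the description above; the example amounts are hypothetical):

```python
# One round of the trust game, with illustrative amounts:
endowment, invested = 20, 20               # the investor entrusts everything
pot = 3 * invested                         # tripled to 60 for the trustee
returned = pot // 2                        # a cooperative trustee repays 30
investor_payoff = endowment - invested + returned  # 30, up from 20
trustee_payoff = pot - returned                    # 30, up from 0
# If the trustee hoards instead (returned = 0), the investor learns to
# invest less, and both players' future earnings shrink.
```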



Winners don’t punish: “Punishing slackers Part 2”

Two weeks ago, I wrote about a Science paper which looked at the effects of punishment in different societies across the world. Through a series of fascinating psychological experiments, the paper showed that the ability to punish freeloaders stabilises cooperative behaviour, bringing out the selfless side in people by making things more difficult for cheaters. The paper also showed that ‘antisocial punishment’, where the punished seek revenge on the punishers, derails the high levels of cooperation that other fairer forms of punishment help to entrench.

Now a new study published in that other minor journal Nature adds another twist to the story. In it, Anna Dreber, Martin Nowak and colleagues from Harvard University confirm that groups of people are indeed more likely to cooperate if they can dole out punishment, but they also reap smaller rewards. In Dreber’s experiments, the groups that left with the highest payoffs were those that shunned punishment completely. It’s a conclusion best summed up by the stark and simple title of their paper: “Winners don’t punish.”

Dreber revealed the dark side of punishment by modifying one of the classic experiments of game theory – the Prisoner’s Dilemma. Inspired by the plight of separately interrogated prisoners, the game pits two players against each other, each with a choice to cooperate or defect. For each ‘prisoner’, the best choice no matter what his partner does is to defect, but if they both defect, their outcomes are far poorer than if they had both cooperated – hence the dilemma.
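
The structure of the dilemma is easiest to see in numbers. The payoff values below are a standard textbook example, not figures from Dreber’s paper:

```python
# (my payoff, partner's payoff), indexed by (my move, partner's move)
PAYOFFS = {
    ("C", "C"): (3, 3),  # mutual cooperation
    ("C", "D"): (0, 5),  # I cooperate, partner defects: I'm exploited
    ("D", "C"): (5, 0),  # I defect, partner cooperates: I exploit
    ("D", "D"): (1, 1),  # mutual defection
}

# Whatever the partner does, defecting pays more (5 > 3 and 1 > 0),
# yet mutual defection (1, 1) is far worse than mutual cooperation (3, 3).
for partner in ("C", "D"):
    print(partner, PAYOFFS[("C", partner)][0], PAYOFFS[("D", partner)][0])
```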