People don’t know when they’re lying to themselves

“I am on a drug. It’s called Charlie Sheen. It’s not available because if you try it, you will die. Your face will melt off and your children will weep over your exploded body.” – Charlie Sheen

“We put our fingers in the eyes of those who doubt that Libya is ruled by anyone other than its people.” – Muammar Gaddafi

You don’t have to look far for instances of people lying to themselves. Whether it’s a drug-addled actor or an almost-toppled dictator, some people seem to have an endless capacity for rationalising what they did, no matter how questionable. We might imagine that these people really know that they’re deceiving themselves, and that their words are mere bravado. But Zoe Chance from Harvard Business School thinks otherwise.

Using experiments where people could cheat on a test, Chance has found that cheaters not only deceive themselves, but are largely oblivious to their own lies. Their ruse is so potent that they’ll continue to overestimate their abilities in the future, even if they suffer for it. Cheaters continue to prosper in their own heads, even if they fail in reality.

Chance asked 76 students to take a maths test, half of whom could see an answer key at the bottom of their sheets. Afterwards, they had to predict their scores on a second, longer test. Even though they knew that they wouldn’t be able to see the answers this time round, they imagined higher scores for themselves (81%) if they had the answers on the first test than if they hadn’t (72%). They might have deliberately cheated, or they might have told themselves that they were only looking to “check” the answers they knew all along. Either way, they had fooled themselves into thinking that their strong performance reflected their own intellect, rather than the presence of the answers.

And they were wrong – when Chance asked her recruits to actually take the hypothetical second test, neither group outperformed the other. Those who had used the answers the first time round were labouring under an inflated view of their abilities.

Chance also found that the students weren’t aware that they were deceiving themselves. She asked 36 fresh recruits to run through the same scenario in their heads. Those who imagined having the answers predicted that they’d get a higher score on the first test, but not that they would also expect a better score on the second. They knew that they would cheat the test, but not that they would cheat themselves.

Some people are more prone to this than others. Before the second test, Chance gave the students a questionnaire designed to measure their capacity for deceiving themselves. The “high self-deceivers” not only predicted that they would get better scores in the second test, but they were especially prone to “taking credit for their answers-aided performance”.

These experiments are part of a rich vein of psychological studies, which show just how easy it is for people to lie to themselves. In a previous (and smaller) study, Chance herself asked 23 men to choose between two fake sports magazines, one with broader coverage and one with more features. She found that the volunteers would pick whichever one was accompanied by a special swimsuit cover, but they cited the coverage or features as the reason for their choice (Chance even titled her paper “I read Playboy for the articles”).

In 2004, Michael Norton (who worked with Chance on the latest study) showed that people can explain away biases in recruitment choices just as easily. He asked male volunteers to pick male or female candidates for the position of construction company manager. For some of the recruiters, the male candidate had more experience but poorer education, and for others, he had better education but less experience. In both cases, the recruiters preferred the male applicant, and they cited whichever area he was strongest in as the deciding factor. Norton found the same trends in racial biases in college admissions.

In these cases, it’s debatable whether the volunteers were actually lying to themselves, or merely justifying their choices to the researchers. But Chance addressed that problem in her latest study by putting money on the line. In a variant of the same experiment, she told a new batch of recruits that they could earn up to $20 depending on their score on the second test and how accurately they predicted that score. Despite the potential reward, the group that saw the answers were no better at predicting their scores. And as a result, they earned less money. Even when there was an actual reward at stake, they failed to correct for their self-deception.

Things get even worse when people are actually rewarded for cheating. In a final experiment, Chance gave some of the students a certificate of recognition, in honour of their above-average scores. And if students saw the answers on the first test and got the certificate, they predicted that they would get even higher scores on the second. Those who didn’t see the answers the first time round were unmoved by the extra recognition.

This final result could not be more important. Cheaters convince themselves that they succeed because of their own skill, and if other people agree, their capacity for conning themselves increases. Chance puts it mildly: “The fact that social recognition, which so often accompanies self-deception in the real world, enhances self-deception has troubling implications.”

This tells us a little about the mindset of people who fake their research, who build careers on plagiarised work or who wave around spurious credentials. There’s a tendency to think that these people know full well what they’re doing and go through life with a sort of Machiavellian glee. But the outlook from Chance’s study is subtler.

She showed that even though people know that they occasionally behave dishonestly, they don’t know that they can convincingly lie to themselves to gloss over these misdeeds. Their scam is so convincing that they don’t know that they’re doing it. As she writes, “Our findings show that people not only fail to judge themselves harshly for unethical behaviour, but can even use the positive results of such behaviour to see themselves as better than ever.”

Reference: Chance, Norton, Gino & Ariely. 2011. Temporal view of the costs and benefits of self-deception. PNAS http://dx.doi.org/10.1073/pnas.1010658108

24 thoughts on “People don’t know when they’re lying to themselves”

  1. Love the photos. Sheen is the one on the right, correct? They’re both in the news so much lately I’m beginning to mix them up…


  2. My friend and collaborator Bill von Hippel has just published a review together with Bob Trivers in which they discuss the evolution of self-deception. It’s in Behavioral and Brain Sciences, and should be coming out any day now – with commentaries from a variety of luminaries in the field.

    I’m hoping to write about it myself at http://www.robbrooks.net in a month or so.

  3. Great post, Ed. I sort of feel bad for the cheaters for being so oblivious. I wonder what kind of effect knowledge of a cheater’s self-deception has on one’s desire to punish the offender. Is a liar perceived as less bad if he is lying to himself as well, as opposed to purposely deceiving others while retaining the truth?

  4. You’re lying to yourself right now. When was the last time you thought about cosmic radiation destroying the Earth?

    The brain/psyche is an organ of delusion and denial….and that’s a good thing.

  5. I am sorry, but O(100) is a tiny sample size (and “I read Playboy for the articles” had a total of 23 participants…). And these papers are published and taken seriously?

    I think there is some irony here about making such grand conclusions from small statistic papers and our capacity for self-delusion….

  6. However, it does neatly illustrate the point I think Heinlein made, in that humanity is not a rational creature, but a rationalizing one. People will make decisions based on emotion, and then come up with “rational” reasons for that decision.

  7. And JMW, you just illustrated why we should stop with these small statistics pronouncements. It doesn’t illustrate anything — it gives erroneous reinforcement with preconceived notions (such as the one about rational/irrational human beings, which you are ready to cite as agreeing with the results).

    Statistics have rules. If you want to use statistics to illustrate a point, you follow the rules. The rules say that you need a much larger sample size than the embarrassingly puny numbers. The rules are there so we don’t let our emotions delude our judgement. Which is the irony of the whole thing.

  8. I had an uncle, long dead now, who was famous (within the family you understand) for lying. I think there was grudging admiration for his capability this way; once you knew what the situation was, there was no real conflict anymore. I was a child and all this went far over my head.

    Anyhow the adults in the family always said that the key to his ability was this: He believed it. Somehow he talked himself into believing the lie and that made him convincing to others!

  9. If there’s one thing that’s worse than small sample sizes, it’s lazy statistical quibbling. Let’s not forget that in this study, the original experiment (whether people with the answers overestimate their scores on a second test) was replicated four times, with different people and significant results on each. If the argument is that the sample sizes are small and these results could be random, then that’s some pretty systematic randomness you’ve got there.

  10. “If there’s one thing that’s worse than small sample sizes, it’s lazy statistical quibbling.”

    No, it’s bad statistics, plain and simple. Each study was *not* replicated four times. There were 4 different studies that had approximately similar approaches, but the variables were changed (money was offered, certificates given, etc.). So it’s a bit “deceptive” to state such a thing, but according to these studies you’ll disagree, of course.

    I’ll give you an example. You’re a biological researcher looking for a particular type of fungus in a forest. You sample 25 trees on the edge of the forest, and find no trace of the fungus. Ah-ha! No fungus! Let’s say another researcher doesn’t like your results, so he goes and replicates the exact same test on 25 trees of the same species, at the edge of a forest. Again, no special fungus is found, reinforcing the experiment. This occurs several more times, and finally, the researchers say “Collectively we’ve sampled 125 trees of this species, and found no fungus — therefore, this fungus does not occur on these kinds of trees.” The problem? If they had sampled trees inside the forest (the population), rather than those on the fringe, they would have likely found the photo-sensitive fungus they were looking for.

    Key point: replication of bad statistical methods does not ever equal good statistical sampling. Reporting on bad statistical methods and hailing it as “scientific” makes it even worse.

  11. What I think is important to recognize in this is that people are not in control of their thoughts. I imagine one of the substantial contributing factors in the cheaters deceiving themselves is simply listening to the thoughts that arise in the mind, and believing that these thoughts ARE themselves, that the thoughts are theirs, that they are controlling their thoughts, and thus having no problem with believing the justifications of cheating and following through with that in action. Ego, the part of the psyche which calls itself ‘I’ and believes itself separate from everything else, will gladly label arising thoughts as its own. In actuality a thought arises from a set of circumstances, internal, external or a mixture of both. Thoughts are not independent from everything else; they too have a cause. This cause can no more be labeled as one’s own than the cause of the moon waxing and waning. I would be interested in knowing the amount of energy and power given to the ego in the individuals who cheated and justified cheating to themselves. I would not be surprised to find that the ego was more developed in those who had cheated than in those who had not.

  12. Ben Fauxnom

    I don’t understand whether your argument is that the sample size was small or that the methods were borked. You say at the end that the replication of bad statistical methods doesn’t equal good statistical sampling, but since you decided to use an illustrative fable rather than describing what their bad statistical methods are, I can’t identify your specific objection. Disclaimer: I only glanced at the original paper and it’s way outside my field, but the p-values are significant, the magnitude of effect of the self-estimation error seems pretty large, and the relationship held over similar experiments. I’m asking out of genuine curiosity – I’m a grad student without any statistics background, and I’ve just started to kind of get into some of the arguments about data analysis while trying to report my own very different data. What standards do you believe the authors violated?

  13. Ed,

    I am sorry that you are even trying to defend the studies. 4 “different” times, with different variables and different conditions (I can think of a few: how about the students are all from the same class and just had a lecture about psychology?). You use the word “significant” too loosely. “Significance” is a statistical statement, and it scales with √N. And here N is tiny.

    And worse still, instead of trying to argue sensibly, you decide to employ ad hominem attacks. That’s terrible, to be honest.


  14. @Eugene Says…who in their right mind believes statistical data to begin with, when they are so often made up anyway?

  15. Have there been studies where confronting liars mitigates the self-delusional effect? I’d imagine it would have to come from someone the liar respects. Also, if the self-deception magnifies one’s self image, then doesn’t this visualization of greater success actually lead to said success on qualitative goals? Obviously, it would do nothing for quantitative tests. Having been in corporate America for the last 13 years, it may explain how some people, less qualified, have made it so far up the ladder.

    Though my statistics is rusty, especially about the null hypothesis for statistical significance, I do think there’s enough correlation across studies to acknowledge a definite trend worthy of further study. Maybe Dan Ariely can take this up in his next book on irrationality. You listening Dan?

  16. Does this work in reverse? If you are made to do badly at something, even if you know the cause, will you then under-estimate your ability in subsequent tests?

  17. Humanity is the biggest sampling of all — I’m pretty sure almost everyone knows SOMEONE with an extraordinary capacity for self-delusion. This study is a fascinating microcosm of how it works on a smaller scale. Charlie Sheen and Muammar Gaddafi are two good examples of how far a little (or rather a lot) of self-delusion can propel an otherwise perfectly irrational person. Would like to see a follow-on from this study, and maybe they could try out @Wumar’s query about reversing it.

  18. Boy, that dude Gaddafi looks crazier to me every time I see him in a picture. I don’t believe Charlie Sheen thinks he really is on a drug called Charlie Sheen and that if anybody tries it their face is going to melt off. I think it was just something he said at one time. Actually, I really don’t know why these two are even being compared with each other; they’re not at all the same type of person. As far as the cheating goes, I believe that if you give a person a chance to cheat, more than likely they will do it, because that is the way people are. Naturally, if a person is taking a test and the answers are right there, they are going to be tempted to look, because it is human nature. No one is perfectly honest all the time. There are those, however, who take cheating and lying to new, unacceptable levels. I have an uncle in my family who, over the years, became famous for lying. He even began to believe he was telling the truth when it was obvious he was not. It’s a sad day when a person lies so much they can’t even tell whether they are telling the truth or not. I think this comes from a loss of grip on reality, and a mental problem.

  19. Every good liar KNOWS that step 1 is to convince YOURSELF first.

    Forcing people “to believe” X (argumentum ad baculum) is excellent liar training.

    If you can’t believe that something you know is false is true, you’re not going to be a convincing – good – liar.

    It’s like method acting plus selective amnesia.
