Why a new case of misconduct in psychology heralds interesting times for the field

By Ed Yong
June 26, 2012

Photo by: Ilan Adler, www.Putchka.com

[Update: The mysterious avenger has been revealed as Uri Simonsohn. He is one of the co-authors on the Simmons paper that I wrote about below.]

Social psychology is not having the best time of it. After last year’s scandal in which rising star Diederik Stapel was found guilty of scientific fraud, Dirk Smeesters from Erasmus University has now also been found guilty of misconduct. Here’s Ivan Oransky, writing in Retraction Watch:

“According to an Erasmus press release, a scientific integrity committee found that the results in two of Smeesters’ papers were statistically highly unlikely. Smeesters could not produce the raw data behind the findings, and told the committee that he cherry-picked the data to produce a statistically significant result. Those two papers are being retracted, and the university accepted Smeesters’ resignation on June 21.”

The notable thing about this particular instance of misconduct is that it wasn’t uncovered by internal whistleblowers, as were psychology’s three big fraud cases – Diederik Stapel (exposed in 2011), Marc Hauser (2010) and Karen Ruggiero (2001). Instead, Smeesters was found out because someone external did some data-sleuthing and deemed one of his papers “too good to be true”. Reporting for ScienceInsider, Martin Enserink has more details:

“The whistleblower contacted Smeesters himself last year, the report says; Smeesters sent him a data file, which didn’t convince his accuser…. In its report sent to ScienceInsider, the whistleblower’s name is redacted, as are most details about his method and names of Smeesters’s collaborators and others who were involved. (Even the panel members’ names are blacked out, but a university spokesperson says that was a mistake.) The whistleblower, a U.S. scientist, used a new and unpublished statistical method to search for suspicious patterns in the data, the spokesperson says, and agreed to share details about it provided that the method and his identity remain under wraps.”

This might seem like a trivial difference, but I don’t think it could be more important. If you can root out misconduct in this way, through the simple application of a statistical method, we’re likely to see many more such cases.


Greg Francis from Purdue University has already published three analyses of previous papers (with more to follow), in which he used statistical techniques to show that published results were too good to be true. His test looks for an overabundance of positive results given the nature of the experiments – a sign that researchers have deliberately omitted negative results that didn’t support their conclusion, or massaged their data in a way that produces positive results. When I spoke to Francis about an earlier story, he told me: “For the field in general, if somebody just gives me a study and says here’s a result, I’m inclined to believe that it might be contaminated by publication bias.”
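To make the logic concrete, here is a minimal sketch of how an excess-success check of this kind can work. It is my own illustration, not Francis’s actual method or code; the power estimates are placeholders, and the 0.1 threshold is only a rough echo of the criterion used in his published analyses.

```python
# A rough sketch of the logic behind an excess-success ("too good to be true")
# test. This is NOT Francis's actual code; all numbers below are placeholders.
# Idea: if a paper reports n experiments and every one is statistically
# significant, the chance of that happening is roughly the product of each
# experiment's estimated power. A very small joint probability suggests that
# failed experiments were left out or the data were massaged.

def joint_success_probability(powers):
    """Probability that every experiment succeeds, given each one's power."""
    p = 1.0
    for power in powers:
        p *= power
    return p

# Hypothetical post-hoc power estimates for four reported experiments.
estimated_powers = [0.55, 0.60, 0.50, 0.58]

p_all = joint_success_probability(estimated_powers)
print(f"P(all {len(estimated_powers)} experiments significant) = {p_all:.3f}")

# Francis's published analyses flag a set of results when this joint
# probability falls below roughly 0.1.
if p_all < 0.1:
    print("A run of uniformly positive results this unlikely is a warning sign.")
```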

Francis has reason to be suspicious, because such behaviour is surprisingly common. This is another notable point about the Smeesters case. He didn’t fabricate data entirely in the way that Stapel did. As one of his co-authors writes, “Unlike Stapel, Dirk actually ran studies.” Instead, he was busted for behaviour that many of his peers wouldn’t consider to be that unusual. He even says as much. Again, from Enserink’s report:

“According to the report, Smeesters said this type of massaging was nothing out of the ordinary. He “repeatedly indicates that the culture in his field and his department is such that he does not feel personally responsible, and is convinced that in the area of marketing and (to a lesser extent) social psychology, many consciously leave out data to reach significance without saying so.”

He’s not wrong. Here’s what I wrote about this in my feature on psychology’s bias and replication problems for Nature:

“[Joseph Simmons] recently published a tongue-in-cheek paper in Psychological Science ‘showing’ that listening to the song When I’m Sixty-four by the Beatles can actually reduce a listener’s age by 1.5 years [7]. Simmons designed the experiments to show how “unacceptably easy” it can be to find statistically significant results to support a hypothesis. Many psychologists make on-the-fly decisions about key aspects of their studies, including how many volunteers to recruit, which variables to measure and how to analyse the results. These choices could be innocently made, but they give researchers the freedom to torture experiments and data until they produce positive results. [Note: one of the co-authors behind this study, Uri Simonsohn, has now been revealed as the whistleblower in the Smeesters case – Ed, 28/07/12, 1400 GMT]

In a survey of more than 2,000 psychologists, Leslie John, a consumer psychologist from Harvard Business School in Boston, Massachusetts, showed that more than 50% had waited to decide whether to collect more data until they had checked the significance of their results, thereby allowing them to hold out until positive results materialize. More than 40% had selectively reported studies that “worked” [8]. On average, most respondents felt that these practices were defensible. “Many people continue to use these approaches because that is how they were taught,” says Brent Roberts, a psychologist at the University of Illinois at Urbana–Champaign.”
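That first practice, peeking at the data and only then deciding whether to collect more, is easy to simulate. The sketch below is my own illustration, not taken from the survey or from the Simmons paper, and the sample sizes are arbitrary: both groups are drawn from the same distribution, so every “significant” result is a false positive, yet optional stopping pushes the false-positive rate well above the nominal 5%.

```python
# My own illustration (not from the survey or the Simmons paper) of why
# optional stopping inflates false positives. Both groups come from the SAME
# distribution, so any significant difference is a false positive. One analyst
# runs a single test at n=40 per group; the other checks after every 10 extra
# participants and stops as soon as p < .05, up to n=100 per group.
# All sample sizes and the number of simulations are arbitrary choices.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
N_SIMULATIONS = 2000
ALPHA = 0.05

def single_look(rng, n=40):
    a = rng.normal(size=n)
    b = rng.normal(size=n)
    return stats.ttest_ind(a, b).pvalue < ALPHA

def optional_stopping(rng, start=10, step=10, max_n=100):
    a = list(rng.normal(size=start))
    b = list(rng.normal(size=start))
    while True:
        if stats.ttest_ind(a, b).pvalue < ALPHA:
            return True   # report a "significant" effect and stop collecting
        if len(a) >= max_n:
            return False  # give up
        a.extend(rng.normal(size=step))
        b.extend(rng.normal(size=step))

fixed_rate = np.mean([single_look(rng) for _ in range(N_SIMULATIONS)])
peeking_rate = np.mean([optional_stopping(rng) for _ in range(N_SIMULATIONS)])

print(f"False-positive rate, single fixed-n test: {fixed_rate:.3f}")   # close to 0.05
print(f"False-positive rate, optional stopping:   {peeking_rate:.3f}") # well above 0.05
```

Under this setup the peeking analyst’s false-positive rate comes out well above the nominal 5%, which is the “unacceptably easy” point the Simmons paper was making.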

I look at the Smeesters case and wonder if it’s just the first flake of the avalanche. If psychologists are developing the methodological tools to root out poor practices that are reportedly commonplace, and if it is clear that such behaviour is worthy of retraction and resignation, there may be very interesting times ahead.

Image by Chagai
