How many people are "not everyone"? Some thoughts on scientific debates and smackdowns

You may have heard about the new paper on how people tend to pick friends who carry a similar gene variant. If true, it would be very cool. But in Nature, Amy Maxmen quotes scientists who don’t like the study at all:

“If this was a study looking for shared genes in patients with diabetes, it would not be up to the standards of the field,” says David Altshuler, a geneticist at the Broad Institute in Cambridge. “We set these standards after 10 years of seeing so many irreproducible results in gene-association studies.”

Because most genes have modest effects on behaviour or health, many scientists assume that thousands of SNPs — rather than six — need to be analysed before a correlation to any trait can be confidently made. Geneticists are often hard-pressed to find one SNP in a million that reproducibly correlates with a disease, says Altshuler. “It’s like the team bought six lottery tickets and won the megabucks twice — this is not how things work.”

Stanley Nelson, a human geneticist at the University of California, Los Angeles, agrees, adding: “It certainly is a provocative study — I would have loved to have seen it done with information from the rest of the genome.”

Kudos to Maxmen for digging deep rather than reprinting a press release. But I found it odd that these critics were relegated to the end of the article, introduced with the following:

But not everyone is convinced.

I find this an annoyingly vague phrase. If a hundred experts read a paper and pass judgment, what does it mean for “not everyone” to be convinced? It sounds like it could mean 99 out of 100 love it, or 0 out of 100, or anywhere in between. If the quotations in the article are a representative sample, “not everyone” means “almost no one.”  In addition to Altshuler and Nelson, Maxmen quoted one other scientist who said it was an interesting study, “assuming it’s right.” Hardly a ringing endorsement. If almost no one thinks much of the paper, shouldn’t that be the lead?

Maxmen faces a challenge for which there is no simple solution I know of. How does one report research? If it’s gone through peer review, is it enough to just explain the results and wait until other scientists test them? Or does one seek out a criticism from an expert in the field, simply to demonstrate that “not everyone” is in agreement? And if you contact a lot of people and they mostly say a paper is no good, can you go further and make their collective judgment the story?

That’s what I did recently when I reported for Slate on arsenic-based life: I got in touch with a bunch of scientists, almost all of whom criticized the paper. Is it true that “not everyone is convinced”? Well, yes, but it’s also true that not everyone can bench press five hundred pounds.

In that case, I stand by my journalistic decision–especially given the emails I’ve since gotten from a number of experts who read the article and agreed with it, as well as the absence of a spirited defense of the paper from someone who’s not a co-author.

But in other cases, the decision may be trickier. It’s easy to find biologists who will criticize papers in evolutionary psychology–see, for example, University of Chicago evolutionary biologist Jerry Coyne rage about some recent stories.

Coyne writes, “You can bet your sweet tuchus that had Carl Zimmer written something like this, it would have included a lot more bet-hedging.”

Leaving tuchuses aside (tuchi?), I read Coyne’s post and wonder, how should bets be hedged? In the case of the friend-gene paper and the arsenic paper, the critics were getting into the details of how these kinds of studies are supposed to be done. But it seems that Coyne doesn’t think that evolutionary psychology can be done, period–or at least, he thinks the whole field is pretty lousy. [Update: This sentence is wrong. My apologies to Coyne.] And since his post, evolutionary psychologist Robert Kurzban has counter-attacked, observing that Coyne and others hold evolutionary psychology to an unreasonable standard that they do not impose on other areas of research. My instinct in such cases is to write about a debate, rather than a critique. But there’s no hard and fast rule about when a story shifts from one to the other. On that, everyone–not “not everyone”–should agree.

21 thoughts on “How many people are "not everyone"? Some thoughts on scientific debates and smackdowns”

  1. Presumably the Kurzban-Coyne debate is something that should be covered separately from recent developments in the field, and neither should be included in your journalistic round of peer re-review.

  2. Oh my! I am so torn on this.

    On one hand, I think Amy Maxmen did a very good job (not perfect, I agree with some of your criticisms, but very good nonetheless, and much better than one usually sees in MSM when this topic comes around).

    On the other hand, as a scientist, I am strongly taking the Myers/Coyne side on this topic and see the rebuttals as weak special pleading and apologetics for a field that is based on misinformed fundamental premises.

    On the third hand, I literally do the posting of Jesse’s posts now. And I try not to get too annoyed at places where I think he crosses the line that I, as a scientist, think he should not cross. When I post his stuff I want to take off my scientist hat and put on my editor’s hat. And I wonder if I am doing it right. Should I intervene or not? How far can I go? He is an independent blogger, so I am not directly his editor who can pick and choose to post or not post his stuff, so I just post everything without any editing (unless I spot a typo, which I fix). And much in his posts is good and definitely fun to read – he is a provocative blogger, and in those rare cases when he oversteps the line, commenters come in droves and try to set it straight – which is what the commenters are for, right?

    So, should I be blogging about this particular case? What are my conflicts of interest? As an independent blogger I may feel I should rip the pieces apart, just like PZ and Coyne did, and like I would have done a few years back. On the other hand, should I be ripping into my own blogger who I like a lot and want to promote?

    Head hurts!

  3. “It’s like the team bought six lottery tickets and won the megabucks twice”

    Except, it’s not. They weren’t looking for a needle in a haystack. They were looking at six genes that had already been discovered to produce known brain chemistry phenotypes whose relevance was great enough to attract attention and be genotyped in a large sample of adolescents.

    It is the moral equivalent of looking at six pigmentation genes and writing a paper that concludes that people can sort other people into genotypes with remarkably high accuracy without scientific equipment.

    These were already winning tickets, we simply weren’t quite sure what prize came with each one.

  4. OOooh! This is MY kind of science!!!

    I’m not a scientist; rather, I’m science-ish. I’m science-y. I looove the art of science, not the Science of science.

    Back in school, Letter-of-the-law Science killed my love of learning and sense of wonder. I crave the whimsy of half-baked theories and want more, more, MORE!!!!!

  5. Can you report on the numbers pro and con? It wouldn’t be a scientific study; that would require actual statistically valid research. However it might bring the reportage back to facts and something that means something. “I contacted 6 respected scientists in the field and 2 of them thought it was bunkum.” Something like that.

  6. “But not everyone is convinced” seems to be a stock phrase when it comes to science/biomedicine coverage. It’s that journalistic attempt to cover all sides of the story, which, when it comes to science, creates a perfect opportunity for outspoken quacks and detractors who are usually in a minority to create a stir. I’d like to see a proportional response – like giving a rough proportion of who agrees/disagrees, or at least acknowledging minority/majority opinions. Even though these terms are still vague, they’re better than nothing, but they’re usually left out of non-science publications.

  7. I was the editor on this particular story, and I have to confess that “not everyone is convinced” may have been introduced by me rather than Amy.

    This was a tough story. There were some obvious issues with the science, but we knew it would be widely covered elsewhere, probably with fewer caveats, so we felt it was important to give it the Nature treatment and be a bit more sceptical.

    In this case, it is a tough debate to quantify. The authors are social scientists who are using some genetic techniques to investigate a hypothesis, while most of the criticism came from “real” geneticists who designed those techniques and use them all the time. Perhaps it would have been better to make this clearer, saying something like “those who use genome assays are not convinced”. Although I am never keen to encourage interdisciplinary bun fights.

  8. “I’d like to see a proportional response – like giving a rough proportion of who agrees/disagrees, or at least at knowledge minority/majority opinions.”

    In practice this is often very difficult to find out with enough accuracy to use such a description confidently. I agree that the phrase is highly unsatisfactory, but in many cases that’s really all one knows for certain.

  9. I think Carl is reading “not everyone is convinced” too literally. I think it’s a stock phrase meaning that there is substantial skepticism. Without being able to quantify the skepticism, or spend an equal-length article going over pros and cons, it raises a reader’s sensitivity to the assertions of the study.

  10. Bora, I’m pretty sure that if you met yourself, you would advise yourself not to interfere editorially – that the blogosphere is a free-form, dog-eat-dog world and good blogs/bloggers will be successful and bad ones won’t. In fact I’m pretty sure at #scio11 you reminded a crowded room “bloggers do not like being told what to do”. As for whether to strongly criticize a blog post on a blog network that you run, I think you would tell yourself to go ahead, in the spirit of openness, fair debate and intellectual integrity, but that you should be civil – of course, that’s true no matter what and where you’re posting – and you might warn your blogger through back channels first that it’s a-coming.

  11. “The authors are social scientists who are using some genetic techniques to investigate a hypothesis, while most of the criticism came from “real” geneticists who designed those techniques and use them all the time.”

    This is the problem with EP in a nutshell. To people who do behavioral or evolutionary genetics in model organisms, the “evidence” presented in a typical EP paper is pathetic; even worse are the causal inferences and adaptive hypotheses accompanying the Likert scale / IAT score / fMRI or other quasi-empirical tool that happens to give you a p less than .05 in a SNP association. That’s fine, human work is hard for many reasons. What grinds is that conclusions are often presented with a degree of confidence far outstripping the quality of the data (often by the authors, always by the media).

    Many people study the evolution of behavior productively…they are called biologists. There is obviously a place for a field called “evolutionary psychology,” but the best work is being done by those who wisely refrain from referring to themselves as evolutionary psychologists. In its current form, EP is a parochial sideshow of the cultural preoccupations of a particular time and place.

  12. Re: Carl’s comment to my comment. I agree that “not everyone” is a dodge and a cliche. But going deeper requires more work than is reasonable to ask for a quick summary of a recent finding. The question, then, is should a science reporter be required to go deeper? A related question: what is the threshold for gauging a spectrum of opinion? When should it be done?

  13. Carl, I think Coyne would disagree with your statement that he thinks evo psych can’t be done at all. This is from your link to him:

    “Now I don’t oppose evolutionary psychology on principle. The evolutionary source of our behavior is a fascinating topic, and I’m convinced that the genetic influences are far stronger than, say, posited by anti-determinists like Dick Lewontin, Steve Rose, and Steve Gould. Evolved adaptations are particularly likely to be found in sexual behavior, which is intimately connected with the real object of selection: the currency of reproduction. I’m far closer in my views on this topic to Steve Pinker than to Steve Gould. And there are many good studies in the field, so I don’t mean to tar the whole endeavor.”

  14. I agree with Brian, whose quote accurately reflects my take on evolutionary psychology. And I think Carl’s statement in the post above isn’t accurate at all:

    “But it seems that Coyne doesn’t think that evolutionary psychology can be done, period–or at least, he thinks the whole field is pretty lousy.”

    Why just this morning I wrote in extenso about a GOOD example of evolutionary psychology, and related what I thought distinguished it from bad specimens. See:

  15. Steve Silberman tweeted this post; it’s not clear why this is being argued. It is a convention of news writing to write a sentence like “not everyone agrees” to signal to readers that a finding is not a slam dunk. That’s all. To complain it is a dodge and a cliche is true, but banal.

    Aside from outright errors (which everyone who writes for a living makes) every story has imperfections seen from someone else’s angle. Some more than others. The only worthwhile kind of press criticism is to write a better story.
