The dangers of cherry-picking evidence

Posted 28 September 2011

"A deliberately incomplete view of the literature, as I hope I've explained, isn't a neutral or marginal failure. It is exactly as bad as a deliberately flawed experiment, and to present it to readers without warning is bizarre."

An excellent article by Dr Ben Goldacre, published in the Guardian.


Republished below in the event that the Guardian website is down. 

The dangers of cherry-picking evidence 

It's one thing to produce a bias-free experiment – but the second, crucial stage is to synthesise the evidence fairly 

Ben Goldacre, Friday 23 September 2011 16.15 EDT 

Last week the Daily Mail and the Today programme took some bait from Aric Sigman, an author of popular sciencey books about the merits of traditional values. "Sending babies and toddlers to daycare could do untold damage to the development of their brains and their future health," explained the Mail. 

These news stories were based on a scientific paper by Sigman in The Biologist. It misrepresents individual studies, as Professor Dorothy Bishop demonstrated almost immediately, and it cherry-picks the scientific literature, selectively referencing only the studies that support Sigman's view. Normally this charge of cherry-picking would take a column of effort to prove, but this time Sigman himself admits it, frankly, in a PDF posted on his own website. 

Let me explain why this behaviour is a problem. Nobody reading The Biologist, or its press release, could possibly have known that the evidence presented was deliberately incomplete. That is, in my opinion, an act of deceit by the journal: but it also illustrates one of the most important principles in science, and one of the most bafflingly recent to emerge. 

Here is the paradox. In science, we design every individual experiment as cleanly as possible. In a trial comparing two pills, for example, we make sure that participants don't know which pill they're getting, so that their expectations don't change the symptoms they report. We design experiments carefully like this to exclude bias: to isolate individual factors, and ensure that the findings we get really do reflect the thing we're trying to measure. 

But individual experiments are not the end of the story. There is a second, crucial process in science, which is synthesising that evidence together to create a coherent picture. 

In the very recent past, this was done badly. In the 1980s, researchers such as Cynthia Mulrow produced damning research showing that review articles in academic journals and textbooks, which everyone had trusted, actually presented a distorted and unrepresentative view, when compared with a systematic search of the academic literature. After struggling to exclude bias from every individual study, doctors and academics would then synthesise that evidence together with frightening arbitrariness.

The science of "systematic reviews" that grew from this research is exactly that: a science. It's a series of reproducible methods for searching information, to ensure that your evidence synthesis is as free from bias as your individual experiments. You describe not just what you found, but how you looked, which research databases you used, what search terms you typed, and so on. This apparently obvious manoeuvre has revolutionised the science of medicine. 

What does that have to do with Aric Sigman, the Society of Biology, and their journal, The Biologist? Well, this article was not a systematic review, the cleanest form of research summary, and it was not presented as one. But it also wasn't a reasonable summary of the research literature, and that wasn't just a function of Sigman's unconscious desire to make a case: it was entirely deliberate. A deliberately incomplete view of the literature, as I hope I've explained, isn't a neutral or marginal failure. It is exactly as bad as a deliberately flawed experiment, and to present it to readers without warning is bizarre.

Blame is not interesting, but I got in touch with the Society of Biology, as I think we're more entitled to have high expectations of them than of Sigman, who is, after all, some guy writing fun books in Brighton. They agree that what they did was wrong, that mistakes were made, and that they will do things differently in future.

Here's why I don't think that's true. The last time they did exactly the same thing, not long ago, with another deliberately incomplete article from Sigman, I wrote to the journal, the editor, and the editorial board, setting out these concerns very clearly. 

The Biologist has actively decided to continue publishing these pieces by Sigman, without warning. They get the journal huge publicity: and fair enough. I'm no policeman. But in the two-actor process of communication, until they explain to their readers that they knowingly present cherry-picked papers without warning – and make a public commitment to stop – it's for the reader to decide whether they can trust what they publish.
