Study of studies shows few citations

Readers may assume that this site is anti-complementary medicine, or opposed to alternatives to orthodox medicine. In fact, this site is anti bad science, which is not confined to non-orthodox medicine (although bad science is a major feature of that field) but may also occur in orthodox medicine. Below is an article on exactly such a topic: a major failure of many orthodox medicine studies.


Trial in a Vacuum: Study of Studies Shows Few Citations
By GINA KOLATA

New York Times

Published: January 17, 2011 

Science, so the story goes, is a meticulously built edifice.

Discoveries balance on ones that preceded them. Research is stimulated by studies that went on before. 

But what, then, can explain the findings by two investigators at Johns Hopkins University School of Medicine? The researchers, Karen A. Robinson and Dr. Steven N. Goodman, looked at how often published papers on clinical trials in medicine cite previous clinical trials addressing the same question.  

They report in the Jan. 4 issue of Annals of Internal Medicine what Dr. Goodman describes as “a rather shocking result.” He summarizes:

“No matter how many randomized clinical trials have been done on a particular topic, about half the clinical trials cite none or only one of them.” 

“As cynical as I am about such things, I didn’t realize the situation was this bad,” Dr. Goodman said. 

It seems, Dr. Goodman said in an e-mail, that “either everyone thinks their study is really unique (when others thought it wasn’t), or they want to unjustifiably claim originality, or they just don’t know how or want to look.” 

The situation can have serious consequences for patients, said Sir Iain Chalmers, editor of the James Lind Library, which is a source of information on appropriate tests of medical treatments. He said some patients have suffered severe side effects and even died in studies because researchers were not aware of previous studies documenting a treatment’s dangers. 

“That’s the tragedy,” he said. “Not only is it unscientific, it is unethical.” 

Dr. Goodman said their results might help explain a troubling phenomenon in medicine: All too often, despite a multitude of clinical trials on a particular subject, the data do not supply the answers doctors need to treat patients. 

“This shows part of what’s behind it,” Dr. Goodman said. Failure to cite can affect hypotheses and conclusions. 

“If you are not citing the most similar studies, it is really hard to imagine that the evidence they provided played a role in the formulation of your hypothesis,” Dr. Goodman said. And, he added, if researchers do not cite other studies, they cannot play a role in formulating conclusions. “If the eighth study is positive, and the preceding seven were cold negative, is it proper to report that a treatment works?” he said. “This may not be the fire, but it’s a heck of a lot of smoke.” 

Dr. Robinson, an assistant professor in the divisions of internal medicine and health sciences informatics, and Dr. Goodman, a professor of epidemiology and biostatistics and editor of the journal Clinical Trials, began their study by identifying 227 meta-analyses, which are studies that combine relevant previous studies to glean data from pooled evidence. For example, a meta-analysis might collect all studies about a drug whose effectiveness in individual studies was sometimes equivocal.

The analysis would ask whether, with all results combined, the drug seemed to work.

The 227 meta-analyses cited a total of 1,523 clinical trials. For each clinical trial, the investigators asked how many of the other trials, published before it on the same topic and included with it in the meta-analysis, were cited. 

They never expected to find so few citations.

“It is a pretty bad situation,” Dr. Goodman said. “There were rumblings of this before, but they did not show the phenomenon like this does.” 

He added that he and Dr. Robinson did not ask whether investigators cited prior studies in their grant applications, nor do they know for sure why so little previous research is cited.

One reason might be that investigators do not think many of the results from previous studies apply to theirs. 

That is why, in a recent paper in Hepatology, Dr. Stephen Harrison of Brooke Army Medical Center in San Antonio and his colleagues did not cite any of 10 clinical trials used in a meta-analysis that followed his paper. He was studying the effects of a weight loss drug, orlistat, on liver function in overweight patients with fatty liver disease. 

Explaining why he failed to cite the other studies, he said, “I limited my discussion mainly to therapies that had been studied in fatty liver disease, not just obesity or diabetes.” 

Dr. Beverly B. Green of Group Health in Seattle, in her study of home blood pressure monitoring, did not cite any of the 21 prior trials included in a meta-analysis on self-monitoring of blood pressure.

She explained that she had done a literature search for her grant application but that for her paper, published in The Journal of the American Medical Association, she narrowed her criteria. In choosing articles to cite, she selected only the best of the previous papers and the ones most relevant to the questions she was asking and the subjects she studied. If she had cited everything that preceded her study, she said, she easily could have had 200 to 300 references. 

There are several steps along the way to a published paper where researchers might be asked about already published papers on the same topic. Those who finance the research, the ethics committees that review some studies and the journals that publish the studies all could ask the investigators how they assured themselves they had found prior relevant results. 

But, Dr. Goodman said, none of those groups feel any official responsibility. 

“It’s sort of a blind spot,” he said. “People sort of assume researchers know the literature, know their own field, know the studies.” 

~~~

http://www.annals.org/content/154/1/50.abstract

A Systematic Examination of the Citation of Prior Research in Reports of Randomized, Controlled Trials 

Karen A. Robinson, PhD, and Steven N. Goodman, MD, MHS, PhD

From Johns Hopkins School of Medicine and Johns Hopkins Bloomberg School of Public Health, Baltimore, Maryland.

Abstract 

Background: A randomized, controlled trial (RCT) should not be started or interpreted without accounting for evidence from preceding RCTs addressing the same question. Research has suggested that evidence from prior trials is often not accounted for in reports of subsequent RCTs. 

Objective: To assess the extent to which reports of RCTs cite prior trials studying the same interventions. 

Design: Meta-analyses published in 2004 that combined 4 or more trials were identified; within each meta-analysis, the extent to which each trial report cited the trials that preceded it by more than 1 year was assessed. 

Measurements: The proportion of prior trials that were cited (prior research citation index), the proportion of the total participants from prior trials that were in the cited trials (sample size citation index), and the absolute number of trials cited were calculated. 
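The two indices defined above are simple proportions. As an illustration only (this is not the authors' code, and the trial data below are made up), here is a minimal Python sketch computing, for one trial, the prior research citation index and the sample size citation index from a hypothetical list of citable prior trials:

```python
# Hypothetical sketch of the two citation indices described in the
# Measurements section. prior_trials and cited_ids are invented
# example data, not figures from the actual study.

def citation_indices(prior_trials, cited_ids):
    """prior_trials: list of (trial_id, n_participants) tuples for trials
    published more than 1 year before the trial being scored.
    cited_ids: set of trial_ids the new trial's report actually cited.
    Returns (prior research citation index, sample size citation index)."""
    if not prior_trials:
        return None, None
    cited_sizes = [n for tid, n in prior_trials if tid in cited_ids]
    # Proportion of citable prior trials that were cited.
    prior_citation_index = len(cited_sizes) / len(prior_trials)
    # Proportion of prior-trial participants covered by the cited trials.
    total_n = sum(n for _, n in prior_trials)
    sample_size_index = sum(cited_sizes) / total_n if total_n else 0.0
    return prior_citation_index, sample_size_index

# Example: 5 citable prior trials, of which only one ("B") is cited.
prior = [("A", 100), ("B", 250), ("C", 80), ("D", 400), ("E", 170)]
pci, ssi = citation_indices(prior, {"B"})
print(pci, ssi)  # 0.2 0.25
```

Under these toy numbers the trial scores 0.2 on the prior research citation index and 0.25 on the sample size citation index, close to the medians the study reports across 1,523 trials.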

Results: 227 meta-analyses were identified, comprising 1523 trials published from 1963 to 2004. The median prior research citation index was 0.21 (95% CI, 0.18 to 0.24), meaning that less than one quarter of relevant reports were cited. The median sample size citation index (0.24 [CI, 0.21 to 0.27]) was similar, suggesting that larger trials were not selectively cited. Of the 1101 RCTs that had 5 or more prior trials to cite, 254 (23%) cited no prior RCTs and 257 (23%) cited only 1. The median number of prior cited trials was 2, which did not change as the number of citable trials increased. The mean number of preceding trials cited by trials published after 2000 was 2.4, compared with 1.5 for those published before 2000 (P < 0.001). 

Limitation: The investigators could not ascertain why prior trials were not cited, and noncited trials may have been taken into account in the trial design and proposal stages. 

Conclusion: In reports of RCTs published over 4 decades, fewer than 25% of preceding trials were cited, comprising fewer than 25% of the participants enrolled in all relevant prior trials. A median of 2 trials was cited, regardless of the number of prior trials that had been conducted. Research is needed to explore the explanations for and consequences of this phenomenon. Potential implications include ethically unjustifiable trials, wasted resources, incorrect conclusions, and unnecessary risks for trial participants. 

Primary Funding Source: None. 

Article and Author Information

  • Acknowledgment: The authors thank Lisa Wilson, ScM, for her assistance in screening the search results; Ian J. Saldanha, MBBS, MPH, for his help in screening the search results and data abstraction; and Carol Thompson, MS, for her work in developing the scripts to manipulate the data. 
  • Potential Conflicts of Interest: None disclosed. Forms can be viewed at www.acponline.org/authors/icmje/ConflictOfInterestForms.do?msNum=M10-1621. 

 

  • Reproducible Research Statement: Study protocol: Not available. Statistical code: Available from Dr. Robinson (e-mail, krobin{at}jhmi.edu). Data set: The purchased data set is not available; the list of meta-analyses used for the analysis is available from Dr. Robinson (e-mail, krobin{at}jhmi.edu).
  • Requests for Single Reprints: Karen A. Robinson, PhD, Divisions of General Internal Medicine and Health Sciences Informatics, Department of Medicine, Johns Hopkins University, 1830 East Monument Street, Room 8069, Baltimore, MD 21287; e-mail, krobin{at}jhmi.edu. 
  • Current Author Addresses: Dr. Robinson: Divisions of General Internal Medicine and Health Sciences Informatics, Department of Medicine, Johns Hopkins University, 1830 East Monument Street, Room 8069, Baltimore, MD 21287. 
  • Dr. Goodman: Division of Biostatistics, Suite 1103, Johns Hopkins University, 550 North Broadway, Baltimore, MD 21209. 
  • Author Contributions: Conception and design: K.A. Robinson, S.N. Goodman.  
  • Analysis and interpretation of the data: K.A. Robinson, S.N. Goodman. 
  • Drafting of the article: K.A. Robinson, S.N. Goodman. 
  • Critical revision of the article for important intellectual content: K.A. Robinson, S.N. Goodman. 
  • Final approval of the article: K.A. Robinson, S.N. Goodman. 
  • Statistical expertise: S.N. Goodman. 
  • Administrative, technical, or logistic support: K.A. Robinson. 
  • Collection and assembly of data: K.A. Robinson.
