Biased Reporting of Trials: This Is Why People Are Losing Trust in the Value of the Scientific Process

You can’t review what is never submitted to you. If you are an editor for a journal, you can neither accept nor reject, on its merits, work that is never presented to you for evaluation. It’s an obvious point; what isn’t so obvious is the amount of data that is suppressed and never submitted for review and publication. There is a flurry of activity around this topic and we hope to review it in the near future.

Absence of evidence: Do drug firms suppress unfavourable information about new products? This article deserves commendation for a strong opening and its use of Feynman (the Feynman Chaser is a phenomenon in its own right).

RICHARD FEYNMAN, a Nobel-prize-winning physicist, declared in a speech in 1974 that science requires “a kind of utter honesty”. He insisted that researchers must publicise all the outcomes of their work, and “not just the information that leads to judgment in one particular direction or another”. To judge by the mounting evidence of publication bias involving studies on new drugs, his words have not yet reached the pharmaceuticals industry.

A study published this week in PLoS Medicine, an online journal, confirms what many have suspected and what previous studies have hinted at: drug companies try to spin the results of clinical trials. If this were done merely in marketing materials, it might be tolerable. What Lisa Bero of the University of California, San Francisco, and her colleagues found, however, was troubling evidence of suppression and manipulation of data in studies published in (or often withheld from) peer-reviewed medical journals.

The article refers to two recent papers in PLoS.

Reporting Bias in Drug Trials Submitted to the Food and Drug Administration: Review of Publication and Presentation[1]

Evidence-based clinical medicine relies on the publication of high-quality data to determine standards of patient care. Publication bias occurs when some types of results (e.g., those that are statistically significant) are reported more frequently or more quickly than others [1–4]. Publication bias favors the dissemination of information about clinical interventions showing statistically significant benefit. Publication bias, therefore, may lead to preferential prescribing of newer and more expensive treatment choices and may underestimate the harms of drugs that have been in use for only a limited time, and clinical decisions may be based on erroneous information [5]…

Publication bias not only limits the number and scope of studies available for review by clinicians, but also affects the results of systematic reviews and meta-analyses. Researchers may estimate spuriously large treatment effects in early meta-analyses of the available evidence if there is publication bias [4]…
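The inflation the authors describe can be sketched with a small simulation (illustrative only: the trial sizes, true effect, and significance filter below are our assumptions, not figures from the paper). If only trials with a statistically significant result in favour of the treatment get published, the average effect seen in the published literature overstates the true effect.

```python
import math
import random

def run_trials(n_trials=2000, n_per_arm=30, true_effect=0.2, seed=42):
    """Simulate simple two-arm trials, then apply a crude publication
    filter: only trials significant (z > 1.96) in favour of the test
    drug are 'published'. Returns (mean effect of all trials,
    mean effect of published trials)."""
    rng = random.Random(seed)
    all_effects, published_effects = [], []
    se = math.sqrt(2.0 / n_per_arm)  # known unit variance, for simplicity
    for _ in range(n_trials):
        treat = [rng.gauss(true_effect, 1.0) for _ in range(n_per_arm)]
        ctrl = [rng.gauss(0.0, 1.0) for _ in range(n_per_arm)]
        diff = sum(treat) / n_per_arm - sum(ctrl) / n_per_arm
        all_effects.append(diff)
        if diff / se > 1.96:  # only 'positive' trials reach a journal
            published_effects.append(diff)
    return (sum(all_effects) / len(all_effects),
            sum(published_effects) / len(published_effects))

mean_all, mean_published = run_trials()
print(f"mean effect, all trials:       {mean_all:.2f}")
print(f"mean effect, published trials: {mean_published:.2f}")
```

A meta-analysis restricted to the published subset would therefore report a substantially larger treatment effect than actually exists, which is exactly the "spuriously large treatment effects" problem the quoted passage raises.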

If no publications were identified for a trial within a specific NDA, authors of other publications identified for that drug were contacted to request assistance identifying the publications of the other trials. Trials were referred to as protocol numbers when available from the NDA, and otherwise by name or any other specific identifying information (such as location of study, number of participants, or intervention groups). If further assistance was required, the drug company was contacted…

Publication bias can occur in several ways, including not publishing data at all, selectively reporting data, or framing data. We found evidence of both lack of publication and selective reporting of data. Seventy-eight percent of the trials submitted to the FDA were published, and trials with active controls or statistically significant outcomes favoring the test drug were more likely to be published…

Our findings extend those of others by demonstrating that reporting bias occurs across a variety of drug categories and that statistical significance of reported primary outcomes sometimes changes to give a more favorable presentation in the publications [10–12]. These changes occur primarily in peer-reviewed, moderate impact factor journals that disclose funding sources and other financial ties. Thus, publication of trial data in peer-reviewed publications appears to be inadequate, supporting the need for reporting of full protocols and findings in a trial registry [31–34]…

Responses from investigators to our inquiries about unpublished studies suggest that studies were not published because they were not submitted to journals. Several other studies, based on self-reports from authors with unpublished studies, suggest that authors’ decisions not to submit manuscripts account for the majority of unpublished studies…

Our study has several limitations…

The goal of this study was to determine whether information that is available to the FDA is readily accessible to clinicians, and whether it is presented in the same way. As we hypothesized, not all data submitted to the FDA in support of a new drug approval were published, and there were discrepancies between original trial data submitted to the FDA and data found in published trials. Thus, the information that is readily available in the scientific literature to health care professionals is incomplete and potentially biased.

Bias, Spin, and Misreporting: Time for Full Access to Trial Protocols and Results[2]

Although randomized trials provide key guidance for how we practice medicine, trust in their published results has been eroded in recent years due to several high-profile cases of alleged data suppression, misrepresentation, and manipulation [1–5, 39]. While most publicized cases have involved pharmaceutical industry trials, accumulating empiric evidence has shown that selective reporting of results is a systemic problem afflicting all types of trials, including those with no commercial input [6]. These examples highlight the harmful potential impact of biased reporting on patient care, and the violation of ethical responsibilities of researchers and sponsors to disseminate results accurately and comprehensively.

Biased reporting arises when two main decisions are made based on the direction and statistical significance of the data—whether to publish the trial at all, and if so, which analyses and results to report in the publication. Strong evidence for the selective publication of positive trials has been available for decades…

These papers lend strong support and fuller elaboration to Ben Goldacre’s argument for:

a compulsory international trials register. Give every trial an ID number, so we can all see that a trial exists, they can’t go quietly missing in action, and we know when and where to look if they do…

For example, sometimes companies will publish flattering data two or three times over, in slightly different forms, as if it came from different studies, to make it look as if there are a lot of different positive findings out there: registers make this instantly obvious.

Worse than that, companies often move the goalposts, and change the design of a trial after the results are in, to try and massage the findings. This, again, is impossible when the protocol is registered before a trial begins.
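Goldacre's first point suggests why a registry helps so directly: once every trial carries a unique registration ID, spotting the same trial published twice is a trivial grouping exercise. A minimal sketch, using hypothetical records and made-up NCT-style identifiers:

```python
from collections import defaultdict

def find_duplicate_reports(publications):
    """Group publications by trial registration ID and flag any trial
    reported more than once (possible duplicate publication)."""
    by_trial = defaultdict(list)
    for pub in publications:
        by_trial[pub["trial_id"]].append(pub["citation"])
    return {tid: cites for tid, cites in by_trial.items() if len(cites) > 1}

# Hypothetical records: one trial written up twice, one reported once.
pubs = [
    {"trial_id": "NCT00000001", "citation": "Smith et al. 2006, Journal A"},
    {"trial_id": "NCT00000001", "citation": "Smith et al. 2007, Journal B"},
    {"trial_id": "NCT00000002", "citation": "Jones et al. 2007, Journal A"},
]
print(find_duplicate_reports(pubs))
```

Without the ID column, a systematic reviewer has to guess from author lists, sample sizes, and locations whether two papers describe the same trial; with it, duplicates really are "instantly obvious".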

The UK Government has announced plans to ensure that all drug company trial data is registered and disclosed but it remains to be seen how the legislation will be drafted and implemented.

This is not just a problem for trials involving the pharmaceutical industry: as Chan[2] indicates, the problem persists even in trials without a direct commercial input. Similarly, researchers from the Research Council for Complementary Medicine assessed reported research findings from several countries: Do certain countries produce only positive results? A systematic review of controlled trials.[3] The authors concluded:

Some countries publish unusually high proportions of positive results. Publication bias is a possible explanation. Researchers undertaking systematic reviews should consider carefully how to manage data from these countries.

A Cochrane Review of Grey literature in meta-analyses of randomized trials of health care interventions[4] reported that:

published trials tend to be larger and show an overall greater treatment effect than grey trials. This has important implications for reviewers who need to ensure they identify grey trials, in order to minimise the risk of introducing bias into their review.

As a recent article in the NYT detailed, it is difficult enough to change clinical practice even when the evidence is comparatively clear; impeding such action through selective distortion or manipulation just makes matters worse: The Minimal Impact of a Big Hypertension Study.

The Allhat experience is worth remembering now, as some policy experts and government officials call for more such studies to directly compare drugs or other treatments, as a way to stem runaway medical costs and improve care.

The aftereffects of the study show how hard it is to change medical practice, even after a government-sanctioned trial costing $130 million produced what appeared to be solid evidence.

A confluence of factors blunted Allhat’s impact. One was the simple difficulty of persuading doctors to change their habits. Another was scientific disagreement, as many academic medical experts criticized the trial’s design and the government’s interpretation of the results.

There is already some lively discussion about the scope and limitations of the PLoS papers. Nonetheless, the fundamental point remains that detrimental practices such as those described in the PLoS papers distort the research evidence and pollute the research canon, creating needless uncertainty and confusing treatment decisions. We cannot tolerate the suppression of these data: treatment decisions and the quality of life of too many people depend upon the appropriate and full reporting of these trials.


Update, 21:30: Neuroskeptic draws our attention to Registration: Not Just For Clinical Trials. Neuroskeptic argues that:

it’s not just clinical trials which would benefit from registration. Registration is a way to defeat publication bias, wherever it occurs, and any field in which there are “negative results” is vulnerable to the risk that they won’t be reported. In some parts of science there are no negative results – in much of physics, chemistry, and molecular biology, you either get a result, or you’ve failed. If you try to work out the structure of a protein, say, then you’ll either come up with a structure, or give up. Of course, you might come out with the wrong structure if you mess up, but you could never “find nothing”…

But in many other areas of research there is often genuinely nothing to find. A gene might not be linked to any diseases. A treatment might have no effect. A pollutant might not cause any harm. Basically, if you’re looking for a correlation between two things, or an effect of one thing upon another, you might get a negative result…

Registration would put an end to most of this nonsense, because when you register your research – before the results are in – you would have to publically outline what statistical tests you are planning to do. Essentially, you would need to write the Methods section of your paper before you collected any results.
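The value of pre-specifying analyses can be illustrated with a small simulation (the outcome counts and sample sizes below are illustrative assumptions). If a study measures many outcomes, none with any real effect, and the authors are free to report whichever one "worked", the chance of a nominally significant finding far exceeds the advertised 5%:

```python
import math
import random

def undisclosed_flexibility(n_outcomes=10, n=50, sims=2000, seed=1):
    """With no real effect on any outcome, test several outcomes per
    'study' and keep only the best-looking one. Returns the fraction of
    studies reporting at least one nominally significant (p < 0.05,
    i.e. |z| > 1.96) result."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(sims):
        for _ in range(n_outcomes):
            xs = [rng.gauss(0.0, 1.0) for _ in range(n)]
            z = (sum(xs) / n) * math.sqrt(n)  # z-score of the sample mean
            if abs(z) > 1.96:
                hits += 1  # at least one 'significant' outcome to report
                break
    return hits / sims

print(f"false-positive rate with 10 outcomes: {undisclosed_flexibility():.2f}")
```

With ten outcomes the rate lands near 1 − 0.95¹⁰ ≈ 40%, not 5%; declaring the statistical tests in a registered protocol removes exactly this freedom to pick the winner after the fact.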

Nov 29: There is some discussion about whether it is essential that all data are published in peer-reviewed journals as long as all data are submitted for review. The difficulty is that although the USA insists that data be submitted for review, few other countries do. A company or other enterprise that does not wish to submit these data can circumvent the requirement by conducting its research in a country that does not insist upon it. In this scenario, the USA loses research funding and is still affected by the need to purchase drugs for which the research evidence may be skewed. Bad Science Forum.


[1] Rising K, Bacchetti P, Bero L (2008). Reporting Bias in Drug Trials Submitted to the Food and Drug Administration: Review of Publication and Presentation. PLoS Med 5(11): e217
[2] Chan AW (2008). Bias, Spin, and Misreporting: Time for Full Access to Trial Protocols and Results. PLoS Med 5(11): e230
[3] Vickers A, Goyal N, Harland R, Rees R. Do certain countries produce only positive results? A systematic review of controlled trials. Control Clin Trials. 1998 Apr;19(2):159-66
[4] Hopewell S, McDonald S, Clarke M, Egger M. Grey literature in meta-analyses of randomized trials of health care interventions. Cochrane Database Syst Rev. 2007 Apr 18;(2):MR000010

Related Reading

One of the best explanations of ‘fair tests’ is Evans, Thornton and Chalmers’ Testing Treatments (available for free download). We strongly recommend this generalist and entertaining book to interested parties as it explains all the ins and outs of what makes a fair test and the fair way in which to interpret the results. Print version: Testing Treatments: Better Research for Better Healthcare

Prof Trisha Greenhalgh has a permanent place in our blog links: How to read a paper series. However, it is also available as a book and worth buying as it has now run through several editions: How to Read a Paper: The Basics of Evidence-Based Medicine (Evidence-Based Medicine)




Filed under clinical trials

3 responses to “Biased Reporting of Trials: This Is Why People Are Losing Trust in the Value of the Scientific Process”

  1. I say here that we need compulsory registration not just for clinical trials but for basic science too (at least in biomedical research). Logistically it would not be much harder than for clinical trials, so far as I can see.

    Admin edit: corrected link.

  2. Wulfstan

    I read material like this and I despair and want to throw up my hands with “A plague on all your houses”. But, the issue is too important for that.

    I don’t know what the solution is; I can see the attraction of compulsory registration and reporting of even negative results. I have nothing useful to offer about how the legislation/guidance should be written or how appropriate regulatory bodies might implement it.

    This is one of those things that is never going to win the attention of the wider public but it is important to all kinds of spending decisions, health provision planning and healthcare.

  3. Pingback: Patrick Holford on Science Friction and the Limitations of RCTs and Meta-analyses « Holford Watch: Patrick Holford, nutritionism and bad science
