Former Visiting Professor Patrick Holford is Head of Science and Education at Biocare. As such, he knows the value of quote-mining: enlisting others as if in support of oneself so as to bask in their reflected competence. The March 09 Newsletter to his subscribing faithful provides a strong example of this.
The big problem in medicine today is the overreliance on what are called randomised controlled placebo trials (RCTs). RCTs might give you small pieces of information, for example that exercise helps reduce the risk of diabetes, but they can’t really process the big questions like what is it that makes a person tip into diabetes, and what total change in circumstances can reverse this process?
Yet, unfortunately, there is a wave of science fundamentalism that naively believes that we’ll find the solution to humanity’s health issues by pooling together all the information derived from RCTs into ‘meta-analyses’. If only life were that simple. Mistakenly, some place meta-analyses of RCTs at the top of a hierarchy of evidence. Professor Sir Michael Rawlins, Chairman of the National Institute for Health and Clinical Excellence (NICE), disagrees, as do I. “The notion that evidence can reliably be placed in hierarchies is illusory,” he says. “Hierarchies place RCTs on an undeserved pedestal for, although the technique has advantages, it also has significant disadvantages.” To understand us humans, you have to look at the whole picture, which is what systems theory is all about.
And this from a man who offered his own anecdata of two people taking supplements to counterbalance or even contradict the results of the Cochrane Review of Anti-oxidants (pdf), which involved more than 230,000 people.
We think you’ll find that what Rawlins said was considerably more nuanced than that. In the 2008 Harveian Oration Rawlins made the following points.
Randomised controlled trials (RCTs), long regarded as the ‘gold standard’ of evidence, have been put on an undeserved pedestal. Their appearance at the top of “hierarchies” of evidence is inappropriate; and hierarchies, themselves, are illusory tools for assessing evidence. They should be replaced by a diversity of approaches that involve analysing the totality of the evidence-base.
Sir Michael outlines the limitations of RCTs in several key areas:
Impossible – with treatments for very rare diseases where the number of patients is too limited
Unnecessary – when a treatment produces a “dramatic” benefit – imatinib (Glivec) for chronic myeloid leukaemia
Stopping trials early – interim analyses of trials are now commonly undertaken to assess whether the treatment is showing benefit and if the trial can be stopped early…Although the desire to stop trials early is understandable, the possibility that an interim analysis is a “random high” may be difficult to avoid – especially as there is no consensus among statisticians as to how best to handle this problem
Resources – the costs of RCTs are substantial in money, time and energy – a recent study of 153 trials completed in 2005 and 2006 showed a median cost of over £3 million and with one trial costing £95 million. One manufacturer has estimated that the average cost per patient increased from £6,300 in 2005 to £9,900 in 2007
Generalisability – RCTs are often carried out on specific types of patients for a relatively short period of time, whereas in clinical practice the treatment will be used on a much greater variety of patients – often suffering from other medical conditions – and for much longer. There is a presumption that, in general, the benefits shown in an RCT can be extrapolated to a wide population; but there is abundant evidence to show that the harmfulness of an intervention is often missed in RCTs.
Sir Michael argues that observational studies are also useful and, with care in the interpretation of the results, can provide an important source of evidence about both the benefits and harms of therapeutic interventions. These particularly include historical controlled trials and case-control studies but other forms of observational data can also reveal important issues…
Sir Michael believes that arguments about the relative importance of different kinds of evidence are an unnecessary distraction. What is needed instead is for “investigators to continue to develop and improve their methodologies; for decision makers to avoid adopting entrenched positions about the nature of evidence; and for both to accept that the interpretation of evidence requires judgement.”
Holford is, of course, failing to address the idea that Rawlins’ strictures might apply to trials of supplements: “harmfulness of an intervention is often missed in RCTs”. Oddly enough, that is something that a meta-analysis, such as the Cochrane Review of Anti-oxidants, that Holford deprecated, can sometimes highlight.
Rawlins’ challenge has little resemblance to Holford’s straw man. Holford’s straw man more closely resembles the PoMo A Go Go that one associates with Murray, Holmes and Rail: On the constitution and status of ‘evidence’ in the health sciences.
Murray et al assert (pg. 273) that EBM “denigrates the evidentiary value of clinical experience” although EBM discourses often emphasise its value. E.g., Sackett et al state in Evidence based medicine: what it is and what it isn’t:
Because EBM requires a bottom-up approach that integrates the best external evidence with individual clinical expertise and patient-choice, it cannot result in slavish, cook-book approaches to individual patient care…External clinical evidence can inform, but can never replace, individual clinical expertise.
Evidence based medicine is not restricted to randomised trials and meta-analyses. It involves tracking down the best external evidence with which to answer our clinical questions. To find out about the accuracy of a diagnostic test, we need to find proper cross sectional studies of patients clinically suspected of harbouring the relevant disorder, not a randomised trial. For a question about prognosis, we need proper follow up studies of patients assembled at a uniform, early point in the clinical course of their disease. And sometimes the evidence we need will come from the basic sciences such as genetics or immunology. It is when asking questions about therapy that we should try to avoid the non-experimental approaches, since these routinely lead to false positive conclusions about efficacy. Because the randomised trial, and especially the systematic review of several randomised trials, is so much more likely to inform us and so much less likely to mislead us, it has become the “gold standard” for judging whether a treatment does more good than harm. However, some questions about therapy do not require randomised trials (successful interventions for otherwise fatal conditions) or cannot wait for the trials to be conducted. And if no randomised trial has been carried out for our patient’s predicament, we must follow the trail to the next best external evidence and work from there.
It really is important to read the full speech rather than attempt to quote mine it and hijack support that it is doubtful the speaker would lend.
Commentators such as Holmes et al. and those who really don’t understand the area have a tendency to caricature EBM as a unidirectional flow from basic research to translational research and clinical trials, the results of which are then converted into protocols that are ‘fetishised’ by EBM researchers and clinicians. However, practising clinicians aspire to treat evidence from Cochrane Reviews and guidelines as elements in Haack’s “crossword puzzle” model of knowledge: something that assists with and intersects with other elements in an individual’s clinical history and a clinician’s experiential knowledge, pointing towards an appropriate and individualised treatment.
Rawlins is doubtless correct in calling for greater ingenuity and endeavours to improve methodologies. Sadly, however, it is far from clear that those placing undue emphasis on purported hierarchies of evidence at the expense of clinical judgment are scientists, medics or people with any understanding of healthcare, rather than people who wish to supplant expertise and experience with their own uninformed observations and a simulacrum of an assessment process.
The biased reporting of trials is undermining general confidence in the scientific process. Dr Aled Edwards recently called for greater collaboration and open access between academia and the pharmaceutical industry to improve the quality of products and reduce the risks of development. Professors Garattini and Chalmers recently published an excellent discussion of the many travails and issues associated with the lack of transparency of drug trials: Patients and the public deserve big changes in evaluation of drugs. Garattini and Chalmers are not criticising forms of evidence: they are calling for increased rigour in the conduct of studies and trials, as well as greater transparency. These are better solutions than facile contributions such as Holford’s, which call for “analysing the totality of the evidence-base” while failing to understand the nature of appropriate evidence.[a]
[a] One of the best explanations of ‘fair tests’ is Evans, Thornton and Chalmers’ Testing Treatments (available for free download). We strongly recommend this generalist and entertaining book to interested parties as it explains all the ins and outs of what makes a fair test and the fair way in which to interpret the results. Print version: Testing Treatments: Better Research for Better Healthcare
Prof Trisha Greenhalgh’s How to Read a Paper series has a permanent place in our blog links. It is also available as a book and worth buying, as it has now run through several editions: How to Read a Paper: The Basics of Evidence-Based Medicine (Evidence-Based Medicine)