Patrick Holford on Science Friction and the Limitations of RCTs and Meta-analyses

Former Visiting Professor Patrick Holford is Head of Science and Education at Biocare. As such, he knows the value of quote-mining: enlisting others as if they supported his position, the better to bask in their reflected competence. The March 2009 newsletter to his subscribing faithful offers a strong example of this.

Science Friction
The big problem in medicine today is the overreliance on what are called randomised controlled placebo trials (RCTs). RCT’s might give you small pieces of information, for example that exercise helps reduce the risk of diabetes, but they can’t really process the big questions like what is it that makes a person tip into diabetes, and what total change in circumstances can reverse this process?
Yet, unfortunately, there is a wave of science fundamentalism that naively believes that we’ll find the solution to humanity’s health issues by pooling together all the information derived from RCTs into ‘meta-analyses’. If only life were that simple. Mistakenly, some place meta-analyses of RCTs at the top of a hierarchy of evidence. Professor Sir Michael Rawlins, Chairman of the National Institute for Health and Clinical Excellence (NICE), disagrees, as do I. “The notion that evidence can reliably be placed in hierarchies is illusory,” he says. “Hierarchies place RCTs on an undeserved pedestal for, although the technique has advantages, it also has significant disadvantages.” To understand us humans, you have to look at the whole picture, which is what systems theory is all about.

And this from a man who offered his own anecdata of two people taking supplements to counterbalance, or even contradict, the results of the Cochrane Review of Antioxidants (pdf), which involved more than 230,000 people.

We think you’ll find that what Rawlins said was considerably more nuanced than that. In the 2008 Harveian Oration Rawlins made the following points.

Randomised controlled trials (RCTs), long regarded as the ‘gold standard’ of evidence, have been put on an undeserved pedestal. Their appearance at the top of “hierarchies” of evidence is inappropriate; and hierarchies, themselves, are illusory tools for assessing evidence. They should be replaced by a diversity of approaches that involve analysing the totality of the evidence-base.
Sir Michael outlines the limitations of RCTs in several key areas:
Impossible – with treatments for very rare diseases where the number of patients is too limited
Unnecessary – when a treatment produces a “dramatic” benefit – imatinib (Glivec) for chronic myeloid leukaemia
Stopping trials early – interim analyses of trials are now commonly undertaken to assess whether the treatment is showing benefit and if the trial can be stopped early…Although the desire to stop trials early is understandable, the possibility that an interim analysis is a “random high” may be difficult to avoid – especially as there is no consensus among statisticians as to how best to handle this problem
Resources – the costs of RCTs are substantial in money, time and energy – a recent study of 153 trials completed in 2005 and 2006 showed a median cost of over £3 million and with one trial costing £95 million. One manufacturer has estimated that the average cost per patient increased from £6,300 in 2005 to £9,900 in 2007
Generalisability – RCTs are often carried out on specific types of patients for a relatively short period of time, whereas in clinical practice the treatment will be used on a much greater variety of patients – often suffering from other medical conditions – and for much longer. There is a presumption that, in general, the benefits shown in an RCT can be extrapolated to a wide population; but there is abundant evidence to show that the harmfulness of an intervention is often missed in RCTs.
Sir Michael argues that observational studies are also useful and, with care in the interpretation of the results, can provide an important source of evidence about both the benefits and harms of therapeutic interventions. These particularly include historical controlled trials and case-control studies but other forms of observational data can also reveal important issues…
Sir Michael believes that arguments about the relative importance of different kinds of evidence are an unnecessary distraction. What is needed instead is for “investigators to continue to develop and improve their methodologies; for decision makers to avoid adopting entrenched positions about the nature of evidence; and for both to accept that the interpretation of evidence requires judgement.”
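
As an aside on the interim-analysis point above: the “random high” problem is easy to demonstrate. Here is a minimal simulation sketch of our own (it is not taken from Rawlins’ oration; the trial size, number of looks and significance threshold are purely illustrative), showing that if a trial with no true treatment effect is tested repeatedly as the data accumulate, the chance of at least one spuriously “significant” interim result climbs well above the nominal 5%.

```python
# Illustrative only: a null trial (no true effect) analysed at several interim looks.
# With five unadjusted looks, the false-positive rate comes out around 14%,
# not the nominal 5%: Rawlins' "random high".
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def null_trial_hits_significance(n_per_arm=500, looks=5, alpha=0.05):
    """Return True if any interim t-test on accumulating null data is 'significant'."""
    treatment = rng.normal(0.0, 1.0, n_per_arm)  # no true difference between arms
    control = rng.normal(0.0, 1.0, n_per_arm)
    checkpoints = np.linspace(n_per_arm // looks, n_per_arm, looks, dtype=int)
    for n in checkpoints:
        _, p = stats.ttest_ind(treatment[:n], control[:n])
        if p < alpha:
            return True
    return False

n_sims = 2000
false_positives = sum(null_trial_hits_significance() for _ in range(n_sims))
print(f"False-positive rate with 5 unadjusted looks: {false_positives / n_sims:.3f}")
```

Corrections for multiple interim looks do exist (group-sequential designs such as O'Brien-Fleming boundaries), but, as the oration notes, there is no single agreed way of handling the problem.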

Does Rawlins sound as if he would be impressed by the usual standards of Holford’s scholarship or his idea of running a study or writing up the results?

Holford, of course, fails to address the possibility that Rawlins’ strictures might apply to trials of supplements: “harmfulness of an intervention is often missed in RCTs”. Oddly enough, that is something that a meta-analysis such as the Cochrane Review of Antioxidants, which Holford deprecated, can sometimes highlight.
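
A rough back-of-envelope sketch (ours, with purely illustrative rates) shows why: a harm affecting an extra 1 in 1,000 treated patients barely registers in a single trial arm of a few hundred people, but becomes hard to miss once results on something like the scale of the pooled Cochrane population are combined.

```python
# Illustrative arithmetic only: how many treatment-attributable adverse events would
# you expect to see in a single small trial arm versus a pooled population?
def prob_at_least_one_event(rate, n_patients):
    """Probability of observing at least one event among n patients."""
    return 1 - (1 - rate) ** n_patients

background_rate = 0.001   # assumed baseline rate of a serious adverse event
excess_risk = 0.001       # assumed extra risk attributable to the supplement

for n in (250, 2_500, 115_000):  # one small trial arm up to roughly half of ~230,000 pooled
    expected_excess = excess_risk * n
    p_any = prob_at_least_one_event(background_rate + excess_risk, n)
    print(f"n per arm = {n:>7,}: expected attributable events = {expected_excess:6.1f}, "
          f"P(any event at all) = {p_any:.2f}")
```

With a quarter of an expected attributable event in a 250-patient arm, no individual trial could be expected to flag the harm; with over a hundred expected events across a pooled population, a meta-analysis has something to detect.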

Rawlins’ challenge bears little resemblance to Holford’s straw man, which more closely resembles the PoMo A Go Go that one associates with Murray, Holmes and Rail: On the constitution and status of ‘evidence’ in the health sciences.

Murray et al assert (p. 273) that EBM “denigrates the evidentiary value of clinical experience”, although EBM discourses often emphasise its value. For example, Sackett et al state in Evidence based medicine: what it is and what it isn’t:

Because EBM requires a bottom-up approach that integrates the best external evidence with individual clinical expertise and patient-choice, it cannot result in slavish, cook-book approaches to individual patient care…External clinical evidence can inform, but can never replace, individual clinical expertise.

Evidence based medicine is not restricted to randomised trials and meta-analyses. It involves tracking down the best external evidence with which to answer our clinical questions. To find out about the accuracy of a diagnostic test, we need to find proper cross sectional studies of patients clinically suspected of harbouring the relevant disorder, not a randomised trial. For a question about prognosis, we need proper follow up studies of patients assembled at a uniform, early point in the clinical course of their disease. And sometimes the evidence we need will come from the basic sciences such as genetics or immunology. It is when asking questions about therapy that we should try to avoid the non-experimental approaches, since these routinely lead to false positive conclusions about efficacy. Because the randomised trial, and especially the systematic review of several randomised trials, is so much more likely to inform us and so much less likely to mislead us, it has become the “gold standard” for judging whether a treatment does more good than harm. However, some questions about therapy do not require randomised trials (successful interventions for otherwise fatal conditions) or cannot wait for the trials to be conducted. And if no randomised trial has been carried out for our patient’s predicament, we must follow the trail to the next best external evidence and work from there.

It really is important to read the full speech rather than attempting to quote-mine it and hijack support that it is doubtful the speaker would lend.

Commentators such as Holmes et al., and those who really don’t understand the area, have a tendency to caricature EBM as a unidirectional flow from basic research to translational research and clinical trials, the results of which are then converted into protocols that are ‘fetishised’ by EBM researchers and clinicians. However, practising clinicians aspire to treat evidence from Cochrane Reviews and guidelines as elements in Haack’s “crossword puzzle” model of knowledge: something that assists with and intersects with other elements of an individual’s clinical history and a clinician’s experiential knowledge, pointing towards an appropriate and individualised treatment.

Rawlins is doubtless correct in calling for greater ingenuity and continued efforts to improve methodologies. Sadly, however, it is far from clear that those placing undue emphasis on purported hierarchies of evidence at the expense of clinical judgment are scientists, medics or people with any understanding of healthcare, rather than people who wish to supplant expertise and experience with their own uninformed observations and a simulacrum of an assessment process.

The biased reporting of trials is undermining general confidence in the scientific process. Dr Aled Edwards recently called for greater collaboration and open access between academia and the pharmaceutical industry to improve the quality of products and reduce the risks of development. Professors Garattini and Chalmers recently published an excellent discussion of the many travails associated with the lack of transparency of drug trials: Patients and the public deserve big changes in evaluation of drugs. Garattini and Chalmers are not criticising forms of evidence; they are calling for increased rigour in conducting studies and trials as well as greater transparency. These are better solutions than facile contributions such as Holford’s, which call for “analysing the totality of the evidence-base” while showing no understanding of the nature of appropriate evidence.[a]

Notes

[a] One of the best explanations of ‘fair tests’ is Evans, Thornton and Chalmers’ Testing Treatments (available for free download). We strongly recommend this generalist and entertaining book to interested parties, as it explains the ins and outs of what makes a fair test and how to interpret the results fairly. Print version: Testing Treatments: Better Research for Better Healthcare.

Prof Trisha Greenhalgh’s How to Read a Paper series has a permanent place in our blog links. It is also available as a book and is worth buying, as it has now run through several editions: How to Read a Paper: The Basics of Evidence-Based Medicine.



Filed under patrick holford

11 responses to “Patrick Holford on Science Friction and the Limitations of RCTs and Meta-analyses”

  1. Pingback: Drink and Drugs News Reproved By Its Well-Informed Readers « Holford Watch: Patrick Holford, nutritionism and bad science

  2. “Randomised controlled trials (RCTs), long regarded as the ‘gold standard’ of evidence, have been put on an undeserved pedestal”

    Not sure what Mr Holford would replace RCTs with. At a seminar in London last year, he quoted a Cochrane review of RCTs that proved the diet he was promoting was the best diet on the block.

    • I imagine that Wulfstan’s descriptions are fairly near the mark. I offer these quotations in default of any original thought.

      Irrationally held truths may be more harmful than reasoned errors. [Thomas Huxley]

      (With reference to a correspondent) The young specialist in English Lit, …lectured me severely on the fact that in every century people have thought they understood the Universe at last, and in every century they were proved to be wrong. It follows that the one thing we can say about our modern “knowledge” is that it is wrong. … My answer to him was, “… when people thought the Earth was flat, they were wrong. When people thought the Earth was spherical they were wrong. But if you think that thinking the Earth is spherical is just as wrong as thinking the Earth is flat, then your view is wronger than both of them put together.” [Isaac Asimov, The Relativity of Wrong, Kensington Books, New York, 1996, pg 226.]

  3. Wulfstan

    There are some variations on this but here is an extract from the Holford Memorandum.

    I read a press release about a paper. In it I learned that scientists [that it is convenient for me to admire] have just [wrongly] learned about X [mischaracterised] [which I have been saying for years]. The implications of X [mischaracterised in a different way] are revolutionary in overturning current [allopathic and EBM] paradigms P [mischaracterised] and Q [mischaracterised], particularly those aspects of P [mischaracterised in a way oddly unrelated to mischaracterisation of P] that are central [wrong] to holistic nutrition [mischaracterised] and materialist science [mischaracterised].

    Both that and the excellent Will Wilkinson summary of David Brooks:

    Scientists have discovered X. Mostly X vanquishes my intellectual bugbears and confirms me in my prejudices. To the extent it doesn’t, science isn’t really an authoritative source of wisdom, now is it?

    • Nice. I did see the Will Wilkinson piece when I was following links around the Chaospet comic on his ‘Death of philosophy’ article and thought it was the pithiest summary of the widely-blogged horror.

      The other interesting item that I came across was Brooks alluding to epistemological modesty.

      The correct position is the one held by self-loathing intellectuals, like Isaiah Berlin, Edmund Burke, James Madison, Michael Oakeshott and others. These were pointy heads who understood the limits of what pointy heads can know. The phrase for this outlook is epistemological modesty, which would make a fine vanity license plate.

      The idea is that the world is too complex for us to know, and therefore policies should be designed that take account of our ignorance.

      It’s good, enjoyable phrasing but I have a sneaking suspicion that it’s a bit more complicated than that and that it doesn’t stand much close scrutiny. I am a sucker for a well-turned phrase and have to be wary that it doesn’t deceive me as to the merits of an idea.

      Human language is like a cracked kettle on which we beat out tunes for bears to dance to, when all the time we are longing to move the stars to pity. [Flaubert]

  4. Predictably, other enthusiasts for alternative reality have also homed in on Rawlins’ remarks.

    See, for instance, this latest piece of laughable crowing from Homeopathic Quantum Flapdoodler-Supreme Lionel Milgrom.

    • I’d be willing to bet that remarkably few of them have read Rawlins’ remarks in their full context.

      We’ve also heard an interesting rumour that Holford recently gave a lecture to students in which double-blind, placebo-controlled, randomised trials were acknowledged as the gold standard and desideratum of clinical evidence. So it seems to be one set of beliefs for one audience and another set for another.

      Adapting to your audience is one thing; inconsistency on such an important matter is something else entirely.

      • On the consistency issue – from a Holford Biocare seminar:

        The idea that we can solve today’s health issues by assembling together all the evidence created from randomised controlled trials on drugs, nutrients and other treatments is redundant. We are complex adaptive organisms and the kind of scientific process we use need to reflect this.

  5. Pingback: Patrick Holford and the Vitamins for Asthma That Become All About Food Intolerance and YorkTest « Holford Watch: Patrick Holford, nutritionism and bad science

  6. Pingback: Green Party Health Policy « Holford Watch: Patrick Holford, nutritionism and bad science

  7. Pingback: Patrick Holford, Shark Liver Oil and Walnuts « Holford Watch: Patrick Holford, nutritionism and bad science
