Food for the Brain Results on Trevor Macdonald: Part 3

[Image: Bronze sculpture of a figure with a migraine by Jose Sacal (Flickr detail).]

Patrick Holford seems to have the same effect on numbers that some people claim to have on electronic gadgets; they scramble in his presence, or even that of people associated with him. That is the only possible explanation for one of the worst data summaries/commentaries that I have ever seen: Dr David Woodhouse on the subgroup analysis performed as part of the Chineham Park Primary School FFTB project that will be featured in this evening's Trevor Macdonald programme. I kept expecting a Shirley Bassey audio file to blare out a bravura performance of:

Nobody does it like me! If there’s a wrong way to say it – A wrong way to play it – Nobody does it like me! – If there’s a wrong way to do it – A right way to screw it up – Nobody does it like me


It is even more extraordinary that Patrick Holford thinks that this explanation is so helpful that he sent it out as part of a self-congratulatory email that announced: SCHOOL GRADES SUCCESS WITH FOOD FOR THOUGHT
ITV reveals Food for the Brain results

It is clear that there were 10 children in a sub-group and that they had “behavioural and learning difficulties”. FFTB used the Test of Variables of Attention (TOVA) with the children. TOVA is a computerised Continuous Performance Test (CPT) that measures inattention and impulsivity. Baseline testing established that seven children tested on the TOVA had scores that were in the ADHD range and three children were within normal limits (it should be emphasised that the TOVA is a tool and not a stand-alone diagnostic instrument).

The summary neither justifies the use of a CPT nor explains why the TOVA was used rather than the Conners CPT, which might have complemented the wider use of the Conners Rating Scales. For one discussion of some of the issues, see ADHD: Is Objective Diagnosis Possible?

One would expect precise measures of sustained attention, like the CPT, to be the ‘gold standard’ for ADHD diagnosis, but this is not the case. The correlation between CPT performance and parent or teacher rating scales is modest at best.[28, 31] The Conners CPT and the TOVA, two of the more commonly used CPTs, are equally sensitive (85%) in demonstrating attention dysfunction in children who have been diagnosed with ADHD.[32, 33] The TOVA, however, generates unacceptably high false positive rates (30%) in normal controls and children with other psychiatric disorders (28%).[28,34] Because the CPT is such a sensitive measure of CNS dysfunction, there are always multiple reasons why someone’s performance on the test is impaired.[35]…
Another problem with the CPT is its unreliability. Even the commercially available CPTs fail to report acceptable levels of test-retest reliability. The TOVA, for example, does not even report test-retest reliability, but rather, split-half reliability coefficients.[37] Split-half reliability may be appropriate for a test that is given to a strong practice effect, but that is hardly true of the TOVA. Test-retest reliability is essential to consider in a test that is likely to be used serially (e.g., to evaluate an ADD patient’s response to stimulant medications). The Conners CPT-II does report test-retest reliability, but only on 23 normal subjects, and not on raw scores, but on derived scores.[38]

The CPT is thought to be useful as an adjunct to clinical diagnosis, as its popularity attests. But it is not a ‘diagnostic’ instrument, and, in the absence of acceptable levels of reliability, it may not even be appropriate for monitoring treatment effects.[39] [My emphases.]

TOVA may meet the researchers’ needs but in the absence of an explanation it is difficult to judge its appropriateness.
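To make the distinction drawn in the quotation above concrete, here is a minimal illustrative sketch, using invented reaction-time data rather than anything from the FFTB project, of the difference between a split-half coefficient (computed within a single sitting) and a test-retest coefficient (computed across repeat administrations, which is what serial testing of the same children actually relies on):

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented reaction times (ms) for 20 children on a CPT-style task:
# 40 trials at one sitting (session 1) and the same task repeated three months later.
true_speed = rng.normal(450, 60, size=20)                               # each child's underlying mean RT
session1 = true_speed[:, None] + rng.normal(0, 80, size=(20, 40))
session2 = true_speed[:, None] + rng.normal(0, 80, size=(20, 40)) + 15  # small shift on retest

# Split-half reliability: correlate odd- and even-numbered trials *within* session 1.
odd_half = session1[:, ::2].mean(axis=1)
even_half = session1[:, 1::2].mean(axis=1)
split_half = np.corrcoef(odd_half, even_half)[0, 1]

# Test-retest reliability: correlate each child's session 1 mean with their session 2 mean.
test_retest = np.corrcoef(session1.mean(axis=1), session2.mean(axis=1))[0, 1]

print(f"split-half coefficient:  {split_half:.2f}")   # internal consistency of one sitting
print(f"test-retest coefficient: {test_retest:.2f}")  # stability across repeat administrations
```

A respectable split-half figure only tells us that the two halves of one sitting hang together; it says nothing about whether a child would obtain a similar score three months later, which is precisely how the scores are being used here.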

I should add that the researchers need to detail their hardware configuration and testing procedure, to make it explicit that the response times of each element (e.g., monitor, keyboard/input device and CPU) remained constant and that variations did not skew the results for the children.
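As a toy illustration of why this matters (invented numbers, not a claim about the equipment actually used), even a modest amount of uncontrolled input or display latency feeds directly into the response-time variability that TOVA-style scoring depends on:

```python
import numpy as np

rng = np.random.default_rng(1)
n_trials = 200

# Hypothetical child: true reaction times around 500 ms with a 90 ms spread.
true_rt = rng.normal(500, 90, n_trials)

# Stable hardware: a fixed 30 ms input/display delay on every trial.
stable = true_rt + 30

# Drifting hardware: the same average delay, but varying by ~40 ms from trial to trial.
jittery = true_rt + rng.normal(30, 40, n_trials)

print(f"RT variability, stable hardware:  {stable.std():.0f} ms")
print(f"RT variability, jittery hardware: {jittery.std():.0f} ms")
# The child is identical in both cases; only the consistency of the equipment differs,
# yet the second set-up makes the child look more 'variable' on the test.
```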

However: so far, so good, at least for the first two sentences. It is after this point that the summary becomes very confusing.

First Post test scores at three months indicated that seven improved their scores on the test with only one child showing no improvement and two not appearing for the test. All the children showed improvement in impulsivity. Out of the eight tested only four remained in the ADHD classification all with all children moving towards normal limits. On the final Post test of all of the four that had still scores within the ADHD classification were now within the normal range. However two of the remaining children did not perform as well as they had on previous tests and had scores that fell within the ADHD range.

?!? Unfortunately, for the first post-test results, we don't know whether the child who didn't improve and the two who didn't show up were within the ADHD range or not, so this rather skews the results. Four of the eight children tested were within the ADHD range; however, we don't know the make-up of those numbers. 4/8 seems like an improvement on 7/10. However, if the two who didn't show up had previously been in the ADHD range and would have remained there, then the result would have been 6/10 children in the ADHD range. The status of the other four children tested is not stated explicitly, but we have to assume that it is the normal range. There were originally 3/10 children in the 'normal' range. Were three of those four the children who had always been in the normal range, meaning that only one of the previously ADHD-range children had moved out of that classification, rather than the substantial shift implied here? We need the detail on the individual children.

Unfortunately, again, for the final post-test there is some confusion about how many children were tested. The results imply a minimum of six, but there is nothing to tell us how those children had performed at the original baseline, so it is difficult to interpret their performance. The other four are not mentioned.

The four children who were in the ADHD range after the first post-test moved into the normal range after their final post-test. However, two of the children previously in the 'normal' range moved into the ADHD range. There is no indication of the baseline status of these children.

The subgroup analysis seems to end with two children in the ADHD category, which looks like an improvement on the baseline 7/10; but, in the absence of further information about the other children, the final figure might be as high as 6/10.
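To make that explicit, here is a minimal sketch of the arithmetic, using only the counts stated in the summary; how the untested and unmentioned children would actually have scored is unknown, so the code simply brackets the possibilities:

```python
# Numbers as stated in the summary: 10 children in the subgroup, 7 in the ADHD range at baseline.
N_CHILDREN = 10
BASELINE_ADHD = 7

# First post-test: 8 children tested, 4 reported in the ADHD range, 2 did not attend.
# Final post-test: only 6 children clearly accounted for, 2 of them in the ADHD range.

def adhd_range_bounds(reported_adhd, unaccounted):
    """Best and worst cases, depending on how the unreported children would have scored."""
    best = reported_adhd                  # every unreported child within normal limits
    worst = reported_adhd + unaccounted   # every unreported child in the ADHD range
    return best, worst

print(f"baseline:        {BASELINE_ADHD}/{N_CHILDREN} in the ADHD range")

first_lo, first_hi = adhd_range_bounds(reported_adhd=4, unaccounted=2)
print(f"first post-test: {first_lo}/{N_CHILDREN} to {first_hi}/{N_CHILDREN}")   # 4/10 to 6/10

final_lo, final_hi = adhd_range_bounds(reported_adhd=2, unaccounted=4)
print(f"final post-test: {final_lo}/{N_CHILDREN} to {final_hi}/{N_CHILDREN}")   # 2/10 to 6/10
```

On the summary's own figures, then, the final outcome is consistent with anything from a substantial improvement (2/10) to a barely changed picture (6/10 against a baseline of 7/10); without the per-child detail there is no way to tell which.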

The above results may say very little about the FFTB programme and rather more about the known tendency of TOVA testing to return false positives both in 'normal' children and in those likely to have co-morbid psychiatric conditions (and, according to FFTB, Chineham Park Primary does have double the national percentage of children with SEN). If the children had received confirmed diagnoses that would explain the use of the TOVA, then this should be stated, along with a discussion of any additional medical or behavioural interventions that the children might be receiving in conjunction with the FFTB project. [Edited post-broadcast: none of the children had a formal diagnosis of ADD, ADHD or ASD.]

It is difficult to understand the basis for Woodhouse's conclusions. There seems to have been a drop-out rate of 40% for the subgroup testing, and the scores verge on the erratic (which, again, might be more a reflection of the hardware or the test than of the children).

Overall the TOVA testing would appear to indicate that there were beneficial aspects to the Food for the Brain intervention conducted at the school and, although with limited numbers, there was a significant reduction in overall impulsivity and a general improvement in alertness and responsiveness suggested by the test results.

It does seem a little odd that Prof. Refsum was asked to comment before she had seen the full report.

These preliminary results are really quite striking and I look forward to the full details. With such a study, one cannot identify the cause of the improvements. Ideally, further controlled studies should be done in which changes in nutrition are separated from changes in physical activity and the attitudes of children, teachers and parents. These scientific questions are important, but perhaps what matters most of all is that the overall approach of ‘Food for the Brain’ has had such benefical [sic] results in practice. [My emphasis.]

In the absence of appropriate details it is difficult to comprehend Refsum's optimism and her confidence that FFTB was solely responsible for any beneficial results. Such a claim might have been plausible in the context of a relevant review of the literature and of similar studies, but in the absence of such a review any suggestion of this kind is premature at best.

It will be interesting to see whether any of these issues are explored in the Trevor Macdonald programme. Oh, look, a flying pig. Must go, we’re expecting a phone call from the People’s Medical Journal to check some details for their forthcoming searing analysis of this study.

