Tuesday, July 8, 2014

The Chowdhury Meta-Analysis

In March this year, another meta-analysis (by Chowdhury et al.) was published that looked at the relationship between fats and coronary heart disease (CHD) [1].  Its conclusion was that “current evidence does not clearly support cardiovascular guidelines that encourage high consumption of polyunsaturated fatty acids and low consumption of total saturated fats”.  As expected, the mainstream was very critical of the paper, but mainly regarding the inclusion/interpretation of observational studies and those relating to omega 3 supplementation.  Anyway, I’m going to ignore all that and focus on their interpretation of the clinical trials that replaced SFA with omega 6 PUFA.

They included randomised controlled trials with 50 or more total coronary outcomes.  This criterion excluded the unfavourable Rose Corn Oil Trial (18 events), but didn’t exclude the favourable STARS trial (7 events by their assessment).  Not that this matters much if you’re simply going to do a quantitative assessment, as these two trials are too small to have a noticeable effect on the outcome (a relative risk (RR) assessment).  Their description of the trials they included and their risk-of-bias assessments are shown below (click to enlarge).

*Mixed poly-unsaturated intervention with linoleic acid as the primary fatty acid
Note: the age of the men in the FMHS was 34-54 (not 34-44)
NFMI = non-fatal myocardial infarction; FMI = fatal myocardial infarction; FCHD = fatal coronary heart disease; SCD = sudden cardiac death
Based on the trials, they calculated the RR for omega 6 supplementation to be 0.89 with a confidence interval of 0.71 to 1.12, making it not statistically significant.  But when they excluded the Sydney Diet Heart Study (SDHS) in a sensitivity analysis, the RR was 0.81, which was significant.
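As a rough illustration of how these confidence intervals map onto statistical significance, here’s a sketch (in Python; the helper name is mine) that recovers the standard error of the log RR from a published 95% CI and computes an approximate two-sided p-value, using the pooled figures quoted above:

```python
import math
from statistics import NormalDist

def p_from_rr_ci(rr, lo, hi):
    """Approximate two-sided p-value for a relative risk,
    given its 95% confidence interval (normal approximation
    on the log scale)."""
    se = (math.log(hi) - math.log(lo)) / (2 * 1.96)  # SE of log RR
    z = math.log(rr) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))

# Pooled omega 6 result: RR 0.89, 95% CI 0.71 to 1.12
p = p_from_rr_ci(0.89, 0.71, 1.12)
print(round(p, 2))  # the CI includes 1.0, so p > 0.05
```

This is the same logic in reverse for the sensitivity analysis: reporting the SDHS-excluded RR of 0.81 as significant corresponds to a CI that no longer crosses 1.0.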

There are some problems with the meta-analysis: 

Under omega 6 supplementation, they included trials that did much more than simply replace SFA with omega 6.  The Oslo Diet Heart Study, the Finnish Mental Hospital Study (FMHS) and STARS are the best examples of this, as there were many other differences between the control group and the high omega 6 group that were almost always favourable to the high omega 6 group (such as less trans fat, more omega 3, fruit and vegetables, weight loss, etc).  But you also have other trials like the LA Veterans Administration Trial, which was mostly well done, except that the control group was eating reheated butter as a main fat source and consequently had an insufficient intake of vitamin E (2.6 mg, versus an RDI of 10 mg).

They only reported CHD outcomes and didn’t include total mortality.  Previous meta-analyses by Hooper et al. and Mozaffarian et al. suggest some benefit for CHD, but find no difference in total mortality.  What good are these interventions if CHD mortality decreases, but total mortality stays the same?

|                        | Supportive of DHH for CHD Events | Supportive of DHH for CHD Mortality | Supportive of DHH for Total Mortality |
|------------------------|----------------------------------|-------------------------------------|---------------------------------------|
| Hooper (Cochrane)      | RR = 0.82 (CI = 0.66 to 1.02)    | RR = 0.92 (CI = 0.73 to 1.15)       | RR = 1.02 (CI = 0.88 to 1.18)         |
| Mozaffarian (+5% PUFA) | RR = 0.81 (CI = 0.70 to 0.95)*   | RR = 0.80 (CI = 0.65 to 0.98)*      | RR = 0.98 (CI = 0.89 to 1.08)         |

* Significant difference

Ultimately, the problem with all the meta-analyses of this type is that they treat the trials as if they’re the same, ignore the other differences between the groups and the overall quality of each trial, and then just run some statistics.  When you have such (mostly) poor and variable trials (with no further ones being done), I think a qualitative approach is the best way to discuss these trials and arrive at a conclusion.

Walter Willett made a similar point (if only he would tell this to his colleagues):

The controversy should serve as a warning about meta-analyses, Willett adds. Such studies compile the data from many individual studies to get a clearer result. "It looks like a sweeping summary of all the data, so it gets a lot of attention," Willett says. "But these days meta-analyses are often done by people who are not familiar with a field, who don't have the primary data or don't make the effort to get it." And while drug trials are often very similar in design, making it easy to combine their results, nutritional studies vary widely in the way they are set up. "Often the strengths and weaknesses of individual studies get lost," Willett says. "It's dangerous." [2]
