Nissen and colleagues reported a re-analysis of clinical trial data on the new drug muraglitazar (available now only on the web here; the citation begins: Nissen SE, Wolski K, Topol EJ. Effect of muraglitazar on death and major adverse cardiovascular events in patients with type 2 diabetes mellitus. JAMA 2005; 294:). Muraglitazar is a peroxisome proliferator-activated receptor (PPAR) agonist meant both to decrease blood sugar in patients with type 2 diabetes (i.e., those who do not necessarily require insulin therapy) and to increase HDL cholesterol ("good" cholesterol).
Nissen and colleagues used data from five clinical trials that had been reported to the US Food and Drug Administration (FDA) but were mostly unavailable in published form. An FDA advisory committee had reviewed these data and recommended that the drug be approved for treatment of type 2 diabetes.
When Nissen et al re-analyzed the data, however, they found that treatment with muraglitazar seemed to result in more clinically important adverse events than did treatment with an older PPAR agonist, pioglitazone, or with placebo. In particular, the rate of the commonly used combined endpoint of death, nonfatal myocardial infarction (heart attack), or stroke was higher in patients treated with muraglitazar (1.47%) than in those treated with either pioglitazone or placebo (0.67%). Analyses using different combinations of endpoints, or single endpoints, all showed higher rates for patients who received muraglitazar (sometimes statistically significant, that is, unlikely to be due to chance alone, and sometimes nearly so).
Thus, it appeared that the way the data on muraglitazar were analyzed and presented to the FDA made the drug appear less hazardous than other analyses would have.
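The "sometimes nearly statistically significant" pattern noted above is largely a matter of sample size. As a rough illustration (a minimal sketch using hypothetical, invented trial sizes, not the actual muraglitazar enrollment), a two-proportion z-test on the reported rates of 1.47% versus 0.67% only reaches conventional significance once the arms are large enough:

```python
import math

def two_prop_z(events_a, n_a, events_b, n_b):
    """Two-proportion z-test (normal approximation, pooled variance).
    Returns (z statistic, two-sided p-value)."""
    p_a, p_b = events_a / n_a, events_b / n_b
    pooled = (events_a + events_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal CDF via erf
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p

# Hypothetical arm sizes (not the actual trial numbers): the same
# roughly twofold difference in event rates, 1.47% vs 0.67%,
# tested at several sample sizes per arm.
for n in (1000, 2500, 5000):
    events_drug = round(0.0147 * n)
    events_ctrl = round(0.0067 * n)
    z, p = two_prop_z(events_drug, n, events_ctrl, n)
    print(f"n={n} per arm: {events_drug} vs {events_ctrl} events, p={p:.3f}")
```

With about 1,000 patients per arm, a genuine doubling of risk fails to reach p < 0.05; with several thousand per arm it does. Underpowered safety analyses can therefore mask exactly this kind of difference.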
In an accompanying commentary (also available only on the web here, with a citation beginning Brophy JM. Selling safety - lessons from muraglitazar. JAMA 2005; 294:), Brophy noted:
Generically, there are specific methodological decisions in the sponsor's FDA application that may foster an illusion of safety. The following are several, perhaps unintended but nevertheless disingenuous, methods observed in the application that may have contributed to an overestimate of the safety profile:

1. Selecting a study population unlikely to have adverse outcomes but nonrepresentative of potential future users (eg, exclusion of elderly patients, even though more than one third of type 2 diabetes occurs in this group)

2. Conducting underpowered studies increasing the failure rate to detect meaningful safety differences (ie, maximizing rather than minimizing type II errors)

3. In contrast to efficacy determinations, reporting individual rather than composite safety outcomes to decrease the likelihood of establishing statistical significance (eg, separate cardiovascular events from CHF)

4. Limiting preapproval peer-review publication of results so as to minimize scrutiny and debate of both methods and results (eg, of all submitted data only 1 study of 340 patients has been published)

5. Evoking biological implausibility of safety concerns by the use of surrogate measures (eg, treatment reduces C-reactive protein [CRP]) implying safety, despite no proof that CRP reduction is clinically correlated with improved ...

6. Recording outcomes only in patients who are fully compliant with prescribed treatment because this self-selected group will likely have fewer adverse events (eg, unknown impact of the nonanalysis of the 15% discontinued ...)

7. Ignoring the totality of the evidence by excluding consideration of confirmatory safety signals seen in studies of similar molecules (eg, CHF and bladder cancer outcomes ...)

8. Diverting attention to unproven but potential benefits by concentrating on reductions in surrogate laboratory values (eg, hemoglobin A1C) rather than in meaningful patient health outcomes.

So based on Nissen's work, Brophy has provided us with yet another catalog of how research sponsors may manipulate the design, analysis, and reporting of results to make them more likely to be favorable to their interests.
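Brophy's point about individual versus composite safety outcomes can be sketched numerically. With entirely hypothetical, non-overlapping event counts (invented for illustration, not drawn from the trials), each individual outcome can fall short of statistical significance while the composite endpoint does not:

```python
import math

def two_prop_p(a, n1, b, n2):
    """Two-sided p-value for a two-proportion z-test (normal approximation)."""
    pooled = (a + b) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (a / n1 - b / n2) / se
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

N = 2000  # hypothetical patients per arm
# Hypothetical, non-overlapping event counts: (drug arm, comparator arm)
outcomes = {"MI": (12, 5), "stroke": (10, 4), "death": (8, 3)}

# Reported separately, each outcome misses conventional significance
for name, (drug, ctrl) in outcomes.items():
    print(f"{name:8s} p = {two_prop_p(drug, N, ctrl, N):.3f}")

# Pooled into one composite endpoint, the same events reach significance
drug_total = sum(d for d, _ in outcomes.values())
ctrl_total = sum(c for _, c in outcomes.values())
print(f"composite p = {two_prop_p(drug_total, N, ctrl_total, N):.3f}")
```

Splitting a safety signal across several small endpoints dilutes the event counts behind each test, which is why a sponsor reporting only individual outcomes is less likely to show a statistically significant harm.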
This list should be compared to the one developed by Richard Smith in his article in PLoS Medicine (see our post here, and the article here).
There are some important lessons here:
- Manipulation of research design, analysis and reporting may be more widespread than we realized.
- We need better ways to detect such manipulation, and then to take it into account when using research results to make clinical or policy decisions.
- We need to find ways to discourage such manipulation.