The most recent issue of the British Medical Journal included an important commentary by Dr David Healy [Healy D. Did regulators fail over selective serotonin reuptake inhibitors? BMJ 2006; 333: 92-95.] The commentary catalogs sophisticated techniques used by pharmaceutical companies to manipulate studies, in this case studies of selective serotonin reuptake inhibitors (SSRIs), so that the results would tend to favor their products. In particular, these included:
- Attributing adverse events that occurred during the placebo run-in period prior to a trial to the group that received the placebo during the trial, thus inflating the adverse event rate for the placebo group, and making the adverse event rate in the group treated with the active drug (manufactured by the pharmaceutical company that sponsored the trial) look comparatively lower. Healy noted that both GlaxoSmithKline and Pfizer, "faced with the claim made here about the way in which data had been presented to regulators, have not denied what happened.... Pfizer makes it clear that: 'Pfizer's 1990 report to FDA plainly shows ... 3 placebo [suicide] attempts as having occurred during single blind placebo phases.'"
- Likewise attributing adverse events that occurred after the conclusion of a trial to the placebo group, with a similar effect on interpretation of the results. "Crucially until GlaxoSmithKline's recent letter, the publicly available figures for suicides among patients on placebo in trials of paroxetine contained three suicides, all of which occurred after the active treatment phase of trials had finished." (The first sketch after this list works through the arithmetic of both kinds of misattribution.)
- Analyzing adverse events that are most likely to occur soon after a drug is started with statistical methods that assume a "constant hazard" over time, so that an early excess of events is averaged over the whole follow-up period, which may dilute out its statistical significance. (See the second sketch after this list.)
- Analyzing data that compare rates of adverse events in different arms of controlled trials using multivariate statistical approaches, which may dilute out the statistical significance of differences in the rates across patient groups. Such techniques ought to be superfluous in a successfully randomized trial. Randomization should make it very unlikely that there would be differences between the groups that need to be corrected by such statistical techniques. Randomization so unsuccessful that it would require such statistical correction to analyze adverse effects ought to raise doubts about whether any of the trial's results are valid. "Imbalances in these variables should be contained in the confidence interval that lies clearly in the region of the adverse effect."
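To see how much the first two manoeuvres can shift a comparison, here is a minimal sketch in Python. The counts are invented purely for illustration; they are not taken from Healy's article or from any actual trial. The point is only the arithmetic of moving run-in and post-trial events into the placebo column.

```python
# Hypothetical counts, invented for illustration only (not from Healy's article
# or any real trial). Both arms are the same size to keep the arithmetic plain.
drug_n = 1000      # patients randomized to the sponsor's drug
placebo_n = 1000   # patients randomized to placebo

drug_events = 8        # adverse events during the double-blind phase, drug arm
placebo_events = 2     # adverse events during the double-blind phase, placebo arm
run_in_events = 3      # events during the single-blind placebo run-in (no comparison arm exists yet)
post_trial_events = 2  # events after the active treatment phase had finished

# Honest comparison: only events occurring during the randomized, double-blind phase count.
honest_risk_ratio = (drug_events / drug_n) / (placebo_events / placebo_n)
print(f"risk ratio, double-blind phase only: {honest_risk_ratio:.1f}")   # 4.0

# Misattributed comparison: run-in and post-trial events are added to the placebo arm,
# inflating its event rate and making the drug look comparatively safer.
padded_placebo_events = placebo_events + run_in_events + post_trial_events
padded_risk_ratio = (drug_events / drug_n) / (padded_placebo_events / placebo_n)
print(f"risk ratio after misattribution: {padded_risk_ratio:.1f}")       # ~1.1
```

With these invented numbers, a fourfold excess of adverse events on the drug shrinks to near parity once events that occurred while nobody was on the drug are charged to the placebo arm.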
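The effect of assuming a constant hazard can also be shown with back-of-the-envelope arithmetic. The rates below are likewise invented for illustration: the drug is assumed to add an excess hazard only during the first 30 days of treatment, against the same background rate in both arms. Averaging over a year of follow-up, as a constant-hazard analysis implicitly does, shrinks the apparent rate ratio toward 1.

```python
# Hypothetical hazard rates, invented for illustration only.
background = 0.0002   # adverse events per person-day, both arms, throughout follow-up
excess = 0.0010       # extra hazard on the drug, assumed confined to the first 30 days
early_window = 30     # days during which the excess hazard applies
follow_up = 365       # total days of follow-up analyzed

# Rate ratio during the early window, when the harm actually occurs.
early_rate_ratio = (background + excess) / background
print(f"rate ratio, first {early_window} days: {early_rate_ratio:.1f}")        # 6.0

# A constant-hazard analysis averages the early excess over the whole follow-up period.
drug_rate_overall = (background * follow_up + excess * early_window) / follow_up
placebo_rate_overall = background
overall_rate_ratio = drug_rate_overall / placebo_rate_overall
print(f"rate ratio averaged over {follow_up} days: {overall_rate_ratio:.1f}")  # ~1.4
```

A sixfold excess hazard in the first month, averaged over a year, looks like a modest 40% increase, which a test built on the constant-hazard assumption may well fail to distinguish from chance.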
We have previously posted about other, sometimes simpler, techniques that research sponsors with vested interests may use to increase the likelihood they will get the results they want (see posts here and here).
The increasing evidence that clinical research may be manipulated by sponsors with vested interests to increase the likelihood of results favorable to them, and that the techniques used for such manipulation may be very sophisticated, suggests:
- Physicians (and other health professionals, and patients, and policy-makers) need to be increasingly skeptical about research sponsored by those with vested interests.
- Physicians really need to be familiar enough with research design, implementation, and analysis, that is, with the concepts underlying evidence-based medicine, to review important research studies critically and skeptically enough to detect such manipulation.
- Since no physician has the time or expertise to review more than a small fraction of the available studies, we need to develop watchdog organizations that will skeptically review such research, independent of any influence from vested interests.