Tuesday, December 21, 2004

The End of Blockbuster Drugs?

In "The End of Blockbuster Drugs", Adam Feuerstein of TheStreet.com writes:

The FDA approved Iressa in 2003 under a program that allows drugs for serious, life-threatening diseases like cancer to be approved based on certain surrogate markers for clinical benefit ... Under the so-called subpart H regulatory guidelines, the FDA approved Iressa with the condition that AstraZeneca would run another clinical study to show that Iressa's ability to shrink tumors would lead to improved survival, which is the gold standard in cancer drug efficacy ...

AstraZeneca ran that confirmatory study, and it failed. Now, the FDA has to decide whether to pull Iressa off the market ... The Iressa mess could have been avoided back in 2003 if AstraZeneca had conducted a randomized, controlled study (Randomized Clinical Trial or RCT) of the drug, such as comparing Iressa with a placebo or best supportive care to see if the drug improved survival. If that study had been conducted and the results had turned out as those released Friday, the FDA probably wouldn't have approved Iressa, and therefore, wouldn't be facing a difficult decision today. AstraZeneca could have put together a controlled study of Iressa; after all, that's exactly what OSI and Genentech did.

My point here is not to be a Monday morning quarterback. Instead, looking ahead, I think that some top FDA officials will use the Iressa situation to argue even more strongly against the use of single-arm, uncontrolled studies as the basis for future drug approvals. While everyone wants to see the FDA work quickly to approve life-saving drugs, we should also be concerned when the agency approves drugs that turn out to be nothing more than expensive placebos, and often with dangerous side-effects, to boot.

Caution is warranted in following this line of thought against non-RCT studies. There is a marked asymmetry between evaluating a drug for therapeutic benefit and evaluating it for risk.

When a successful RCT shows a therapeutic drug effect at p<.05, this means that if the drug were actually useless, results this strong would be expected in fewer than 1 trial in 20. The downside of such a false positive is limited: at worst, roughly 1 approved drug in 20 (or fewer) is simply a waste of money. This is the basis for using the RCT, with the significance level set at p<.05, in the drug approval process.
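A quick simulation makes this concrete (a sketch with made-up trial sizes and response rates, not any real drug's data): if a useless drug, identical to placebo, were tested in many independent RCTs, about 1 in 20 would still clear the p<.05 bar by chance alone.

```python
import math
import random

def two_prop_z_pvalue(x1, n1, x2, n2):
    """Two-sided p-value for a two-proportion z-test (normal approximation)."""
    p_pool = (x1 + x2) / (n1 + n2)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    if se == 0:
        return 1.0
    z = (x1 / n1 - x2 / n2) / se
    # two-sided tail probability under the standard normal: 2 * (1 - Phi(|z|))
    return math.erfc(abs(z) / math.sqrt(2))

random.seed(0)
n_trials, n_per_arm, true_rate = 2000, 200, 0.30  # both arms identical: drug is useless

false_positives = sum(
    two_prop_z_pvalue(
        sum(random.random() < true_rate for _ in range(n_per_arm)), n_per_arm,
        sum(random.random() < true_rate for _ in range(n_per_arm)), n_per_arm,
    ) < 0.05
    for _ in range(n_trials)
)
# Fraction of useless drugs that nonetheless look "significant" -- close to the nominal 0.05
print(f"'significant' useless drugs: {false_positives / n_trials:.3f}")
```

That 5% false-positive rate is exactly the "waste of money" downside described above, and it is a cost society has decided it can live with.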

However, if some other, non-RCT study (e.g., a retrospective analysis of insurance-company records) suggests adverse effects but does not reach p<.05, dismissing that analysis is quite cavalier, because the downside is potentially serious. This is especially true for "blockbusters" with millions of users, where even a small relative risk translates into a large absolute number of people affected (e.g., 100,000 MIs and CVAs).
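Some illustrative arithmetic (all numbers hypothetical, chosen only to match the order of magnitude in the example above) shows how the volume effect works, and why a typical efficacy-sized trial would barely register the same excess risk:

```python
# Back-of-envelope arithmetic with hypothetical numbers: a modest excess
# absolute risk becomes a large absolute count in a blockbuster population.
users = 20_000_000                 # assumed number of people taking the drug
excess_risk = 0.005                # assumed 0.5% excess absolute risk of MI/CVA

excess_events = round(users * excess_risk)
print(f"excess events in the user population: {excess_events:,}")   # 100,000

# The same excess risk in a typical efficacy trial is nearly invisible:
trial_size = 2_000                 # assumed per-trial enrollment
expected_excess_in_trial = trial_size * excess_risk
print(f"expected excess events in one trial: {expected_excess_in_trial:.0f}")  # 10
```

Ten extra events scattered through a 2,000-patient trial can easily hide inside background noise, while the same per-patient risk produces a public-health catastrophe at blockbuster scale.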

The FDA analysis of a Kaiser Permanente database reportedly showing that 27,785 heart attacks and sudden cardiac deaths might have occurred due to VIOXX was controversial. In an interview in the Boston Globe, Merck CEO Raymond Gilmartin disputed the study's findings on the grounds that it was based on a review of medical records, not a clinical trial. "You can't take a study like this and take a patient population and extrapolate those kinds of numbers," he said. "It's just not valid to do that." Such retrospective studies are subject to confounding factors in the data.

This does not make such studies automatically wrong, however, and pharmaceutical companies and regulatory agencies that ignore such studies on a dogmatic basis (due to a belief in the absolute ascendancy of controlled clinical trials) do so at their own peril. (In fact, in this case the retrospective studies were shown to be correct by later RCTs, and the resulting VIOXX withdrawal has significantly harmed a company with a century's reputation for excellence.)

Such non-RCT, retrospective studies need to be factored into the overall risk portfolio of drugs to be consumed by millions, where even a small risk of major side effects could lead to catastrophe due to volume. In recent testimony to a U.S. Senate committee, Dr. David Graham of the FDA said the FDA's Office of New Drugs "unrealistically maintains a drug is safe unless reviewers establish with 95 percent certainty that it is not" [via clinical trials]. That rule does not protect consumers, Graham told the Senate committee. "What it does is it protects the drug," he said.
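A rough power calculation (normal approximation, with hypothetical event rates) illustrates Graham's point: a trial sized for efficacy has little chance of "establishing with 95 percent certainty" that a rare adverse effect is real, even when the drug doubles the risk.

```python
import math

def norm_cdf(x):
    """Standard normal CDF via the complementary error function."""
    return 0.5 * math.erfc(-x / math.sqrt(2))

# Hypothetical rates: baseline serious-event risk of 0.1% over the trial
# period, and a drug that doubles it to 0.2%.
p0, p1 = 0.001, 0.002
n = 2_000                      # assumed per-arm size of a typical efficacy trial

se = math.sqrt(p0 * (1 - p0) / n + p1 * (1 - p1) / n)
z_alpha = 1.96                 # two-sided test at the 5% significance level
power = norm_cdf(abs(p1 - p0) / se - z_alpha)
print(f"power to detect a doubled rare risk at n={n}: {power:.0%}")

# Per-arm enrollment needed to reach a conventional 80% power instead:
z_beta = 0.8416                # z-score for 80% power
n_needed = (z_alpha + z_beta) ** 2 * (p0 * (1 - p0) + p1 * (1 - p1)) / (p1 - p0) ** 2
print(f"per-arm n needed for 80% power: {math.ceil(n_needed):,}")
```

Under these assumptions the trial detects the doubled risk only a small fraction of the time, and tens of thousands of patients per arm would be needed to do better. Demanding trial-grade certainty before acting on a safety signal therefore stacks the deck in the drug's favor, just as Graham argued.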

Let's hope the FDA understands the asymmetry between testing drugs for efficacy via small RCTs and monitoring them for adverse effects in large populations.


Roy M. Poses MD said...

Unfortunately, it gets even more complicated, and there really is no easy way to assess adverse effects of drugs outside of controlled trials. The biggest problem with observational studies of treatments (like drugs) is selection bias. Physicians are trained to select the most suitable treatment for each individual patient. Therefore, patients they treat with a particular drug for a specific problem may systematically differ from patients they treat with another drug. In an observational study, such systematic differences among patients given different therapeutic options may have as much effect on the outcomes of interest as the options themselves, or more. This isn't a reason to completely dismiss all observational studies of treatments, but it is a reason to interpret the results very carefully. Multivariate analyses can adjust for such selection bias, but it is often difficult to figure out, and then measure, which patient characteristics influenced physicians' decisions to use the treatments of interest.
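A toy simulation (hypothetical rates throughout, not modeled on any real drug) shows the selection-bias mechanism Dr. Poses describes: when sicker patients preferentially receive a drug, a naive comparison makes a harmless drug look deadly, while stratifying on the confounder removes the artifact.

```python
import random

random.seed(1)

# Confounding by indication: severity drives both treatment choice and outcome;
# the drug itself has no effect on mortality at all.
N = 200_000
patients = []
for _ in range(N):
    sick = random.random() < 0.5                          # underlying severity
    treated = random.random() < (0.8 if sick else 0.2)    # doctors treat the sick
    death_risk = 0.20 if sick else 0.02                   # outcome depends only on severity
    died = random.random() < death_risk
    patients.append((sick, treated, died))

def death_rate(rows):
    return sum(died for _, _, died in rows) / len(rows)

# Naive comparison: treated patients die far more often -- a spurious "harm" signal.
naive_treated = death_rate([p for p in patients if p[1]])
naive_untreated = death_rate([p for p in patients if not p[1]])
print(f"naive: treated {naive_treated:.3f} vs untreated {naive_untreated:.3f}")

# Stratifying on severity (the confounder) shows no treatment effect in either stratum.
stratified = {}
for sick in (False, True):
    t = death_rate([p for p in patients if p[0] == sick and p[1]])
    u = death_rate([p for p in patients if p[0] == sick and not p[1]])
    stratified[sick] = (t, u)
    print(f"severity={sick}: treated {t:.3f} vs untreated {u:.3f}")
```

Real analyses face the harder version of this problem: as noted above, the confounder ("severity" here) is often unmeasured or only partially measurable, so it cannot simply be stratified away.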

InformaticsMD said...

It comes down to the issue of how sure society wants to be about the safety of a drug. I, for one, believe that long-term, systematic surveillance via widespread EMR -- if and when that goal is achieved -- will enable superior surveillance to what exists today, if only by serving as an earlier warning flag for adverse events. However, I think much more than that is possible with large datasets from EMRs on hundreds of thousands (or millions) of patients, which could contain comprehensive patient histories and data for analysis.

As another example of the need for better methods of drug surveillance, in "Naproxen study halted by NIH" (Washington Post, http://www.philly.com/mld/philly/living/health/10465103.htm?1c ) it is reported that:

*** "This is a very confusing situation," said Sandra L. Kweder, deputy director of the Food and Drug Administration's Office of New Drugs, speaking to reporters at a hastily convened telephone news conference last evening. Naproxen has been on the market since 1976, Kweder noted, and "this is the first evidence we've seen that suggests there is a risk." She and other officials acknowledged, however, that no one seems to have studied the long-term safety of naproxen or, for that matter, any of the other popular painkillers known as nonsteroidal anti-inflammatory drugs. ***

There's likely little interest in making the significant investments needed to study OTC drugs and generics, so other mechanisms for surveillance of "consumer medicines" would be very helpful.