Saturday, April 07, 2007

Annals of Internal Medicine Shows Appropriate Skepticism About a Commercially Sponsored Clinical Trial

The Annals of Internal Medicine just published a clinical trial that compared a new type of anti-diabetic drug, exenatide (Byetta, from Eli Lilly & Co. and Amylin), an incretin mimetic, to placebo for patients already being treated for Type 2 diabetes. [Zinman B, Hoogwerf BJ, Durán García S et al. The effect of adding exenatide to a thiazolidinedione in suboptimally controlled type 2 diabetes: a randomized trial. Ann Intern Med 2007; 146: 477-485.]

The article was typical of the commercially sponsored clinical trials that appear in many journals. The Annals, however, made a publishing decision that was atypical of how many journals handle such articles. It included an accompanying editorial that was appropriately critical of the study design and methods, and appropriately attuned to how commercial research sponsors, such as pharmaceutical companies, biotechnology companies, and device manufacturers, may manipulate how research is designed, carried out, analyzed, and reported to serve their vested interests.

We have blogged about such research manipulation before. See our posts about Richard Smith's catalog of manipulation strategies, about specific strategies used in a study of muraglitazar, about David Healy's description of manipulation strategies used in studies of SSRIs (selective serotonin reuptake inhibitors), and about Wally Smith's summary of the creation of pseudoevidence. But commercially sponsored articles whose design, execution, and analysis have been manipulated to serve their sponsors' interests slip into print regularly, unaccompanied by any notice of how they serve commercial purposes.

The Clinical Trial of Exenatide

In summary, the trial by Zinman et al compared exenatide to placebo in patients already receiving either rosiglitazone (Avandia, by GlaxoSmithKline) or pioglitazone (Actos, by Takeda Pharmaceuticals), sometimes with metformin as well, but who were not necessarily trying to modify their lifestyles (either their diet or their exercise schedule). A total of 233 patients were randomized and followed for 16 weeks. Patients in the exenatide group had a statistically significantly lower hemoglobin A1c level, a measure of overall control of blood sugar, at the end of the trial. They also had statistically significantly higher rates of nausea and vomiting, and higher rates of dropping out due to adverse events.

The Editorial

Accompanying the trial was an editorial. [Malozowski S. Exenatide in combination therapy: small study, big market, and many unanswered questions. Ann Intern Med 2007; 146: 527-528.] The editorial amounted to a rigorous critical review of the trial. It pointed out most of the trial's many design and methodologic problems, and explained why its results were less than earth-shaking.

These problems included:

  • Difficulty determining to whom the results would apply - The patient population consisted of people whose diabetes was not under excellent control, but who were not on optimal medication regimens and were not trying to improve their diets or increase their exercise. Thus, "we simply don't know whether patients optimally treated with diabetes education, diet, TZDs, [thiazolidinediones] and metformin will receive as much benefit from exenatide as the paper reports," since such patients were not included in the study. Nor could the study predict whether adding exenatide would work better than simply increasing the doses of conventional medication, or starting a diet-modification or exercise program.
  • Short duration - Although diabetes is a chronic problem, the study did not assess the new medication in long-term use. Thus, it was not designed to tell whether the drug would work in the long run, whether it would affect patients' longevity, development of complications, symptoms, or health status in the long run, or whether it would lead to adverse effects in the long run that did not occur in the short run.
  • Small sample size - "Small and short studies provide a false sense of safety, because common severe drug reactions may not occur in the condensed timeline and the limited number of patients." Even so, patients who received exenatide had a relatively high rate of nausea and vomiting, often leading to discontinuation of the drug. So the study did not suggest that any benefits of the drug clearly outweighed its harms. [Note that the editorial did not mention that patients on exenatide also had higher (but not statistically significantly higher) rates of hypoglycemia, or low blood sugar, than did patients given placebo. Hypoglycemia, which can be serious, is the main hazard of aggressive treatment of blood sugar. One patient on exenatide also developed an unusual, possibly allergic reaction, allergic alveolitis. The study was clearly too small to predict the rates of these effects that would occur if the drug were used in large numbers of patients. Thus, there is reason to suspect that the benefit/harm ratio of this drug is even less favorable.]

Furthermore, the editorial pointed out the relationship between the study's commercial sponsorship and its flawed design. Malozowski stated, "the study was designed, conducted, and analyzed by employees of the manufacturer in collaboration with academicians...." (Note also that the three academicians who contributed to the study all worked part-time for the manufacturers as consultants and/or speakers.) So, "the design and reporting of Zinman and colleagues' study reminds us that the manufacturers control the flow of information about its product. By virtue of FDA approval for the combination of exenatide and TZDs, the data obtained in the study can lead to enormous financial benefits to the sponsor. Millions of patients received TZDs and metformin - now physicians may consider adding exenatide. Great power requires great responsibility. Physicians and patients need answers to the many questions raised by this small study."

In my experience, it is rare to see a commercially sponsored study, whose flaws all seemed to increase the likelihood of finding results favoring the sponsor's vested interests, published with an accompanying commentary that points out those flaws and their relationship to those vested interests.

Given the great efforts commercial sponsors make to get studies with results favorable to their products published, it would be nice if all journals paired such articles with appropriately critical commentaries.

Kudos to the Annals of Internal Medicine for showing the way forward.

5 comments:

. said...

This posting is near and dear to my heart. It is very brave for the Annals of Internal Medicine to come out in this way about PHARMA research. I am sure the article was reviewed by Glasgow in the context of his Refit Model for generalizability. Dr. Glasgow is such a good soul and a brilliant man that I am sure he was behind the editorial. I have not seen the original source, the article, or the editorial. I wanted to write this UNBIASED by the truth, just from my experience.

I actually tried Byetta myself! I found the side effects unbearable and the MD did not involve herself in my treatment using this drug in a best practices manner, so I stopped the medication at large monetary cost AFTER INSURANCE REIMBURSEMENT.

The Annals is a very good journal, but there are many other PHARMA-advertising journals that would take the article without many changes. So these papers can be peer reviewed and accepted in lower-level journals. Thus they become "truth" without editorials.

This study has many obvious flaws that were well stated. But there are several OTHER major statistical problems with this paper.

Due to the differential dropout rate (Byetta makes most people nauseous), statistics need to be performed that account for the fact that people who "survive" Byetta are fundamentally different in some way from those who DO NOT complete the trial for ANY reason. This is a very tricky issue THAT COULD HAVE BEEN ANTICIPATED, and the study could have been designed to account for dropout due to nausea.

HEY PHARMA ANALYSTS, WHEN ARE YOU GOING TO REALIZE THAT THE CHARACTERISTICS (covariates, risk factors, endogenous factors) OF THOSE WHO ARE COMPLETERS (patients who complete the trial) are FUNDAMENTALLY DIFFERENT FROM THOSE OF PATIENTS WHO DROP OUT FOR ANY REASON (e.g., those who refuse treatment, cannot be contacted, or can't tolerate the medication)? This is one of the dirty little secrets that they don't want to address. Everyone has some set of lifestyle changes or a baseline level of diet, self-care, and feelings about the role of medication in controlling their blood sugar. These variables need to be covariates in the analyses. Randomization doesn't necessarily control for this.
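
[As a rough illustration of the completer-versus-dropout point above, here is a minimal Python sketch. The data file and column names are entirely hypothetical, invented for illustration only; this is not the trial's actual dataset or analysis.]

```python
import pandas as pd
from scipy import stats

# Hypothetical baseline table: one row per randomized patient.
# Invented columns: completed (bool), hba1c_baseline, bmi, age, diabetes_duration.
df = pd.read_csv("baseline_characteristics.csv")

completers = df[df["completed"]]
dropouts = df[~df["completed"]]

# Compare each baseline covariate between completers and dropouts.
# Systematic differences suggest the completer subset is not exchangeable
# with the full randomized population, so a completers-only analysis is biased.
for col in ["hba1c_baseline", "bmi", "age", "diabetes_duration"]:
    t_stat, p_value = stats.ttest_ind(
        completers[col], dropouts[col], nan_policy="omit"
    )
    print(
        f"{col}: completers {completers[col].mean():.2f}, "
        f"dropouts {dropouts[col].mean():.2f}, p = {p_value:.3f}"
    )
```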

I went to the FDA meeting for PHASE 1-3 trials, and they almost kicked me out.
The reason: the FDA would only approve baseline severity as a covariate. I told them that was ludicrous in real life.

To do the RIGHT THING, non-response propensity weights needed to be used. In addition, covariates also need to be accounted for in the model to evaluate the treatment effects. Did they use random coefficient regression techniques? In my opinion, and this could be debated, this type of modeling procedure (weighted, with random slopes and intercepts) would be the only appropriate statistical analysis methodology for this design. GEE is not the optimal model; however, it is what the FDA recommends in their ADVANCED classes. My methods (and those of many others – hat tip to Hedeker) would allow the researchers to look at people as having different baseline characteristics (e.g., HbA1c, BMI, age), or INTERCEPTS, and different trajectories, or SLOPES, over the study period. They would also allow all the data to be used in the analyses, even from patients with only baseline data (in which case weighting is not needed).
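
[To make the random-intercept, random-slope suggestion concrete, here is a minimal sketch using Python's statsmodels. The file name, column names, and covariates are hypothetical, and this is one reasonable way to implement the kind of mixed-effects model described above, not the trial's actual analysis.]

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format data: one row per patient per assessment.
# Invented columns: patient_id, week, hba1c, treatment (0 = placebo,
# 1 = exenatide), bmi, age. A mixed model uses all available rows, so
# patients with only a baseline measurement still contribute.
df = pd.read_csv("trial_long_format.csv")

# Random intercept and random slope over time for each patient,
# with baseline-style covariates included as fixed effects.
model = smf.mixedlm(
    "hba1c ~ treatment * week + bmi + age",
    data=df,
    groups=df["patient_id"],
    re_formula="~week",
)
result = model.fit()
print(result.summary())  # the treatment:week term compares average trajectories
```

[The treatment-by-week interaction is the term of interest: it estimates how the average HbA1c trajectory under exenatide differs from the trajectory under placebo, while letting each patient have their own intercept and slope.]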

What percentage of data was collected (the number of questionnaires you actually have, divided by the number of patients randomized times the number of assessments)? What is the coefficient of acceptability of treatment (how many folks were approached, and how many agreed to consent)? How about the tolerability of the treatment? That seemed pretty bad.
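
[The data-completeness figure asked about here is simple arithmetic. A sketch follows; aside from the 233 randomized patients reported in the trial, the counts are made up purely for illustration.]

```python
def data_completeness(forms_collected: int, n_randomized: int, n_assessments: int) -> float:
    """Fraction of planned questionnaires actually collected."""
    return forms_collected / (n_randomized * n_assessments)

# 233 patients were randomized in the trial; the number of planned
# assessments and the number of forms collected below are hypothetical.
print(f"{data_completeness(forms_collected=850, n_randomized=233, n_assessments=5):.1%}")
```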

I would be interested in knowing WHO they are attempting to generalize to. All races? All ages? All socioeconomic levels? All levels of baseline type 2 diabetes severity?
What are their coefficients of generalizability or their REFIT numbers?

The reason these types of PHARMA studies are designed and analyzed in a subpar manner is that the people designing, conducting, and analyzing these studies are UNINFORMED. I don't think they actually do this on purpose. I believe it is due to EGO and IGNORANCE. Having been part of MANY PHARMA studies like this one, I can honestly say that the crappy design is usually the brainchild of the content PI. That MD usually has had NO training WHATSOEVER in experimental design or statistics, as these courses are NOT taught as part of medical school. The MD designs the study and it is PAID FOR by a drug company, which is very DEFERENTIAL to the MD's design.

For the statistics, they usually hire an individual who is strictly a biostatistician and not a PhD with content expertise in the area WHO IS ALSO a statistician (like me).

The stat person doesn't get to HAVE ANYTHING AT ALL TO DO with the design of the study, and only gets the data when the study is completed (sometimes it's all washed by then). Then this biostatistics expert does what is asked of him/her AND DOES NOT QUESTION AUTHORITY. This is because the VAST majority of biostat people do not speak English as a first language, are usually from India or China or Korea, and have little diabetes or psychiatry content knowledge; to the analyst, the data could be numbers from ANY study.

I teach biostat PhDs how to analyze data for psychiatric health services research. Many of my publications are in the field of diabetes and its intersection with psychiatry. Universally, not only do the PhDs know NOTHING about the terminology, theories, or DSM coding for psychiatry, but IN GENERAL they NEVER question the PIs. NOT AT ALL. This type of blind acceptance of authority is problematic.

Now add in the scientific REP from the DRUG Company. This individual's role is to collect a portfolio of research projects that positively promote their products. These individuals GRAB the forms or have them uploaded as soon as the patient completes them. THEY OFTEN DON'T LET THE PIs EVEN KEEP THEIR OWN DATA, let alone SEE the data from the other experimental sites (if there are any).

I have argued often with these people and have been allowed to see the data. Then I have hours and hours of phone conferences while I try to tell them how to analyze the data correctly. Then it usually gets shot up the ladder in the DRUG Company to the top OLD MAN who is in charge of data. This MAN is likely to have NOT read a stat paper in years, so he is working about 20 years behind the times. SIGH!!

On the other hand, I give everyone a hard time and submit their data to rigorous exploration and analysis. It is for that reason that RAND gets so much business at a very high price. Most researchers don't want a person like me reviewing their study, because I tell it like it is, just like these editors... bravo!!

Anonymous said...

Your points are fair. However, there is also the reality that the FDA will often not allow many of the things clinicians would like to have incorporated into the study. Granted, the industry has done a pretty good job of destroying its own credibility with the medical community, but there is still the problem of what the FDA will and won't allow. Consider all the Hep C studies conducted without growth factors. The FDA won't allow their use. Nada. Now, in the real world, who doesn't use them?

Enough said.

. said...

YOU are exactly correct. I have worked on several Hep C psychiatry studies, in the role of doing the psychiatric diagnosis and assessment of distress over time.

The FDA acknowledged at the last ASA meeting in Seattle that this information was way out of their control. This is true of statistics designed to correct bad designs. I would like to see the FDA admit to considering data that were rigorously examined IN MORE THAN ONE WAY.

Thanks so much for engaging me.

Dr. BK

#1 Dinosaur said...

Great post, but please:

But by publishing it with an editorial that was appropriately critical of the study design and methods, and appropriately attuned to how commercial research sponsors, such as pharmaceutical companies, biotechnology companies, and device manufacturers, may manipulate how research is designed, carried out, analyzed, and reported to serve their vested interests.

Make this into a sentence. All you have now is a clause:

"But by publishing it with an editorial that was...."

Everything from "appropriately critical" to "vested interests" are merely modifiers of the editorial. What exactly was it they did?

All you have to do is add the words, "they done good."

Roy M. Poses MD said...

The introductory paragraphs of the post have been rewritten to fix the problem noted by #1 Dinosaur, and to frame the post better.

Thanks for the editing help.

Note that this is a blog, written by volunteers. We do not have editors, and write on our own time. So spelling, grammatical, and syntax errors do get by us.