Physicians spend a lot of time trying to figure out the best treatments for particular patients' problems. Doing so is often hard. In many situations, there are many plausible treatments, but the trick is picking the one most likely to do the most good and least harm for a particular patient. Ideally, this is where evidence-based medicine (EBM) comes in. But the biggest problem with using the EBM approach is that often the best available evidence does not help much. In particular, for many clinical problems, and for many sorts of patients, no one has ever done a good-quality study that compares the plausible treatments for those problems and those patients. When the only studies done compared individual treatments to placebos, and when even those were restricted to narrow patient populations unlike the patients usually seen in daily practice, physicians are left juggling oranges, tomatoes, and carburetors.
Comparative effectiveness studies are simply studies that compare plausible treatments that could be used for patients with particular problems, and which are designed to be generalizable to the sorts of patients usually seen in practice. As a physician, I welcome such studies, because they may provide very useful information that could help me select the optimal treatments for individual patients.
Because I believe that comparative effectiveness studies could be very useful to improve patient care, it upsets me to see this particular kind of clinical study get caught in political, ideological, and economic battles.
The Comparative Effectiveness Kerfuffle
Therefore I have been dismayed to see the concept of comparative effectiveness studies get trashed in the mainstream media. It is, of course, possible that some people are promoting comparative effectiveness studies for their own political, economic, or ideological reasons. And maybe such promotion deserves rebuttal. But just because some people may promote comparative effectiveness studies for the wrong reasons does not mean that such studies are a bad idea. In my humble opinion, as a physician, stifling such studies would mean losing an opportunity to improve clinical decision making and patients' outcomes.
So when an op-ed piece in the Washington Times asserted that comparative effectiveness studies "can kill," and then based that argument on a complete misunderstanding of several clinical issues and a misinterpretation of clinical research studies, I felt that I should respond.
Rebutting the DrugWonks Rebuttal
Robert Goldberg, Vice President of the Center for Medicine in the Public Interest, has just rebutted my response, in a comment on my original post, and on his own blog, DrugWonks.com.
His reply, like his op-ed, seemed to be based on misunderstandings of the clinical context and misinterpretations of the clinical research literature.
His first main point was:
1. You treat congestive heart failure with anti-hypertension drugs. Ask any doctor.
It is true that many drugs used to treat congestive heart failure (CHF) are also used to treat hypertension. These include diuretics, used to treat all forms of CHF, and angiotensin-converting enzyme inhibitors (ACEIs), beta-blockers, and angiotensin-receptor blockers (ARBs), used to treat CHF with systolic dysfunction (decreased pump function of the heart, as opposed to diastolic dysfunction, increased wall stiffness). But many drugs used to treat hypertension have not been demonstrated to benefit patients with CHF. For example, calcium-channel blockers have not been shown to be of benefit in CHF with systolic dysfunction.
Some treatments for CHF are not useful for hypertension, most notably digoxin and similar drugs, which seem to decrease symptoms and improve physical functioning (although they do not improve longevity) in CHF with systolic dysfunction.
In any case, just because some drugs work both for hypertension and for CHF does not mean that studies of those drugs in patients with one of these conditions provide results that can be extrapolated to patients with the other condition.
So Goldberg's sentence is at best partially true, but even so, it is not relevant to the arguments he made in his original Washington Times op-ed.
Regarding his second sentence: 1 - I am a doctor, licensed in multiple states, and board-certified in Internal Medicine. 2 - I have written several research reports on CHF that were published in major journals. (1-3)
His second main point was:
I will refer you to the A-HeFT study and it's design which included BiDIl with OTHER anti-hypertensives to prolong survival from congestive heart failure.
I agree that the study compared BilDil, a fixed combination of isosorbide dinitrate and hydralazine, to placebo for CHF among African-American patients. Patients continued to take other CHF medications, including diuretics, ACEIs, ARBs, beta-blockers digoxin, and spironolactone.(4) But that is irrelevant to Goldberg's argument in his Washington Times article, which seemed to be based on the notion that BilDil should be used to treat hypertension, not CHF.
His third main point was to provide a quote, taken out of context and of unclear origin, attacking the ALLHAT study with adjectives and generalities. However, this quote does not support the specific criticisms of the ALLHAT study that Goldberg made in his Washington Times article. He asserted that patients were allowed to switch from their originally assigned therapy only if they suffered a severe adverse effect, and that the primary result of the study was that diuretics were the most cost-effective drug. Neither of these assertions is supported by the published reports of the ALLHAT study, as I discussed in detail in my previous post.
Goldberg concluded with a personal attack on me, one that is fairly ridiculous given my background and experience as briefly mentioned above. Goldberg also mounted a shrill attack on the comparative effectiveness movement, asserting that it arises from people "hating drug companies," that it "places the cost of drugs over the quality of human life," is based on hostility to "corporate capitalism," and is an example of "the end justifies the means."
Perhaps some people advocate comparative effectiveness research for these reasons. I certainly don't.
Summary: The Real Reason to Do Comparative Effectiveness Research
I advocate comparative effectiveness research because it has the potential to improve the decisions that we doctors make on behalf of patients, increasing the likelihood that patients will have good effects from treatment, and minimizing the possibility of side effects.
If we allow clinical studies that realistically compare plausible therapies of conditions for various sorts of patients to become a political football, we will lose a major opportunity to improve patient care.
The current kerfuffle suggests it is time to return more control of medical research to physicians, who, after all, swore oaths to put the needs of patients ahead of other concerns, and let those with vested political, ideological, or economic interests argue over something else.
ADDENDUM (1 October, 2007) - On 27 September, I attempted to add comments on the DrugWonks.com post to which I referred above. As of today, those comments have not appeared.
Also, another post on DrugWonks.com on 27 September included "I guess taking money from organizations that switch people from one molocule to another without telling patients is okay for bloggers like Health Care Renewal." I have never suggested, and I don't think any Health Care Renewal blogger has ever suggested, that we approve of switching patients from one medication to another without their (or their physicians') permission. Furthermore, a quick perusal of Health Care Renewal would indicate that we are as skeptical and critical of health care insurers and managed care organizations as we are of pharmaceutical companies, and we have been critical of conflicts of interest related to either type of company.
References
1. Poses RM, Smith WR, McClish DK, Huber EC, Clemo FLW, Schmitt BP, Alexander-Forti D, Racht EM, Colenda CC, Centor RM. Physicians' survival predictions for patients with acute congestive heart failure. Arch Intern Med 1997;157:1001-1007.
2. Poses RM, McClish DK, Smith WR, Huber EC, Clemo FLW, Schmitt BP et al. Results of report cards for patients with congestive heart failure depend on the method used to adjust for severity. Ann Intern Med 2000;133:10-20.
3. Smith WR, Poses RM, McClish DK, Huber EC, Clemo FLW, Schmitt BP et al. Prognostic judgments and triage decisions for patients with acute congestive heart failure. Chest 2002;121:1610-1617.
4. Taylor AL, Ziesche S, Yancy C et al. Combination of isosorbide dinitrate and hydralazine in blacks with heart failure. N Engl J Med 2004;351:2049-2057.
5 comments:
I am asking this not to be contrary, but because I am sincerely interested in your thoughts. What about studies that compare treatment modalities but never test the efficacy of standard care?
Obviously, it's one thing to favor the general idea of comparative effectiveness studies, and it's another thing to figure out the details of doing such studies.
I strongly favor the concept of comparative effectiveness studies, but for such studies to help patients and be useful to doctors, we will have to deal with many devils in the details.
To address the question, ideally the study should compare all treatment options for the particular clinical problem and patient population that are currently deemed plausible, but for which there is no good evidence that one is overall better or worse than the others.
So this would involve comparing the various current versions of "standard care."
WOW! This is incredible. I was going to leave a comment on DrugWonks, but I've attempted that on several prior occasions and they have never posted a single one of them. Of course, given the site's tie to a PR firm, I'm not surprised that they don't publish dissenting views.
"In any case, just because some drugs work both for hypertension and for CHF does not mean that studies of the drugs of patients with one of these conditions provide results that can be extrapolated to patients with the other condition." -- Well said.
"Regarding his second sentence: 1 - I am a doctor, licensed in multiple states, and board-certified in Internal Medicine. 2 - I have written several research reports on CHF that were published in major journals. (1-3)" -- Ouch. Well said, again.
"Goldberg also mounted a shrill attack on the comparative effectiveness movement, asserting that it arises from people "hating drug companies, "places the cost of drugs over the quality of human life," is based on hostility to "corporate capitalism," and is an example of "the end justifies the means." -- Yes, it appears that one of Goldberg's main weapons is the rant. I've been know to rant on my site, so I can't diss ranting per se, but I can't respect a rant when it is based on ad hominem attacks that are based on little to no evidence. When all else fails, either call your opponents Scientologists or claim that they hate capitalism, drug companies, innovation, and/or success. This playbook appears to have become standard.
Should Goldberg bother to read this comment, I refer him to my earlier point -- allow dissenting comments on your site. If you are really into debate, you should probably allow more of it on your blog.
I wish I had added something new to the debate here, but I think Dr. Poses handled the debate rather well without any help from his readers. Nicely done.
I'd be curious to get your perspective on this call for how to run a comparative effectiveness research organization, which appeared in JAMA two weeks ago (http://jama.ama-assn.org/cgi/content/full/298/11/1323). Specifically, Ezekiel Emanuel of NIH and two prominent health care economists say:
"Critical to ensuring independence, objectivity, relevance, wide dissemination, and especially legitimacy of the process is a permanent stakeholder advisory board that includes representatives of patients, insurers, employers, physicians, other clinicians, and federal agencies, as well as drug and device manufacturers. Important stakeholders must be engaged in selecting technologies for evaluation, designing studies, and interpreting and disseminating results. Having key stakeholders involved in a transparent process, even one that may generate research results contrary to their interests, will foster greater support for the process, methods, and results."
Do you think drug companies should have a role in "selecting technologies for evaluation, designing studies, and interpreting and disseminating results"?
I am for independent, unconflicted, methodologically sound comparative effectiveness research.
I don't think that it is absolutely necessary that the research be entirely or substantially funded by government. But right now, I'm not sure who else might want to fund it.
If the government is going to fund it, I think the agency responsible (which ought to be AHRQ, in my humble opinion) ought to seek advice from all possible stakeholders. However, the people most knowledgeable about which tests or treatments ought to be evaluated are physicians, who have to decide what tests or treatments to do, and who are sworn to put their patients' interests first.
However, I don't think that any person or organization with a vested interest in comparative effectiveness studies coming out in a certain way, e.g., to favor their product, service, or ideology, should have ultimate responsibility for how such research is done.
That means that I don't favor having employees of, or people with financial relationships with, pharma/biotech/device companies, or insurers or managed care companies, on the board of an organization that sponsors or runs comparative effectiveness research. And I am wary of anyone purporting to represent "employers," although such roles may be difficult to define, since what employers often most want is to reduce health care costs.