Physicians spend a lot of time trying to figure out the best treatments for particular patients' problems. Doing so is often hard. In many situations there are many plausible treatments, and the trick is picking the one most likely to do the most good and the least harm for a particular patient. Ideally, this is where evidence-based medicine comes in. But the biggest problem with the EBM approach is that the best available evidence often does not help much. In particular, for many clinical problems, and for many sorts of patients, no one has ever done a good-quality study that compares the plausible treatments for those problems and those patients. When the only studies done compared individual treatments to placebos, and when even those were restricted to narrow patient populations unlike the patients usually seen in daily practice, physicians are left juggling oranges, tomatoes, and carburetors.
Comparative effectiveness studies are simply studies that compare plausible treatments that could be used for patients with particular problems, and which are designed to be generalizable to the sorts of patients usually seen in practice. As a physician, I welcome such studies, because they may provide very useful information that could help me select the optimal treatments for individual patients.
Because I believe that comparative effectiveness studies could be very useful to improve patient care, it upsets me to see this particular kind of clinical study get caught in political, ideological, and economic battles.
In particular, we have discussed a number of high profile attacks on comparative effectiveness research, which often have featured arguments based on logical fallacies. While some of the people making the attacks have assumed a conservative or libertarian ideological mantle, one wonders whether the attacks were more driven by personal financial interests. For example, see our blog posts here, here, here, and here.
Therefore, it was refreshing to see this defense of comparative effectiveness research in the opinion pages of the New York Times, which demonstrated that the issues here are really not ideological.
Drawing upon the ideas of the Harvard economist David Cutler, the Obama administration talks of empowering an independent board of experts to judge the comparative effectiveness of health care expenditures; the goal is to limit or withdraw Medicare support for ineffective ones. This idea is long overdue, and the critics who contend that it amounts to 'rationing' or 'the government telling you which medical treatments you can have' are missing the point. The motivating idea is the old conservative chestnut that not every private-sector expenditure deserves a government subsidy.
This was written by Tyler Cowen, a well-known academic economist and professor of economics at George Mason University with impeccable libertarian credentials. (Prof Cowen also blogs at Marginal Revolution.) Prof Cowen reminded us how the current health care reform debate could benefit from some clear thinking that eschews ideological posturing.
The weak link in this argument is the panel of so-called "experts". Such experts may not have ordered or used the therapies they are critiquing. Are these self-proclaimed experts? Are they selected by the scratch-your-back method?
Although this is a medical technicality, consider one instance: the fact that established guidelines for treating diastolic heart failure ever considered digitalis (a very cheap drug) a class 2b therapy is reason enough to believe that the "expert" panel did not take care of patients with that condition.
That drug causes the problem, yet it had been listed as a treatment.
It was not worth the effort to complain, but that would be different if the envisioned expert panel were to deem it a "comparatively effective" therapy.
Having sat with a friend who knew he was terminal, and having watched him fight the oncologist for pain medications over additional "treatments," I believe we should welcome this type of guidance. End-of-life expenses are among our greatest financial burdens, often out of proportion to the preventive measures or treatments that precede that end stage.
It really would not hurt to take a page from the UK's NICE and look at what we are paying for and what we are getting in terms of treatment and results.
In reading the business news, it seems that outside of the US, drug and device companies meet these financial guidelines with their newest drugs and devices.
Steve Lucas
All good points if/when and only if/when the CER is conducted carefully, is thoroughly peer-reviewed and widely criticized, and is never employed on a given issue until the above has been achieved.
Not much to ask, eh?
David Smith (whichever David Smith you might be),
Of course, and so should any clinical research study be conducted carefully, and be thoroughly peer-reviewed and widely criticized.
We have posted again and again about studies that were, in contrast, manipulated to make sponsors' products appear in a more favorable light; ghost-written by professional writers hired by the sponsors, and then fronted by "key opinion leaders" also hired by the company; disseminated in fora that appeared to be impartial and academic, but really were run according to sponsors' wishes; and worse, suppressed because their results did not show the sponsors' products in a favorable light.
If our clinical research evidence base had not been so thoroughly corrupted by those with vested interests in having the research turn out in their favor, maybe we wouldn't have such a pressing need for impartial, well-conducted, and honestly reported comparative effectiveness research.
I'm pretty sure we're "on the same page" here...
For the record, I'm the elderly David Smith who has been designing and developing software to support Federally-funded behavioral research for far too many years to be sanguine about either software-as-a-solution or research.
So, what can we do to heal the problems of research quickly enough that the Emergency Legislation That Must Be Enacted Yesterday Or We'll All Die doesn't start killing us with those corrupted CER results?
That may be a rhetorical question.
One other thing - IMHO there's plenty of corruption that has nothing to do with businesses pushing products. Lazy and/or ambitious academics and/or journal editors/journalists are quite capable of poisoning the well on their own.
And yes, I am aware of the risk of seeming an embittered old man...but lives are at stake.
Comparative effectiveness research can work if done scientifically. The latest calls for the use of (uncontrolled) EHR data for such studies leave me dubious. I wrote about this issue here (PDF).
-- SS