Thursday, December 30, 2004

Pitfalls of Single-Disease Solutions

It has been fashionable for health care managers and health policy types to foist "single-disease solutions" on doctors. The best examples are single-disease management strategies, based on single-disease management guidelines. Such solutions may be attractive in their simplicity, especially to managers who may not understand that patients and health care are actually very complex.
To me as a clinician, single-disease management programs make little sense for most of my patients. A patient with one chronic disease usually has several. Yet single-disease management strategies often fail to adjust their management of the target disease to take into account the patient's other diseases and their treatments.
The intellectual heritage of these single-disease solutions seems to be the practice variation studies of the last century. They showed again and again that the rates of use of specific management options varied from place to place. Many of their authors quickly concluded that these variations were due to physicians' capriciousness, not physicians' appropriate responses to variability in patients' clinical characteristics and preferences and values. The unproven argument that practice variation reflected "seemingly random variations" in physicians' strategies was a touchstone for managed care advocates in the 1980's. (The quote was from Paul Ellwood, founder of the Jackson Hole Group, and one of the most vocal managed care proponents. See: Ellwood PM. Shattuck lecture -- outcomes management: a technology of patient experience. N Engl J Med 1988; 318: 1549-1556.)
Two important recent articles question single-disease guidelines, and the disease management programs and other quality improvement schemes based on them. In today's New England Journal of Medicine is "Potential Pitfalls of Disease-Specific Guidelines for Patients with Multiple Conditions," by Mary E. Tinetti et al. This article elegantly, if somewhat too politely, "raises the question of whether what is good for the disease is always best for the patient." Its starting premise is that many patients, particularly the elderly, have multiple chronic diseases, e.g., 20% of Medicare patients have more than five chronic conditions. However, even the most rigorous evidence-based disease-specific guidelines barely take into account the applicability of the evidence to patients with multiple co-morbid diseases, or how the presence of other diseases, and their treatments, might modify the benefits and harms of treatments for the target diseases.
Thus, whether the guidelines' recommendations really should apply to patients with other chronic diseases is usually arguable. Nonetheless, such recommendations may be rigidly applied to such patients, because "one of the hallmarks of quality assurance programs is a reduction in the variation of practice patterns among providers," whatever the cause of such variation might be. Again, the notion that variation is bad and must be stamped out at all costs arises from the early practice variation studies, and Ellwood's argument above that practice variation is due to physicians' "randomness."
Furthermore, in the second article, Kravitz and colleagues note the problem of "guideline creep: the evolution of genuinely flexible clinical recommendations into more rigid practice standards" (see: Kravitz RL, Duan N, Braslow J. Evidence-based medicine, heterogeneity of treatment effects, and the trouble with averages. Milbank Quarterly 2004; 82: 661-687). They described how US Department of Veterans Affairs (VA) guidelines advocate screening even for patients "with severe comorbid illnesses or strong preferences against screening, [for whom] the risks of colorectal cancer screening outweigh the benefits." Worse, the VA financially penalizes hospitals with low screening rates, no matter how many of their patients would not benefit from screening. Kravitz et al concluded with sensible recommendations about practice guidelines, notably that guidelines should be "promulgated in a spirit of humility, generally eschewing strong incentives or punitive sanctions, at least until compelling evidence for the absence of significant HTE [heterogeneity of treatment effects] is acquired."
One can only hope that health care managers will pause their headlong rush into disease management and other single-disease solutions long enough to adopt such "a spirit of humility."

4 comments:

InformaticsMD said...

The rigid thinking about "practice variation elimination" and "single-disease solutions" stems in part from the invasion of medicine by non-medical "process engineers" and the ideology of TQM as exemplified by Deming (and abused by those such as McNamara):

Henry Mintzberg, professor of management at McGill University, wrote an article called "Managing Government, Governing Management" for Harvard Business Review in May-June, 1996. In it, he said:

"Next, consider the myth of measurement, an ideology embraced with almost religious fervor by the Management movement. What is its effect in government? Things have to be measured, to be sure, especially costs. But how many of the real benefits of government activities lend themselves to such measurement? Some rather simple and directly delivered ones do -- especially at the municipal level -- such as garbage collection. But what about the rest? Robert McNamara's famous planning, programming, and budgeting systems in the U.S. federal government failed for this reason: Measurement often missed the point, sometimes causing awful distortions. (Remember the body counts of Vietnam?) How many times do we have to come back to this one until we finally give up? Many activities are in the public sector precisely because of measurement problems: if everything was so crystal clear and every benefit so easily attributable, those activities would have been in the private sector long ago."

Unfortunately for the TQM aficionados, "Black Belt" Six-Sigma gurus, and other representatives of "management by magazine", the human being is not an assembly line.

Data-driven medicine is a good thing, as exemplified by judicious use of EMR's, clinical trials and clinical data analysis, and measures used properly and wisely by those who have appropriate backgrounds.

Just as a symphony cannot be performed very well by a committee of non-musicians, so the quality of medicine cannot be well determined by legions of non-medical process engineers.

Silverstein's rule: a thousand generic 'specialists' following the finest of process will always be outperformed by one person who knows what the hell they're doing.

Anonymous said...

I have linked to this comment on my blog - db's Medrants. This issue is the fundamental issue for health care evaluation today. Nicely written and important!!!

Anonymous said...

Single-disease solutions are best "defended" by profiling patients as "somatics," demeaning them with disrespect and indifference, while billing the heck out of the involved insurances. You might want to ponder a post on Medpundit in Sept '04. A deaf patient on Medicaid required a translator and "presented many problematic" matters (money? Medicaid referrals? more marginal referrals?) and was vented on unmercifully as a "somatic." I had, and still have, all kinds of socio-economic (cultural) questions concerning it.

Rupa said...

As an informatics researcher, I can see how the condition-specific results that are so common in the literature are difficult to apply. Practically speaking, informatics researchers are often constrained by what is measurable and by how many physicians and patients they can recruit to be able to report an effect. This is why (as I commented on the Greenhalgh post) I think it's important to acknowledge the value of multiple-setting and/or multiple-condition studies that are not able to control so many variables, but can instead offer observations and hopefully insight into how medical problem solving can be supported through well-designed tools and processes.