Thursday, April 09, 2009

Have we suffered a complete breakdown in the scientific method with regard to EHR and clinical IT?


I read announcements like this with trepidation:

http://govhealthit.com/articles/2009/03/31/sebelius-confirmation.aspx
"The goal," Sebelius said, "is to provide every American with a safe, secure electronic health record by 2014." The nominee also endorsed efforts to use data gleaned from electronic medical records to conduct "comparative effectiveness research" (CER) to provide information on the relative strengths and weaknesses of alternative medical interventions to health providers and consumers.


Recovery Act funds have been allocated to NIH specifically for comparative effectiveness research. NIH has further specified the definition of CER as:

"[A] rigorous evaluation of the impact of different options that are available for treating a given medical condition for a particular set of patients. Such a study may compare similar treatments, such as competing drugs, or it may analyze very different approaches, such as surgery and drug therapy."

NIH states that such research may include "the development and use of clinical registries, clinical data networks, and other forms of electronic health data that can be used to generate or obtain outcomes data as they apply to CER."

The problems I foresee concern the word "rigorous" in the above definition.

The use of EHR data to reliably detect uncommon (but strong, discrete) early warning signals from a single drug or treatment -- to then be subject to more rigorous study with reasonable controls -- is itself a Medical Informatics "Grand Challenge." An example would be finding VIOXX's association with myocardial infarction earlier than we did, via an EHR-based automated postmarket surveillance process.

Doing this is a "grand challenge" due to the nature of EHR data, which is as far from "clinical trials clean" as possible. It is what might be called highly uncontrolled. The statistical methods needed to reliably pull signals out of the muck for even a single drug are still exploratory, and the problems are formidable if one wants to stay scientifically sound. I wrote about the experimental nature of such efforts a few years ago here, and believe an effort got underway at U. Indiana/Regenstrief to test such methodologies for postmarket surveillance about the same time.
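To make concrete what even the *simplest* signal-detection statistic in such a surveillance process looks like, here is an illustrative toy sketch (my own construction with invented counts, not any actual surveillance system's method) of a reporting odds ratio computed from a 2x2 drug/event table. Note how much real-world machinery this toy omits: confounder adjustment, multiple-comparison correction, data-quality screening.

```python
# Toy disproportionality analysis: a simplified sketch of one kind of
# signal-detection statistic an EHR-based postmarket surveillance
# process might compute. All counts below are invented for illustration.
import math

def reporting_odds_ratio(a, b, c, d):
    """2x2 table: a = drug + event, b = drug + no event,
    c = other drugs + event, d = other drugs + no event."""
    ror = (a / b) / (c / d)
    # Approximate 95% CI on the log scale (standard delta-method formula).
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)
    lo = math.exp(math.log(ror) - 1.96 * se)
    hi = math.exp(math.log(ror) + 1.96 * se)
    return ror, lo, hi

# Hypothetical counts: myocardial infarction among patients on drug X
# versus all other drugs in the same EHR population.
ror, lo, hi = reporting_odds_ratio(a=40, b=960, c=200, d=19800)
print(f"ROR = {ror:.2f}, 95% CI ({lo:.2f}, {hi:.2f})")
```

The arithmetic is trivial; the grand challenge lies entirely in whether the four counts fed into it, mined from uncontrolled EHR data, mean what the analyst thinks they mean.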

Now we have had what appears to be a leap of faith and logic of irrationally exuberant proportions, and probably a deviation from sound science as well. The government has announced enthusiasm for EHR data-based comparative effectiveness research (CER) not to aid science, but to cut costs (implying skipping the rigorous confirmatory phases) through elimination of more costly drugs and treatments deemed less effective or at effectiveness parity compared to less expensive choices. Following this thinking, perhaps in the future a metric will be developed for an "acceptable" improved benefit/cost ratio for expensive drugs that are better than cheaper alternatives?

This overconfidence in EHR data is of concern. To detect relatively less concrete "outcomes differences" (i.e., less concrete than a major ADE) between two or more drugs or treatments via EHR data - did treatment A lower blood pressure more than drug B, did drug C lessen depression more than drug D - rises to the level of "grand overconfidence in computing." To accomplish this task with reasonable scientific certainty from reams of EHR data, originating from different vendor systems, input by myriad people of different backgrounds with differing interpretations of terminologies (students/MDs/RNs, etc.) under different pressures (time, reimbursement maximization), and so forth, seems a stretch. What will the p values and predictive values be for such studies? Yet our incoming HHS secretary touts such methods?
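A toy simulation (invented numbers, not a real study) makes the core statistical hazard vivid: confounding by indication. When sicker patients preferentially receive one drug, a naive comparison of raw event rates in uncontrolled data can make the truly better drug look worse.

```python
# Toy simulation of confounding by indication in uncontrolled
# observational data. Sicker patients preferentially get drug A;
# drug A truly reduces event risk at every severity level, yet a
# naive EHR-style comparison of raw rates makes A look worse.
import random

random.seed(0)

def simulate(n=100_000):
    events_a, events_b, count_a, count_b = 0, 0, 0, 0
    for _ in range(n):
        sick = random.random() < 0.5              # unmeasured severity
        # Prescribing pattern: drug A preferentially given to sicker patients.
        drug_a = random.random() < (0.8 if sick else 0.2)
        # True effect: drug A reduces event risk by 5 points at any severity.
        base = 0.40 if sick else 0.10
        p_event = base - 0.05 if drug_a else base
        event = random.random() < p_event
        if drug_a:
            count_a += 1
            events_a += event
        else:
            count_b += 1
            events_b += event
    return events_a / count_a, events_b / count_b

rate_a, rate_b = simulate()
print(f"Naive event rate: drug A {rate_a:.3f}, drug B {rate_b:.3f}")
# Despite drug A being truly better within every severity stratum, the
# naive comparison shows A with the substantially higher event rate.
```

Here the confounder (severity) was at least simulated and known; in real EHR data the confounders are unmeasured, miscoded, or invisible, which is exactly why "rigorous" is the word under strain.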

Ironically, the gold standard in medical science is the controlled clinical trial, yet EHR-based comparative effectiveness research itself as a research methodology, now touted by our government, seems to have gotten a pass.

Even what I would consider minimum requirements for scientific treatment comparisons, such as well-designed and reasonably controlled registries as developed here for interventional cardiology, with hundreds of granular, finely defined and "tuned" data elements, appear to be bypassed in EHR "miracle claims." Such precise registries take months or years to develop, implement, and train users to interact with properly. Further, such registries are not portable and must be created for individual medical domains and subdomains. Uncontrolled EHR data is no substitute for such efforts.

The following question arises:

Where are the comparative effectiveness studies that compare 1) EHR-based comparative effectiveness studies of drugs and treatments to 2) controlled clinical trials-based comparative effectiveness studies?

In other words, where are the meta-clinical trials that compare EHR data mining-based comparative effectiveness research as a methodology, vs. the "traditional" gold standard methodology of controlled clinical trials to compare drugs or treatments? How do we know EHR-based CER studies will not produce GIGO that will cause harm through ham-fisted elimination or defunding of useful treatment options?


While there are initial efforts underway to increase understanding of CER, e.g., "Broad Challenge Area 5" (PDF) of the NIH RC1 Challenge Grants in Health and Science Research, ominously, there is a lot of advantage to be had by combining terabytes of uncontrolled data with a political agenda.

I fear that what will come from "comparative effectiveness research" that draws upon uncontrolled EHR data will be politics masquerading as comparative effectiveness research.
Good luck to private practitioners and medical innovators. Good luck, pharma. Good luck, patients.

This movement towards EHR uncontrolled data alchemy represents a further deviation from medical science towards the Syndrome of Inappropriate Over-Confidence in Computing (a.k.a. SICC Syndrome) writ large.

It seems the IT industry has now rendered a scientific approach to HIT and its use obsolete. We see this "post-scientific era" phenomenon in the takeover of clinical IT by vendors who contractually demand suppression of problem-sharing. We see it in a remarkably uncritical push for EMRs by 2014, now involving the force of government (only financial at present, but will punitive licensure issues and other measures be off the table?), despite a growing body of literature advising caution. We see a consortium of big business/payers/vendors/myriad secondary feeder organizations gunning full blast for this technology without consideration of the possible downsides.

Biomedical informatics, a scientific discipline (at least those parts of it not yet compromised by conflicts of interest), as a relevant field is very much a minority player in today's health IT.

Even the contributions from experts and pioneers in the field of Biomedical Informatics, in the form of the Jan. 2009 National Research Council report "Current Approaches to U.S. Health Care Information Technology are Insufficient" (here), have not had much impact.

I also see, not too far down the road, the death of Biomedical Informatics as a relevant discipline that anyone of importance pays attention to.

-- SS

Addendum April 20:

We've seen this phenomenon in our economy. WSJ "Information Age" writer L. Gordon Crovitz notes:

... In a paper for the scientific journal of the Royal Society back in 1994, Harvard economist Robert Merton wrote that "any virtue can become a vice if taken to extreme, and just so with the applications of mathematical models in finance practice." We know even better now that some risks can be calculated and thus reduced, while some unknowns cannot be turned into probabilities. "The mathematics of the models are precise, but the models are not, being only approximations to the complex, real world."

I believe EMR data at best is a very loose approximation to the real world. It contains many "unknowns" regarding quality and reliability that cannot be turned into probabilities no matter how fancy the math. Asking too much of EHR data becomes a vice, not a virtue.

-- SS

4 comments:

Anonymous said...

Driving this in part are the tech companies who are now providing ghostwriters for grant applications for stimulus funds. These companies include Cisco, Microsoft and Oracle with companies such as Apple and Dell also benefiting.

All of this is outlined in the April 7 WSJ article "Tech Giants Help Clients Tap Stimulus Funds." With quotes such as "Technology suppliers are eyeing the stimulus package as an elixir to keep revenue flowing," one does not have to imagine that the focus will be on profits, not on performance.

Many years ago I was taught to sell the sizzle not the steak. The product becomes secondary to the expectation of the customer. Sure, for $80B we can solve all your medical questions, or try until the money runs out.

Steve Lucas

MedInformaticsMD said...

Driving this in part are the tech companies who are now providing ghostwriters for grant applications for stimulus funds.

That sounds like it borders on government fraud. Where are the entrepreneurial lawyers?

Ben Hansen said...

Be sure to read Dr. Grace Jackson's "Open Letter to the Federal Coordinating Council for Comparative Effectiveness Research" posted at the Institute for Nearly Genuine Research web site:
www.bonkersinstitute.org/jacksonletter.html

marie said...

Hi,

We have just added your latest post "Have we suffered a complete breakdown in the scientific method with regard to EHR and clinical IT?" to our Directory of Science. You can check the inclusion of the post here. We are delighted to invite you to submit all your future posts to the directory and get a huge base of visitors to your website.


Warm Regards

Scienz.info Team

http://www.scienz.info