I wrote earlier, at "BLOGSCAN - Electronic Health Records and Clinical Decision Support Systems: Impact on National Ambulatory Care Quality", that more on the primary article would follow later; at the time I was busy managing the medical care of my relative's EHR-related 2010 injuries.
Now that I have some time, here are my thoughts on the article, and on a critique of that article published at the same time.
The article itself is at:
Electronic Health Records and Clinical Decision Support Systems: Impact on National Ambulatory Care Quality
Max J. Romano, BA; Randall S. Stafford, MD, PhD
Arch Intern Med. Published online January 24, 2011. doi:10.1001/archinternmed.2010.527
and the critique is at:
Clinical Decision Support and Rich Clinical Repositories: A Symbiotic Relationship: Comment on "Electronic Health Records and Clinical Decision Support Systems"
Clement McDonald and Swapna Abhyankar
Arch Intern Med. 2011;0(2011):20105181-2
[note - I know Dr. McDonald personally - ed.]
I restrict my comments to commercially available healthcare IT from traditional for-profit health IT merchants. These comments may not apply, or may not apply as directly, to open source EMR's such as VistA and VistA-based products (e.g., WorldVistA).
First, I find the article's first major result not very surprising:
In only 1 of 20 indicators was quality greater in EHR visits than in non-EHR visits (diet counseling in high-risk adults, adjusted odds ratio, 1.65; 95% confidence interval, 1.21-2.26)
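As a reading aid for the statistics quoted above, here is a minimal sketch of the standard rule by which such a result is called significant. The odds ratio and confidence interval are the article's reported figures; the check itself (a 95% CI that excludes 1.0, the "no association" value) is just the conventional interpretation, and the variable names are my own:

```python
# Illustrative only: how a 95% CI around an odds ratio is read.
# These are the article's reported values for diet counseling in
# high-risk adults; the "adjusted" part comes from a multivariable
# (logistic regression) model that is not reproduced here.
or_point, ci_low, ci_high = 1.65, 1.21, 2.26

# An odds ratio is "statistically significant" at the 5% level
# when its 95% CI excludes 1.0 (the value meaning no association).
significant = not (ci_low <= 1.0 <= ci_high)
print(significant)  # True: the entire CI lies above 1.0
```

The same rule explains why the other 19 indicators were reported as non-significant: their intervals straddled 1.0.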
This is consistent with my belief that the primary problems in healthcare quality are not in the domain of record keeping, whether paper or computerized. Thus, as I wrote at "Is Healthcare IT a Solution to the Wrong Problem?", EMR's alone are a 'solution to the wrong problems.' They may solve bookkeeping issues, but they don't help clinicians; in fact, they probably impair more than aid them due to the mission-hostile user experience they often present.
The second major conclusion is more debatable:
Among the EHR visits, only 1 of 20 quality indicators showed significantly better performance in visits with CDS [clinical decision support -ed.] compared with EHR visits without CDS (lack of routine electrocardiographic ordering in low-risk patients, adjusted odds ratio, 2.88; 95% confidence interval, 1.69-4.90)
This result is more debatable due to the nature of, and limitations within, the study. CDS "done well" (two simple words behind which lies massive, perhaps wicked-problem-level sociotechnical complexity) might actually improve guideline adherence by ambulatory care physicians.
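To make concrete what one of the measured CDS behaviors looks like, here is a deliberately simplified, hypothetical sketch of a rule discouraging routine ECG ordering in low-risk patients (the study's one indicator on which CDS visits outperformed non-CDS EHR visits). The function name, inputs, and message are my own illustration, not any vendor's implementation:

```python
# Hypothetical, simplified CDS rule: advise against routine ECG
# ordering in low-risk patients. Real systems must first derive the
# risk stratification from structured chart data.
def ecg_order_advisory(patient_risk, ecg_ordered):
    """Return an advisory string, or None when no alert applies."""
    if ecg_ordered and patient_risk == "low":
        return "Routine electrocardiography is not recommended for low-risk patients."
    return None

# A high-risk patient with an ECG order triggers no advisory:
print(ecg_order_advisory("high", True))  # None
# A low-risk patient with an ECG order does trigger one:
print(ecg_order_advisory("low", True))
```

Even a rule this trivial depends on accurate, structured risk data being in the chart at the moment of ordering - which is exactly where the "good data" problem discussed below bites.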
The major challenge is in "doing it well" (both EHR and CDS), and in getting the good data required to do CDS well, under the substantial impediments posed by a dysfunctional health IT ecosystem, as I wrote about here, and by an oppressive environment that forces medical practitioners to "do more in less time" in the interests of money, often due to governmental interference in care.
These observations suggest that before tackling national EMR and CDS, we should be tackling the dysfunctions of the health IT industry and ecosystem.
The article did utilize some of the best data available to researchers:
We used the most recent data available from the National Ambulatory Medical Care Survey (NAMCS, 2005-2007) and the National Hospital Ambulatory Medical Care Survey (NHAMCS, 2005-2007), both conducted by the National Center for Health Statistics (NCHS, Hyattsville, Maryland). These surveys gather information on ambulatory medical care provided by nonfederal, office-based, direct-care physicians (NAMCS)21 and provided in emergency and outpatient departments affiliated with nonfederal general and short-stay hospitals (NHAMCS).22 These federally conducted, national surveys are designed to meet the need for objective, reliable information about US ambulatory medical care services.23 These data sources have been widely used by government and academic research to report on patterns and trends in outpatient care.
I don't believe the findings can be challenged on the basis of faulty data.
The quality-of-care indicators chosen were also well thought out:
Our analysis of quality of care used a selected set of 20 quality indicators that had previously been used to assess quality using NAMCS/NHAMCS26 but that had been updated to reflect changes in clinical guidelines. Each indicator represents a care guideline whose adherence can be measured using the visit-based information available from NAMCS/NHAMCS visit records. The indicators were developed using broad criteria established by the Institute of Medicine (clinical importance, scientific soundness, and feasibility for indicator selection) and specific criteria based on the NAMCS/NHAMCS data sources. The indicators fall into 5 categories: (1) pharmacological management of common chronic diseases, including atrial fibrillation, coronary artery disease, heart failure, hyperlipidemia, asthma, and hypertension (9 indicators); (2) appropriate antibiotic use in urinary tract infection and viral upper respiratory infections (2 indicators); (3) preventive counseling regarding diet, exercise, and smoking cessation (5 indicators); (4) appropriate use of screening tests for blood pressure measurement, urinalysis, and electrocardiography (3 indicators); and (5) inappropriate prescribing in elderly patients (1 indicator).
It should be recalled that this article is in many ways a follow-up to the article "Electronic Health Record Use and the Quality of Ambulatory Care in the United States" (Arch Intern Med. 2007;167:1400-1405; link to abstract here). Its authors:
... performed a retrospective, cross-sectional analysis of visits in the 2003 and 2004 National Ambulatory Medical Care Survey. We examined EHR use throughout the United States and the association of EHR use with 17 ambulatory quality indicators. Performance on quality indicators was defined as the percentage of applicable visits in which patients received recommended care.
That article's authors reached what to many was a counterintuitive conclusion. The authors examined electronic health records (EHR) use throughout the U.S. and the association of EHR use with 17 basic quality indicators. They concluded that “as implemented, EHRs were not associated with better quality ambulatory care.” (To medical informaticists, the key phrase that explains these findings is “as implemented”, to which I would also add “as designed”, i.e., badly.)
In the latest article, obvious confounding variables appear to have been reasonably taken into account:
Performance on each quality indicator was defined as the proportion of eligible patients receiving guideline-congruent care so that a higher proportion represents greater concordance with care guidelines. Attention was paid to excluding those patients with comorbidities that would complicate guideline adherence (eg, asthma in assessing the use of β-blockers in coronary artery disease). Also, in some instances, care was adherent to the quality indicator if a similar therapy was provided (eg, warfarin rather than aspirin in coronary artery disease).
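The denominator logic described in that passage - compute performance over *eligible* visits only, after comorbidity-based exclusions - can be sketched as follows. The visit records and field names here are invented for illustration; they are not NAMCS/NHAMCS fields:

```python
# A minimal sketch (hypothetical visit records) of how indicator
# performance is defined in the study: the share of ELIGIBLE visits
# in which guideline-congruent care was delivered, with
# comorbidity-based exclusions applied first (e.g., asthma excludes
# a visit from the beta-blocker-in-CAD indicator).
visits = [
    {"dx": "CAD", "comorbid_asthma": False, "beta_blocker": True},
    {"dx": "CAD", "comorbid_asthma": True,  "beta_blocker": False},  # excluded
    {"dx": "CAD", "comorbid_asthma": False, "beta_blocker": False},
]

eligible = [v for v in visits if v["dx"] == "CAD" and not v["comorbid_asthma"]]
adherent = [v for v in eligible if v["beta_blocker"]]
performance = len(adherent) / len(eligible)
print(performance)  # 0.5: 1 of 2 eligible visits received a beta-blocker
```

Note that excluding the asthmatic patient raises the reported performance; without the exclusion step, guideline-appropriate withholding of therapy would be miscounted as non-adherence.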
With regard to the authors' comments:
In a nationally representative survey of physician visits, neither EHRs nor CDS was associated with ambulatory care quality, which was suboptimal for many indicators.
As I mentioned, the first part of this statement represents a more solid conclusion than the second.
However, one must ask why the first conclusion (EHR use not associated with better ambulatory care quality) might be so.
- Was the study flawed in some way? As per the previous paragraphs, I don't think so.
- Were factors in the real-world clinical environment not amenable to cybernetic intervention responsible? This seems likely: harried and/or poorly trained physicians, pressured visit time limitations, and other factors make EHR's a band-aid at best.
- Were EHR's a solution to the wrong problem? If documentation issues are not a significant factor in ambulatory care quality (physicians and patients do speak to one another, after all), then EHR's would not be expected to have much additional impact. This also seems likely.
- Were the EHR's suboptimal? Mission hostile IT certainly should not be expected to have a large positive effect on the behavior of users.
Regarding the finding that even EHR's with CDS do not make a significant difference in compliance with treatment guidelines, one might also ask (as the authors of the critique did) whether:
- The CDS of the clinical IT in use did not cover the indicators measured. The indicators, however, are not particularly unusual or esoteric, so it would surprise me if there were few or no major intersections. (If this were the case, it would speak poorly of the commercial HIT merchants and their products.)
- The CDS implementation itself was mission hostile, making it difficult for users to carry out the recommendations in their harried, time-limited visits.
Either issue goes back to my point about correcting the IT ecosystem before rolling out this technology at a cost of hundreds of billions of dollars.
The authors state:
While our findings do not rule out the possibility that the use of CDS may improve quality in some settings, they cast doubt on the argument that the use of EHRs is a "magic bullet" for health care quality improvement, as some advocates imply.
Yes, advocates right up to the POTUS and the HHS ONC office, with statements such as:
... The widespread use of electronic health records (EHRs) in the United States is inevitable. EHRs will improve caregivers’ decisions and patients’ outcomes. Once patients experience the benefits of this technology, they will demand nothing less from their providers. Hundreds of thousands of physicians have already seen these benefits in their clinical practice. (ONC Chair Blumenthal in the NEJM).
“We know that every study and every professional consensus process has concluded that electronic health systems strongly and materially improve patient safety. And we believe that in spreading electronic health records we are going to avoid many types of errors that currently plague the healthcare system,” Blumenthal said when unveiling new regulations in Washington on July 13.
As I wrote at "Huffington Post Investigative Fund: FDA, Obama Digital Medical Records Team at Odds over Safety Oversight", no, we don't know that. The assertion about "every study and consensus process" is demonstrably false.
I think anyone still believing in IT as any type of "magic bullet" needs to be disqualified from involvement in healthcare.
On a matter of my own critique of the article, there's this:
Several anecdotal articles describe how CDS can disrupt care and decrease care quality; however, further empirical research is needed.35-36 In the absence of broad evidence supporting existing CDS systems, planned investment should be monitored carefully and its impact and cost evaluated rigorously.
As in my posts "The Dangers of Critical Thinking in A Politicized, Irrational Culture", "EHR Problems? No, They're Merely Anecdotal" and "Health IT: On Anecdotalism and Totalitarianism", there's that shibboleth term "anecdotal."
The "anecdotal" articles do not include the work of Koppel or others who did extensive empirical research and found major problems and increased risks of error created by CPOE and bar coding, to name two variations of clinical IT thought to be "slam dunks" for improved care delivery. Further, the "anecdotes" of HIT malfunction in sites such as the FDA's MAUDE database are alarming to me due to the obvious risk to patients they reflect - whether injuries occurred or not (there is a patient death account in MAUDE as well) - but apparently not to the academic community or ONC, which is chaired by an academic:
[2011 Addendum: A definitive response to the "anecdotes" issue is at this Aug. 2011 post: "From a Senior Clinician Down Under: Anecdotes and Medicine, We are Actually Talking About Two Different Things", http://hcrenewal.blogspot.com/2011/08/from-senior-clinician-down-under.html]
It has occurred to me that this "what, me worry?" Pollyanna attitude may reflect an academic bias or over-zealotry regarding peer review and retrospective event descriptions - as opposed to proactive assessment of risk.
By way of career history: I often lunched with the Director of System Safety of the regional transit authority I once worked for, and accompanied him on site visits. It was amazing how quickly he could identify potential risks at the sites we visited, both within and outside the authority (e.g., external drug-testing laboratories). He identified risks based on personal expertise and experience, and did not seek peer review of his assessments. He sought - and received - action.
He thought - and I, partially as a result of this exposure, think - proactively in terms of risk, not retrospectively in terms of confirmed, peer-reviewed accident reports. On that basis, the 1999 appearance of my site on health IT problems, and the commentary it initially received (still online at this link), should have resulted in significant concern from the HIT academic community, were it not for the aforementioned "lensing effect" of their stations in the domain. Its being passed off as little more than "anecdote" (a critique I still hear about the modern site) was a disappointment for someone of my heterogeneous background.
I expressed my views on this issue in a comment:
I am quite fed up with the positivist-extremist academic eggheads whose views are so beyond common sense regarding 'anecdotes' of health IT harm from qualified users and observers, that they would find 'anecdotal stories' of people being scalded when opening hot car radiators as merely anecdotes, and do likewise.
These people have been part of the crowd that's led to irrational exuberance on health IT at a national level.
As Scott Adams put it regarding the logical fallacy known as:
IGNORING ALL ANECDOTAL EVIDENCE
Example: I always get hives immediately after eating strawberries. But without a scientifically controlled experiment, it's not reliable data. So I continue to eat strawberries every day, since I can't tell if they cause hives.
I think that summarizes the "academic lensing effect" regarding risk of HIT.
I can also critique the near lack of mention of EHR quality issues:
At the same time, our findings may suggest a need for greater attention to quality control and coordinated implementation to realize the potential of EHRs and CDS to improve health care.
My comment to this is: you don't say? As I've often written, healthcare will not be reformed until health IT itself is reformed.
Finally, on the published critique of the article, I find this passage remarkable:
Regardless of the differences, we know from multiple randomized controlled trials that well-implemented CDS systems can produce large and important improvements in care processes. What we do not know is whether we can extend these results to a national level. The results of Romano and Stafford's study suggest not. However, we suspect that the EHR and CDS systems in use at the time of their study were immature, did not cover many of the guidelines that the study targeted, and had incomplete patient data; a 2005 survey of Massachusetts physicians supports this concern.5 On the other hand, we are not surprised that EHRs without CDS do not affect guideline adherence, because without CDS, most EHRs function primarily as data repositories that gather, organize, and display patient data, not as prods to action.
It's remarkable in several aspects:
- First, the refrain of "immature systems" - or, expressed more colloquially, a "versioning problem" - seems to come up whenever health IT is challenged. One survey of physicians in one state aside, it is striking that more fundamental issues of health IT fitness for purpose and usability are usually not mentioned, as is the case here.
- If the systems were indeed "immature" just several years ago, this speaks poorly for the health IT merchants and their products (as well as the buyers), and back we go to the issue of remediating the HIT ecosystem before we attempt to remediate medicine with its products.
- The final statement about EHR's lacking CDS not being "prods to action" raises the question: why were extraordinary claims made for EHR's in the past several decades, and why were so many organizations buying EHR's without equally extraordinary evidence?
Regarding the following assertion in the critique:
Although EHRs without CDS may not improve adherence to clinical guidelines, they are (1) a necessary precondition for having CDS (without electronic data, there can be no electronic support functions); (2) valuable for maintaining findable, sharable, legible, medical records; and (3) when they are amply populated (ie, they contain at least 1 or 2 years of dictations, test results, medications, and diagnoses/problems), physicans love them because there are no more lost charts or long waits on the telephone for laboratory results.
I make the following observations:
- Re (1): could extra support staff markedly increase "decision support" at the point of care using paper methods, at a fraction of the cost of health IT?
- Re (2): legible, yes; useful, perhaps not. The nearly 3,000 pages generated by just the first two-and-a-half-week portion of my relative's long hospitalization due to HIT-related injury were very legible. Very legible; very filled with legible gibberish, unfortunately; and very useless to most humans needing to review her case and render additional care. This problem once again goes to the need to address problems within the HIT ecosystem.
- Re (3): I would like a reference for the statement about physicians "loving" their EHR. Due to the extra effort and expense involved, it would seem, even under the best of circumstances, to be a love-hate relationship. Surveys like this one (link) support that notion.
Still more on physicians "loving" EHR's:
Survey: Docs Skeptical of EHRs, Hate Reform
Health Data Management, January 20, 2011
A recent survey of nearly 3,000 physicians shows high levels of displeasure with the Affordable Care Act--and a lot of them don't like electronic health records either.
Of the 2,958 physicians surveyed in September, only 39 percent believe EHRs will have a positive effect on the quality of patient care. Twenty-four percent believe EHRs will have a negative effect on quality, and 37 percent forecast a neutral factor.
HCPlexus, publisher of The Little Blue Book reference guide for physicians, developed and conducted the survey with content vendor Thomson Reuters. The survey sample came from physicians in HCPlexus' database. The fax-based survey was done in September 2010, with additional information directly gathered via phone or e-mail from hundreds of the surveyed physicians in December and January.
In conclusion, the new Archives article represents yet another data point challenging uncritical assertions of automatic EHR-created medical improvement.
I agree with both the article's authors and those who wrote the critique that more research is needed (not more fast-paced implementation).
As concluded in 2009 by the National Research Council in a study led by several HIT pioneers:
Current efforts aimed at the nationwide deployment of health care information technology (IT) will not be sufficient to achieve medical leaders' vision of health care in the 21st century and may even set back the cause ... In the long term, success will depend upon accelerating interdisciplinary research in biomedical informatics, computer science, social science, and health care engineering.
These words should not be ignored.