It is available at the hyperlink above, but may not be publicly accessible.
The authors all are, or were, ONC officials:
Melinda Beeuwkes Buntin (Melinda.firstname.lastname@example.org) is director of the Office of Economic Analysis, Evaluation, and Modeling, Office of the National Coordinator for Health Information Technology (ONC), Department of Health and Human Services, in Washington, D.C. Matthew F. Burke is a policy analyst at the ONC. Michael C. Hoaglin is a former policy analyst at the ONC. David Blumenthal is the national coordinator for health information technology.
The abstract is as follows:
An unprecedented [indeed- ed.] federal effort is under way to boost [coerce? - ed.] the adoption of electronic health records and spur innovation in health care delivery. We reviewed the recent literature on health information technology to determine its effect on outcomes, including quality, efficiency, and provider satisfaction. We found that 92 percent of the recent articles on health information technology reached conclusions that were positive overall. We also found that the benefits of the technology are beginning to emerge in smaller practices and organizations, as well as in large organizations that were early adopters. However, dissatisfaction with electronic health records among some providers remains a problem and a barrier to achieving the potential of health information technology. [Some? That sounds like an understatement - ed.] These realities highlight the need for studies that document the challenging aspects of implementing health information technology more specifically and how these challenges might be addressed.
I have long stated, at least since 1999, that:
Healthcare information technology (HIT) holds great promise towards improving healthcare quality, safety and costs.
The new ONC review article is certainly pointing in this direction. Perhaps Health Affairs will release it to general circulation.
However, I also wrote:
As we enter the second decade of the 21st century, however, this potential has been largely unrealized. Significant factors impeding HIT achievement have been false assumptions concerning the challenges presented by this still-experimental technology, and underestimations of the expertise essential to achieve the potential benefits of HIT. This often results in clinician-unfriendly HIT design, and HIT leaders and stakeholders operating outside (often far outside) the boundaries of their professional competencies. Until these issues are acknowledged and corrected, HIT efforts will unnecessarily over-utilize precious healthcare resources, will be unlikely to achieve claimed benefits for many years to come, and may actually cause harm.
Whether the new ONC article demonstrates that these issues are starting to approach resolution, or is just another opinion paper not fully supported by facts, is not certain.
Two charts are presented that summarize the findings (click to enlarge):
There are several caveats. The first has to do with possible selection bias that can be present in any review article.
On article selection for the review:
... we decided that to be included in this review, an article had to address a relevant aspect of health IT, as listed in the Appendix; examine the use of health information technology in clinical practice; and measure qualitative or quantitative outcomes. Analyses that forecast the effects of a health IT component were included only if they were based on effects experienced during actual use. ... Using this framework, the review team removed 2,692 articles based on their titles. An additional 1,270 articles were determined to be outside the study’s scope after the team examined the article abstracts. For example, 269 abstracts focused solely on health IT adoption.
[What, exactly, does that mean? Were the problems with HIT adoption potentially significant contributors to a lack of benefit, and/or to the presence of harm, for instance? - ed]
By the third review stage, the review team had 231 articles. An additional forty-three were excluded after further review because they did not meet the criteria, and thirty-four review articles were dropped from the analyses because they did not present new work.
[What does that mean? Did they drop highly comprehensive articles showing uncertainty in the literature about HIT, such as Greenhalgh's "Tensions and Paradoxes in Electronic Patient Record Research: A Systematic Literature Review Using the Meta-narrative Method" from University College London? That article appeared in the Dec. 2009 Milbank Quarterly. I wrote about it at this link.]
This left 154 studies that met our inclusion criteria, 100 of which were conducted in the United States. This is comparable to the 182 studies found over a slightly longer time period that were evaluated by Goldzweig and colleagues.
I should also note that no mention is made of independent reviewers of the article corpus and elimination process. It appears the entire effort was conducted within ONC itself, where a bias towards finding positive results is likely present (and understandable).
Another caveat is that the Health Affairs/ONC article appears to bypass a body of literature, both peer-reviewed and non-peer-reviewed, that sheds doubt on health IT in its present form from a number of angles, such as I recently aggregated at "An updated reading list on HIT" and at "2009 a pivotal year in healthcare IT." Bypassing literature such as this is a possible major weakness.
Further, in the current political environment, it is not hard to imagine that articles highly critical of health IT, or revealing major mishaps and possibly exposing organizations to litigation, are scarce.
I addressed that in a paper "Remediating an Unintended Consequence of Healthcare IT: A Dearth of Data on Unintended Consequences of Healthcare IT." The paper itself was initially found needing revisions, largely in format, by a small group of blinded reviewers in the Medical Informatics community (with the exception of one faux-newshound who commented that "material like this could be read in any major newspaper", a rather perplexing comment considering the topic). Rather than revise, and not being on the tenure treadmill, I chose instead to publish at the Scribd link above.
The ONC paper does acknowledge this:
A recent study found that for clinical trials, studies with positive results are roughly four times more likely to be published than those without positive findings. Because the articles were limited to health IT adopters, we anticipated that authors more often approached studies looking for benefits rather than adverse effects.
They do, however, then issue a value judgment:
It is important to note that although publication bias may lead to an underestimation of the trade-offs associated with health IT, the benefits found in the published articles are real.
I'm not sure a dear relative of mine would find that value judgment heartening. They suffered a crippling injury that would likely not have occurred if paper had been used in the ED rather than an EHR.
I note that if a pharmaceutical company were to issue such a value judgment about a drug as justification for national marketing, they'd likely be nailed to the cross...
The ONC paper also ignores accounts of "near misses" and actual patient injury from impeccable sources, such as in "Health informatics — Guidance on the management of clinical risk relating to the deployment and use of health software. UK National Health Service, DSCN18 (2009), formerly ISO/TR 29322:2008(E)":
Examples of potential harm presented by health software
GP prescribing decision support
In 2004 the four most commonly used primary care systems were subjected to eighteen, potentially serious, realistic scenarios including an aspirin prescription for an eight year old, penicillin for a patient with penicillin allergy and a combined oral contraceptive for a patient with a history of deep vein thrombosis. Using dummy records, all eighteen scenarios failed to produce appropriate alerts by all of the systems, most of the time. The best score was a system that flagged up seven appropriate alerts. The health organization clearly has, in such a system deployment, a key responsibility to ensure that knowledge bases used within a design are correctly populated and aligned with clinical practice within their organization.
Inadvertent accidental prescribing of dangerous drugs (such as methotrexate)
This incident occurred when a user of a primary care system attempted to issue two repeat items. The items were highlighted and instead of the issue selected repeats button, the prescribe acute issues from the formulary button was pressed. This brought up the formulary dialogue which contained the high risk items. Either the issue button was then pressed or the particular items were double clicked. When the warning messages came up, they were all ignored and proceed and issue selected. The user chose the first item presented on the formulary list, which just so happened to be a methotrexate injection. In this particular case, it was determined that patient risk was minimal as the treatment was rarely used in primary care and would, in practice, be rejected by the pharmacist. To preclude any recurrence of the problem, access to the high risk formulary was removed from the formulary part of the acute drug issue dialogue. This example again demonstrates the need to align clinical practice and authority levels with the knowledge and rule bases within the system. Wherever possible, design and implementation of health software systems should be undertaken to improve control and accuracy, not introduce new exposures. Furthermore the hazard and risk assessment of this situation may well not apply in other settings, e.g. prescription issue by nursing staff on a ward versus a pharmacist in a retail store.
Incorrect patient details retrieved from radiology information system
This incident arose from the fact that medical reference numbers (MRNs) are usually prefixed by an alpha code. Some hospitals however do not use these prefixes and identical MRNs can be generated. This gave rise to the creation of shared MRNs and subsequent confusion of records in the central datastore when the retrieval key is the Medical Record Number. Four specific instances were found where a patient number had been entered in the radiology information system and incorrect patient details had been retrieved. The manufacturer could have built in an appropriate format check during development. Alternatively, the problem could have been spotted by the health organization if a structured risk assessment had been undertaken.
Drug mapping error
Sodium valproate 200 mg slow release was incorrectly mapped to sodium valproate 200 mg in a formulary encoded into a health software system. These are anti-epilepsy drugs and thus the implications for patient safety could be significant. This particular incident is just one of many that have been reported in relation to drug mapping.
An initial investigation indicated that 35 prescriptions had been generated using the incorrect map. Corrective action included contacting the relevant primary care practices to check upon patient health and the supplier to correct the mapping process to ensure no further incorrect prescriptions were generated. As before, this was a design/coding error by the manufacturer but was compounded by the health organization not checking the mappings and failing either to build in appropriate prescribing controls, or map the controls to health organization individuals with the appropriate experience and authority.
The ages of women who had undergone pre-natal screening were wrongly computed by a health software system. As a result 150 women were wrongly notified that they were at no risk. Of these, four gave birth to Down's syndrome babies and two others made belated decisions to have abortions.
A student died of meningitis because of a misspelling of her name and inadequacy in computer use. The student was admitted and a blood test proved negative for meningitis. The following day another blood test was taken and filed on a new computer entry but the letter 'p' was missed in the spelling of the name. When a doctor looked up results they were presented with only the first negative test result because of the misspelling. If the second result had been seen it would have triggered further investigations and probable diagnosis of meningitis. The investigating panel concluded that problems with the health software system had been greater than first thought and in this case there was a combination of a misspelled name and the doctor not being able to use the computer system properly. The health software system could have been designed to use unique numbers either instead of the name or in addition to it.
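The "appropriate format check" suggested in the radiology incident above is a trivial safeguard to implement. A minimal sketch follows; the MRN pattern, example values, and function name are entirely hypothetical (real MRN schemes vary by institution), but the point stands that rejecting bare numeric identifiers at data entry would have prevented the cross-hospital collisions described:

```python
import re

# Hypothetical MRN scheme: a 1-4 letter site prefix followed by 6-10 digits,
# e.g. "RAD0012345". Pattern and examples are illustrative only.
MRN_PATTERN = re.compile(r"^[A-Z]{1,4}\d{6,10}$")

def validate_mrn(mrn: str) -> bool:
    """Reject MRNs lacking the alpha site prefix -- the gap that allowed
    identical bare-numeric MRNs from different hospitals to collide in
    the central datastore."""
    return bool(MRN_PATTERN.match(mrn))

print(validate_mrn("RAD0012345"))   # prefixed identifier: accepted
print(validate_mrn("0012345678"))   # bare digits: rejected, must be re-keyed
```

That a check this simple was absent from a deployed radiology information system is precisely the kind of engineering detail the incident reports above surface, and that outcome-only reviews miss.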
They also missed consideration of serious IT defects reports as in the FDA's Maude database that I wrote about at my Jan. 2011 post "MAUDE and HIT Risks."
Another confounding factor is the issue of possible unreliability of the medical literature itself as expressed in a recent post on the IBM Watson supercomputer exuberance:
Consider the issue of the medical literature suffering from numerous conflict of interest and dishonesty-related phenomena making it increasingly untrustworthy, as pointed out by Roy Poses in a Dec. 2010 post "The Lancet Emphasizes the Threats to the Academic Medical Mission", at my Aug. 2009 post "Has Ghostwriting Infected The "Experts" With Tainted Knowledge, Creating Vectors for Further Spread and Mutation of the Scientific Knowledge Base?" and elsewhere on this blog.
They do state what I have been writing about for over a decade now:
... In fact, the stronger finding may be that the “human element” is critical to health IT implementation. The association between the assessment of provider satisfaction and negative findings is a strong one. This highlights the importance of strong leadership and staff “buy-in” [which will only occur if the systems are not miserable examples of poor engineering, not due to P.R. or the irrational exuberance of others - ed.] if systems are to successfully manage and see benefit from health information technology. The negative findings also highlight the need for studies that document the challenging aspects of implementing health IT more specifically and how these challenges might be addressed.
Dear ONC: see this site, and the many posts since 2004 on this blog.
I was surprised to see this:
... Taking a cue from the literature on continuous quality improvement, every negative finding can be a treasure if it yields information on how to improve implementation strategies and design better health information technologies.
As in my post at this link, does this mean that 'anecdotal' accounts of HIT problems will no longer be summarily dismissed? I wonder.
I have also written in the past that in order to truly understand a domain, one must look at both evidence of the upside, and evidence of the downside. My concern is that the latter was not well addressed in this paper. A beneficial technology with a significant downside, especially in medicine, is not ready for national rollout (cf. Vioxx).
In summary, the new ONC paper may present a glimmer of hope that health IT is starting to produce real results. On the other hand, its possible deficiencies and biases might also make it more a political statement than anything else. It will certainly be used as such from the high government perch of HHS regardless. This has already started:
... President Obama and Congress envisioned that the HITECH Act would provide benefits in the form of lower costs, better quality of care, and improved patient outcomes. This review of the recent literature on the effects of health information technology is reassuring: It indicates that the expansion of health IT in the health care system is worthwhile.
[Note the use of "is worthwhile" as opposed to "may be worthwhile", a continuation of the "it's proven, nothing else to say" style I noted at my July 2010 post "Science or Politics? The New England Journal and "The 'Meaningful Use' Regulation for Electronic Health Records." Such statements of absolute certainty are of concern; they remind me of the global warming debate - ed.]
...Thus, with HITECH, providers have an unparalleled opportunity to accelerate their adoption of health information technology and realize benefits for their practices, institutions, patients, and the broader system. [Ditto - ed.]
Does the article truly show a breakthrough, or is it a flawed review by a governmental agency that will be used for political purposes? I simply do not know which.
I am certain, however, that there will be active debate and dissection of this paper and its source articles in the months to come by those with more time, resources, and expertise than I have at my disposal.
March 10, 2011 addendum:
Trisha Greenhalgh at Barts and The London School of Medicine and Dentistry and the author of the aforementioned comprehensive review article "Tensions and Paradoxes in Electronic Patient Record Research: A Systematic Literature Review Using the Meta-narrative Method" (link to my essay), had this observation about the following passage in the ONC paper:
“Our findings must be qualified by two important limitations: the question of publication bias, and the fact that we implicitly gave equal weight to all studies regardless of study design or sample size.”
Prof. Greenhalgh relates: “Given these very fundamental acknowledged biases, I’m very surprised anyone published this paper in its present form.”
March 14, 2011 addendum:
Dr. Roy Poses had this to say:
Dr Silverstein commented on a new review of research of health care information technology whose results were mainly optimistic. The authors were from the US government agency that promotes health care IT, so its optimism is not surprising. However, its credibility was unclear, since it did not appear to be systematic. [In general, one at least assesses and takes into account the methodologic quality of studies used in a systematic review, and does not "give equal weight to all studies regardless of study design or sample size" - ed.]
Furthermore, the authors had no methodologic standards whatsoever for article inclusion. [They did, sort of, as in my paragraphs "On article selection for the review:", but one might term the standards "loose" - ed.] The review included qualitative studies that were probably not meant to be evaluative, and observational studies subject to severe methodologic bias.
The publication of this review demonstrated how the conventional wisdom is continually reinforced based on the strength of the influence of its proponents, rather than the strength of the supporting evidence. Adaptation of new drugs and devices should be based on evidence that their benefits outweigh their harms, rather than the enthusiasm and financial interests of their proponents.