Showing posts with label AMIA. Show all posts

Wednesday, January 28, 2015

"Meaningful Use" not so meaningful: Multiple medical specialty societies now go on record about hazards of EHR misdirection, mismanagement and sloppy hospital computing

The "Meaningful Use" program for EHRs is a mismanaged boondoggle causing critical issues of patient safety, EHR usability, etc. to be sidestepped.

This is on top of the unregulated U.S. boondoggle which should probably be called "the National Programme for IT in the HHS" - in recognition of the now-defunct multi-billion-pound debacle known as the National Programme for IT in the NHS (NPfIT), see my Sept. 2011 post "NPfIT Programme goes PfffT" at http://hcrenewal.blogspot.com/2011/09/npfit-programme-going-pffft.html.

The complaints are not just coming from me now.

As of January 21, 2015, in a letter to HHS at http://mb.cision.com/Public/373/9710840/9053557230dbb768.pdf, they are now coming from the:

American Medical Association
AMDA – The Society for Post-Acute and Long-Term Care Medicine
American Academy of Allergy, Asthma and Immunology
American Academy of Dermatology Association
American Academy of Facial Plastic and Reconstructive Surgery
American Academy of Family Physicians
American Academy of Home Care Medicine
American Academy of Neurology
American Academy of Ophthalmology
American Academy of Otolaryngology—Head and Neck Surgery
American Academy of Physical Medicine and Rehabilitation
American Association of Clinical Endocrinologists
American Association of Neurological Surgeons
American Association of Orthopaedic Surgeons
American College of Allergy, Asthma and Immunology
American College of Emergency Physicians
American College of Osteopathic Surgeons
American College of Physicians
American College of Surgeons
American Congress of Obstetricians and Gynecologists
American Osteopathic Association
American Society for Radiation Oncology
American Society of Anesthesiologists
American Society of Cataract and Refractive Surgery
American Society of Clinical Oncology
American Society of Nephrology
College of Healthcare Information Management Executives
Congress of Neurological Surgeons
Heart Rhythm Society
Joint Council on Allergy, Asthma and Immunology
Medical Group Management Association
National Association of Spine Specialists
Renal Physicians Association
Society for Cardiovascular Angiography and Interventions
Society for Vascular Surgery


In the letter to Karen B. DeSalvo, National Coordinator for Health Information Technology at HHS, these organizations observe:

Dear Dr. DeSalvo:

The undersigned organizations are writing to elevate our concern about the current trajectory of the certification of electronic health records (EHRs). Among physicians there are documented challenges and growing frustration with the way EHRs are performing. Many physicians find these systems cumbersome, do not meet their workflow needs, decrease efficiency, and have limited, if any, interoperability.

Of course, my attitude is that we need basic operability before the wickedly difficult-to-accomplish, and far less useful (to patients), interoperability.
 
... Most importantly, certified EHR technology (CEHRT) can present safety concerns for patients. We believe there is an urgent need to change the current certification program to better align end-to-end testing to focus on EHR usability, interoperability, and safety.

Let me state what they're saying more clearly:

"This technology in its present state is putting patients at risk, harming them, and even killing them, is making practice of medicine more difficult, is putting clinicians at liability risk, and the 'certification' program is a joke."

... We understand from discussions with the Office of the National Coordinator for Health Information Technology (ONC) that there is an interest in improving the current certification program. For the reasons outlined in detail below, we strongly recommend the following changes to EHR certification:

1. Decouple EHR certification from the Meaningful Use program;
2. Re-consider alternative software testing methods;
3. Establish greater transparency and uniformity on UCD testing and process results;
4. Incorporate exception handling into EHR certification;
5. Develop C-CDA guidance and tests to support exchange;
6. Seek further stakeholder feedback; and
7. Increase education on EHR implementation.

Patient Safety
Ensuring patient safety is a joint responsibility between the physician and technology vendor and requires appropriate safety measures at each stage of development and implementation.

I would argue that it's the technologists who have butted into clinical affairs with aid from their government friends, thus the brunt of the ill effects of bad health IT should fall on them.  However, when technology-related medical misadventures occur, it's the physicians who get sued.

... While training is a key factor, the safe use of any tool originates from its inherent design and the iterative testing processes used to identify issues and safety concerns. Ultimately, physicians must have confidence in the devices used in their practices to manage patient care. Developers must also have the resources and necessary time to focus on developing safe, functional, and useable systems.

Right now, those design and testing processes compare quite poorly to those in other mission-critical sectors that employ IT.

Considering the stunningly poor software quality I've observed personally - such as the lack of appropriate confirmation dialogs and notification messages supporting teamwork, the lack of date constraint checking (see my report to FDA MAUDE at http://www.accessdata.fda.gov/scripts/cdrh/cfdocs/cfmaude/detail.cfm?mdrfoi__id=1729552 and many others at http://hcrenewal.blogspot.com/2011/01/maude-and-hit-risk-mother-mary-what-in.html), and other missing fundamentals - I would say grade schoolers could probably have done a better job of safety testing than the vendors and amateur IT implementers of the major systems I observed.
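The "date constraint checking" missing from those systems refers to basic sanity checks any clinical application should apply before accepting an order date. A minimal sketch of what such a check might look like (the function name and rules here are my own illustration, not any vendor's actual code):

```python
from datetime import date

def validate_order_date(order_date: date, patient_dob: date, today: date) -> list:
    """Return a list of basic constraint violations for a clinical order date.

    Illustrative only: these two rules are hypothetical examples of the kind
    of checks the post says were absent, not a complete clinical rule set.
    """
    problems = []
    if order_date > today:
        problems.append("order date is in the future")
    if order_date < patient_dob:
        problems.append("order date precedes patient's date of birth")
    return problems
```

Checks of this triviality are the point: their absence in shipped clinical software is what makes the comparison to grade-school testing apt.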

... Unfortunately, we believe the Meaningful Use (MU) certification requirements are contributing to EHR system problems, and we are worried about the downstream effects on patient safety.

In other words, computers and the government's thirst for data do not have more rights than patients.  In the current state of affairs, as I have observed before, computers do seem to have more rights than patients and the clinicians who must increasingly use them.

... Physician informaticists and vendors have reported to us that MU certification has become the priority in health information technology (health IT) design at the expense of meeting physician customers’ needs, patient safety, and product innovation. We are also concerned with the lack of oversight ONC places on authorized testing and certification bodies (ATCB) for ensuring testing procedures and standards are adequate to secure and protect electronic patient information contained in EHRs.

Not just security, but patient safety also.  See for example my Feb. 2012 post "Hospitals and Doctors Use Health IT at Their Own Risk - Even if 'Certified'" at http://hcrenewal.blogspot.com/2012/02/hospitals-and-doctors-use-health-it-at.html.

Read the entire letter at http://mb.cision.com/Public/373/9710840/9053557230dbb768.pdf.

Sadly, while on the right track regarding the problems of bad health IT, the societies take a milquetoast approach to correction:

... In May 2014, stakeholders representing accredited certification bodies and testing laboratories (ACB & ATL), EHR vendors, physicians, and health care organizations provided feedback to ONC on the complexities of the current certification system. Two main takeaways from these comments were for ONC to host a multi-stakeholder Kaizen event and to prioritize security, quality measures, and interoperability in the EHR certification criteria. We strongly support both of these ideas...

A multi-stakeholder "Kaizen event"?  (http://en.wikipedia.org/wiki/Kaizen)

That's one recommendation I find disappointing.  The industry plays hard politics, and organized medicine wants to play touchy-feely "good change" management mysticism with that industry and their government apparatchiks.  That's how organized medicine wants patients and the integrity of the medical profession to be protected from the dysfunctional health IT ecosystem (see http://cci.drexel.edu/faculty/ssilverstein/cases/?loc=cases&sloc=ecosystem)?  

When I originally created my old website called "Medical informatics and leadership of clinical computing" back in 1998, Kaizen events were not exactly what I had in mind.

Finally, the American Medical Informatics Association (http://www.amia.org) was apparently not informed of this letter, nor did it participate in its drafting.  While this is regrettable, as the organization is the best reservoir of true Healthcare Informatics expertise, I opined to that group that this may have been due to the organization's tepid response to bad health IT and to industry control of the narrative, and the problems these issues have caused for physicians and other clinicians. The lack of AMIA leadership regarding bad health IT is an issue I've been pointing out since the late 1990s. AMIA has been largely a non-critical HIT promoter.  That stance has contributed to the need for this multiple-medical specialty society letter in the first place.

Parenthetically, and for a touch of humor about an otherwise drab topic: Here's an example of how management mysticism plays out in pharma.

It's meant to be satirical, but captures reality all too well, in fact scarily so at times:


Management mysticism and muddled thinking.  See https://www.youtube.com/watch?v=kwVjftMMCIE

In pharma, as well as in hospital IT in my days as CMIO, gibberish like this was real.  I imagine it's no different in many hospital management suites these days.

-- SS

1/28/2015  Addendum:

Per a colleague:

FierceHealthIT (1/28) reports, “It’s time for the American Medical Association and more than 30 other organizations urging change in the electronic health record certification process to be part of the solution, former Deputy National Coordinator for Health IT Jacob Reider said in a blog post.” Reider said, “So far, I don’t see much [any?] engagement from the AMA or the others who signed the letter. It’s relatively easy to write a letter saying someone else is responsible for solving problems. Time to step up to the plate and participate in the solutions, folks!"

Regarding the victims of compelled use of bad health IT, this erstwhile health IT leader opines "It's relatively easy to write a letter saying someone else is responsible for solving problems?"

That is simply perverse.

I ask: why are we in the midst of a now-compelled national rollout, with Medicare penalties for non-adopters, when a former government official once responsible for the technology remarks that it's apparently not the makers' problem and that it's "time to step up to the plate and participate in the solutions, folks [a.k.a. end users]!"?

(One wonders if Reider believes those who step up to the plate are entitled to fair compensation for their aid to an industry not exactly known for giving its products away, free.)

It seems to me it's not up to (forced) customers to find solutions to vendor product problems, some deadly.

It's the responsibility of the sellers.

Put more bluntly, Reider's statement is risible and insulting.

I've already opined the following to the AMA contact at the bottom of the letter:

... Relatively milquetoast approaches such as multi-stakeholder Kaizens are not what I had in mind ... A more powerful stance would be to advise society members to begin to avoid conversion, report on bad health IT, and even boycott bad health IT until substantive changes are realized in this industry.

That's "stepping up to the plate" to protect patients, in a very powerful way.

-- SS

Wednesday, December 03, 2014

A new "Better EHR" book and an observation re: health IT regulation, health IT amateurs, and user centered design (UCD) - "responding to user feature requests or complaints?"

A new book has appeared on improving usability of electronic health records.  The result of government-sponsored work, the book is available free for download.  It was announced via an AMIA (American Medical Informatics Association, http://www.amia.org/) listserv, among others:

From: Jiajie Zhang [support@lists.amia.org]
Sent: Tuesday, December 02, 2014 6:00 PM
To: implementation@lists.amia.org
Subject: [Implementation] - New Book on EHR Usability - "Better EHR: Usability, Workflow, and Cognitive Support in Electronic Health Records"

Dear Colleagues,

We are pleased to announce the availability of a free new book from the ONC supported SHARPC project: "Better EHR: Usability, Workflow, and Cognitive Support in Electronic Health Records". The electronic versions (both pdf and iBook) are freely available to the public at the following link: https://sbmi.uth.edu/nccd/better-ehr/


First, this book appears to be a very good resource for understanding issues related to EHR usability.  I particularly like the discussion of cognitive issues.

However, this book also holds messages about the state of the industry, the issue of regulation vs. no regulation, and the question of whether regulation impairs innovation:

I think it axiomatic that user-centered design (UCD) is a key area for innovation, especially in life-critical software like clinical IT.  (I would opine that UCD is actually critical to safety and efficacy of these sophisticated information systems in a sociotechnically complex setting.)

I think it indisputable that the health IT industry has been largely unregulated for most of its existence, unlike other healthcare sectors such as pharma and traditional medical devices.

Yet, even in the absence of regulation, the book authors found this, per Section 5 - EHR Vendor Usability Practices:

a)  A research team of human factors, clinician/human factors, and clinician/informatics experts visited eleven EHR vendors and conducted semi-structured interviews about their UCD processes. "Process" was defined as any series of actions that iteratively incorporated user feedback throughout the design and development of an EHR system. Some vendors developed their own UCD processes while others followed published processes, such as ISO or NIST guidelines.

Vendor recruitment. Eleven vendors based on market position and type of knowledge that might be gained were recruited for a representative sample (Table 1). Vendors received no compensation and were ensured anonymity.
and

b)  RESULTS
Vendors generally fell into one of three UCD implementation categories:

Well-developed UCD: These vendors had a refined UCD process, including infrastructure and the expertise to study user requirements, an iterative design process, and formative and summative testing. Importantly, these vendors developed efficient means of integrating design within the rigorous software development schedules common to the industry, such as maintaining a network of test participants and remote testing capabilities. Vendors typically employed an extensive usability staff.

Basic UCD: These vendors understood the importance of UCD and were working toward developing and refining UCD processes to meet their needs. These vendors typically employed few usability experts and faced resource constraints making it difficult to develop a rigorous UCD process.

Misconceptions of UCD: These vendors did not have a UCD process in place and generally misunderstood the concept, in many cases believing that responding to user feature requests or complaints constituted UCD. These vendors generally did not have human factors/usability experts on staff. Leadership often held little appreciation for usability.

About a third of our vendor sample fell equally into each category.

In other words, a third of health IT sellers lacked the resources to do an adequate job of UCD and testing; and a third did not even understand the concept.

Let me reiterate:

In an unregulated life-critical industry, a third of these sampled sellers thought 'responding to user feature requests or complaints constituted UCD'.  And another third neglected UCD due to a 'lack of resources'.

I find that nothing short of remarkable.

I opine that this is only possible in an unregulated healthcare sector.

Regulation that enforced good design practices and good manufacturing practices (GMPs) could, it follows, actually improve clinical IT innovation, given these authors' observations: it would ensure that sellers lacking the resources either found them or removed themselves from the marketplace, and that sellers who did not understand such a fundamental concept either became experts in UCD or likewise left the marketplace.

I can only wonder what other fundamentals these sellers lack, hampering innovation, that regulation could improve.

As a final point, arguments that regulation hampers innovation seem to assume a fundamental level of competency and good practices among those to be freed from regulation. In this case, that turns out to be an incorrect assumption.

As a radio amateur, I often use the term "health IT amateurs" to describe persons and organizations who should not be in leadership roles in health IT, just as I, as a radio amateur, should not be (and would not want to be) in a leadership role in a mission-critical telecommunications project.

I think that, inadvertently, the writers of this book section gave real meaning to my term "health IT amateurs."  User centered design is not a post-accident or post-mortem activity.

-- SS

12/4/2014 Addendum:

I should add that in the terminology of IT, "we don't have enough resources" - a line I've heard numerous times in my CMIO and other IT-related leadership roles - often meant: we don't want to do extra work that would reduce our profits (or miss our budget targets), or hire someone who actually knows what they're doing, because we don't really think the expertise or tasks in question are that important.

In other cases, the expertise is present, but when those experts opine that an EHR product will kill people if released, management finds the experts 'redundant', e.g., http://cci.drexel.edu/faculty/ssilverstein/cases/?loc=cases&sloc=lawsuit.

Put in more colloquial terms, this is a slovenly industry that has always made me uncomfortable, perhaps in part due to my experience having been a medical safety manager in public transit (SEPTA in Philadelphia), where lapses in basic safety processes could, and did, result in bloody train wrecks.

Perhaps some whose sole experience with indolence and incompetence-driven catastrophe has been in discussions over coffee in faculty lounges cannot appreciate that viewpoint.

Academic organizations like AMIA could do, and could have done, a whole lot more to help reform this industry, years ago.

-- SS

Sunday, April 27, 2014

From a physician and former USAF air traffic controller/pilot on the state of healthcare IT

From a colleague, a physician and blogger and fellow AMIA member with an eclectic background, on the state of healthcare information technology.  Reposted with his permission.

Emphases in bold are mine:

To restate the old joke, the nice thing about medical informatics standards is there are so many of them to choose from… don’t think we necessarily need to invest even more time and energy on ever more sophisticated data models or ever more exhaustive standards (which are then largely ignored). 

The fact of the matter is that the EMR remains in the United States a tool for maximization of reimbursement and as such is not a  technological destination but rather a technological dead end. The driver for proliferation of this ‘dead end’ is the government being willing to fund its expansion with their fervent hope that it will be their magic bullet for finding the cheats and cheaters of Medicare. 

When I was in the USAF, I was trained to be a software and systems engineer at their great expense and at my great pleasure. Additionally, I was for several years prior to my medical school career a USAF Air Traffic Controller, and so I was intimately familiar with perhaps the most perfect of all known systems engineering efforts, the Worldwide Air Traffic Control System; most remarkably (from the wellsprings of my fading memories), I have 50 hours of stick time in the F-16. 

The flying environment of the F-16 which is ‘eyes outside the cockpit’ was made possible by advanced human engineering efforts that resulted in Heads Up Displays of both intense rational and aesthetic beauty that made the machine a joy to fly. 

Fast forward 20 years and what am I given as a clinician to work with…. to keep my head out of the cockpit…… spreadsheets…… designed by engineers who like spreadsheets and think in spread sheets…..  and who don’t even take the 30 minutes it takes to articulate the logic of presentation of clinical data, i.e., present the serum salts together with the BUN/Creatinine, present the RDW with the RBC indices and the hematocrit and hemoglobin … present the last 3 d’s worth of data together aggregated by type rather than alphabetized and homogenized and distributed in clinically illogical boxes. 

The F-16 was designed by engineers, but pilots oversaw its development, and the display of its information systems was always the result of intense end-user interaction with the design teams. This type of intense physician interaction and veto power over poor information design efforts does not exist in [the health IT] industry. Their goal is feature proliferation and uniqueness (not commonality) of function as a market differentiation tool, and to avoid suits over 'look and feel' vis-à-vis the Apple vs. Microsoft suits of the '80s. 

The reality is the train has left, those of us addicted to patient care watch in dismayed horror as our productivity plunges and we struggle to restructure not our workflows but our clinical thought processes to badly designed, logically flawed, and obscenely overpriced documentation tools that distract the expert clinician from a high quality clinical encounter. 

Quite honestly, gentlemen and gentlewomen of the jury, I don’t give a ‘rat's a**’ about superior documentation, I am obsessed with superior outcomes, and as somebody who actually has to work with this junk, it all sucks………. and will continue to suck until such time as real-world clinicians have veto power over the efforts of systems design teams with respect to their information design efforts…. What information design efforts?  My point precisely……. 

As always Acerbically Yours, 


frnk m (Frank Meissner)

“I am not a pessimist, I am an optimist who has not arrived.”
                                                                      - Mark Twain

My reply back was:

Frank, on a related note, I was at a fleamarket yesterday looking for radio & electronics stuff (my hobby) and came across a man selling copies of "Aviation Week" from 1960.  He had the entire year's set.

I looked through them and was STUNNED.

The engineering prowess described in not just the articles but in the advertisements as well - bearings, servos, instrumentation, etc. for aircraft and spacecraft was stunning.  There was even an ad for a desktop computer, a Packard Bell PB-250 with "microsecond add time, 350 transistors and 46 (or so) instructions" for a mere $30,000.

What you are describing - and what I have been trying to describe for over a decade and a half - is an intellectually impoverished industry that ignores our specialty [Medical Informatics] and good engineering practices in general.

That AMIA has made itself appear comfortable with that status quo has been a disappointment to say the least.  What you just wrote should have come from the leadership, and years ago, not from the grass roots.

If physicians could refuse use of clinical IT without sanction, I believe in 2014 that most would walk away.

The fact that this technology is now forced on clinicians is not what the pioneers in informatics intended, I am confident; our medical leadership, furthermore, should be ashamed of this state of affairs. As a former medical manager for the Southeastern PA Transportation Authority, I can attest that even bus drivers and porters in public transit had more rights to determine what equipment they would or would not use in their work, and its configuration, than physicians have in theirs.

-- SS

Friday, February 28, 2014

"EHRs: The Real Story" - Sobering assessment from Medical Economics

From Medical Economics -

"EHRs: The Real Story",  pg. 18-27, Feb. 10, 2014, available here (PDF).

Full issue at http://medicaleconomics.modernmedicine.com/sites/default/files/images/MedicalEconomics/DigitalEdition/Medical-Economics-February-10-2014.pdf - it is large, 12 MB:

... "Despite the government’s bribe of nearly $27 billion to digitize patient records, nearly 70% of physicians say electronic health record (EHR) systems have not been worth it. It’s a sobering statistic backed by newly released data from marketing and research firm MPI Group and Medical Economics that suggest nearly two-thirds of doctors would not purchase their current EHR system again because of poor functionality and high costs."

Here are other key findings from this national survey:

  • 73% of the largest practices would not purchase their current EHR system. The data show that 66% of internal medicine specialists would not purchase their current system. About 60% of respondents in family medicine would also make another EHR choice.
  • 67% of physicians dislike the functionality of their EHR systems.
  • Nearly half of physicians believe the cost of these systems is too high.
  • 45% of respondents say patient care is worse since implementing an EHR. Nearly 23% of internists say patient care is significantly worse.
  • 65% of respondents say their EHR systems result in financial losses for the practice. About 43% of internists and other specialists/subspecialists outside of primary care characterized the losses as significant.
  • About 69% of respondents said that coordination of care with hospitals has not improved.
  • Nearly 38% of respondents doubt their system will be viable in five years.
  • 74% of respondents believe their vendors will be in business over the next 5 years.

My own views are:

While some might dismiss such surveys, as well as reports of harms, as "anecdotes" (those same persons conflating scientific discovery with risk management; see http://hcrenewal.blogspot.com/2011/08/from-senior-clinician-down-under.html), I observe that such articles and surveys have been increasing in frequency over the past few years, and are coming from reasonably capable observers - clinicians - unlike, say, a Fox News survey of pedestrians on complex political matters.

Another physician survey is here:  http://hcrenewal.blogspot.com/2010/01/honest-physician-survey-on-ehrs.html.

Here's an interesting ad hoc survey of nurses:  http://hcrenewal.blogspot.com/2013/07/candid-nurse-opinions-on-ehrs-at.html.
This is not what the Medical Informatics pioneers intended, and is not due to physicians being Luddites (a topic I addressed at http://hcrenewal.blogspot.com/2012/03/doctors-and-ehrs-reframing-modernists-v.html).

In my opinion, organizations that have the expertise to change the current trajectory of this technology, such as the American Medical Informatics Association (AMIA), need to leave their tweed-jacket academic comfort zone and become more proactive - or perhaps I should say aggressive - in combating the industry status quo.  

The health IT industry trade associations such as HIMSS have no such qualms about aggressively and shamelessly pushing their version of EHR utopia, an agenda that has led to massive profits for the industry... but to clinician survey results such as above.  And to injured and dead patients.

-- SS

Saturday, March 02, 2013

JAMIA: Reduction in medication errors in hospitals due to adoption of computerized provider order entry systems

A new article appeared online 20 February 2013 in the Journal of the American Medical Informatics Association entitled "Reduction in medication errors in hospitals due to adoption of computerized provider order entry systems" (link to fulltext) by David C Radley, Melanie R Wasserman, Lauren EW Olsho, Sarah J Shoemaker, Mark D Spranca and Bethany Bradshaw.

The authors performed a meta-analysis of the literature on Computerized Practitioner Order Entry (CPOE) systems in inpatient settings and concluded:

"Processing a prescription drug order through a CPOE system decreases the likelihood of error on that order by 48% (or in a range of 41% to 55% with ninety-five percent confidence).  Given this effect size, and the degree of CPOE adoption and use in hospitals in 2008, we estimate a 12.5% reduction in medication errors, or ∼17.4 million medication errors averted in the USA in one year."
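As a rough consistency check on the quoted figures - my own back-of-the-envelope arithmetic, not the authors' methodology - the 48% per-order effect and the 12.5% national estimate together imply that roughly a quarter of 2008 medication orders went through CPOE, against a baseline of roughly 139 million medication errors per year:

```python
# Figures quoted from the JAMIA abstract above.
per_order_reduction = 0.48    # 48% fewer errors on an order processed via CPOE
overall_reduction = 0.125     # estimated 12.5% national reduction in errors
errors_averted = 17.4e6       # ~17.4 million errors averted in one year

# Implied share of medication orders processed through CPOE in 2008,
# assuming the simple relation: overall = per_order * cpoe_share.
# (This linear relation is my assumption, not the paper's stated model.)
cpoe_share = overall_reduction / per_order_reduction      # ≈ 0.26

# Implied national baseline of medication errors per year.
baseline_errors = errors_averted / overall_reduction      # ≈ 139 million

print(round(cpoe_share, 2), round(baseline_errors / 1e6, 1))
```

The quoted numbers hang together, in other words; whether the underlying per-order effect size is credible is a separate question taken up below.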

It is important to know the potential benefits of CPOE, as the government has been pushing this technology since the foundation of the Office of the National Coordinator (for health IT) within HHS in 2004.  Indeed, Medicare reimbursement penalties will begin in 2015 for non-adopters of government-certified health IT.

It is especially important to get to the truth about CPOE specifically, and health IT in general, in terms of risks, benefits, return on investment, improvements, and alternatives.

Not long before this new JAMIA article appeared, an active study of EHR problems with voluntary reporting by members of the ECRI Institute's Patient Safety Organization (PSO) produced some concerning data.  Namely, that over a 9-week period starting April 16, 2012, and ending June 19, 2012, 171 health information technology-related problems were reported from just 36 healthcare facilities, primarily hospitals. Eight of the incidents reported involved patient harm, and three may have contributed to patient deaths.

Obviously, extrapolating those numbers to: 1) a much higher number of hospitals, of which the U.S. alone has approximately 5,700, plus other facilities such as long-term care and private physician offices; 2) a full year, not just 9 weeks; 3) the perhaps 5% voluntary reporting level (per Koppel) of issues such as medication errors; and 4) (per FDA) the frequent failure to recognize IT as contributing to medical incidents (this list is not all-inclusive) - the results are of concern.
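The naive extrapolation just described can be put into numbers. This is purely illustrative - a linear scale-up using only the figures quoted in this post (171 reports, 36 facilities, 9 weeks, ~5,700 U.S. hospitals, ~5% voluntary reporting), not ECRI's methodology:

```python
# All inputs come from the post above; the linear scaling is my own
# illustration, not ECRI's analysis.
reports = 171          # HIT-related problems reported to the ECRI PSO
facilities = 36        # reporting healthcare facilities
weeks = 9              # duration of the Deep Dive study

per_facility_week = reports / facilities / weeks   # ≈ 0.53 reports/facility/week

us_hospitals = 5_700   # approximate U.S. hospital count cited above
reporting_rate = 0.05  # ~5% of incidents voluntarily reported (per Koppel)

annual_estimate = per_facility_week * us_hospitals * 52 / reporting_rate
print(f"{annual_estimate:,.0f}")
```

Even this crude arithmetic lands on the order of three million HIT-related problems per year across U.S. hospitals alone, before counting long-term care and physician offices - which is why the ECRI figures are of concern.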

Thus, work such as in this new JAMIA article on CPOE is important.  The article can be downloaded in its entirety as of this writing from the link above.  The article describes a methodology that is quite complex, and obviously a great deal of time and effort was put into it.  It appears to be a valiant effort to get us one step closer to the truth.  This should be applauded.

I was impressed on first reading of this literature meta-analysis and its statistical calculations. (Actually I needed to read it several times to fully grasp the methodologies involved.)

The question arose in my mind, however: can this article's conclusions be true, and the ECRI PSO Deep Dive study be true, at the same time?

Prior to going into an analysis and perhaps detailed critique of the methodology, and knowing the difficulties and contradictions the literature on this topic presents, I decided first to look at the source articles selected from the literature for inclusion in the JAMIA meta-analysis.

In doing so, issues became apparent that shed light on the difficulties of meta-analyses on topics such as this.

The only methodological issue I will mention at this time is that the study used a surrogate endpoint - medication "error" rates before, and after, implementation of CPOE, rather than patient outcomes.  (The reason I put "error" in quotes is that, as the authors describe regarding study limitations, the exact definition varies from site to site and study to study.  They also acknowledge the limitations of using such an endpoint.)  Surrogate endpoints, however, may or may not reflect the actual information being sought regarding outcomes.

As Roy Poses noted in a June 2008 post "Criticism of Surrogate Endpoints in Whose Interests?":

... The problem with surrogate endpoints is that they are surrogates for the real thing. In many cases, a treatment may appear beneficial when measured by its effect on such endpoints, but not turn out to be beneficial when measured by its effect on real clinical outcomes, e.g., alleviation of symptoms, improvement of function, and prolongation of survival. There are many reasons why this may be the case.

This weakens the present study as a basis for social re-engineering.  The authors responsibly acknowledge this via a statement in the conclusion that:

Future research in this area will be critically important to inform policy and funding decisions regarding the development and implementation of CPOE in care delivery.

When I reviewed the studies that were used for the meta-analysis, however, my enthusiasm for the results was diminished.

The authors write:

Using the search terms of Ammenwerth et al, we updated the search using PubMed in February 2009, identifying 390 studies. Each was reviewed by two study authors (MRW and DCR). After applying the a priori inclusion/exclusion criteria, 10 studies were retained. [Listed in footnotes 10–19 - ed.]

Here are the 10 studies retained, per footnotes #10 - 19.  Short excerpts (I am trying to keep this post relatively short) and my very brief comments on each follow.  Hyperlinks to the summaries, and in some cases to the full text, are present in the online study itself at the full-text link at the top of this post.

First, I note the absence of randomized, controlled clinical trials, the gold standard of medical research.  That lack is not the fault of the authors; it is a general shortcoming of the healthcare information technology domain.

That said: 

Included study #1 (footnote 10):

Bates DW, Teich JM, et al, The impact of computerized physician order entry on medication error prevention. Brigham and Women's Hospital, J Am Med Inform Assoc 1999;6:313–21.

... During the study, the non-missed-dose medication error rate fell 81 percent, from 142 per 1,000 patient-days in the baseline period to 26.6 per 1,000 patient-days in the final period (P < 0.0001). Non-intercepted serious medication errors (those with the potential to cause injury) fell 86 percent from baseline to period 3, the final period (P = 0.0003). Large differences were seen for all main types of medication errors: dose errors, frequency errors, route errors, substitution errors, and allergies. For example, in the baseline period there were ten allergy errors, but only two in the following three periods combined (P < 0.0001).  The study periods were as follows: baseline, 51 days, Oct-Nov 1992; period 1, 68 days, Oct-Dec 1993; period 2, 49 days, Nov-Dec 1995; and period 3, 52 days, Mar-Apr 1997.

I note that this was a highly advanced setting with long-standing Medical Informatics expertise, performed by Medical Informatics experts of the highest caliber.  It was an ideal environment for implementing good health IT, so the results may not generalize to facilities without that level of experience.
Also, the study was conducted many years ago, some of its data gathered two decades ago.  While one might assume the technology has improved since, the increased commercial-sector involvement after the 1990s, and especially after the HITECH incentives of 2009, may be producing more bad health IT and more implementations in facilities with far less (if any) informatics expertise.

Thus, in my view the study's applicability to current times and to all medical organizations is limited.

Included study #2 (footnote 11):

Medication Administration Variances Before and After Implementation of Computerized Physician Order Entry in a Neonatal Intensive Care Unit, Pediatrics 2008;121:123–8

... Data on 526 medication administrations, including 254 during the pre-computerized physician order entry period and 272 after implementation of computerized physician order entry, were collected. Medication variances were detected for 19.8% of administrations during the pre-computerized physician order entry period, compared with 11.6% with computerized physician order entry (rate ratio: 0.53). Overall, administration mistakes, prescribing problems, and pharmacy problems accounted for 74% of medication variances; there were no statistically significant differences in rates for any of these specific reasons before versus after introduction of computerized physician order entry.

Here, 'n' is very small, and CPOE was found to have no statistically significant effect on administration mistakes, prescribing problems, or pharmacy problems.  Thus, this study is (unfortunately) not a ringing endorsement for national CPOE implementation.

Included study #3 (footnote 12):

The effect of computer-assisted prescription writing on emergency department prescription errors, Acad Emerg Med 2002;9:1168–75.

Even without a summary, my concern here is that ePrescribing and CPOE are different entities.  Including an ePrescribing study in a meta-analysis of CPOE risks conflating the results of one with the other.

Included study #4 (footnote 13):

Impact of computerized physician order entry on clinical practice in a newborn intensive care unit, J Perinatol. 2004 Feb;24(2):88-93.

This article studied gentamicin dosing and turnaround times and found that:

"...the accuracy of gentamicin dose at the time of admission for 105 (pre-CPOE) and 92 (post-CPOE) VLBW infants was determined. In the pre-CPOE period, 5% overdosages, 8% underdosages, and 87% correct dosages were identified. In the post-CPOE, no medication errors occurred. Accuracy of gentamicin dosages during hospitalization at the time of suspected late-onset sepsis for 31 pre- and 28 post-CPOE VLBW infants was studied. Gentamicin dose was calculated incorrectly in two of 31 (6%) pre-CPOE infants. No such errors were noted in the post-CPOE period."

My comments: a NICU is a specialized environment with a high ratio of clinicians and staff to patients, so the findings again may not be generalizable.  One should also ask whether complex CPOE systems are really needed for dosing calculations and turnaround-time improvements; simpler and cheaper human/technological solutions might have achieved similar or better results.  Thus, without belittling the results achieved by this study's interventions in 2004, I am concerned that it is not strong evidence for generalizability of even the CPOE surrogate measurement, namely a decrease in medication "errors."

Included study #5 (footnote 14):

A computer-assisted management program for antibiotics and other antiinfective agents. N Engl J Med 1998;338:232–8.

We have developed a computerized decision-support program linked to computer-based patient records that can assist physicians in the use of antiinfective agents and improve the quality of care. This program presents epidemiologic information, along with detailed recommendations and warnings. The program recommends antiinfective regimens and courses of therapy for particular patients and provides immediate feedback. We prospectively studied the use of the computerized antiinfectives-management program for one year in a 12-bed intensive care unit. RESULTS: During the intervention period, all 545 patients admitted were cared for with the aid of the antiinfectives-management program. Measures of processes and outcomes were compared with those for the 1136 patients admitted to the same unit during the two years before the intervention period. The use of the program led to significant reductions in orders for drugs to which the patients had reported allergies (35, vs. 146 during the preintervention period; P less than 0.01), excess drug dosages (87 vs. 405, P less than 0.01), and antibiotic-susceptibility mismatches (12 vs. 206, P less than 0.01). There were also marked reductions in the mean number of days of excessive drug dosage (2.7 vs. 5.9, P less than 0.002) and in adverse events caused by antiinfective agents (4 vs. 28, P less than 0.02).  [Several other benefits omitted for brevity - they can be seen at the JAMIA footnote hyperlink -  ed.]

My thoughts here: while the results were commendable, this study, too, took place 15 years ago, in a specialized ICU environment with a high staff-to-patient ratio.  I also wonder whether complex, expensive CPOE is needed to accomplish these tasks, as opposed to, say, an online decision-support system with appropriate workflows and processes.

Included study #6 (footnote 15):

Impact of computerized prescriber order entry on medication errors at an acute tertiary care hospital. Hosp Pharm 2003;38:227–31.

The authors analyzed medication errors documented in a hospital's database of clinical interventions as a continuous quality improvement activity. They compared the number of errors reported prior to and after computerized prescriber order entry (CPOE) was implemented in the hospital. Results indicated that in the first 12 months of CPOE, overall medication errors were reduced by more than 40%, incomplete orders declined by more than 70%, and incorrect orders decreased by at least 45%. Illegible orders were virtually eliminated but the level of medication errors categorized by drug therapy problems remained significantly unchanged. The study underscores the positive impact of CPOE on medication safety and reemphasizes the need for proactive clinical interventions by pharmacists.

This study appears reasonable for inclusion in a meta-analysis, although ideally it would have accounted for possible non-intervention influences on the pre/post comparison.  The transition to CPOE, training, increased awareness, etc. can themselves influence results, especially in the short term.

Included study #7 (footnote 16):

Error reduction in pediatric chemotherapy: computerized order entry and failure modes and effects analysis. Arch Pediatr Adolesc Med 2006;160:495–8.

Before-and-after study from 2001 to 2004. After CPOE deployment, daily chemotherapy orders were less likely to have improper dosing (relative risk [RR], 0.26; 95% confidence interval [CI], 0.11-0.61), incorrect dosing calculations (RR, 0.09; 95% CI, 0.03-0.34), missing cumulative dose calculations (RR, 0.32; 95% CI, 0.14-0.77), and incomplete nursing checklists (RR, 0.51; 95% CI, 0.33-0.80). There was no difference in the likelihood of improper dosing on treatment plans and a higher likelihood of not matching medication orders to treatment plans (RR, 5.4; 95% CI, 3.1-9.5).

Again, the results appear commendable.  However, there was no difference in the likelihood of improper dosing on treatment plans, and, worse, there was a higher likelihood of medication orders not matching treatment plans.
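For reference, the relative risks and 95% confidence intervals quoted in before-and-after studies like this one come from simple 2x2 counts.  A minimal sketch, using hypothetical counts (not the study's actual data) chosen to land near the reported RR of 0.26 for improper dosing:

```python
import math

def relative_risk(a, n1, b, n2):
    """Relative risk of an event in the post group (a events in n1 orders)
    vs. the pre group (b events in n2 orders), with a 95% CI computed on
    the log scale (the standard Katz method)."""
    rr = (a / n1) / (b / n2)
    se = math.sqrt(1/a - 1/n1 + 1/b - 1/n2)  # SE of log(RR)
    lo = math.exp(math.log(rr) - 1.96 * se)
    hi = math.exp(math.log(rr) + 1.96 * se)
    return rr, lo, hi

# Hypothetical: 8 improper doses in 400 post-CPOE orders
# vs. 30 improper doses in 390 pre-CPOE orders.
rr, lo, hi = relative_risk(8, 400, 30, 390)
print(f"RR = {rr:.2f} (95% CI {lo:.2f}-{hi:.2f})")
# prints: RR = 0.26 (95% CI 0.12-0.56)
```

When an interval such as 5.4 (95% CI 3.1-9.5) sits entirely above 1.0, as with the order/treatment-plan mismatches above, the post-CPOE risk is significantly higher, not lower.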

In fact, this article was accompanied by a letter in response entitled "Primum non nocere", David Dickens, MD; Dianne Sinsabaugh, RPh; Brenda Winger, PharmD, Arch Pediatr Adolesc Med. 2006;160(11):1185-1186 (after some digging, text found at http://archpedi.jamanetwork.com/article.aspx?articleid=486317):

Kim et al. demonstrated that in the practice of pediatric oncology, computerized physician order entry (CPOE) reduced improper dosing, missing cumulative doses, and incomplete nursing checklists.  In contrast to these benefits, however, CPOE also resulted in a 5-fold increase in “not matching medication orders to treatment plans.” Although little detail was provided on the nature of these medication order/treatment plan “mismatches,” it implies that chemotherapy ordered through CPOE deviated more often from intended protocol therapy as compared with paper-ordered chemotherapy. While CPOE ostensibly led to more precise chemotherapy dosing, it increased the risk of that chemotherapy being the wrong chemotherapy.

The author did respond: "Mismatches between treatment plan and orders at point of therapy increased, but were intercepted and clarified at POC. [By people - ed.] No incorrect meds were given."

On its face, this is not dispositive proof of CPOE beneficence, and it raises the significant concern that, sooner or later, a person will miss such a discrepancy, resulting in unintended adverse consequences. 

Included study #8 (footnote 17):

Effects of an integrated clinical information system on medication safety in a multi-hospital setting. Am J Health Syst Pharm 2007;64:1969–77.

This study took place at Lifespan health care system that includes Rhode Island Hospital (RIH), a private, 719-bed, not-for-profit, acute care hospital and academic medical center that has a pediatric division, the Hasbro Children’s Hospital; and The Miriam Hospital (TMH), a 247-bed, not-for-profit, acute care general hospital.

Methods. The integrated systems selected for implementation included computerized physician order entry, pharmacy and laboratory information systems, clinical decision-support systems (CDSSs), electronic drug dispensing systems (EDDSs), and a bar-code point-of-care medication administration system. The indicators for CPOE with inherent CDSSs demonstrated a significant effect of this functionality on reducing prescribing error rates for three of the four indicators measured: drug allergy detection, excessive dose, and incomplete or unclear order. The fourth indicator measured, therapeutic duplication, did not show a significant effect on prescribing error rates. For the rules engine software CDSS, the colchicine indicator did not show a statistically significant effect on prescribing error rate, but a significant decrease in prescribing errors related to metformin use in renal insufficiency was observed after implementation of the rules engine software and integration with CPOE.

Again, these are commendable results, but on its face the technologies involved went far beyond CPOE alone, and the results were not uniform.  The actual reduction figures for seven categories were mostly in the 50% range, with one at 86% (allergy); the therapeutic-duplication reduction was only 8% and not statistically significant.

Of more concern, this organization was in the news in 2011.  A software bug there led to thousands, perhaps tens of thousands, of prescription errors that could have led (and, despite the organization's denials and absent definite proof, I would be concerned did lead) to injury and death.  I wrote about the malfunction at http://hcrenewal.blogspot.com/2011/11/lifespan-rhode-island-yet-another.html.  The bug went undiscovered for about a year.

Other organizations, especially those new to CPOE and/or health IT, face similar risks.

Again, this is not a caveat-free endorsement of national CPOE rollout in 2013.

Included study #9 (footnote 18):

Effect of computer order entry on prevention of serious medication errors in hospitalized children. Pediatrics 2008;121:e421–7.

627 pediatric admissions, with 12,672 medication orders written over 3,234 patient-days.  The rate of non-intercepted serious medication errors in this pediatric population was reduced by 7% after the introduction of a commercial computerized physician order entry system, much less than previously reported for adults, and there was no change in the rate of injuries as a result of error. Several human-machine interface problems, particularly surrounding selection and dosing of pediatric medications, were identified.

The issues of concern here are evident in the excerpt above and need not be restated.

Lastly:

Included study #10 (footnote 19):

Evaluation of reported medication errors before and after implementation of computerized practitioner order entry. J Health Inf Manag 2006;20:46–53.

While a major objective of CPOE is to reduce medication errors, its introduction is a major system change that may result in unintended outcomes. Monitoring voluntarily-reported medication errors in a university setting was used to identify the impact of initial CPOE implementation on medical-surgical and intensive care units. A retrospective trend analysis was used to compare errors one year before and six months after implementation. Total error reports increased post-CPOE but the level of patient harm related to those errors decreased. Numerous modifications were made to the system and the implementation process. The study supports the notion that CPOE configuration and implementation influences the risk of medication errors. Implementation teams should incorporate monitoring medication errors into project plans and expect to make ongoing changes to continually support the design of a safer care delivery environment.

This appeared to be more a study of unintended outcomes than of benefits: total medication error reports increased post-CPOE, though the level of patient harm related to those errors decreased.  (That decrease might have been due to human factors, or to serendipity, neither of which can be expected to offer protection forever.)

The reasons for the changes were described this way:

... Contributing causes. To assist in the development of safety interventions, contributing causes were identified for reported errors. The most common contributing cause was noncompliance to policy and procedure, identified in 40 percent of errors. For example, a previous order may not have been discontinued when a new dose change was entered, resulting in two active orders for the same medication with different dosages

 The next most common contributing cause was computer entry errors, seen in 25 percent of mistakes. One example was if a medication order was placed on the wrong patient.  ["Use error" due to confusing user interfaces as recently defined by NIST - as opposed to "user error" - was likely to have contributed to at least some of these errors - ed.]  The next most common error was initial load errors (19 percent). During entry of all current medications on the day of activation, multiple category B errors were made. An example was a written order for sliding scale calcium gluconate “PRN,” which was entered into the CPOE system as “scheduled.”

There were also computer design issues that contributed to 10 percent of errors. An example was when the pharmacist received two printouts for methylprednisolone 500 mg IV. He assumed it was a duplicate order, but when he reviewed the CPOE system, he saw that one order was for today and the other was for tomorrow. The dates for these orders were not visible on the order printout from the CPOE system. [This again seems to be 'use error' - ed.]


These issues can and will occur anywhere.  Once again, this is not an article, either by itself or within a meta-analysis, that I find ideal as attempted proof of CPOE effectiveness and beneficence.

In fact, after I reviewed this source, I noted that this study was eliminated from the inclusion set:

... Based on later expert reviewer feedback, we eliminated one additional study that solely used a voluntary reporting method for error detection, leaving nine studies for our final pooled analysis.

It would probably not be hard to convince critical thinkers that this study may have been removed for reasons other than the one stated.

In summary, while one should not and cannot expect perfection in the studies available for a meta-analysis, due to the difficulties in this domain the source literature was not ideal, and the surrogate endpoint also raises concern.  (I have not reviewed the entire corpus of potential literature myself.)  The authors conducted a difficult and rigorous study, but one cannot turn data "lead" (as in Pb) to gold (as in Au), no matter how hard one works or how good one's intentions are.

The authors did note the study's limitations, although the industry will likely take their conclusion as a "full steam ahead" signal.  (I note with some irony the proximity of this article's release to the soon-to-start massive HIMSS 2013 Annual Conference & Exhibition trade show, March 3-7, 2013 in New Orleans, LA.)  While likely a coincidence, my concern is that CPOE vendors will be talking nonstop about these results.

Thus, I agree with the authors' conclusion (especially in view of the recent voluntary-reporting-based ECRI PSO study) that "future research in this area will be critically important to inform policy and funding decisions regarding the development and implementation of CPOE in care delivery."

From a clinical perspective, "primum non nocere" and the avoidance of gambling billions of dollars applies, at least until a better understanding of the technology's risk/benefit ratio and how to improve it occurs.

A fraction of those billions would pay for more robust, current studies at the scale needed to get closer to the truth, such as formal, mandatory post-market surveillance measuring not surrogate but primary variables - outcomes both positive and negative.  As the authors note, voluntary reporting has the least sensitivity for uncovering error: 

... Reviewed studies used various medication error detection methods. Research suggests that the highest error rates are found through direct observation, followed by chart review, then automated surveillance, and voluntary reporting.  [Citations were made to papers by Flynn and Jha regarding these points - ed.]

Formal studies are essential from the medical (e.g., safety and public health), business (e.g., ROI and liability), and social policy (e.g., are we spending the billions of dollars this technology costs wisely?) perspectives.

-- SS

Addendum 

I have often been the fire-breathing, über-skeptical iconoclast on matters such as this.  This time, however, I will allow a true international expert to take on that role: Dr. Richard Cook, who had a guest post here yesterday (link).  Dr. Cook's opinion on this JAMIA study (again, reproduced here with permission) was this:

A meta-analysis of the literature on the nature of the universe in the mid 1500's would have concluded that the sun revolves around the earth. The data isn't fake, just worthless.  

-- SS

Wednesday, January 30, 2013

AMIA: Enhancing patient safety and quality of care by improving the usability of EHR systems, but ... no sympathy for victims of bad health IT?

A panel of experts from the American Medical Informatics Association have written a paper "Enhancing patient safety and quality of care by improving the usability of electronic health record systems: recommendations from AMIA."

The paper is publicly available at this link in PDF.

The authors are Blackford Middleton (Harvard Medical School), Meryl Bloomrosen (AMIA), Mark A Dente (GE Healthcare IT), Bill Hashmat (CureMD Corp.), Ross Koppel (Dept. of Sociology, Univ. of Pennsylvania), J Marc Overhage (Siemens Health Services), Thomas H Payne (U. Washington IT Services), S Trent Rosenbloom (Vanderbilt Informatics), Charlotte Weaver (Gentiva Health Services) and Jiajie Zhang (University of Texas Health Science Center at Houston).

The paper states what has been obvious to this author - and many others - for many years:

ABSTRACT:  In response to mounting evidence that use of electronic medical record systems may cause unintended consequences, and even patient harm, the AMIA Board of Directors convened a Task Force on Usability to examine evidence from the literature and make recommendations. This task force was composed of representatives from both academic settings and vendors of electronic health record (EHR) systems. After a careful review of the literature and of vendor experiences with EHR design and implementation, the task force developed 10 recommendations in four areas: (1) human factors health information technology (IT) research, (2) health IT policy, (3) industry recommendations, and (4) recommendations for the clinician end-user of EHR software. These AMIA recommendations are intended to stimulate informed debate, provide a plan to increase understanding of the impact of usability on the effective use of health IT, and lead to safer and higher quality care with the adoption of useful and usable EHR systems.

The paper is a respectable start toward acknowledging the issues ... albeit years late.

That said:

I noted some language in the article typical of the reluctance of the health IT industry and its friends to confront these issues directly.  I wrote a letter to the authors that, as I indicate below, not unexpectedly went unanswered - except by one individual, not even a physician, who has gone out on a limb professionally in the interest of patients' rights and, as a health IT "iconoclast" (i.e., patient advocate), suffered for doing so (link).  The lack of response to the letter is itself representative, in my opinion, of a pathology that grants more rights to the healthcare computer and its makers than to patients.   More on this below.

First, I note I am rarely if ever cited by the academics.  They are not prohibited from doing so; I've probably been writing publicly on these issues - poorly done health IT, improper leadership, the turmoil created, etc. - for longer than anyone else in the domain.

I also note that the paper takes, somewhat, the form of an analytical debate.  Analytical debates are relatively ineffective in this domain; they are like popcorn thrust against a battleship.  The paper, appearing as it does in a relatively obscure specialty journal (Journal of the American Medical Informatics Association), will probably get more exposure from this blog post than from the entire readership of that journal.  The authors need to be raising these issues in forums widely read by citizens and government, not in dusty academic journals - that is, assuming they want the messages to diffuse widely.

In my review of the article, I note the following:

... In an Agency for Healthcare Research and Quality (AHRQ) workshop on usability in health IT in July 2010, all eight participating vendors agreed that usability was important and many suggested it was a competitive differentiator, although some considered that usability was in the eye of the beholder and that the discipline of usability evaluation was an imperfect science, with results that were not useful.

A paper like this should have clearly repudiated antiquated viewpoints like that, not merely noted them.   Not taking a stand is a sign of weakness...or sympathy.

As a matter of fact, if leaders such as these had paid attention to the 'iconoclasts' and their 'anecdotes', my own mother might not have gone through horrible suffering and death, with me as sad witness, as I related to them in my letter below.

... End-users of EHR systems are ultimately accountable for their safe and effective use, like any tool in clinical care.

I see a linguistic sleight of hand in the use of the word "tool" to describe health IT, blending or homogenizing this apparatus with other "tools" in clinical care.  The health IT "tool" is unlike any other: no transaction of care can occur without going through this device, so all care is totally dependent on it.  Further, unlike pharmaceuticals and medical devices, this "tool" is unvetted and unregulated, yet its use is forced upon many users.

... [AMIA] subcommittees reviewed the literature on usability in health IT, current related activities underway at various US Federal agencies, lessons learned regarding usability and human factors in other industries, and current federally funded research activities.


Did they speak with the source of the most candid information - the plaintiffs' and defendants' bars?

Need I even ask that question?

... Recent reports describe the safe and effective use of EHR as a property resulting from the careful integration of multiple factors in a broad sociotechnical framework

This is not merely 'recent' news.  The field of Social Informatics (link), which has studied IT in its social contexts for decades, has long offered observations on the importance of considering multiple factors in a broad sociotechnical framework.   The authors all know this - or should know this, or should have made it their business to know it.  The statement sounds somewhat protective of the HIT and hospital industries regarding their longstanding negligence of those issues.

... User error may result in untoward outcomes and unintended negative consequences. These may also occur as a result of poor usability, and may also be an emergent property only demonstrated after system implementation or widespread use.

I note the use of the term "user error," and the absence of the term "use error," with significant disdain.  As I wrote here regarding the views of a HIT industry executive holding the mystical "American Medical Informatics Certification for Health Information Technology," NIST itself now defines "use error" (as opposed to "user error") as follows:

“Use error” is a term used very specifically by NIST to refer to user interface designs that will engender users to make errors of commission or omission. It is true that users do make errors, but many errors are due not to user error per se but due to designs that are flawed, e.g., poorly written messaging, misuse of color-coding conventions, omission of information, etc. From "NISTIR 7804: Technical Evaluation, Testing and Validation of the Usability of Electronic Health Records." It is available at http://www.nist.gov/healthcare/usability/upload/Draft_EUP_09_28_11.pdf (PDF).

In the article, indefinites were exchanged with what should have been stronger, declarative statements, and vice versa:

User error ... may also represent a potential health IT-related error yet to happen.

I most decidedly wish they'd stop this "may" verbiage in policy papers like this.

... Anecdotal reports suggest that these application differences [where clinicians use more than one commercial EHR system] result in an increased training burden for EHR users.

"Anecdotal"?  How about "obvious to a third grader?" 

"Anecdotal" in academic papers often is a term of derision for inconvenient truths such as reports of health IT problems.  Its use often reflects a need for authors using the term (per a senior clinician from Victoria, Australia on the 'anecdotes' issue, link) "to attend revision courses in research methodology and risk management."

... Some suggest that the expected gains sought with the adoption of EHR are not yet realized.

"Some"?  How about "credible experts?"  "Suggest?"  They merely hint at it?  How about "opine?"
 
... The design of software applications requires both technical expertise and the ability to completely understand the user’s goal, the user’s workflow, and the socio-technical context of the intent

In the meantime, AMIA has been promoting national rollout of a technology where, most often, the latter does not apply.

To ... transform our healthcare delivery system ... clinicians need to use usable, efficient health IT that enhances patient safety and the quality of care.

This is the typical hyperenthusiast mantra.  Where's the proof?  And, transform into what, exactly?  Vague rhetoric like this in allegedly scientific papers is most unwelcome.

Some experts suggest that improving the usability of EHR may be critical to the continued successful diffusion of the technology.

More weak talk.  Why not come right out and say "Credible experts opine that ...."?

... While some EHR vendors have adopted user-centered design when developing health information technologies, the practice is not universal and may be difficult to apply to legacy systems.

From the patient-advocacy perspective, that's their problem; it's a risk of being in this business.  Patients should not be used as experimental subjects while IT sellers figure out what other industry sectors mastered long ago.   Further, sellers should be held accountable for failures that result in harm - another risk of doing business in this sector, one that clinicians have long learned to live with.

... Some believe it is difficult or impossible to reliably compare one product with another on the basis of usability given the challenges in assessment of products as implemented.

Nothing is "impossible" and again, if it's "difficult", that's the industry's problem.  There is risk of being in the business of medicine or medical facilitation; nobody promised a rose garden, and a rose garden should not be expected.

... Many effects of health IT can be considered to be ‘emergent’ or only discovered after monitoring a system in use

One might ask: where have the industry and AMIA been regarding postmarket surveillance (common in other health sectors) for the past several decades?

... AMIA believes it is now critical to coordinate and accelerate the numerous efforts underway focusing on the issue of EHR usability.

Only "now?"

... Establish an adverse event reporting system for health IT and voluntary health IT event reporting

No, no, no ...voluntary reporting doesn't work.  Even mandatory reporting is flawed, but it's better than voluntary.

I am invariably disappointed by recommendations like this.  I've observed repeatedly, for example, that "voluntary reporting" of EHR problems already exists - in the form of the FDA MAUDE database - and most HIT sellers' reports are absent.  See my posts on MAUDE here, here and here.  (Also, the only seller that seems to report may have ulterior motives, i.e., restraint of trade.)

... A voluntary reporting process could leverage the AHRQ patient safety organizations (PSO) ... This work should be sponsored by the AHRQ.

These folks clearly don't want any teeth in this.  AHRQ is a research-oriented government branch, not a regulator, nor does it have regulatory expertise.

AMIA recommends:

Research and promote best practices for safe implementation of EHR

In 2013 this is valuable information in the same sense that advice to use sterile technique during neurosurgery is valuable.

"Promoting best practices" has been done for decadesNot mentioned is avoiding worst practices.   I've long written these are not the same thing, as toleration of the inappropriate leadership by health IT amateurs (a term I use in the same sense that I am a Radio Amateur, not a telecommunications professional), politics, empire-building and other dysfunction that goes on in health IT endeavors negates laundry lists of "best practices."

What is required is to research and abolish worst practices, including the culture and dynamics of the 'health IT-industrial complex.'  I made this point in my very first website in 1998.  It appears the authors don't get it and/or won't admit to the dysfunction that goes on in health IT projects.
 
... The adoption of useful and usable EHR will lead to safer and higher quality care, and a better return on investment for institutions that adopt them.

"Will?"  With respect to my observation above about the paper's prominent misuse of indefinites vs. stronger declarative terms, the word "may" would have been the appropriate term hereAs I wrote about similar statements from ONC in the NEJM in my 2010 post "Science or Politics? The New England Journal and The 'Meaningful Use' Regulation for Electronic Health Records", I'm quite disappointed seeing speculation and PR presented as fact from alleged scientists and scientific organizations.

Finally, I wrote the following email letter to the authors, to which (except for Ross Koppel) I received no reply.  While Dr. Koppel (a PhD) graciously expressed sympathy for me and my mother, the others (many MDs) were silent.

Perhaps the silence is the best indicator of their concern for the rights of computers and HIT merchants relative to the rights of people:

Mon, Jan 28, 2013 at 1:12 PM
Dear authors,

I've reviewed the new paper "Enhancing patient safety and quality of care by improving the usability of electronic health record systems: recommendations from AMIA" and wanted to express thanks for it.

It's a good start.  Late, but a good start at returning the health IT domain to credibility and evidence-based practice.

It's too bad it didn't come out years earlier.  Perhaps my mother would not have gone through a year of horrible suffering and death, with me as sad witness, due to the toxic effects of bad health IT. 

Perhaps you should hear how horrible it was to hear my mother in an extended agitated delirium; to hear her cry hysterically later on when the rehab people told her that her weight was 95 pounds; to have to make her a "no code" and put her on hospice-care protocols, and then to have watched her aspirate a sleeping pill when she was agitated, and die several days later of aspiration pneumonia and sepsis ... in the living room of my home ... and then watch the Hearse take her away from my driveway...as a result of bad health IT.

I will be writing more thoughts on your article at the Healthcare Renewal blog, of course, but wanted to raise three issues:

1.  The use of "may" and "will" is reversed, and the term "anecdote" is conflated with risk management-relevant case reports.


  • They may also represent a potential health IT-related error yet to happen.  --->  They likely represent a potential health IT-related error yet to happen
  • Anecdotal reports suggest that these application differences result in an increased training burden for EHR users.  ---> Credible reports indicate...
  • Some suggest that the expected gains sought with the adoption of EHR are not yet realized. ---> Credible experts opine ....
  • Some experts suggest that improving the usability of EHR may be critical to the continued successful diffusion of the technology. --->  "Credible experts opine that ..."
  • The adoption of useful and usable EHR will lead to safer and higher quality care, and a better return on investment for institutions that adopt them. ---> The adoption of useful and usable EHR may lead to safer and higher quality care

You really need to show more clarity ... and guts ... in papers like this, and drop the faux academic weasel words.

2.  You neglected to speak to the best source for information on EHR-related harms, evidence spoliation, etc... med mal attorneys.

3.  You also neglected to speak to, or cite, the writings of a Medical Informaticist on bad health IT now going back 15 years - and whose mother was injured and died as a result of the issues you write about - me.  In fact I am rarely cited or mentioned by anyone with industry interests.

An apparent contempt for 'whistleblowers' such as myself makes me wonder ... what kind of people are the leaders of health IT, exactly? 

Do they value computers' rights over patients'?


It is not at all clear to me which of these has been the primary motivator of many of the health IT leaders.

I think the rights which I value are quite clear.

Sincerely,

Scot Silverstein

I neglected to mention the horror of seeing my mother put in a Body Bag before being taken to the Hearse in my driveway.

-- SS