Wednesday, December 19, 2012

A Significant Additional Observation on the PA Patient Safety Authority Report "The Role of the Electronic Health Record in Patient Safety Events" -- Risk

In a Dec. 13, 2012 post, "Pennsylvania Patient Safety Authority: The Role of the Electronic Health Record in Patient Safety Events," I alluded to risk in a comment in red italics:

... Reported events were categorized by their reporter-selected harm score (see Table 1). Of the 3,099 EHR-related events, 2,763 (89%) were reported as “event, no harm” (e.g., an error did occur but there was no adverse outcome for the patient) [a risk best avoided to start with, because luck runs out eventually - ed.], and 320 (10%) were reported as “unsafe conditions,” which did not result in a harmful event. 

The focus of the report is on how the "events" did not cause harm.  Thus the relatively mild caveat:

"Although the vast majority of EHR-related reports did not document actual harm to the patient, analysts believe that further study of EHR-related near misses and close calls is warranted as a proactive measure."

It occurs to me that if the title of the paper had been "The Role of the Electronic Health Record in Patient Safety Risk," the results might have been interpreted far differently:

In essence, from June 2, 2004, through May 18, 2012 (the timeframe of the Pennsylvania Patient Safety Reporting System, or PA-PSRS, database), and from a dataset highly limited in its comprehensiveness as noted in the earlier post, there were approximately 3,000 "events" in which an error occurred that potentially put patients at risk.

That view - risk - was not the focus of the study.  Should it have been?

These "events" really should be called "risk events."

It is likely that the tally of risk events, if the database were more comprehensive (due to better recognition of HIT-related problems, better reporting, etc.), would be much higher. So would the counts of "harm and death" events.

That patient harm did not occur from the majority of "risk events" was due to human intervention, which is to say, in large part, luck.

Luck runs out, eventually.

I have personally saved a relative several times from computer-related "risk events" that could have caused harm had I not been present, with my own medical knowledge, to intervene. My presence was happenstance in several instances; a traffic jam or a phone call could have kept me from being there at all.

What's worse, the report notes:

Analysts noted that EHR-related reports are increasing over time, which was to be expected as adoption of EHRs is growing in the United States overall.

In other words, with the current national frenzy to implement healthcare information technology, the counts of these "risk events" - and of "harm and death" events - will increase. My concern is that they will increase significantly.

I note that health IT is likely the only mission-critical technology that receives special accommodation regarding risk events.  "If the events didn't cause harm, then they're not that important an issue" seems to be the national attitude overall.

Imagine aircraft whose avionics and controls periodically malfunction, freeze, provide wrong results, etc., but whose failures are mostly caught by hypervigilant pilots, so planes don't go careening out of control and crash. Imagine nuclear plants where the same occurs, but due to hypervigilance the operators prevent a nuclear meltdown.

Then, imagine reports of these "risk events" - based on fragmentary reporting by pilots and nuclear plant operators, many reluctant to report for fear of job retaliation - in which the fact of their occurrence takes a back seat to the observation that the planes did not crash, or that Three Mile Island or Chernobyl did not recur.

That, in fact, seems to be the culture of health IT.

I submit that the major focus that needs addressing in health IT is risk - not just confirmed body counts.

-- SS

6 comments:

  1. Thank you, Scot, for your continuing attention to these matters.

    One of the many neglected aspects of risk is the fact that EHRs have degraded the clinical utility of the narrative note, both in the accessibility and in the credibility of its content. What further risks and costs arise when it is apparent to a clinician that a narrative record has been manufactured, not rendered as a unique record of a specific patient's care?

    Recently I reviewed records from an ambulatory clinic where the sequential records for a given patient, across several visits over several months, had identical vital signs: a physiologic impossibility, emphasized by the patient's weight remaining identical to the 1/100th of a pound. Each visit showed identical temperature, BP, and respiratory rate, an identical Chief Complaint, a History of Present Illness that was identical except for one or two new phrases (sometimes including past, and now irrelevant, information clearly copied from a prior visit), and an identical physical exam.

    Anyone who views such clinical notes as credible for clinical decision making is taking (and making) risk.

    By the way, the FBI and OIG have been entirely aware of the activities at that clinic but, to date, there is no record of any action against them, either for fraud or for the poor-quality care paid for with taxpayer dollars.

    In the meantime, the anti-fraud budget is projected to rise an average of only 2.9% through 2021, from an already low base of $607M in 2011, despite the fact that over the last three years anti-fraud activities have actually recovered $7 for every $1 spent. In a time of fiscal crisis, I continue to seek explanations for why these successful programs are not being more aggressively expanded.

    Keep up your work on patient safety. Perhaps between illuminating HIT-mediated risks and injuries to patients and HIT-mediated waste, fraud, and abuse, the "implement bad HIT now, fix later" Federal policies may shift from Hypothetical "meaningful" Use to demonstrably Safe, Trustworthy Use?

    RDGelzer

    Reed D. Gelzer, MD, MPH
    Advocates for Documentation Integrity and Compliance

    Again, best wishes

  2. Reed D. Gelzer, MD, MPH writes:

    Anyone who views such clinical notes as credible for clinical decision making is taking (and making) risk.

    Agreed.

    In my view, any medical center that implements technology that permits such faulty documentation, and/or permits it to occur (for whatever reason: pecuniary advantage, simple neglect, etc.), is partly responsible for resultant bad outcomes.

    Not to overuse the word, but they are implementing the technology, which remains experimental and unvetted for safety, at their own risk.

    -- SS

  3. Scot, you've touched on a "life or death" matter here that will weigh heavily on Health IT.
    I believe much more consideration needs to be given to the genesis & mechanisms of "mistakes" in health care. I mean, almost all errors are committed by a human hand, whether it's pressing the button that prints the wrong medication or amputating the wrong limb. Avionics knows this only too well & no change in cockpit routine is let loose without being researched down to the pathway from the pilot's cortex to hand.
    In Australia some years ago, deaths occurred in hospital wards from the injection of concentrated KCl. The ampoules of potassium chloride, as presented to nursing staff, were remarkably similar to those of sodium chloride. The actions taken to prevent these errors included putting distinctive labelling on ampoules of conc'd KCl, and having the dispensaries prepare KCl as bags of dilute infusion fluid.
    I wonder if anyone thought to investigate the possibility that clinical staff are picking up the wrong medication, and that it isn't until the last few seconds before administration that the error is detected.
    What I mean is that unless more scientific evidence is sought on how & why the brain looks for & prevents errors, we will not know where to insert steps that strengthen the in-built checks on our actions. In the setting of IT, unless we know more about innate error-checking, it's likely that artificial overrides will permit errors to travel right through to completion.
    A similar application of psychological pathways may apply to the prevention of school shootings. Unless there is better knowledge of the connections between crazy thoughts and actual deeds, it's not going to be possible to "stop the dangerous people," as is being touted by the NRA. What we need to know is how many boys are rehearsing their plans every day in the privacy of their bedrooms. It makes a whole lot of difference whether the ratio of imagined massacre to actual deed is thousands to one or ten to one.
    That's what I meant about preventing errors in injectable medication. We don't know how to intervene unless there's better knowledge, acquired through honest & accurate surveillance, of those actual events that are essential to initiate a tragic outcome - like, the eye directs the hand to the wrong ampoule.
    In the IT setting, it could be the erroneous placement of a tick in a digital box that sets off a cascade of irredeemable actions, all of which "look" correct and fly through the human brain without any chance of being trapped, reviewed and revised.

  4. Trevor3130 writes:

    We don't know how to intervene unless there's better knowledge, acquired through honest & accurate surveillance, of those actual events that are essential to initiate a tragic outcome - like, the eye directs the hand to the wrong ampoule. In the IT setting, it could be the erroneous placement of a tick in a digital box that sets off a cascade of irredeemable actions, all of which "look" correct and fly through the human brain without any chance of being trapped, reviewed and revised.

    I agree completely. In HIT, though, the attitude seems to have been (sorry for the metaphor): ready, shoot, aim. Here is the U.S., several years into a 'national program for health IT in the HHS,' and only now are studies of health IT safety, usability, error modes, etc. being seriously undertaken. Our NIST has already acknowledged "use error" as opposed to "user error"; search the blog for the former term. Also, there is no real postmarket surveillance program.

    We must start at the cerebral cortex in studying medical errors, and especially medical errors related to IT, but we cannot stop at the fingertips pressing the keyboard, which is largely the situation we have now.

    That's why I point out that risk events need to be a primary focus of study, not just harmed patient incidents.

    -- SS

  5. Indeed, there has been zero postmarket surveillance of these devices. It may be shocking to some that they have been deployed at the behest of the U.S. Government even though there has been no premarket vetting for safety and efficacy, either.

    Several thousand near misses is more than significant, especially considering the fact that most hospitals ignore Act 13 in Pa. I know of several deaths caused by EHR and CPOE devices that were not reported, nor were the families issued letters as required by the Act.

    This safety authority seems to be for show, since it never looks at deaths in hospitals or shortly after discharge, and it refuses to take reports from families and physicians.

    Keep in mind that if physicians report the deaths and injuries to anyone, they had better guard their necks, for the hospital enforcers will sham peer review them, i.e., blame the user.

  6. Anonymous December 19, 2012 9:31:00 PM EST said...

    Several thousand near misses is more than significant

    As I pointed out to the Authority, Ross Koppel, PhD, probably the preeminent health IT issues expert in the world today, said at the Feb. 25, 2010 ONC Policy Committee Adoption/Certification Workgroup on IT safety: "We don't know 99 percent of the [computer-related] medication ordering errors that are made [due to difficulty in recognition, lack of proper studies and other factors]. If 100 percent of the known errors were reported, that would be 1 percent of the [true] total. But the data suggests that the maximum on voluntary reporting is about 5 percent. So 5 percent of 1 percent: that is what we know is reported...."

    If those figures are anywhere near true (and/or the figures in my posted "thought experiment" I linked to in my aforementioned first post on this report), then that 3,000 "estimate" could potentially reflect on the order of 3,000 / 0.05 / 0.01 real-world incidents, i.e., a much, much larger number than 3K (do the math, or see the sketch below).
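
    A minimal back-of-the-envelope sketch of that arithmetic, in Python, taking Koppel's 5 percent voluntary-reporting rate and 1 percent recognition rate at face value (his rough estimates, not measured values; the variable names are illustrative only):

        # Extrapolation from Koppel's estimates: recognized errors are
        # roughly 1% of all errors, and only about 5% of recognized
        # errors are ever voluntarily reported.
        reported = 3000          # EHR-related events in the PA-PSRS dataset
        reporting_rate = 0.05    # fraction of recognized errors reported
        recognition_rate = 0.01  # fraction of all errors recognized

        estimated_total = reported / reporting_rate / recognition_rate
        print(f"{estimated_total:,.0f}")  # prints 6,000,000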

    The message is, we need more study.

    -- SS
