Sunday, February 28, 2010

FDA on Health IT Adverse Consequences: 44 Reported Injuries And 6 Deaths In Two Years, Probably Just 'Tip of Iceberg'

The Office of the National Coordinator for Health IT held a meeting of the HIT Policy Committee, Adoption/Certification Workgroup on February 25, 2010. The topic was "HIT safety." The agenda, presenters and presentations are available at this link.

At this meeting FDA testimony was given by Jeffrey Shuren, Director of FDA’s Center for Devices and Radiological Health. Dr. Shuren noted several categories of health IT-induced adverse consequences known by FDA. This information was striking:

He wrote:

... In the past two years, we have received 260 reports of HIT-related malfunctions with the potential for patient harm – including 44 reported injuries and 6 reported deaths. Because these reports are purely voluntary, they may represent only the tip of the iceberg in terms of the HIT-related problems that exist.
[I'd call that a likely understatement - ed.]

Even within this limited sample, several serious safety concerns have come to light. The reported adverse events have largely fallen into four major categories: (1) errors of commission, such as accessing the wrong patient’s record or overwriting one patient’s information with another’s; (2) errors of omission or transmission, such as the loss or corruption of vital patient data; (3) errors in data analysis, including medication dosing errors of several orders of magnitude; and (4) incompatibility between multi-vendor software applications and systems, which can lead to any of the above.


This is a technology almost universally touted as inherently beneficial, right up to our most senior elected leaders, who are now pushing this unproven technology under threat of penalty for non-adopters - certainly a precedent, especially in a supposedly democratic country. I have given examples on this blog of how this belief in the universal goodness of healthcare computing is itself inherently idealistic - and unrealistic.

Now, here are some very striking, discrete examples of HIT-related adverse consequences, the tip of a larger iceberg, size unknown but quite possibly very large (where's Kate Winslet when you need her?):


(1) Errors of Commission

Example 1: An error occurred in software used to view and document patient activities. When the user documented activities in the task list for one patient and used the “previous” or “next” arrows to select another patient chart, the first patient’s task list displayed for the second patient.

Example 2: A nuclear medicine study was saved in the wrong patient’s file. Investigation suggested that this was due to a software error.

Example 3: A sleep lab’s workstation software had a confusing user interface, which led to the overwriting and replacement of one patient’s data with another patient’s study.
[I covered other examples of confusing or "mission hostile" interfaces at an eight part series here - ed.]


(2) Errors of Omission or Transmission


Example 1: An EMR system was connected to a patient monitoring system to chart vital signs. The system required a hospital staff member to download the vital signs, verify them, and electronically post them in the patient’s chart. Hospital staff reported that, several times, vital signs have been downloaded, viewed, and approved, and have subsequently disappeared from the system.

Example 2: An operating room management software application frequently “locked up” during surgery, with no obvious indication that a “lock-up” was occurring. Operative data were lost and had to be re-entered manually, in some cases from the nurse’s recollection. [I experienced similar problems a decade ago - ed.]

Example 3: An improper database configuration caused manual patient allergy data entries to be overwritten during automatic updates of patient data from the hospital information system.
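A minimal sketch of how such an overwrite can happen (all field names and data here are invented for illustration, not taken from the FDA report): if the interface engine applies the automatic update as a blanket merge, manually entered fields such as allergies are silently clobbered unless they are explicitly protected.

```python
def merge_blanket(chart: dict, feed_update: dict) -> dict:
    # Buggy merge: every field in the automatic feed replaces the
    # corresponding chart field, including manual bedside entries.
    merged = dict(chart)
    merged.update(feed_update)
    return merged

def merge_protected(chart: dict, feed_update: dict,
                    manual_fields=("allergies",)) -> dict:
    # Defensive merge: fields known to be manually entered are never
    # overwritten by the automatic update.
    merged = dict(chart)
    for key, value in feed_update.items():
        if key not in manual_fields:
            merged[key] = value
    return merged

chart = {"mrn": "0001", "allergies": ["penicillin"], "weight_kg": 68}
feed = {"mrn": "0001", "allergies": [], "weight_kg": 70}  # feed carries no allergy data

print(merge_blanket(chart, feed)["allergies"])    # [] -- manual entry lost
print(merge_protected(chart, feed)["allergies"])  # ['penicillin'] -- preserved
```

The design point is that "configuration" here is really a safety-critical merge policy: which source of truth wins for each field must be an explicit, reviewed decision, not a default.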


(3) Errors in Data Analysis


Example 1: In one system, intravenous fluid rates of greater than 1,000 mL/hr were printed as 1 mL/hr on the label that went to the nursing / drug administration area.
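The report does not say what caused the 1,000-fold error, but one plausible, purely illustrative mechanism is a label routine with a fixed-width numeric field that silently truncates values too wide to fit:

```python
def format_rate_label(rate_ml_per_hr: int, field_width: int = 3) -> str:
    # Hypothetical legacy behavior: the label column holds a fixed
    # number of characters, and overflow is silently truncated.
    text = str(rate_ml_per_hr)
    if len(text) > field_width:
        text = text[:1]  # "1250" becomes "1" -- a 1,000-fold dosing error
    return f"{text} mL/hr"

def format_rate_label_safe(rate_ml_per_hr: int, field_width: int = 4) -> str:
    # Defensive version: refuse to print a value that does not fit,
    # rather than emit a misleading label.
    text = str(rate_ml_per_hr)
    if len(text) > field_width:
        raise ValueError(f"rate {rate_ml_per_hr} exceeds label field width")
    return f"{text} mL/hr"

print(format_rate_label(80))    # "80 mL/hr"  (correct)
print(format_rate_label(1250))  # "1 mL/hr"   (silent, dangerous truncation)
```

Whatever the actual defect was, the engineering lesson is the same: in a clinical system, "doesn't fit the field" must be a hard error, never a silently degraded output.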

Example 2: A clinical decision support software application for checking a patient’s profile for drug allergies failed to display the allergy information properly. Investigation by the vendor determined that the error was caused by a missing codeset.
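A hypothetical sketch of why a missing codeset can silently defeat an allergy check: if the lookup fails open (treating "no table" the same as "no allergy"), the alert simply never fires. All names, codes, and tables below are invented for illustration and do not reflect the actual vendor's design.

```python
# Invented lookup tables mapping drug codes to allergen classes.
ALLERGY_CODESETS = {
    "RXNORM": {"7980": {"penicillin"}},
    # The "NDC" codeset was never installed at this (hypothetical) site.
}

def allergy_alert_fail_open(drug_code, codeset, patient_allergies):
    # Buggy fail-open check: a missing codeset is treated the same as
    # "no allergy found", so no alert is ever displayed.
    table = ALLERGY_CODESETS.get(codeset, {})
    return bool(table.get(drug_code, set()) & patient_allergies)

def allergy_alert_fail_closed(drug_code, codeset, patient_allergies):
    # Safer fail-closed check: a missing codeset is surfaced as an
    # error the clinician must see and resolve.
    if codeset not in ALLERGY_CODESETS:
        raise LookupError(f"codeset {codeset!r} not installed; "
                          "allergy screening unavailable")
    table = ALLERGY_CODESETS[codeset]
    return bool(table.get(drug_code, set()) & patient_allergies)

allergies = {"penicillin"}
print(allergy_alert_fail_open("7980", "RXNORM", allergies))  # True  (alert fires)
print(allergy_alert_fail_open("7980", "NDC", allergies))     # False (silently missed)
```

Fail-closed behavior is noisier, but in safety-critical decision support an absent alert is indistinguishable from a clean screen - exactly the failure mode the FDA example describes.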

Example 3: Mean pressure values displayed on a patient’s physiological monitors did not match the mean pressures computed by the EMR system after systolic and diastolic values were entered.


(4) Incompatibility between Multi-Vendor Software Applications or Systems


Example 1: An Emergency Department management software package interfaces with the hospital’s core information system and the laboratory’s information system; all three systems are from different vendors. When lab results were ordered through the ED management software package for one patient, another patient’s results were returned.

Example 2: Images produced by a CT scanner from one vendor were presented as a mirror image by another vendor’s picture archiving and communication system (PACS) web software. The PACS software vendor stipulates that something in the interface between the two products causes some images to be randomly “flipped” when displayed.
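One plausible (hypothetical) mechanism for the randomly "flipped" images: the scanner records pixel-row direction in an orientation metadata field (akin to DICOM's Image Orientation attribute), and the viewing software drops or misreads that field, so images stored right-to-left render as mirror images. A sketch using a plain list-of-lists image:

```python
def display(pixels, row_direction):
    # Correct viewer: honors the orientation metadata, flipping rows
    # stored right-to-left (row_direction == -1) before display.
    if row_direction == -1:
        return [row[::-1] for row in pixels]
    return pixels

def display_buggy(pixels, row_direction):
    # Buggy viewer: the orientation field is lost somewhere in the
    # multi-vendor interface, so right-to-left images render mirrored.
    return pixels

image = [[1, 2, 3],
         [4, 5, 6]]             # stored right-to-left: direction -1
print(display(image, -1))       # [[3, 2, 1], [6, 5, 4]] -- correctly un-mirrored
print(display_buggy(image, -1)) # [[1, 2, 3], [4, 5, 6]] -- mirror image shown
```

A mirrored radiology image is especially insidious because it can look entirely normal while reversing laterality - the kind of defect only rigorous cross-vendor interface testing catches.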



The above is from FDA (2010 internal memo here) and by their own admission under-represents the problems, perhaps massively. Most of these errors are inexcusable from an engineering and quality perspective.

Unfortunately, there actually is no comprehensive data on the true magnitude of the problems.


44 injuries and 6 deaths: tip of the iceberg?
Merely writing about this scarcity can bring opposition. For instance, a recent paper I wrote about the lack of data on unintended adverse consequences of HIT, and about remediating that paucity, was rejected. (The paucity itself might be considered an 'unintended consequence' of HIT, since secrecy about the pros and cons of HIT was not the intention of the medical informatics pioneers.)
One of the reasons given by an anonymous reviewer justifying rejection of the paper was itself striking:

The paper "adds little that is new or that goes beyond what a reader might find in a major city newspaper", the reviewer wrote.

Little that is new to whom, exactly? Where are the extant papers on the scarcity of such data?
I am also highly uncertain as to which "major city newspapers" the reviewer was referring to, as I've rarely if ever read articles in newspapers about the paucity of data on adverse consequences of HIT or the remediation of the paucity.

One reason for writing the paper was that the "major newspapers" - and the medical and medical informatics journals - largely avoid such issues entirely.

(The scarcity was noted by organizations such as the Joint Commission, however, in their 2009 Sentinel Events Alert on HIT - "There is a dearth of data on the incidence of adverse events directly caused by HIT overall.")

This reviewer continued with the frivolous comment that "proposing a classification of sources of unintended consequence and analysis of reasons for under-reporting of each type in the resulting classification could be a useful addition to the field."
Ironically, I actually devoted an entire section of the paper to reasons for under-reporting and scarcity of data on unintended consequences, broadly speaking, although this wasn't the paper's main purpose. Its purpose was to point out the dangers inherent in such an information scarcity. It is also hard to granularly classify variants of a phenomenon on which there is scarce data to begin with.

Rather than revise the paper, I may simply put it in the public domain and send it to my elected representatives involved in healthcare IT policy. I'd done this with another paper I'd written in 2007 on EMRs and postmarketing drug surveillance that had received a mysteriously similar "could have read this in any newspaper" critique.

In summary, the light is starting to shine on HIT dangers. It is also increasingly recognized by regulators such as FDA that "data scarcity" is a problem of major significance ("tip of the iceberg"), although there are those in this sector who would prefer to keep physicians and patients in the dark on this issue and keep such data scarce.
Finally, while Shuren presented a number of options regarding FDA involvement in HIT regulation, he wrote that "in light of the safety issues that have been reported to us, we believe that a framework of federal oversight of HIT needs to assure patient safety." This itself represents a major change in the culture of HIT.
Addendum: the Huffington Post Investigative Fund wrote about this meeting in an article entitled "Experts: Safety Oversight Needed as Patient Records Go Digital" here.
-- SS


For more on HIT challenges see "Contemporary Issues in Medical Informatics: Common Examples of Healthcare Information Technology Difficulties" - http://www.ischool.drexel.edu/faculty/ssilverstein/cases/

14 comments:

Anonymous said...

Patients are exposed to high risk en masse due to server failures. Neither the incidence of such failures nor the adverse events caused by them are recorded as all hands come on deck to search for forms and pray that patients do not die from neglect during the oft more than 4 hours of delays.

There is also prayer that someone knows what medications are to be given or have been given.

I have yet to read a post crash statement by a hospital administrator admitting patients died because of this, nor has any authority come in and demanded an investigation of all deaths, medication delays, incorrect medications, and more that occurred during and in the 96 hour period after a server crash. Heck, they deny that there was a crash, actually.

De Nile is a river in Africa.

moviedoc said...

Don't you have to compare IT vs no IT? No system will eliminate all errors, but IT can be, and should be, improved based on surveillance.

commoncents said...

THANK YOU for posting this! I really like your blog!!

Steve
Common Cents
http://www.commoncts.blogspot.com

ps. Link Exchange??

MedInformaticsMD said...

Don't you have to compare IT vs no IT?

I'm not sure where that meme started, but it did not start in medicine.

The answer is "no."

That type of comparison might be minimally appropriate if one were willing to accept preventable death and injury due to IT misdesign and misimplementation - factors that persist largely to preserve convenience for healthcare IT personnel, the HIT industry's profit margins, leadership egos and other pathologies.

Unfortunately, acceptance of preventable death and injury due to IT is not consistent with medical ethics as we know them in the western world.

For instance, we don't accept the use of fen-phen even though it allowed a lot of people to lose weight and improve their health; we don't compare fen-phen use to attendance at Weight Watchers. We don't compare certain antidepressant use in young people that benefits some (but causes others to commit suicide) to psychotherapy alone.

The appropriate strategy is to compare bad HIT to good HIT, and develop scientific and regulatory policies and procedures to eliminate the former and promote the latter.

-- SS

moviedoc said...

Maybe I'm getting into this too late, but I don't believe anyone "accepts" preventable death and injury from any cause, and it seems important to define IT precisely. Would you include use of the telephone, for example? What about the typewriter? Both are technologies used in the past for health care info. Your comparison to the association of antidepressants and suicide also begs for a definition of causality in this debate. I don't recall hearing of any case in which a psychiatrically "normal" individual took an antidepressant and, zombie-like, rigged a noose with which to hang herself. Suicide requires a complex series of thoughts and behaviors. I have seen no evidence that any drug has ever "caused" that. IT may be almost as complex.

Roy M. Poses MD said...

Moviedoc,

Actually, there is at least anecdotal evidence that anti-depressant drugs may be related to suicidal behavior when given to patients who did not start off depressed.

See this article by Jeanne Lenzer in Slate in 2005 about what happened when duloxetine (Cymbalta when used as an anti-depressant) underwent preliminary trials as a treatment of urinary incontinence (as Yentreve). Note that the drug was not finally approved or marketed for this purpose.
See:
http://www.slate.com/id/2126918/

MedInformaticsMD said...

Moviedoc, you are arguing about analogies, not the fundamentals.

Defective technologies, especially technologies that are experimental and where the full impact of the problems and defects is unknown ("tip of the iceberg"), have no place IMO in calls for rapid national rollouts with penalties for nonadopters.

That we have gotten to that point shows an industry out of control, as well as promoters and regulators out of touch with reality IMO.

If you, on the other hand, feel national rollout of an experimental technology with an unknown rate of adverse consequences to be ethically justifiable, I'm all for letting you put your reasoning in a comment here.

-- SS

MedInformaticsMD said...

Moviedoc wrote:

I don't recall hearing of any case in which a psychiatrically "normal" individual took an antidepressant, and, zombie-like, rigged a noose with which to hang herself.

Is this a straw argument of some kind?

My point was this, in response to the question "don't you have to compare IT vs no IT": No, you don't have to compare defective technology interventions with pre-technology norms in an attempt to justify rollout of defective technology.

To believe otherwise is an even more acutely unethical position when the domain is medicine and the magnitude of the defects' effects on patients is unknown, and rigorous preventive and remediation practices are not well understood or in place.

On arguing about analogies, why would a young person be put on antidepressants if they were medically normal? Why would anyone write about such a scenario?

I clearly wasn't referring to psychologically "normal" people in my analogy, but to young people put on antidepressants for a reason - situations of the type a Google search on "antidepressant suicide young adult" retrieves. The analogy was to studying defective IT vs. paper.

Now, help us out here: how did you take my argument to mean that I suggested normal people placed on antidepressants for some mysterious reason might commit suicide, and that therefore health IT was OK?

Further, you claim nobody "accepts" preventable death and injury from any cause.

I've been involved in HIT projects where clear patient endangerment and clinician disruption (that can cause the former) had been ignored based on excuses such as territorial disputes between executives, IT department convenience, costs for remediation and other highly unethical positions.

So have numerous colleagues.

If that is not "acceptance," then what is it?

-- SS

MedInformaticsMD said...

I should probably add that studies comparing paper and IT-based medical record keeping have not been universally positive. As in pharma, ignoring the studies critical of HIT is unethical.

e.g.,

Electronic Health Record Use and the Quality of Ambulatory Care in the United States. Arch Intern Med. 2007;167:1400-1405

Role of Computerized Physician Order Entry Systems in Facilitating Medication Errors. Ross Koppel, PhD, et al, Journal of the American Medical Association, 2005;293:1197-1203

Finding a Cure: The Case for Regulation And Oversight of Electronic Health Records Systems, Harvard Journal of Law & Technology 2008 vol. 22, No. 1, by Hoffman and Podgurski

and others.

moviedoc said...

I should have said no one SHOULD accept preventable death related to IT, and I actually agree that the powers that be are moving too fast. I am wary of EMRs. On the other hand, we should not throw the baby out with the bath water. I believe rational use of IT in medicine can prevent skyrocketing costs, but errors leading to morbidity and mortality must be weighed against other costs. As for the antidepressants-and-suicide analogy, I believe it is misapplied here. The suicides in such cases may indeed be "related," but I have not seen demonstration that they were "caused" by antidepressants. I don't know the details, but I can easily imagine that deaths may have been clearly caused by IT muck-ups.

I believe it will be quite a challenge to eliminate error from IT, but we must try.

I would like to hear more on how ethics applies to this question, not just in a general way, but if someone could cite particular applicable ethical principles. We always weigh risk vs. benefit in medicine. If we never did harm we would likely rarely help anyone either. Is the only ethical approach to abandon IT completely?

Tell me what you think of this example. I have conducted a few patient contacts via Skype. Risk managers argue that because I could not smell the alcohol on a patient's breath it's too risky. I argue that I would prefer an intoxicated patient not drive a car to my office.

MedInformaticsMD said...

I would like to hear more on how ethics applies to this question, not just in a general way, but if someone could cite particular applicable ethical principles.

The Nuremberg Code comes to mind.

Is the only ethical approach to abandon IT completely?

No. It's using medical technology in a cavalier fashion that is the problem. Read the main essay at my HIT academic website.

Tell me what you think of this example. I have conducted a few patient contacts via Skype. Risk managers argue that because I could not smell the alcohol on a patient's breath it's too risky. I argue that I would prefer an intoxicated patient not drive a car to my office.

That certainly does not sound unethical, although assuming a (reasonable level of) risk, and maintaining ethics, are not the same issues.

Your example is a bit of a red herring, though. Consider: what aspects of information technology and its information retrieval, information processing, decision making and programmatic/algorithmic capabilities did this interaction involve, exactly?

Computing, like medicine, has evolved into subspecialty areas, and telecommunications via computers (where IT is used as a fancy Dick Tracy wristwatch) is a very different subspecialty use of computing than, say, the uses that led to the errors at the VA here.

In sum, as I've written many times, health IT can achieve the promises made of it for the past few decades...but only if done well. The inattention to the massive complexity behind those two words, "done well," is at the root of why these achievements have not occurred.

It is my concern they will never occur under the current leadership, organizational and regulatory structures found in the healthcare IT sector. As I've written before, healthcare cannot be 'reformed' or even improved by IT, until IT and its culture are themselves reformed.

moviedoc said...

I disagree with little you say in your essay. As far as using "paper" goes, though, my handwriting is so bad, at least partly because of essential tremor, that I would have to revert to a typewriter - or is that what you mean? Here's the evolution of my "EHR:" http://behavenetopinion.blogspot.com/2010/03/ehrs-and-apa.html
I don't see Nuremberg ethics is quite applicable here. I suspect the too early adoption of IT is well meaning, but ignoring the failures probably does transgress some other ethical principle.
My "red herring": If you don't include telemedicine as I described it under HIT at least consider that I may have adopted it in my practice before it was generally accepted. Does that make it an experiment a la Nuremberg? It also points up a problem in drawing boundaries. My malpractice carrier could care less about the patient killing 5 people driving drunk to my office because the courts thus far have not held physicians liable for such damages. There's something wrong with that picture, too.

Keep up the good work.

BTW: You might appreciate my thoughts on the application of another bit of technology to health care (yes, 2 words): Cell Phones and Emergencies Don't Mix

MedInformaticsMD said...

I don't see Nuremberg ethics [code for human experimentation - SS] is quite applicable here

As the NIH Office of Human Subjects Research lists them in their "Regulations and Ethical Guidelines" pages, I will have to disagree. You can see other human research codes at their page here. They are there as a reference toward ethical conduct in human subjects research.

you don't include telemedicine as I described it under HIT at least consider that I may have adopted it in my practice before it was generally accepted.

I don't consider telemedicine a form of virtual medical device, much as I don't consider a telephone in that manner. However, if you obtained informed consent for its use, then there was no problem.

My malpractice carrier could care less about the patient killing 5 people driving drunk to my office because the courts thus far have not held physicians liable for such damages.

The courts tried to hold my father, a pharmacist, liable in part for a patient who'd just left the store with a script for heart meds after a doctor's visit for chest pains, and died right outside the pharmacy entrance.

We do not live in a rational world, but all I can do is try to keep my tiny part of it as tidy as possible.

-- SS

Scot M Silverstein MD said...

"Buy generic viagra" wrote:

Comparing paper work with IT based system is only useless.Everyone knows IT system is much better than paper work.This system may help a lot to improve health care system.deaths and injuries are really bad stuff.FDA shouls take care of this.

December 1, 2010 7:05:00 AM EST


I deleted the actual comment due to a "commercial" link being present, but reproduced it exactly as received above.

My response is simple:


No, everyone does not know IT systems are much better than paper. [1,2,3]

[1] http://www8.nationalacademies.org/onpinews/newsitem.aspx?RecordID=12572

[2] http://www.ischool.drexel.edu/faculty/ssilverstein/cases/?loc=cases&sloc=2009

[3] http://www.ischool.drexel.edu/faculty/ssilverstein/cases/?loc=other