Sunday, October 18, 2009

Our Policy Is To Always Have Unabashed Faith In The Computer ... Except When It Screws Up, And Then It's The Doctor's Fault

Healthcare IT such as electronic medical records (EMR/EHR) and computerized physician order entry (CPOE) systems is a perfected, safe technology, or so one might think from the almost hagiographic and entirely uncritical P.R. about HIT, extending all the way up to the President of the United States. (Not bad for an entirely unregulated industry. Pharma IT should only have it so well.)

Paraphrasing Capt. Chesley Sullenberger in a recent WSJ article, however: I'm a long-term optimist about HIT, but a short-term realist. Healthcare cannot be 'transformed' by a technology that itself has major problems and needs transformation. These issues should have been substantially remediated, I feel, before forced national rollouts.

That's especially true given the inherent dangers in medicine. You can't be a wishful thinker. You have to know what you know, and perhaps more importantly in medicine, what you don't know (this is especially true both for those in IT who lack formal clinical or biomedical backgrounds, and for those in medicine who lack formal biomedical informatics or computer science backgrounds). You also need to know what your tools can and can't do. Sticking one's head in the sand is no way to approach HIT.

That said, here are some sample voluntary reports on HIT malfunctions from just one vendor, taken from the FDA Medical Devices database, the Manufacturer and User Facility Device Experience (MAUDE) database, links included. These are production systems used on real, live patients, not prototypes:

MAUDE data represents reports of adverse events involving medical devices. The data consists of voluntary reports since June 1993, user facility reports since 1991, distributor reports since 1993, and manufacturer reports since August 1996. MAUDE may not include reports made according to exemptions, variances, or alternative reporting requirements granted under 21 CFR 803.19.

Emphases below are mine. My comments are in [red italic].

Case 1: (note - added to this post 7/2010)
Event Date 11/19/2006
Event Type: Death
Patient Outcome: Death
The medication review screen of the subject device does not specify the exact dose in milligrams of combination medications. For example, narcotics are combined with tylenol in at least two strengths. Liquid narcotic tylenol-oxycodone combination is reported in ml, not mg. The exact dose of tylenol is not specified and requires knowledge of the combination medication dose in the volume specified. Certain fields of the grid do not specify the volume, but rather state "date/time" requiring another click or pop up screen. The immediate knowledge of tylenol dosage in mg is directly related to understanding and preventing excessive doses. In the subject, 10 ml of acetaminophen-oxycodone is indicated as having been given 3 times over 4 hours. That means that 1950 mg of tylenol was administered in 4 hours while the patient was in a state of starvation and receiving other medications that increase the effects of tylenol. This dose would equate to 11,700 mg of tylenol over 24 hours, nearly 3 times the maximum daily dose in otherwise healthy people. In the ensuing days, the patient developed acute renal failure, presumably acute tubular necrosis, and died. In the absence of other etiology, the excess tylenol was the culprit. This was not considered as etiology ante-mortem. The counterintuitive screen impaired the professionals. The pharmacist did not recognize and stop the medication, the nurses administered it, and the excessive dose, clinically meaninglessly listed as a volume of 10 ml -given 3 times in 4 hours- of acetaminophen-oxycodone, was missed by the physicians. Adverse events have been ascribed to "user error" by vendors. The device offers a potent propensity to life endangering oversights. There are other screens on this device which present information that interferes with clinically useful visualization of data. [Who designed these screens, I ask? Clinicians, or business IT personnel used to designing inventory systems for widget control? - ed.]
The data does not flow to the professionals. It is not represented in a meaningfully useful manner. The professionals need to hunt for it. As such, the user unfriendly screens [see this link on mission hostile HIT - ed.] impair safe medical care consistent with the impediment to expedient professional understanding of what, exactly, is the dose of medication and how much was administered to the patient. This sentinel case of death is directly attributed to user unfriendly screens on this device.
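The report's dosing arithmetic can be checked directly. Below is a minimal sketch; the 325 mg/5 mL acetaminophen concentration is my assumption, inferred from the figures quoted, and is not stated in the report itself:

```python
# Back-of-the-envelope check of the Case 1 dosing arithmetic.
# Assumption (not in the report): the common oxycodone/acetaminophen
# oral solution concentration of 325 mg acetaminophen per 5 mL.
ACETAMINOPHEN_MG_PER_ML = 325 / 5  # 65 mg/mL

dose_ml = 10
doses_given = 3
hours = 4

total_mg = dose_ml * ACETAMINOPHEN_MG_PER_ML * doses_given
print(total_mg)  # 1950.0 mg in 4 hours, matching the report

# Extrapolated to 24 hours at the same rate:
daily_mg = total_mg * (24 / hours)
print(daily_mg)  # 11700.0 mg, vs. a ~4000 mg/day maximum in healthy adults
```

The numbers reconcile exactly with the report: a screen that shows only "10 ml" hides a threefold overdose that a milligram display would have made obvious.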

Case 2:
Cerner Millennium RadNet Auto Launch Study and Auto Launch Report software functionalities. Defects in the Auto Launch functionality make it possible for a mismatch of patient data. [That is, in radiology reports. The dangers of transposition are obvious.]

Case 3:
The issue involves powerchart local access medication administration task, used when certain cerner millennium solutions are not available. At powerchart local access sites that utilize coordinated universal time (utc) functionality, medication administration tasks might be displayed with incorrect times. When a pt download occurs from cerner millennium servers to powerchart local access, and there is no cerner millennium application session active, powerchart local access adds or subtracts the number of hours equal to the time zone difference from greenwich mean time. Scheduled medication administration tasks may show an incorrect administration time and the possibility exists for a pt to receive medications earlier or later than intended. [As just one example of the dangers with this type of defect, an elderly patient with sepsis might get an early dose of aminoglycoside, causing peak levels to rise to nephrotoxic and ototoxic levels - ed.]
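For illustration only, here is the general class of bug this report describes, sketched in Python. The offset value, function names, and double-conversion mechanism are my assumptions, not Cerner's actual code:

```python
# Hypothetical sketch of the Case 3 failure mode: a local client applies
# a UTC offset to a timestamp that has already been converted, shifting
# a scheduled medication administration time.
from datetime import datetime, timedelta

UTC_OFFSET_HOURS = -5  # assume a site in US Eastern Standard Time

def server_to_local(utc_time, session_active):
    """Convert a server (UTC) timestamp for local display."""
    local = utc_time + timedelta(hours=UTC_OFFSET_HOURS)
    if not session_active:
        # The bug: with no active application session, the offset is
        # applied a second time during the download.
        local += timedelta(hours=UTC_OFFSET_HOURS)
    return local

scheduled_utc = datetime(2009, 10, 18, 14, 0)  # 9:00 a.m. local, correct
print(server_to_local(scheduled_utc, session_active=True))   # 09:00 local
print(server_to_local(scheduled_utc, session_active=False))  # 04:00 local, 5 hours early
```

A task displayed five hours early or late is exactly the "earlier or later than intended" administration hazard the report warns of.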

Case 4:
Patient care delay. The issue involves functionality in cerner millennium powerchart office and powerchart core and affects users that utilize the powerchart inbox and message center inbox. In results to endorse or sign and review, if the user clicks ok and next multiple times in quick succession [e.g., a busy clinician with sick patients, waiting for the computer to respond - ed.] while attempting to sign a result or a document, the display could lag behind the system's processing of the action [that is to say, the human-programmed and supposedly tested and validated computer "system" -ed.], and multiple results or documents could be signed without the user's review. In message center, when clicking ok and next or accept and next, or when deleting or completing messages and moving to the next task, a document could be signed or a message could be deleted without the user's review. Results could be endorsed or documents could be signed without physician review, which could impact patient care. Cerner received communication that a patient's follow-up care was delayed as a result of this issue. [Luck prevailed that no injury occurred - in this reported case. One wonders if that is true of all the users who experienced this problem. Also, I do not recall such errors happening on paper order forms due to, say, a busy clinician tapping his pen on the paper - ed.]
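The failure mode here is a display lagging behind system state. A hypothetical sketch (class names and structure are mine, not the vendor's) of how rapid clicks can sign items the user never saw:

```python
# Hypothetical sketch of the Case 4 race: the screen's displayed item
# lags behind the system's processing, so rapid "OK & Next" clicks sign
# documents the user never reviewed.
class Inbox:
    def __init__(self, documents):
        self.documents = list(documents)
        self.processed_index = 0   # what the system has signed up to
        self.displayed_index = 0   # what the screen currently shows

    def ok_and_next(self):
        # The system signs whatever is current in *its own* state ...
        signed = self.documents[self.processed_index]
        self.processed_index += 1
        return signed

    def refresh_display(self):
        # ... but the display only catches up when it repaints.
        self.displayed_index = self.processed_index

inbox = Inbox(["Lab A", "Lab B", "Lab C"])
# A busy clinician clicks three times before the screen repaints once:
signed = [inbox.ok_and_next() for _ in range(3)]
print(signed)  # all three signed, yet only 'Lab A' was ever displayed
```

The fix is equally simple in principle: refuse to accept a signature action until the display and the processing state agree.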

Case 5:
The issue involves the pharmacy medmanager functionality. If the user performs a modify action on an order with an existing duration and duration unit, the order's stop date might not be recalculated. Specifically, this occurs when only the duration value is changed prior to entering the original duration unit. Pt care could be adversely affected, as medication therapy could be concluded prematurely or could last longer than intended based on the order details prior to the modification. This issue can be avoided if the user performs a renew action instead of a modify action to change an order's duration. If performing a modify action is required, users can manually set the stop date and time during the modify action. Cerner received communication that a pt's surgery required rescheduling as a result of this issue. [Again, was the lack of patient harm due to careful clinicians who at that moment just happened to not be distracted or cognitively overloaded or overworked or exhausted from on-call, or an act of Providence? Were other non-reported users at other organizations less lucky? - ed.]
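A hypothetical sketch of this class of defect (the class design and field names are my own, not MedManager's): a modify path that only recalculates the stop date when the unit field changes.

```python
# Hypothetical sketch of the Case 5 defect: modifying an order's duration
# without re-entering the unit leaves the stop date computed from the
# *old* values.
from datetime import datetime, timedelta

class Order:
    def __init__(self, start, duration, unit):
        self.start, self.duration, self.unit = start, duration, unit
        self._recalc()

    def _recalc(self):
        days = self.duration * (7 if self.unit == "weeks" else 1)
        self.stop = self.start + timedelta(days=days)

    def modify(self, duration=None, unit=None):
        if duration is not None:
            self.duration = duration
        # The bug: the stop date is only recalculated when the unit changes.
        if unit is not None:
            self.unit = unit
            self._recalc()

order = Order(datetime(2009, 10, 1), duration=7, unit="days")
order.modify(duration=3)           # clinician shortens therapy to 3 days
print(order.duration, order.stop)  # duration now 3, but stop is still Oct 8
```

The order now reads "3 days" while the system will keep dispensing for seven: precisely the "longer than intended" therapy the report describes.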

Case 6:
The issue involves the results to endorse (rte) inbox functionality in powerchart, powerchart office and firstnet and affects users that use the rte inbox to view radiology reports that have been created in radnet. Radiology results might fail to be displayed in the ordering provider's results to endorse folder in the inbox. Treatment or diagnosis decisions could be delayed if the clinician is relying on the display of a result in the inbox results to endorse folder to initiate patient follow-up. Note: the final results are posted to the flowsheet and are available on the patient's chart. Cerner received communication that a patient's follow-up care was delayed as a result of this issue. [Were any diagnoses of, say, cancer missed by other organizations affected? -ed.]

Case 7:
The issue involves the direct charting flowsheet and icu flowsheet, used within the powerchart system. When the result details box is accessed for a negative result value in either the icu flowsheet or the direct charting flowsheet in powerchart, either by right-clicking a negative, unsigned result value and selecting chart details from the context menu, or by right-clicking a negative signed result value and selecting modify, the dialog box displays a blank result value. When the user clicks ok in the result details dialog box, the value is changed to zero in the result cell in the flowsheet. [Of all places, health IT in ICU's should be extensively validated before going into production - ed.]
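One plausible (purely hypothetical) mechanism for this behavior is a formatter that cannot render negative values, whose blank output then round-trips back into the chart as zero:

```python
# Hypothetical sketch of the Case 7 defect: a display routine that fails
# on negative values renders a blank, and the blank is then parsed back
# into the flowsheet as zero, silently corrupting the result.
def format_result(value):
    # Buggy formatter: only handles non-negative values.
    return str(value) if value >= 0 else ""

def parse_result(text):
    # A blank dialog field round-trips to zero instead of raising an error.
    return float(text) if text else 0.0

charted = -3.5  # e.g., a negative base-excess result
dialog_text = format_result(charted)
print(repr(dialog_text))          # '' -- the dialog shows blank
print(parse_result(dialog_text))  # 0.0 -- clicking OK writes zero to the flowsheet
```

Whatever the actual mechanism, the lesson is the same: a charting path must reject an empty value, never coerce it to a clinically meaningful number.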

Case 8:
The issue involves the careadmin medication administration wizard used within the carenet system. When scanning a medication in careadmin, the system fails to recognize mckesson identifiers or other miscellaneous identifiers and properly identify the scanned product, which could result in the documentation of an incorrect dose in the careadmin window. In such situations, the system does not display overdose or underdose or route/form compatibility warnings as it should. Patients could receive an inappropriate dose of medication. Cerner has not been made aware of any adverse patient care events that resulted from this issue. [Thank god for that - but were any unreported by other organizations? - ed.]

Case 9:
Microbiology set up a program within the cerner computer system to automate the reporting system for hsv (herpes simplex virus) testing. The system was tested with the assistance of cerner and found to be working appropriately. The new system was operational for approximately 3 weeks when it was determined that the first word, "no", was inappropriately dropping off of the following sentence: "no herpes simplex virus type 1 or herpes simplex virus type 2 detected by dna amplification." As such, two of five patients were incorrectly informed that they had hsv before the error was detected. One had started an antiviral cream treatment. The other three did not have follow-up visits until after the correct results were determined. Cerner has looked at the program and has not provided an answer for the system issue. In the interim, the previous manual review and entry process is being used. [How does the word "no", an essential descriptor in a medical test, simply get "dropped"? Does NORAD's ballistic missile warning system ever do that, i.e., "no ICBM's incoming" reported as "ICBM's incoming"? - ed.]
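One plausible mechanism (purely hypothetical; the report says the actual cause was never provided) is an off-by-N substring when stitching the canned comment into a report field:

```python
# Hypothetical sketch of how Case 9's leading "No" could vanish: an
# off-by-N slice when inserting an automated comment into a report field
# silently truncates the start of the sentence.
CANNED_RESULT = ("No herpes simplex virus type 1 or herpes simplex virus "
                 "type 2 detected by DNA amplification.")

def build_report_field(text, header_len=3):
    # The bug: the code assumes a 3-character header prefix that is no
    # longer present, so it slices off the first word instead.
    return text[header_len:]

print(build_report_field(CANNED_RESULT))
# "herpes simplex virus type 1 or ... detected by DNA amplification."
# The single word that inverted the result's meaning is gone.
```

Three characters of truncation invert the meaning of a diagnostic result; this is why result-text pipelines need round-trip validation, not just a three-week shakedown.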


Additional reported errors can be seen in a file downloadable by clicking on this link (PDF). Some of these are a bit startling.


Still others can be seen at:

Again, these are reports about just one HIT vendor but this piece is not just about them. Other HIT vendor products have similar issues. Health IT remains an experimental technology.

The MAUDE reports are voluntary, however, so the absence of a report does not mean an absence of a problem. (I note MAUDE queries on a number of major HIT vendors produce no results, or results limited to HIT that is closely associated with physical "medical devices" -- as opposed to virtual ones -- such as radiology systems.) For example, while a query on mfg. "Cerner" brings up hits, a query on another vendor shows this result:

No records were found with Manufacturer: Allscripts Report Date From: 10/01/1998 Report Date To: 09/30/2009

... and this:

No records were found with Manufacturer: Nextgen Report Date From: 10/01/1997 Report Date To: 09/30/2009

... while a broader FDA search on "Nextgen" brings up exactly one relevant hit on "Nextgen EMR - Medical Device" -- without specifics as to the "malfunction" noted under product code "HGM" - which on lookup is a perinatal monitoring system.

A broader search on Allscripts only brings up drug related issues such as
this curious 2002 warning letter about the marketing of guaifenesin, a cough medicine, from FDA's Center for Drug Evaluation and Research.

Either these vendors are not reporting HIT problems, or they are listed under another name. It is rather unlikely that they have no problems to report (latter hyperlink is PDF). As another example, a search on brand name "Centricity" brings up "hits" mostly on specialized GE products such as PACS.

So, is HIT safe, or can "glitches" affect patient care? Should this industry be entirely unregulated, or its products "certified" by groups with conflicts of interest such as those affiliated with industry trade groups? Should vendors be held harmless for HIT defects that harm patients?

I report, you decide.

-- SS


Anonymous said...

It should be clear to all readers that these devices can not be trusted.

WARNING TO USERS: You must be perpetually on guard for errors that defy logic, lest your patients be at increased risk for injury.

IT Guy said...

How does this compare to the errors for hospitals still using paper?

IT Guy said...

For the sake of context maybe you could post some of the problems that used to occur prior to tracking patient records on computers.

Anonymous said...

A note to the IT Guy: Comparative studies have not been done...a significant deficiency contributing to low adoption rates. Koppel reports 22 new errors, Han reports increased death, and Ash reports unintended consequences. Individual users report death and injury. MAUDE reports of adverse events are increasing. Individual users who complain are subject to forms of retaliation, sham peer review, or dismissal.

Users should continue to report HIT associated adverse events and device flaws to the FDA's anonymous reporting site.

MedInformaticsMD said...

IT guy wrote:

"How does this compare to the errors for hospitals still using paper?"

How do voluntary and clearly minimal reports of HIT malfunctions (to the point of 'zero' from some major vendors) compare to errors caused by paper?

"For the sake of context maybe you could post some of the problems that used to occur prior to tracking patient records on computers."

You've apparently completely missed the point of my posting.

-- SS

MedInformaticsMD said...

Anonymous wrote:

"A note to the IT Guy: Comparative studies have not been done...a significant deficiency contributing to low adoption rates."

Since the HIT industry largely regards information on HIT defects, errors, and patient harm as secret and proprietary, such studies simply cannot be done.

-- SS

PookieMD said...

Errors occur with both paper systems and EMRs. Replacing one with the other just creates a different type of error. EMRs are not the be-all and end-all; they are a tool. EMR vendors need to be held responsible for these errors--and that is where the fight is, as the HIT industry doesn't want to be held accountable. "HIT industry largely regards information on HIT defects, errors and patient harm as secret..." is exactly the opposite of what needs to happen. These errors need to be fixed, and transparency is a key to the fix.

IT Guy said...

You've apparently completely missed the point of my posting.

No, I haven't. You're trying to increase the fear of hospital computer systems while promoting yourself as the solution.

such studies simply cannot be done

Yes, they can. A fact that is being pounded home to you over at HIStalk.

Anonymous said...

Since the HIT industry largely regards information on HIT defects, errors, and patient harm as secret and proprietary, such studies simply cannot be done.

Not unlike how most doctors regard information about their own errors.

IT Guy said...

The HIStalk guy edited one of my posts. I wanted to make sure you read the whole thing.

It’s that simple? In a complex environment, just pop in IT, perform a simple A/B test that fails to consider myriad other variables and issues, and you have proof the IT is effective.

Yes, it’s that simple. If you select for pre-IT and post-IT data and use a large enough sample size, the other factors will equal out. If one hospital hires someone like you as their IT head, resulting in a spike in IT-related errors, another hospital will hire a competent IT head. If the sample size is large enough you should have a relatively small margin of error.

It’s statements like this that make me say “thank god for international IT outsourcing.”

And it's statements from a teaching professor at a major university who has virtually no understanding of statistical analysis which make me say "well, at least I don't have to worry about losing my job to one of his students."

Roy M. Poses MD said...

Anonymous (posting on 20 October) - that may be a fair criticism of doctors (and perhaps all professionals), but drug companies are required to collect and forward to the FDA all reports of adverse effects they receive. (I realize they may not always do so perfectly.) I believe medical device companies have the same obligation. Why shouldn't health care IT companies, which after all are making a kind of medical device, have the same obligation?

Roy M. Poses MD said...

IT guy -
If you could correct all biases in pre-post comparisons simply by having a "large enough sample size," why bother doing randomized, controlled trials of any kind of medical or health care intervention?

The problem with pre-post trials is that the results may be affected by changes occurring over time other than the intervention. Suppose at the same time a hospital implemented an EHR it also implemented a new quality improvement program, and the hospital also began admitting patients with a new disease (e.g., H1N1 flu). If there are any changes in hospital patient outcomes, which of the three changes over time produced them?

Tell me what statistical analysis will reliably correct for all possible secular changes that might occur during a pre-post comparison?

IT Guy said...

If you could correct all biases in pre-post comparisons simply by having a "large enough sample size," why bother doing randomized, controlled trials of any kind of medical or health care intervention?

Having a large sample size won't correct for every single possible bias, but it will give you a pretty good idea whether the effects of IT are good or bad. You will still need control samples if you want to do more detailed analysis.

Roy M. Poses MD said...

IT Guy - with all due respect, sir, you are simply wrong.

Increasing sample size per se will not reduce the effects of this sort of confounding.

MedInformaticsMD said...

IT guy writes:

"No, I haven't. You're trying to increase the fear of hospital computer systems while promoting yourself as the solution."

My postings are crystal clear as to what I promote as an enabler of health IT being "done better."

I promote a professional scientific discipline, Biomedical Informatics (soon to be a formal medical subspecialty), developed and fostered by the HIT pioneers, and education in that discipline in the many organizations both domestic and international that now offer such education.

That said, my colleague summed up the issues very well during my offline period this afternoon.

A common scenario in HIT implementation is to first do a process improvement analysis to improve processes prior to IT implementation, on the simple calculus that "bad processes will only run faster under automation." There are many other changes that occur pre- and during implementation, such as training, raising the awareness of medical errors, hiring of new support staff, etc.

There can easily be scenarios (I've seen them) where poorly done HIT's distracting effects on clinicians are moderated to some extent by process and other improvements. Such factors need to be analyzed quite carefully, datasets and endpoints developed, and data carefully collected; the study design and preparation need to occur before the study even begins. Larger sample sizes will not eliminate the possible confounding effects of these factors and many more not listed here.
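The confounding point can be demonstrated with a toy simulation (illustrative only; the rates and effect sizes are invented). Here the EHR's true effect is zero, but a concurrent quality program lowers the error rate; the pre-post estimate attributes the entire improvement to the EHR no matter how large the sample grows:

```python
# Toy simulation: in a pre-post comparison with a concurrent secular
# change (a hypothetical quality program cutting error rates), a larger
# sample size shrinks the *random* error but leaves the *bias* untouched.
# The EHR's true effect is zero throughout.
import random
random.seed(1)

TRUE_EHR_EFFECT = 0.0      # the EHR changes nothing
QI_PROGRAM_EFFECT = -0.02  # the co-occurring program cuts errors by 2 points
BASE_ERROR_RATE = 0.10

def observed_difference(n):
    """Pre-post difference in observed error rates with n patients per arm."""
    pre = sum(random.random() < BASE_ERROR_RATE for _ in range(n)) / n
    post_rate = BASE_ERROR_RATE + TRUE_EHR_EFFECT + QI_PROGRAM_EFFECT
    post = sum(random.random() < post_rate for _ in range(n)) / n
    return post - pre

for n in (1_000, 100_000, 1_000_000):
    print(n, round(observed_difference(n), 4))
# The estimate converges on -0.02 -- the QI program's effect,
# misattributed to the EHR -- as n grows.
```

This is why randomization or carefully chosen concurrent controls, not raw sample size, are what address confounding.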

The belief that simple A/B pre-post tests looking at error rate comparisons are adequate is seductive, but it is wrong. I wrote about one consequence of such beliefs in "data alchemy" this summer in the Journal of the American Association of Physicians and Surgeons here (pdf).

I believe the lack of systems thinking common in IT personnel, perhaps born of the narrow, linear, programmatic thinking required of those who interact with these machines [I myself have written thousands of lines of clinical IT code in midscale RDBMS and other programming environments, so this is an informed opinion], the lack of scientific background, and overconfidence in computers by IT personnel are ultimately self-defeating for the IT industry, because they are going to lead to a train wreck - a cybernetic "Libby Zion" case or cases that will result in massive governmental intervention. I would rather that not happen; it's better if regulation happens under voluntary circumstances.


Finally, IT guy, clearly a.k.a. "Programmer" at HIStalk:

HIStalk's owner Tim edited your comments at his site because they were defamatory and, he agreed, inappropriate.

You are posting anonymously, but I am not; defamation cannot occur against an anonymous person, but it can indeed occur regarding someone who is not hiding their identity. Consider the possible consequences of defamation.

-- SS

MedInformaticsMD said...

Note: I have received comments that "IT Guy" works for Cerner in a high level position. I cannot confirm this. However, it could explain some of the extreme comments noted.

MedInformaticsMD said...

As it turns out, "IT Guy" a.k.a. "Programmer" is an apparent industry shill working for IT vendor Meditech.

See "More on Perversity in the Healthcare IT World: Is Meditech Employing Sockpuppets."