Thursday, February 09, 2012

A Critical Review of a Critical Review of e-Prescribing ... Or Is It CPOE?

In PLoS Medicine, the following article was recently published by researchers at the University of New South Wales in Australia:

Westbrook JI, Reckmann M, Li L, Runciman WB, Burke R, et al. (2012) Effects of Two Commercial Electronic Prescribing Systems on Prescribing Error Rates in Hospital In-Patients: A Before and After Study. PLoS Med 9(1): e1001164. doi:10.1371/journal.pmed.1001164


The section I find most interesting is this:

We conducted a before and after study involving medication chart audit of 3,291 admissions (1,923 at baseline and 1,368 post e-prescribing system) at two Australian teaching hospitals. In Hospital A, the Cerner Millennium e-prescribing system was implemented on one ward, and three wards, which did not receive the e-prescribing system, acted as controls. In Hospital B, the iSoft MedChart system was implemented on two wards and we compared before and after error rates. Procedural (e.g., unclear and incomplete prescribing orders) and clinical (e.g., wrong dose, wrong drug) errors were identified. Prescribing error rates per admission and per 100 patient days; rates of serious errors (5-point severity scale, those ≥3 were categorised as serious) by hospital and study period; and rates and categories of postintervention “system-related” errors (where system functionality or design contributed to the error) were calculated.

Here is my major issue:

Unless I am misreading, this research took place in hospitals (i.e., on "wards" in hospitals) and does not seem to focus on (or even refer to) discharge prescriptions.

I think it would be reasonable to say that the systems referred to as "e-Prescribing" systems are those used at discharge, or in outpatient clinics/offices, to communicate with a retail pharmacy that sells commercially and is not involved in inpatient care.

From the U.S. Centers for Medicare and Medicaid Services (CMS), for example:

E-Prescribing - a prescriber's ability to electronically send an accurate, error-free and understandable prescription [theoretically, that is - ed.] directly to a pharmacy from the point-of-care

I therefore think the terminology used in the article as to the type of system studied is not well chosen. I believe it could mislead readers not experienced with the various 'species' of health IT.

This study appears to be of an inpatient Computerized Practitioner Order Entry (CPOE) system, not e-Prescribing.

Terminology matters. For example, in the U.S. the HHS term "certification" misleads purchasers about the quality, safety and efficacy of health IT. HIT certification as it exists today (granted via ONC-Authorized Testing and Certification Bodies) is merely a features-and-functionality "certification of presence." It is not like an Underwriters Laboratories (UL) safety certification of an electrical appliance, which attests that the appliance will not electrocute you.

(This is not to mention the irony that one major aspect of Medical Informatics research is to remove ambiguity from medical terminology, e.g., via the decades-old Unified Medical Language System project or UMLS. However, as I've often written, the HIT domain lacks the rigor of medical science itself.)

I note that if this were a grant proposal for studying e-Prescribing, I would return it with a low ranking and a reviewer comment that the study proposed is actually of CPOE.

That said, looking at the nature of this study:

The conclusion of this paper was as follows. I am omitting some of the actual numbers, such as confidence intervals, for clarity; see the full article, freely available at the above link, for that data:

Use of an e-prescribing system was associated with a statistically significant reduction in error rates in all three intervention wards. The use of the system resulted in a decline in errors at Hospital A from 6.25 per admission to 2.12 and at Hospital B from 3.62 to 1.46. This decrease was driven by a large reduction in unclear, illegal, and incomplete orders. The Hospital A control wards experienced no significant change. There was limited change in clinical error rates, but serious errors decreased by 44% across the intervention wards compared to the control wards.

Both hospitals experienced system-related errors (0.73 and 0.51 per admission), which accounted for 35% of postsystem errors in the intervention wards; each system was associated with different types of system-related errors.
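
As a quick back-of-envelope aid, the crude relative reductions implied by the quoted per-admission rates work out to roughly 66% and 60%. A minimal Python sketch of that arithmetic (my own calculation on the quoted figures, not statistics from the paper):

```python
# Back-of-envelope check of the relative reductions implied by the
# per-admission error rates quoted above (confidence intervals omitted,
# as noted; see the full paper for the actual statistics).
rates = {
    "Hospital A": (6.25, 2.12),  # errors per admission: before, after
    "Hospital B": (3.62, 1.46),
}

for hospital, (before, after) in rates.items():
    reduction = (before - after) / before * 100
    print(f"{hospital}: {before} -> {after} errors per admission "
          f"(~{reduction:.0f}% relative reduction)")
```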

I note that "system related errors" were defined as errors "where system functionality or design contributed to the error." In other words, these were unintended adverse events as a result of the technology itself.

The authors conclude:

Implementation of these commercial e-prescribing systems resulted in statistically significant reductions in prescribing error rates. Reductions in clinical errors were limited in the absence of substantial decision support, but a statistically significant decline in serious errors was observed.

The authors do acknowledge some limitations of their (CPOE) study:

Limitations included a lack of control wards at Hospital B and an inability to randomize wards to the intervention.

Thus, this was mainly a pre-post observational study, certainly not a randomized controlled clinical trial.

Not apparently accounted for, either, were potential confounding variables related to the CPOE implementation process (as in this comment thread).

In that thread I wrote to a commenter [a heckler, actually, apparently an employee of the HIT company Meditech] who professed absolute faith in pre-post studies:

... A common scenario in HIT implementation is to first do a process improvement analysis to improve processes prior to IT implementation, on the simple calculus that "bad processes will only run faster under automation." There are many other changes that occur pre- and during implementation, such as training, raising the awareness of medical errors, hiring of new support staff, etc.

There can easily be scenarios (I've seen them) where poorly done HIT's distracting effects on clinicians are moderated to some extent by process and other improvements. Such factors need to be analyzed quite carefully, datasets and endpoints developed, and data carefully collected; the study design and preparation need to occur before the study even begins. Larger sample sizes will not eliminate the possible confounding effects of these factors and many more not listed here.

The belief that simple A/B pre-post tests that look at error-rate comparisons are adequate is seductive, but it is wrong.

Stated simply, in pre-post trials the results may be affected by changes that occur other than the intervention. HIT implementation does not involve just putting computers on desks, as I point out above.
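
To make that concrete, here is a small simulation sketch in Python. Every number in it is hypothetical, my own illustration rather than data from this study: a ward's error rate improves partly from process changes and partly from the software, the naive pre-post comparison attributes the whole improvement to the IT, and a control ward receiving the same non-IT changes exposes the difference:

```python
import random

random.seed(42)

# Hypothetical per-admission error rates; all effect sizes invented.
BASELINE_RATE = 6.0    # errors per admission before any change
PROCESS_EFFECT = -2.0  # training, process redesign, new support staff, etc.
IT_EFFECT = -1.0       # the true effect of the software itself

def mean_rate(true_rate, n_admissions=500):
    """Average observed error rate over n noisy simulated admissions."""
    return sum(max(0.0, random.gauss(true_rate, 1.5))
               for _ in range(n_admissions)) / n_admissions

pre = mean_rate(BASELINE_RATE)
# The intervention ward receives the process improvements AND the IT system.
post_intervention = mean_rate(BASELINE_RATE + PROCESS_EFFECT + IT_EFFECT)
# A control ward receives the same process improvements but no IT.
post_control = mean_rate(BASELINE_RATE + PROCESS_EFFECT)

naive_estimate = post_intervention - pre                # credits everything to the IT
controlled_estimate = post_intervention - post_control  # isolates the IT effect

print(f"Naive pre-post estimate of the IT effect: {naive_estimate:+.2f}")
print(f"Estimate against a matched control ward:  {controlled_estimate:+.2f}")
```

The naive estimate comes out near -3.0 errors per admission, triple the software's true contribution of -1.0 in this toy setup; that gap is precisely the confounding at issue.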

In other words, the study was essentially anecdotal.

The lack of RCTs in health IT is, in general, a violation of traditional medical research methodologies for studying medical devices. That issue is not limited to this article, of course.

Next, on ethics:

CPOE has already been demonstrated in situ to create all sorts of new potential complications, as in Koppel et al.'s "Role of Computerized Physician Order Entry Systems in Facilitating Medication Errors," JAMA. 2005;293(10):1197-1203. doi:10.1001/jama.293.10.1197, which concluded:

In this study, we found that a leading CPOE system often facilitated medication error risks, with many reported to occur frequently. As CPOE systems are implemented, clinicians and hospitals must attend to errors that these systems cause in addition to errors that they prevent.

CPOE technology, at best, should be considered experimental in 2012.

In regards to e-Prescribing proper, consider this: Errors Occur in 12% of Electronic Drug Prescriptions, Matching Handwritten, and this: Upgrading e-prescribing system can bump up error risk. In other words, the literature is conflicting, confirming the technology remains experimental.

This current study confirmed that some (CPOE) errors that would not have occurred with paper did occur with cybernetics, amounting to "35% of postsystem errors in the intervention wards."

In other words, patient Jones was now subjected to a cybernetic error that would not have occurred with paper, in the hopes that patients Smith and Silverstein would be spared errors that might have occurred without cybernetic aid.

Even though the authors observe that "human research ethics approval was received from both hospitals and the University of Sydney", since patient Jones did not provide informed consent to the experimentation with what really are experimental medical devices as I've written often on this blog [see note 1], I'm not certain the full set of ethical issues has been well-addressed. It's not limited to this occasion, however. This phenomenon represents a pervasive, continual world-wide oversight with regard to clinical IT.

Furthermore, and finally: of considerable concern is another common limitation of all health IT studies, which I believe is often willful.

What really should be studied before justifications are given to spend tens of millions of dollars/Euros/whatever on CPOE or other clinical IT is this:

The impact of possible non-cybernetic interventions (e.g., additional humans and processes) to improve "medication ordering" (whether via CPOE or e-Prescribing) that might be FAR LESS EXPENSIVE, and that might have far fewer IT-caused unintended adverse consequences, than cybernetic "solutions."
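
To make that comparison concrete, here is a minimal sketch of the cost-per-error-averted arithmetic such a study would enable. Every figure is invented purely for illustration; neither the costs nor the error counts come from this study or any real procurement:

```python
# Hypothetical cost-effectiveness comparison: a cybernetic intervention
# vs. a non-cybernetic one. All numbers are invented for this sketch.
interventions = {
    "CPOE system": {
        "annualized_cost": 20_000_000,
        "errors_averted_per_yr": 4_000,
        "new_errors_caused_per_yr": 1_400,  # system-related errors
    },
    "Additional ward pharmacists": {
        "annualized_cost": 1_500_000,
        "errors_averted_per_yr": 2_500,
        "new_errors_caused_per_yr": 0,
    },
}

for name, d in interventions.items():
    net_averted = d["errors_averted_per_yr"] - d["new_errors_caused_per_yr"]
    cost_per_error = d["annualized_cost"] / net_averted
    print(f"{name}: net {net_averted} errors averted/yr, "
          f"${cost_per_error:,.0f} per error averted")
```

Under these invented numbers the non-cybernetic option averts fewer errors but at a small fraction of the cost per error averted; whether reality looks anything like this is exactly the question such comparative studies should answer.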

Instead, pre-post studies are used to justify expenditures of millions (locally) and tens or hundreds of billions (nationally), with results sometimes like this affecting an entire country.

There is something very wrong with this, both scientifically and ethically.

-- SS

Note:

[1] If these devices are not experimental, why are so many studying them to see if they actually work, to see if they pose unknown dangers, and to try to understand the conflicting results in the literature? More at this query link: http://hcrenewal.blogspot.com/search/label/Healthcare%20IT%20experiment


Addendum Feb. 10, 2012:

An anonymous commenter points out an interesting issue. They wrote:

The study was flawed due to its failure to consider delays in care and medication administration as an error caused by these experimental devices.

Delays are widespread with CPOE devices. One emergency room resorted to paper file cards and vacuum tubes to communicate urgency with the pharmacy. Delays were for hours.

I agree that the lack of consideration of a temporal component, i.e., delays due to technology issues, is potentially significant.

I, for example, remember a more-than-five-minute delay, due to IT-related causes, in getting sublingual nitroglycerin to a relative with apparent chest pain. The problem turned out to be gastrointestinal, not cardiac; with another patient, however, the hospital might not be so lucky.

Addendum Feb. 12, 2012:

A key issue in technology evaluation studies is to separate the effects of the technology intervention from other, potentially confounding variables, which always exist in a complex sociotechnical system, especially in a domain such as medicine. This seems rarely to be done in HIT evaluation studies. Not doing so will likely inflate the apparent contribution of the technology.

A "control ward" where the same education and training, process re-engineering, procedural improvements, etc. were performed as compared to the "intervention ward" (but without actual IT use) would probably be better suited to pre-post studies such as this.

A "comparison ward" where human interventions were implemented, as opposed to cybernetic, would be a mechanism to determine how efficacious and cost-effective the IT was compared to less expensive non-cybernetic alternatives.

-- SS

10 comments:

  1. The study was flawed due to its failure to consider delays in care and medication administration as an error caused by these experimental devices.

    Delays are widespread with CPOE devices. One emergency room resorted to paper file cards and vacuum tubes to communicate urgency with the pharmacy. Delays were for hours.

    And, what is an "illegal" prescription, anyway?

  2. Seriously, I mean really, if they can't deliver and administer the meds ordered by the MD correctly when the patient is in the hospital, that is pretty sad.

    Indeed, I wonder what the definition of failure to administer was: a difference between what the doctor ordered vs. what was delivered and administered, or what the computer said to deliver vs. what it said was delivered? I admit I didn't yet study the paper well enough to determine the measurement validity. Did you?

  3. Anonymous February 9, 2012 8:50:00 PM EST wrote:

    The study was flawed due to its failure to consider delays in care and medication administration as an error caused by these experimental devices

    I agree with that assessment. A temporal dimension was not considered. I am adding this comment as a footnote to the main blog post.

    And, what is an "illegal" prescription, anyway?

    One that violates the diktats of the Lords of Kobol.

    -- SS

  4. Live IT or live with IT said...

    if they can't deliver and administer the meds ordered by the MD correctly when the patient is in the hospital, that is pretty sad.

    Hospitals are under financial duress and it leads to bad processes and "inexpensive" employees - and errors. (I am not going to get into the issue of executive salaries and perks, however; that's usually Dr. Poses' domain on this blog.)

    I wonder what the definition of failure to administer was

    See Table S1 here for definitions. It appears well-taxonomized.

    -- SS

  5. The paper clearly states the study took place in two major teaching hospitals. You might also note this is an international journal and thus is intended for a global audience. Unlike in the US, CPOE is not a commonly used term to describe these systems in Australia or the UK. The CMS definition you present is entirely consistent with e-prescribing systems as applied in this study.

    In your attempt to find fault with all aspects of the study you appear to have conveniently overlooked core features including the fact that three control wards were included in the study.

    You postulate about a series of factors that you think may possibly have occurred, on the basis of no evidence directly related to this study but based on your own anecdotal evidence “I’ve seen them”. You dismiss the results of a five year study which has been subject to extensive peer and practitioner review on this basis. Hmm.... where is the science in that?

    Finally, you conclude that our study, evaluating the impacts (both positive and negative), is unethical? As a result of this study, both these systems have been improved in direct response to many of the issues identified. These systems are not silver bullets, and the study clearly demonstrates areas that require improvement. However, what has been totally ignored in your argument is that hundreds of patients are harmed by medication errors every day. Further, several studies have demonstrated that the doctors and nurses who make those errors (even if negative consequences do not occur) are markedly affected and ‘harmed’, experiencing anxiety and guilt.

    Surely it is unethical not to implement and evaluate interventions which show clear evidence of reducing these harms.


    J. Westbrook

  6. Re: J. Westbrook's comments.

    Dr. Westbrook,

    Unlike in the US, CPOE is not a commonly used term to describe these systems in Australia or the UK.

    That is a fair comment.

    Since the article was intended for an international audience, though, perhaps that distinction should have been noted.

    In your attempt to find fault with all aspects of the study you appear to have conveniently overlooked core features including the fact that three control wards were included in the study.

    My statement was:

    The authors do acknowledge some limitations of their (CPOE) study: Limitations included a lack of control wards at Hospital B and an inability to randomize wards to the intervention. Thus, this was mainly a pre-post observational study, certainly not a randomized controlled clinical trial.

    I think that is a fair assessment. I then describe the limitations of pre-post studies, which I believe apply.

    I also wrote: The lack of RCTs in health IT is, in general, a violation of traditional medical research methodologies for studying medical devices. That issue is not limited to this article, of course.

    Again, I think that is a fair comment, though again it is directed not at your study specifically but at the domain of health IT evaluation studies.

    You postulate about a series of factors that you think may possibly have occurred, on the basis of no evidence directly related to this study but based on your own anecdotal evidence “I’ve seen them”. You dismiss the results of a five year study which has been subject to extensive peer and practitioner review on this basis. Hmm.... where is the science in that?

    First, I am not sure to what "personal anecdotal evidence" you refer.

    That said, my belief is that science is not about consensus or 'truth' as determined by particular authorities. Science is always open to debate.

    Another point I want to amplify is that the term "anecdotes" has been turned on its head with respect to HIT. On the issue of "anecdotes", from an Australian, see this link.

    Finally, you conclude that our study, evaluating the impacts (both positive and negative), is unethical

    Not in the personal sense you express, but in a more general sense that is pervasive in this domain. What I wrote is that:

    ... since patient Jones did not provide informed consent to the experimentation with what really are experimental medical devices as I've written often on this blog [see note 1], I'm not certain the full set of ethical issues has been well-addressed. It's not limited to this occasion, however. This phenomenon is a pervasive, continual world-wide oversight with regard to clinical IT.

    In other words, I see a worldwide problem in subjecting patients to experimental and unproven technology, of unknown risk, without their explicit informed consent.

    Surely those affected by system-related errors, which accounted for "35% of post-system errors in the intervention wards", deserved the opportunity for informed consent. If informed consent by patients to the use of clinical IT was obtained, of course, that would change my opinion of your study; please let me know if that was so, as it was not mentioned in the text.

    In other words, I am not singling out your study, but pointing out a societal issue regarding the manner in which health IT is generally studied.

    --- continued below due to 4096 char limit ---

  7. --- continued from above ---

    However, what has been totally ignored in your argument is that hundreds of patients are harmed by medication errors every day.

    Indeed my explanation accounts for that issue: In other words, patient Jones was now subjected to a cybernetic error that would not have occurred with paper, in the hopes that patients Smith and Silverstein would be spared errors that might have occurred without cybernetic aid.

    If we can help a hundred by putting at risk and/or harming ten, or one, without the consent of the latter, I hope we can agree there are significant ethical issues in doing so. This is not the ethics of medicine as I learned it. Again, this is not a personal critique, but a systemic one with respect to health IT evaluation studies.

    Further, several studies have demonstrated that the doctors and nurses who make those errors (even if negative consequences do not occur) are markedly affected and ‘harmed’, experiencing anxiety and guilt.

    Granted. Still, the evidence on whether HIT helps or hurts the cause is conflicting, as collected, for instance, in this reading list and at various posts throughout this blog under labels such as "healthcare IT risks", "healthcare IT dangers" and "healthcare IT deaths."

    Surely it is unethical not to implement and evaluate interventions which show clear evidence of reducing these harms.

    As above, that evidence is not entirely clear. Fundamentally, though, it is my conviction that any intervention must be implemented ethically (i.e., with informed consent, following the protocols applied to other drugs and devices), not simply because the intervention is thought to be "better" on the basis of personal belief, technological determinism, industry claims about their products, etc.

    My only "personal" account is that my own mother is no longer with me as a result of an HIT-related error in 2010 that would not have happened with paper. How do I know? I once worked in the ED where the harm originated.

    In summary, my commentary is not an attack on your work. It's a critique of current health IT industry and research "norms" and a reminder that the evidence for and against health IT remains contradictory, with many unknowns.

    In such a medical environment, caution and the strongest adherence to medical ethics as in drug and other medical device trials (and as per the various ethical guidelines linked at http://ohsr.od.nih.gov/guidelines/index.html) is in my view the best path forward.

    -- SS

  8. I add that if you are referring to as "anecdotal" my statement that "There can easily be scenarios (I've seen them) where poorly done HIT's distracting effects on clinicians are moderated to some extent by process and other improvements", in fact that passage was just an aside to the main points of those paragraphs, namely that:

    A common scenario in HIT implementation is to first do a process improvement analysis to improve processes prior to IT implementation, on the simple calculus that "bad processes will only run faster under automation." There are many other changes that occur pre- and during implementation, such as training, raising the awareness of medical errors, hiring of new support staff, etc.

    ... Such factors need to be analyzed quite carefully, datasets and endpoints developed, and data carefully collected; the study design and preparation need to occur before the study even begins. Larger sample sizes will not eliminate the possible confounding effects of these factors and many more not listed here.

    The belief that simple A/B pre-post tests that look at error-rate comparisons are adequate is seductive, but it is wrong.


    Those points are in no way personal anecdotes regarding HIT implementation, and the other points are quite unremarkable with regard to designing and conducting clinical trials in the pharmaceutical and medical device industries.

    -- SS

  9. From anonymous, who signed: "Surely it is unethical not to implement and evaluate interventions which show clear evidence of reducing these harms."

    The incidence of unintended consequences of CPOE and electronic prescription devices is high, creating death and injury in patients who otherwise would not have suffered if it were not for these new devices. Thus, the authors are robbing Peter to pay Paul. Outcomes are no better at formidable cost.

    There is no "clear evidence" of reduced overall harms and improved outcomes for patients when CPOE devices are deployed, because the unintended harms are so frequent.

    Such unintended harms are neglected in these narrow studies and include: adverse events due to delays in therapy; misidentifications; outages of the entire hospital array of EMRs, in which all medical records disappear for hours; overdosing of IV fluids, narcotics, and other drugs; protocol care with automatic stop dates; time wasted by professionals, distracting them from other cases; and nursing neglect of the patients and their monitors associated with these poorly usable devices, to mention a few.

    Indeed, these are experimental devices such that when their governance of care is studied, the patients whose care is so governed must be consented and informed of the risk of death from unintended adversity.

  10. Anonymous February 11, 2012 9:31:00 PM EST wrote:

    Indeed, these are experimental devices such that when their governance of care is studied, the patients whose care is so governed must be consented and informed of the risk of death from unintended adversity.

    Health IT medical devices have gotten special accommodation for many years in this regard, for no reasonable explanation I can think of; at least, none that honors guidelines for human subjects experimentation and fundamental human rights.

    Further, the following question occurs to me:

    Were the patients who were subjected to the 35% of post-system errors in the intervention wards informed that this had occurred?

    If not, why not?

    If the answer is "no", how is this respectful of their rights?

    Again, this is really a generic commentary on health IT evaluation studies as they are commonly performed in 2012, worldwide.

    -- SS
