Thursday, February 14, 2013

Bipartisan Policy Center's Health Innovation Initiative: Health IT Industry Officials Lying to Regulators With Impunity?

On Wednesday, February 13, 2013, the Bipartisan Policy Center's Health Innovation Initiative held a discussion on its new report, An Oversight Framework for Assuring Patient Safety in Health Information Technology.  The announcement is here:  https://bipartisanpolicy.org/news/press-releases/2013/02/bipartisan-policy-center-releases-recommendations-oversight-framework-pa

The report is here (PDF):  "An Oversight Framework for Assuring Patient Safety in Health Information Technology."

The "who's" of the Bipartisan Policy Center's Health Innovation Initiative included these people:

  • Senator Tom Daschle, Former U.S. Senate Majority Leader; Co-founder, Bipartisan Policy Center (BPC); and Co-leader, BPC Health Project
  • Carolyn M. Clancy, M.D., Director, Agency for Healthcare Research and Quality, Department of Health and Human Services
  • Farzad Mostashari, M.D., ScM, National Coordinator for Health Information Technology, Department of Health and Human Services
  • Peter Angood, M.D., Chief Executive Officer, American College of Physician Executives
  • Russ Branzell, Chief Executive Officer, Colorado Health Medical Group, University of Colorado Health
  • John Glaser, Ph.D., Chief Executive Officer, Siemens Health Services
  • Douglas E. Henley, M.D., FAAFP, Executive Vice President and Chief Executive Officer, American Academy of Family Physicians
  • Jeffrey C. Lerner, Ph.D., President and Chief Executive Officer, ECRI Institute
  • Ed Park, Executive Vice President and Chief Operating Officer, athenahealth
  • Emad Rizk, M.D., President, McKesson Health Solutions
  • Janet Marchibroda, Moderator; Director, BPC Health Innovation Initiative 

Unfortunately, I was unable to attend.  I was at the 2013 Annual Winter Convention of the American Association for Justice (the Trial Lawyers' Association) in Florida, as an invited speaker on health IT risk, its use in evidence tampering, and other legal issues.


"United for Justice" - click to enlarge



I found the following statement from the Bipartisan Policy Center's Health Innovation Initiative report remarkable as a "framework for health IT safety":

The Bipartisan Policy Center today proposed an oversight framework for assuring patient safety in health information technology. Among other guiding principles, the framework should be risk-based, flexible and assure patient safety is a shared responsibility, the authors said. “Assuring safety in clinical software in particular is a shared responsibility among developers, implementers, and users across the various stages of the health IT life cycle, which include design and development; implementation and customization; upgrades, maintenance and operations; and risk identification, mitigation and remediation,” the report states. Among other recommendations, the center said clinical software such as electronic health records and software used to inform clinical decision making should be subject to a new oversight framework, rather than traditional regulatory approaches [e.g.,  FDA - ed.] applied to medical devices given its lower risk profile.

I find it remarkable that the health IT industry and its supporters now feel they can lie to our government and regulatory agencies with impunity.  Stating that health IT has a "lower risk profile" is an example.

One cannot know what is acknowledged to be unknown.

From the Institute of Medicine in its 2012 report on health IT safety:

Institute of Medicine. 2012. Health IT and Patient Safety: Building Safer Systems for Better Care. Washington, DC: The National Academies Press.

... While some studies suggest improvements in patient safety can be made, others have found no effect. Instances of health IT–associated harm have been reported. However, little published evidence could be found quantifying the magnitude of the risk.

Several reasons health IT–related safety data are lacking include the absence of measures and a central repository (or linkages among decentralized repositories) to collect, analyze, and act on information related to safety of this technology. Another impediment to gathering safety data is contractual barriers (e.g., nondisclosure, confidentiality clauses) that can prevent users from sharing information about health IT–related adverse events. These barriers limit users’ abilities to share knowledge of risk-prone user interfaces, for instance through screenshots and descriptions of potentially unsafe processes. In addition, some vendors include language in their sales contracts and escape responsibility for errors or defects in their software (i.e., “hold harmless clauses”). The committee believes these types of contractual restrictions limit transparency, which significantly contributes to the gaps in knowledge of health IT–related patient safety risks. These barriers to generating evidence pose unacceptable risks to safety.

... More worrisome, some case reports suggest that poorly designed health IT can create new hazards in the already complex delivery of care. Although the magnitude of the risk associated with health IT is not known, some examples illustrate the concerns. Dosing errors, failure to detect life-threatening illnesses, and delaying treatment due to poor human–computer interactions or loss of data have led to serious injury and death.

Even to those with particularly thick skulls, this statement seems easy to comprehend:

"The magnitude of the risk associated with health IT is not known."

I repeat once again:

One cannot know what is acknowledged to be unknown.

A statement that health IT has a "lower risk profile" compared to other regulated healthcare sectors such as devices or drugs, made in order to seek continued and extraordinary regulatory accommodations, is remarkable.  It is either a reckless claim about something its makers should know, or should have made it their business to know - or a deliberate prevarication with forethought.

The report did attempt to sugar-coat the declarative "lower risk profile" claim through misdirection, citing the need to take into account "several factors," including:

"the level of risk of potential patient harm, the degree of direct clinical action on patients, the opportunity for clinician involvement, the nature and pace of its development, and the number of factors beyond the development stage that impact its level of safety in implementation and use." 

These "factors" speak to a higher level of potential risk, not lower, and are a justification for stronger regulatory oversight, not weaker.  I would opine that there is a possibility that health IT. through which almost all transactions of care need to pass (e.g., orders, results reporting, recording and review of observations, finding, diagnoses, prognoses, treatment plans, etc.), could have a higher risk profile than one-off devices or drugs.  Health IT affects every patient, not just those under a specific therapy or using a specific device or drug.

Partial taxonomies developed from limited data themselves speak to the potentially huge risk profile of health IT, e.g., the FDA Internal Memo on HIT Risks (link), the AHRQ Hazards Manager taxonomy (link), and the sometimes hair-raising voluntary defect reports (largely from one vendor) in the FDA MAUDE database (link).  Further, health IT can and does affect thousands or tens of thousands of patients en masse from even a single defect, as happened in Rhode Island at Lifespan (link), or from overall design and implementation problems, as at Contra Costa County, CA (link) and San Francisco's Dept. of Public Health (link).

We don't know the true levels of risk and harm - but we need to, and rapidly.  Industry self-policing is not the answer; it didn't work in drugs and devices, and even with regulation there are still significant problems in those sectors.  (Imagine how it would be if those sectors received the special accommodations that health IT receives, and wishes to continue to receive.)

My other issue is with the "shared responsibility" including "users."

The user's responsibility is patient care, not serving as a beta tester for bug-laden or grossly defective health IT products.  Their responsibility ends at reporting problems (without fear of retaliation) and protecting patient safety.

Their responsibility is to avoid carelessness - as it is when they drive their cars.

In other words, the inclusion of "users" in the statement is superfluous.

It is not their responsibility to be omniscient, nor to be held accountable when bad health IT promotes "use error" (I will not repeat the NIST definition of "use error" here; search the blog) -- as opposed to, and as distinct from, "user error" - note the final "r" - i.e., carelessness.

Bad health IT (see here):

Bad Health IT ("BHIT") is defined as IT that is ill-suited to purpose, hard to use, unreliable, loses data or provides incorrect data, causes cognitive overload, slows rather than facilitates users, lacks appropriate alerts, creates the need for hypervigilance (i.e., towards avoiding IT-related mishaps) that increases stress, is lacking in security, compromises patient privacy or otherwise demonstrates suboptimal design and/or implementation. 

One special accommodation that the health IT industry has been afforded for far too long is to be able to "blame the user."

"Blaming the victim" of bad health IT is a more appropriate description.

-- SS

1 comment:

Anonymous said...

The sham of HIT and its proponents continue.

The lies make me reach for the antiemetics.