I've been meaning to write more on the just-before-Christmas, Friday afternoon, minimal-visibility release of the ONC report I'd written about in my Dec. 23, 2012 post "ONC's Christmas Confessional on Health IT Safety: HIT Patient Safety Action & Surveillance Plan for Public Comment." (The ONC report itself is available at this link in PDF.)
The Boston Globe and Globe staff writer Chelsea Conaboy, however, have beaten me to the punch in the Jan. 3, 2013 article "Federal government releases patient safety plan for electronic health records", link below.
('Lords of Kobol', of course, is a pun. They were fictional gods in a sci-fi series from the 1970s and its remake a few years ago, but in my circles the term is used satirically and derisively to describe people who express inappropriate overconfidence in - and perhaps worship of - computers. COBOL, the COmmon Business-Oriented Language, is one of the oldest programming languages and was the major programming language of the merchant computing sector, including business, finance, and administrative systems for companies and governments.)
First, I do want to reiterate what I'd mentioned in my earlier post: the new ONC report is a sign of progress, in terms of a government body explicitly recognizing the social responsibilities incurred by conducting the mass human subjects experiment of national health IT. However, I also wrote:
... [The ONC report] is still a bit weak in acknowledging the likely magnitude of under-reporting of medical errors, including HIT-related, in the available data, and the issue of risk vs. 'confirmed body counts' as I wrote at my recent post "A Significant Additional Observation on the PA Patient Safety Authority Report -- Risk".
The Globe quoted a number of people involved in the health IT debate, and I comment below on their Jan. 3 article:
01/03/2013 11:16 AM
By Chelsea Conaboy, Globe Staff
The federal office in charge of a massive rollout of electronic health records has issued a plan aimed at making those systems safer by encouraging providers to report problems to patient safety organizations.
Though some in the field say it doesn’t go far enough, others said the plan is an important step for an office whose primary role has been cheerleader for a technology that has the potential to dramatically improve health care in the United States but that may come with significant risks.
A major issue at the heart of the controversy is the fact that, admittedly, nobody knows the magnitude of the risks - in large part due to systematic impediments to knowing. This has been admitted by organizations including the Joint Commission (link), U.S. FDA (link; albeit in an "internal memo" never intended for public view, and discovered only through the hard work of Center for Public Integrity investigative reporter Fred Schulte when he was at the Huffington Post Investigative Fund), Institute of Medicine of the U.S. National Academies (link, quoted at midsection of post), and others.
I have made the claim that when you don't know the level of harm of an intervention in healthcare, and there are risk management-relevant case reports of dangers, you don't go gung-ho and start a national-scale implementation with penalties for non-adopters, and then decide to study safety, quality, usability etc. You determine safety first in more controllable and constrained environments. Anything else is, as I wrote, putting the cart before the horse (link).
[Things are a bit out of order here - ed.]
You also certainly don't dismiss risk management-relevant case reports from credible observers as "anecdotal", the common refrain of hyperenthusiasts and (incompetent) scientists who conflate scientific research with risk management - as a researcher from Down Under eloquently observed in the Aug. 2011 guest post "From a Senior Clinician Down Under: Anecdotes and Medicine, We are Actually Talking About Two Different Things."
Back to the Globe:
A year ago, the Institute of Medicine issued a report urging the federal government to do more to ensure the safety of electronic health records. It highlighted instances in which the systems were linked to patient injury, deaths, or other unsafe conditions.
The report suggested creating an independent body to investigate problems with electronic records and to recommend fixes, similar to how the National Transportation Safety Board investigates aviation accidents.
Instead, the Office of the National Coordinator for Health Information Technology delegated various monitoring and data collection duties to existing federal offices, including the Agency for Healthcare Research and Quality [AHRQ].
The problem is that AHRQ is a research agency (as its name suggests), has no regulatory authority or experience in regulation, and is unknown to most clinicians. In effect, this ONC recommendation lacks teeth, even compared to the relatively milquetoast recommendations of the IOM itself (as I wrote about in a Nov. 2011 post "IOM Report - 'Health IT and Patient Safety: Building Safer Systems for Better Care' - Nix the FDA; Create a New Toothless Agency").
The [ONC] office has asked patient safety organizations, which work with doctors and hospitals to monitor and analyze medical errors, to add health IT to their agendas. Data from the organizations would be aggregated by the agency, but reporting by doctors and hospitals is completely voluntary. [A prime example of what I term an extraordinary regulatory accommodation afforded the health IT industry - ed.]
Now we're into septic-shock-blood-pressure levels of weakness. Here is PA Patient Safety Authority Board Member Cliff Rieders, Esq. on mandatory - let alone voluntary - reporting. From "Hospitals Are Not Reporting Errors as Required by Law", Philadelphia Inquirer, pg. 4, http://articles.philly.com/
... Hospitals don’t report serious events if patients have been warned of the possibility of them in consent forms, said Clifford Rieders, a trial lawyer and member of the Patient Safety Authority’s board.
He said he thought one reason many hospitals don’t want to report serious events is that the law also requires that patients be informed in writing within a week of such problems. So, if a hospital doesn’t report a problem, it doesn’t have to send the patient that letter. [Thus reducing risk of litigation, and, incidentally, potentially infringing on patients' rights to legal recourse - ed.]
Rieders says the agency has allowed hospitals to determine for themselves what constitutes a serious event and the agency has failed to come up with a solid definition in six years.
Fixing this “is not a priority,” he added.
To expect hospitals to voluntarily report even a relevant fraction of mistakes and near-misses out of pure altruism, or to permit their clinicians to do so, given the inherent risks such reporting poses to organizational interests, is risible.
The near-absence of reporting by most health IT sellers and hospitals in the already-existing FDA Manufacturer and User Facility Device Experience (MAUDE) database is substantial confirmation of that; the fraction of reports that do appear in MAUDE, however, is hair-raising. See my Jan. 2011 post "MAUDE and HIT Risks: What in God's Name is Going on Here?" for more on that issue.
Here's an example of what happens to 'whistleblowers', even those responsible for system development and safety: "A Lawsuit Over Healthcare IT Whistleblowing and Wrongful Discharge."
ONC's recommendations thus in my opinion reflect bureaucratic window dressing, designed to create progress - but progress that can probably be measured in microns.
“There was no evidence that a mandatory program was necessary,” Jodi Daniel, the [ONC] office’s director of policy and planning, said in an interview.
Really? See the aforementioned Philadelphia Inquirer article “Hospitals Are Not Reporting Errors as Required by Law", as well as numerous articles on pharma and medical device industry reporting deficits such as starting at page 5 in my paper "A Medical Informatics Grand Challenge: the EMR and Post-Marketing Drug Surveillance" at this link in PDF.
There is no evidence mandatory reporting is necessary ... to someone who's either naïve, incompetent - or persuaded, e.g. with money, to not find evidence or rationale.
The [ONC] office has been under pressure to roll out the electronic health records systems quickly while protecting patient data and making sure that the systems don’t cause problems in medical care, said Dr. John Halamka, chief information officer at Beth Israel Deaconess Medical Center.
Under pressure by the health IT lobby, perhaps; but nobody else that I can think of.
“It’s this challenging chicken-and-egg problem,” he said.
No, actually, it isn't. Patient safety must come first. This becomes clear when one considers the late 5th century BC ethical principle Primum non nocere ("first, do no harm" or "abstain from doing harm") versus the late 20th and early 21st century IT-hyperenthusiast credo I've expressed as "Cybernetik Über Alles" ("Computers above all"). Under CÜA, the computer has more rights than the patients, and the IT industry receives extraordinary regulatory accommodation to sloppy practices that no other healthcare or mission-critical non-healthcare sector enjoys.
I sent Dr. Halamka a set of arguments such as I make here, and a picture of a health IT 'chicken', my deceased mother in her death robes.
I received back a "thank you for the views" message - but no condolences. (It occurs to me that I have rarely if ever received condolences from any senior HIT-hyperenthusiast Medical Informatics academic or government official to whom I've mentioned my mother. Not to play amateur psychologist, but I believe it reflects the level of disdain or even hatred these people feel towards health IT iconoclasts and patients' rights advocates.)
The plan, which is subject to public comment through Feb. 4, “is a reasonable start,” in part because it puts more pressure on hospitals and doctors to monitor safety, Halamka said.
As I expressed to Dr. Halamka, we are in agreement on that point.
The government would have risked stifling innovation in the industry if it had opted instead to require the kinds of tests and review by the Food and Drug Administration that new medical devices and drugs must go through, he said.
To that, I mention here (as I did in my email to him) my response to this industry meme, as I had expressed it at Q&A after my August 2012 keynote address to the Health Informatics Society of Australia:
... I had a question from the audience [after my talk], from fellow blogger Matthew Holt of the Health Care Blog. (I've had some online debate with him before, such as in the comment thread at my April 2012 post here.)
Matthew asked me a somewhat hostile question (perhaps in retaliation for the thrashing he received at the end of my May 2009 post on the WaPo's HIT Lobby article here) that I was well prepared for; I had, in fact, expected a question along these lines from the seller community. The question was preceded by a bit of a soliloquy of the "You're trying to stop innovation through regulation" type, with a tad of Merck/VIOXX ad hominem thrown in (I ran Merck Research Labs' Biomedical Libraries and IT group from 2000 to 2003).
His question was along the lines of - you were at Merck; VIOXX was bad; health IT allowed discovery of the VIOXX problem by Kaiser several years before anyone else; you're trying to halt IT innovation via demanding regulation of the technology thus harming such capabilities and other innovations.
The audience was visibly unsettled. Someone even hollered out their disapproval of the question.
My response was along the lines that:
- VIOXX was certainly not Merck at its best, but regulation didn't stop Merck from "revolutionizing" asthma and osteoporosis via Singulair and Fosamax;
- That I'm certainly not against innovation; I'm highly pro-innovation;
- That our definitions of "innovation" in medicine might differ, in that innovation without adherence to medical ethics is not really innovation. It is exploitation.
I stand by that assessment.
More from the Globe article:
There is little good research into how the systems improve health care and there are big obstacles to fixing even the known problems, said Ross Koppel, a professor of sociology at the University of Pennsylvania who studies hospital culture and medication errors.

As per the title of this blog post, we are in a dark place, ethically, when a PhD sociologist who's never taken the Oath of Hippocrates (to my knowledge) appears to express more concern for patient safety and patients' rights than a Harvard physician-informatics Key Opinion Leader such as Dr. Halamka.
Some developers require providers to sign nondisclosure agreements before using their systems, and the safety plan does not prohibit such gag clauses. [Note: I wrote on this issue here, and in a published July 2009 JAMA letter to the editor "Health Care Information Technology, Hospital Responsibilities, and Joint Commission Standards" here - ed.] While the plan addresses reporting of known problems, Koppel said it will not help researchers and developers understand problems that go unnoticed but that may be causing real patient harm.
“We only know the tip of the iceberg” about how electronic health records affect patient care, said Koppel, who was an official reviewer for the Institute of Medicine report.
Koppel said the mantra of the Office of the National Coordinator has been that more health IT leads to better health care. “It probably is better than paper,” he said, “but it could be so much better than it is.”
I agree, but with caveats. I opine that bad health IT is likely worse for patients than a good, well-staffed paper-based system. For instance, the former can cause systematic dangers that even a bad paper system cannot, such as tens of thousands of prescription errors (see my Nov. 2011 post "Lifespan Rhode Island: Yet another health IT 'glitch' affecting thousands - that, of course, caused no patient harm that they know of - yet") or mass privacy breaches (see the current 30 or so posts on that issue at this blog query link: http://hcrenewal.blogspot.com/search/label/medical record privacy).
On good health IT and bad health IT from my teaching site "Contemporary Issues in Medical Informatics: Good Health IT, Bad Health IT, and Common Examples of Healthcare IT Difficulties" at http://www.ischool.drexel.edu/faculty/ssilverstein/cases/:
Good Health IT ("GHIT") is IT that provides a good user experience, enhances cognitive function, puts essential information as effortlessly as possible into the physician’s hands, keeps eHealth information secure, protects patient privacy and facilitates better practice of medicine and better outcomes.
Bad Health IT ("BHIT") is IT that is ill-suited to purpose, hard to use, unreliable, loses data or provides incorrect data, causes cognitive overload, slows rather than facilitates users, lacks appropriate alerts, creates the need for hypervigilance (i.e., towards avoiding IT-related mishaps) that increases stress, is lacking in security, compromises patient privacy or otherwise demonstrates suboptimal design and/or implementation.
The Boston Globe article concludes:
Ashish Jha, associate professor of health policy at Harvard School of Public Health and a member of the panel that drafted the Institute of Medicine report, said he wants doctors to be able to report problems -- errors in medication lists, for example -- in real-time so they can be found and fixed quickly. The safety plan does not require systems to have that capability, but Daniel said her office could soon add such a requirement for products that receive federal certification.
The bigger problem is that health care as a whole needs a better way of tracking patient safety, Jha said. Monitoring issues caused by electronic health records “should be a part of it, and then we can actually know if this is a small, medium or large contributor to patient safety issues,” he said. “But we don’t know that.”
I agree with Dr. Jha, but the IT sellers and healthcare organizations will (legitimately) claim that adding real-time error reporting/forwarding to their products will be extremely resource-intensive.
I have an alternate approach that will require little effort on the part of the sellers and user organizations.
- Post a message at the sign-in screen of all health IT along the lines that "This technology is experimental, adopted willingly by [organization] although not rigorously vetted for safety, reliability, usability, nor fitness for purpose, and thus you use it at your own risk. If problems occur, report them to the following" ...
"The following" could include a list of alternatives such as I wrote in my Aug. 2012 post "Clinicians: How to Document the EHR Screens You Encounter That Cause Concern."
... When a physician or other clinician observes health IT problems, defects, malfunctions, mission hostility (e.g., poor user interfaces), significant downtimes, lost data, erroneous data, misidentified data, and so forth ... and most certainly, patient 'close calls' or actual injuries ... they should (anonymously if necessary if in a hostile management setting):
- Inform their facility's senior management, if deemed safe and not likely to result in retaliation such as being slandered as a "disruptive physician" and/or being subjected to sham peer review (link).
- Inform their personal and organizational insurance carriers, in writing. Insurance carriers do not enjoy paying out for preventable IT-related medical mistakes. They have begun to become aware of HIT risks. See, for example, the essay on Norcal Mutual Insurance Company's newsletter on HIT risks at this link. (Note - many medical malpractice insurance policies can be interpreted as requiring this reporting, observed occasional guest blogger Dr. Scott Monteith in a comment to me about this post.)
- Inform the Joint Commission (or similar national accreditor of hospital safety if not in the U.S.) via their complaint site at http://www.jointcommission.org/report_a_complaint.aspx . Also consider writing the JC senior officers (link to officers' list), whose awareness of HIT issues I can personally attest to from our correspondence.
- Inform the FDA (or similar healthcare regulator if not in the U.S.) via the FDA Medwatch Form 3500 reporting site at https://www.accessdata.fda.gov/scripts/medwatch/medwatch-online.htm. An example of such an adverse event report I filed myself (when the involved hospital refused) is at this link in the FDA MAUDE (Manufacturer and User Facility Device Experience) database.
- Inform the State Medical Society and local Medical Society of your locale.
- Inform the appropriate Board of Health for your locale.
- If applicable (and it often is), inform the Medicare Quality Improvement Organization (QIO) of your state or region. Example: in Pennsylvania, the QIO is "Quality Insights of PA."
- Inform a personal attorney.
- Inform local, state and national representatives such as congressional representatives. Sen. Grassley of Iowa is aware of these issues, for example.
(Left out of this reiteration is the demonstration on photographing problematic EHR screens. See the post for the details - it is easy to do, even with a commodity cellphone.)
HHS should be promoting laws on protection from retaliation upon clinicians reporting problems in good faith.
Thus, physicians, nurses and other clinicians can create needed health IT transparency and help our society discover the true level of risks of bad health IT. They simply need the right information on what to do and where to report, bypassing the ONC office and, in the spirit of medicine, taking such matters into their own hands in the interests of patient care and medical ethics.
I also made recommendations to the Pennsylvania Patient Safety Authority on how known taxonomies of health IT-related medical error can be used, and need to be used, to promote error reporting in common formats. Slides from my presentation to the Authority entitled "Asking the Right Questions: Using Known HIT Safety Issues to Improve Risk Reporting and Analysis", given in July 2012 at their invitation, are at http://www.ischool.drexel.edu/
Finally, another sign of progress: unlike the HITECH Act, this new ONC plan is open to public comment.
Addendum Jan. 8, 2013:
Dr. Halamka has put more details regarding his views in his blog. The entry is entitled "Electronic Health Record Safety" at this link: http://geekdoctor.blogspot.com/2013/01/electronic-health-record-safety.html .
... Some have questioned the wisdom of moving forward with EHRs before we are confident that they are 100% safe and secure. [That, of course, is not my argument - nothing is ever 100% safe and secure. However, we don't yet know just how safe and secure - or unsafe and insecure - HIT is. That is the issue I am concerned about - ed.] I believe we need to continue our current implementation efforts.
I realize it is a controversial statement for me to make, but let me use an analogy.
When cars were first invented, seat belts, air bags, and anti-lock brakes did not exist. Manufacturers tried to create very functional cars, learned from experience how to make them better, then innovated to create new safety technologies, many of which are now required by regulation.
Writing regulation to require seat belts depended on experience with early cars.
My grandmother was killed by a medication error caused by lack of an EHR. My mother was incapacitated by medication issues resulting from lack of health information exchange between professionals and hospitals. My wife experienced disconnected cancer care because of the lack of incentives to share information. Meaningful Use Stage 2 requires the functionality in EHRs which could have prevented all three events.
I express my condolences on those events.
I disagree, however, with continuing national implementation efforts at the current rate, with penalties for non-adopters. I opine from the perspective that health IT has not reached a stage where it is ready for national rollout; it remains experimental, the magnitude of its harms admittedly unknown and the flow of information about those harms systematically impaired. I recommend and prefer great caution under those circumstances, and remediation of those circumstances before full-bore national implementation.
I will leave it to the reader to ponder the two views.