A few, though, have taken on the health IT industry that lies at the heart of bad health IT design (yours truly included, though sadly that was not enough to save my own mother from health IT design defects).
Probably the bravest soul on these issues, however, is Penn sociologist Ross Koppel. In a critique of the latest effort from the medical informatics academic community to rein in the hazards of this technology, an article by Sittig and Singh at U. Texas, he wrote the following piece in the BMJ:
The health information technology safety framework: building great structures on vast voids
http://m.qualitysafety.bmj.com/content/early/2015/11/19/bmjqs-2015-004746.full.pdf
Download it and read it in its entirety. It makes the point that solutions to these problems (which I increasingly believe just might be an insoluble, wicked problem absent major reductions in the scope and ambition of health IT use) must be based on reality.
That reality must start with a firm response, not to end users being flummoxed by bad rollouts or by carelessness (user error), but to products poorly designed from the outset by sellers whose primary interest is making money, come hell or high water.
Koppel makes the point that one will not get good results driving a car if that car is designed poorly, with hidden and confusing controls, defective brakes and an engine that overheats and explodes without warning, no matter what post-design interventions take place.
The issues of design flaws and fundamental fitness for purpose need to be blown open, much as drugs and other medical devices are evaluated and regulated. Academia needs to lead the charge, not suggest band-aids, however well intentioned those band-aids might be.
Koppel writes:
... In essence, I suggest that these two eminent colleagues tell us to look under the lamppost even though, as the old saying goes, the keys were dropped 70 feet away from the lamppost in the dark. Both Singh and Sittig, of course, are fully aware of the errors listed above,[3,4] but (1) they expect that we can detect and understand these problems with error reporting, although many potentially serious errors go undetected (thus, unreported), and when detected, the poor design features that contributed to the error may not be readily apparent. (2) Singh and Sittig tend to attribute those sorts of problems to poor implementation, user errors or lack of access to the technology. They do not seriously question if the software is fit for its purpose.
And this:
... In fact, their assumption that HIT software is well designed runs throughout their work. They write about: misused software, unavailable software, poorly implemented software and malfunctioning software (emphasis added), but what of badly designed software—neither user friendly nor interoperable with systems holding needed patient data? That failure is not in their purview. They don’t challenge HIT vendors who design the software, or the regulators, who so often serve primarily as HIT industry promoters. Here’s what they write we need to address (my italics): ‘1) concerns that are unique and specific to technology (e.g., to address unsafe health IT related to unavailable or malfunctioning hardware or software); 2) concerns created by the failure to use health IT appropriately or by misuse of health IT.’
I add that such articles tend to confuse policy makers about what truly is needed to solve problems with HIT.
I've had the guts to take on these issues via the legal route after the death of my mother. That led a number of academic zealots to intone that the incident, which occurred in 2010, a decade after my writings on bad health IT began, caused me to lose my objectivity. That perverse reasoning passes for wisdom in certain academic informatics circles; yet it appears their own objectivity about health IT never existed.
I lack respect for paper writers who in effect become apologists for products birthed as dangerous right out of the gate by opportunistic health IT companies. Perhaps the health IT-mediated death of one of their loved ones would wake them up, but I sometimes doubt even that.
This is no mere academic spat. In this case, patient risk and harm worldwide are at issue.
The root of any software problem in healthcare, as I've written before, is at the design level. Trying to work around bad design without facing reality creates and perpetuates risk, patient harm, clinician disillusionment (e.g., the Medical Societies' letter to ONC) and impairment of clinicians trying to take care of patients.
Kudos to Koppel. I hope the repercussions of his challenge to the usual academic fecklessness and special accommodations afforded this unregulated industry are not too severe.
Academics can be feckless towards possible sources of funding, but quite mean in internecine disputes. Sittig, one of the authors of the challenged piece, was so with me, in an incident I found out about only because he did not know that one of the people to whom he badmouthed me was a former student I had mentored.
-- SS
4 comments:
There is a solution.
Subject these devices to the rigorous scrutiny afforded all other medical devices, and remove the riskiest components from use until their safety, efficacy, and usability are proven to the FDA (as stipulated, actually, by the FD&C Act).
The power of conflicts of interest is shown by the lack of questions about the design of the systems HIT vendors provide.
The 'solution' also includes a massive clawback of the scope and overreach of these systems, and of the expectations concerning their use, and even that may not be enough. There may be domains, medicine among them, so wickedly complex that they are not amenable to large-scale cybernetic intervention.
When that cybernetic intervention is led by amateurs, profiteers and zealots, not clinicians, all bets are off.
-- SS
The money is already gone, Scot. Spent and paid out as bonuses, I surmise.