However, they appear to have learned from their mistakes and are, in fact, on the way to being far ahead of the U.S. in understanding what it truly takes for HIT to be efficacious - and, perhaps even more importantly, as safe as possible.
From an informatics colleague who informed me of these developments:
The UK has recently adopted the ISO draft standards for the development and deployment of HIT. They don't go as far as premarket approval, but do require vendors to develop and deliver to healthcare organizations a formal hazard assessment for their products, require both to continually update their risk assessments, and require care delivery organizations to have an explicit process for identifying & mitigating risks, and formally accepting (or not) the residual risks that remain. The thinking is these standards will be adopted across the EU once the ISO approval process is completed.
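To make that lifecycle concrete, here is a minimal sketch (in Python, purely illustrative - the standards prescribe a process, not an implementation, and every field name below is my own assumption) of the kind of hazard-log entry such a process revolves around:

```python
# Illustrative only: the standards mandate a risk-management process,
# not software. All names and fields here are assumptions.
from dataclasses import dataclass, field
from typing import List

@dataclass
class HazardLogEntry:
    """One hazard in a vendor-delivered hazard assessment."""
    hazard_id: str                 # e.g. "HAZ-017"
    description: str               # what could go wrong clinically
    cause: str                     # system condition that could trigger it
    initial_severity: int          # 1 (negligible) .. 5 (catastrophic)
    initial_likelihood: int        # 1 (very low) .. 5 (very high)
    mitigations: List[str] = field(default_factory=list)
    residual_severity: int = 0     # re-assessed after mitigation
    residual_likelihood: int = 0
    residual_risk_accepted: bool = False  # formal sign-off by the deploying organization

    def initial_risk(self) -> int:
        return self.initial_severity * self.initial_likelihood

    def residual_risk(self) -> int:
        return self.residual_severity * self.residual_likelihood
```

Note the last field: it reflects exactly what my colleague describes - residual risk is not merely documented, it is formally accepted (or not) by the care delivery organization.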
These two remarkable documents are available from the UK's NHS:
http://www.isb.nhs.uk/documents/isb-0160/dscn-18-2009
"Health informatics — Guidance on the management of clinical risk relating to the deployment and use of health software"
Formerly ISO/TR 29322:2008(E)
DSCN18/2009
and
http://www.isb.nhs.uk/documents/isb-0129/dscn-14-2009
"Health Informatics — Application of clinical risk management to the manufacture of health software"
Formerly ISO/TS 29321:2008(E)
DSCN14/2009
From the first of these, the overall intro:
ISO (the International Organization for Standardization) is a worldwide federation of national standards bodies (ISO member bodies). The work of preparing International Standards is normally carried out through ISO technical committees. Each member body interested in a subject for which a technical committee has been established has the right to be represented on that committee. International organizations, governmental and non-governmental, in liaison with ISO, also take part in the work. ISO collaborates closely with the International Electrotechnical Commission (IEC) on all matters of electrotechnical standardization.
Then on to matters at hand:
Introduction
The threat to patient safety
There is mounting concern around the world about the substantial number of avoidable clinical incidents which have an adverse effect on patients, of which a significant proportion result in avoidable death or serious disability, see references [1], [2], [3], [4], [5] and [6]. A number of such avoidable incidents involved poor or "wrong" diagnoses or other decisions. A contributing factor is often missing or incomplete information, or simply ignorance, e.g. of clinical options in difficult circumstances or of the cross-reaction of treatments (a substantial percentage of clinical incidents are related to missing or incomplete information).
It is increasingly claimed that information systems such as decision support, protocols, guidelines and pathways could markedly reduce such adverse effects.
[As I have written in many places such as here and here, this may or may not be true regarding today's commercial healthcare IT as it is currently designed and deployed. Evidence supporting the assertion, especially robust studies such as randomized controlled clinical trials, is scarce, and evidence contradicting it is growing. The technology remains experimental - ed.]
If for no other reason – and there are others – this is leading to increasing deployment and use of increasingly complex health software systems, such as for decision support and disease management. It can also be anticipated that, due to pressures on time and to medico-legal aspects, clinicians will increasingly rely on such systems, with less questioning of their "output", as a "foreground" part of care delivery rather than as a "background" adjunct to it. Indeed, as such systems become integrated with medical care, any failure by clinicians to use standard support facilities may be criticised on legal grounds.
Increased use of such systems is not only in clinical treatment but also in areas just as important to patient safety, such as referral decision-making. Failure to make a "correct" referral, or to make one "in time", can have serious consequences.
Economic pressures are also leading to more decision support systems. The area of generic and/or economic prescribing is the most obvious, but achieving economy in the number and costs of clinical investigative tests is another.
Thus the use of health software and medical devices in increasingly integrated systems, e.g. networks, can bring substantial benefit to patients. However unless they are proven to be safe and fit for purpose they may also present potential for harm or at least deter clinical and other health delivery staff from making use of them, to the ultimate detriment of patients. Annex A provides some examples of the potential for harm.
Harm can of course result from unquestioning and/or non-professional use, although the manufacturers of health software products, and those in health organizations deploying and using such products within systems, can mitigate such circumstances through, for example, instructions for use, training and on-screen presentation techniques, guidance, warnings or instructions.
Some of these system deficiencies are insidious, may be invisible to the end user [an obviously perilous situation - ed.] and are typically out of the sole control of either the manufacturer or the deploying health organization.
The reports note the obvious - something that health IT vendors' contractual gag clauses and the industry's general secrecy make difficult to evaluate rigorously:
A necessary precursor for determining and implementing controls to minimize risks to patients, from a health software system that is manufactured and then deployed and used within a health organization, is a clear understanding of the risks which the deployed system might present to patients if a malfunction or an unintended event were to occur, and the likelihood of such a malfunction or event causing harm to the patient.
These risks cannot be properly evaluated in an industry where the flows of information are dominated by the vendors.
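For readers unfamiliar with how such standards operationalize "risk": the usual approach (familiar from ISO 14971 in the medical device world) is to score each hazard by severity and likelihood and map the product onto acceptability bands. A minimal sketch follows - the band thresholds are invented for illustration, not taken from the NHS documents:

```python
# Sketch of a severity x likelihood risk classification of the kind such
# standards typically require. Thresholds here are assumptions, not drawn
# from the NHS/ISO documents themselves.
SEVERITY = {"negligible": 1, "minor": 2, "significant": 3, "major": 4, "catastrophic": 5}
LIKELIHOOD = {"very_low": 1, "low": 2, "medium": 3, "high": 4, "very_high": 5}

def classify_risk(severity: str, likelihood: str) -> str:
    """Map a hazard onto an acceptability band, before or after mitigation."""
    score = SEVERITY[severity] * LIKELIHOOD[likelihood]
    if score >= 15:
        return "unacceptable"  # must be mitigated before deployment
    if score >= 6:
        return "tolerable"     # requires formal acceptance of the residual risk
    return "acceptable"

print(classify_risk("catastrophic", "high"))  # 5 * 4 = 20 -> "unacceptable"
print(classify_risk("minor", "low"))          # 2 * 2 = 4  -> "acceptable"
```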
Some examples of potential for harm, from annex (appendix) A, will likely sound quite familiar to readers of Healthcare Renewal:
- Patient (mis)identification
- Inadvertent accidental prescribing of dangerous drugs (such as methotrexate)
- Incorrect patient details retrieved from radiology information system
- CT and MRI images could not be seen after being moved to PACS
- Drug mapping error
- Pre-natal screening risk computation errors
- Radiotherapy errors
- Slack security
I especially note the following in the first document (on deployment):
5.3 Competencies of personnel
Persons performing risk management tasks will need to have the knowledge, experience and competencies appropriate to the tasks assigned to them. This will need to include, where appropriate, knowledge and experience of the particular health software systems (or similar health software products) and applications, the technologies involved and risk management techniques. This should include appropriate registered clinical input throughout the process. Appropriate competency and experience records will need to be maintained.
Clinical risk management tasks can, and should, be performed by a project team that contains representatives of each of the functions that are involved in deploying and subsequently using the health software product or system, with each contributing their specialist knowledge to build both awareness and consensus. Of particular importance will be clinical input from clinicians who are familiar with the practical realities of the environments within which the software system will be used and the clinical processes to which the software system is directed.
Emphasis on the last sentence is mine. At a time when U.S. CIOs and health IT "talking heads" still find the need to write touchy-feely "Master of the Obvious" articles extolling the virtues of permitting clinicians 'input' into health IT projects, usually under the aegis of unempowered "Directors of Informatics" or "Chief Medical Information Officers" (a.k.a. Directors of Nothing and Chiefs of Nothing, with no true executive presence or authority), the latter direct, definitive sentence is refreshing.
Miracle of miracles, even postmarketing surveillance is covered (the pharma and medical device industries have been mandated by regulators to conduct such studies on their products for decades):
11 Post-deployment monitoring
Both manufacturers and organizations deploying and using health software and other products within systems have a business need to establish, document and maintain a process to collect and review information about the clinical safety performance of the products and systems in the post-deployment phase, at least to help manage their liabilities but also to enable them to optimize their products and systems.
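What section 11 describes is a feedback loop, not a one-time report: collect safety incidents in the field, review them, and feed the findings back into the hazard log. A minimal sketch of that loop, again with every name below my own assumption rather than anything prescribed by the standard:

```python
# Sketch of a post-deployment monitoring loop: collect clinical-safety
# incidents, then escalate those that caused harm or that no known hazard
# explains. Illustrative only; not prescribed by the standard.
from dataclasses import dataclass
from datetime import date
from typing import List, Optional

@dataclass
class SafetyIncident:
    reported: date
    product: str
    description: str
    patient_harm: bool
    linked_hazard_id: Optional[str] = None  # ties the report back to the hazard log

def incidents_to_escalate(incidents: List[SafetyIncident]) -> List[SafetyIncident]:
    """Flag incidents involving harm, or with no known hazard to explain them."""
    return [i for i in incidents
            if i.patient_harm or i.linked_hazard_id is None]
```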
There is much more in these documents.
Download and read the PDFs. I will have more to say in future posts, but thank god someone is seriously considering the risks this technology - touted as universally beneficent by health IT exceptionalists - poses to patients.
Now if only we could import this thinking into the United States.
-- SS
2 comments:
The clairvoyant former NHS IT Director Richard Granger had it right on July 10, 2007, as reported by e-Health Insider: "Going further than he had before in acknowledging the extent of failings of systems provided to some parts of the NHS - such as Milton Keynes - the Connecting for Health boss said: 'Sometimes we put in stuff that I'm just ashamed of. Some of the stuff that Cerner has put in recently is appalling.'"
This is the stuff that makes the case for appropriate regulatory intervention. The Brits are to be commended. Too many patients are dying unexpectedly after HIT gets deployed.
I am the author of the standards referenced above. If you wish to discuss the issues raised in this article, or any of our UK experiences in implementing safety standards for ehealth (we issued the standard in 2009), please email me at ian.harrison@patientsafety-management.com