Traverse City Record-Eagle [Michigan - ed.]
June 30, 2011
Munson has 4-hour communications failure
By Bill O'Brien
bobrien@record-eagle.com
TRAVERSE CITY — Munson Healthcare officials are trying to figure out how to avoid a repeat of a four-plus-hour data systems crash and "resultant chaos" that gripped local hospitals and clinics this week.
A system failure Tuesday morning shut down computers, telephones, pagers and other telecommunications systems at Munson Medical Center and its Munson Healthcare affiliates in Frankfort and Kalkaska, an incident that administrators described as "unacceptable." [That sounds about right - ed.]
Munson officials still aren't sure why a back-up fiber optic circuit failed during a planned outage that started Tuesday at 7:30 a.m.
"You can rest assured we're looking very carefully at that," Munson Medical Center CEO Kathleen McManus said. "Of course, we need to know what happened."
McManus said no patients were adversely affected during the outage. Even with the "resultant chaos" that gripped the local hospitals and clinics, no patients were adversely affected.
Amazing. There must be a cybernetic angel department in heaven that protects patients from harm - i.e., from missed or delayed treatments or treatment mistakes - during the "resultant chaos" of major systems outages.
I'm sure insurers and risk managers have full confidence in Providence when these mishaps occur.
-- SS
July 1, 2011 update:
The Traverse City Record-Eagle has published a memo explaining the outage:
Munson Healthcare officials distributed a memo on Wednesday that explained Tuesday’s systems failure that affected Munson Medical Center, Paul Oliver Memorial Hospital and Kalkaska Memorial Health Center and various clinics. The following memo is attributed to Chris Podges, Munson’s vice president of information systems.
“As you are all aware, we experienced an unplanned network downtime (Tuesday) that had widespread operational and clinical implications. Briefly, here is what happened:
Munson’s data centers’ connectivity to the outside world runs primarily on two redundant high speed fiber optic circuits administered by Traverse City Light and Power. We were informed by them that they needed to take one of the circuits off-line in order for them to do maintenance.
This would leave us operating on one circuit for the duration of their planned, 12-hour downtime. This shouldn’t have been any problem for us and is precisely why we have parallel, redundant technology on our most important systems and infrastructure. We have frequently tested for an event like this (losing one of the circuits) by manually “switching off” a fiber circuit.
In our testing, the remaining circuit took on all the traffic, just as it was architected to do; no hiccups, no instability, no impact on users, no downtime. And that is what we fully expected yesterday morning when one of the circuits was taken off line.
In medicine, or in any complex domain, one must expect the unexpected...
Unfortunately, that isn’t what happened. The core switch of the remaining circuit became confused, couldn’t take over the role as the primary switch (a transition which is measured in milliseconds) and ultimately shut down. Once down, everything running on the network – applications, paging systems, wireless devices, IP phones, etc., went down with it.
(I hear Kate Winslet humming in the background...)
Anthropomorphism aside, it seems to me that switches and other inanimate objects don't become "confused." The engineers/programmers who designed them just didn't anticipate the event that sank the Titanic.
But let's roll the technology out nationally, now, rush...rush...rush...before it's too late. Computer bits get stale after a while, after all....
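To make that concrete, here is a minimal, purely hypothetical sketch in Python - emphatically not Munson's actual switch logic, whose details we don't know - of why a failover design can pass every clean "switch one circuit off" drill and still collapse when the surviving side sees a fault nobody rehearsed. The circuit names and the "ambiguous" fault state below are my illustrative assumptions, not facts from the memo.

```python
from enum import Enum
from typing import Optional

class LinkState(Enum):
    UP = "up"                # circuit carrying traffic normally
    DOWN = "down"            # clean loss of signal, as in a manual switch-off drill
    AMBIGUOUS = "ambiguous"  # flapping or otherwise confusing fault (assumed, for illustration)

def elect_primary(circuit_a: LinkState, circuit_b: LinkState) -> Optional[str]:
    """Toy failover logic: promote the standby only on an unambiguous outage."""
    if circuit_a is LinkState.UP:
        return "circuit_a"   # primary circuit healthy; nothing to do
    if circuit_a is LinkState.DOWN and circuit_b is LinkState.UP:
        return "circuit_b"   # the scenario the maintenance drills rehearsed
    return None              # unrehearsed state: no primary elected, the network stalls

# The rehearsed test: circuit A is cleanly switched off and circuit B takes over.
assert elect_primary(LinkState.DOWN, LinkState.UP) == "circuit_b"

# The unrehearsed event: circuit A goes away while circuit B's side is in a state
# the logic was never designed or tested against. No primary is elected, and
# everything riding on the network goes down with it.
assert elect_primary(LinkState.DOWN, LinkState.AMBIGUOUS) is None
print("The rehearsed drill passes; the unrehearsed fault does not.")
```

The drills only ever exercise the first two branches; the branch that matters on a bad day is the one that was never tested.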
Re: "Unplanned network downtimes." In the long running TV series Stargate SG-1 there's a character, Walter Harriman, who runs the command console. Just about the only line he gets to say over the base PA system over 10 seasons is ...
"UNSCHEDULED OFF-WORLD ACTIVATION!"
... when some person or alien attempts to come through from another Stargate somewhere in our galaxy or others nearby.
Perhaps hospitals can use him as a role model to announce "unscheduled down time"...
... During the downtime, we assembled a small army working on three objectives:
1. Make sure the hospitals and clinics could operate - especially as to the provision of patient care - on downtime procedures
2. Communicate as comprehensively and as often as we could
3. Fix the technical issues
Aside from the fact that these "objectives" are obvious, hospitals now need "small armies" to protect patients when a network switch gets "confused." Pen and paper never presented any analogous bellicose hostilities.
While the reports are that all hospitals and clinics did a fantastic job surviving the down-time, we fully understand that it was very difficult to manage the resultant chaos and that downtimes like this are unacceptable.
But no patient was hurt (as they never are when the IT goes down).
Thursday morning at 2 a.m. we are going to re-introduce the second fiber optic circuit into our network architecture. While we expect no issues, we’re planning otherwise. This afternoon your organizations will receive specific instructions on how to prepare for the event of another network outage; What to print in advance of 2 a.m., what resources are available to you during the downtime, how to get needed clinical information without the use of computers, who to call for help, etc.
Thank god.
Again, we do not expect any downtime tomorrow morning, but we did re-learn some valuable lessons yesterday and the safety of our patients is the number one objective should the network experience another issue. We’ll be ready at 2 a.m. and we want your organizations to be ready, too.
Why do such "lessons" need to be "re-learned" - EVER?
We are working diligently to understand what happened yesterday and will share with you what we learn and our plans to remedy whatever may need attention.
In effect, they really don't understand what caused the outage.
This is similar to other cases of outages written about at Healthcare Renewal. There's never certainty about the root cause, because these IT systems have become so complex and their support so diffuse (via outsourcing, consultants, etc.).
This is fertile ground for patient injury and death when luck runs out. As stories like "Failures in care alleged after premature birth - $1,000,000 Settlement" from Virginia Lawyers Weekly that I referenced here imply, the cost of ownership of HIT will likely go way up once the multimillion-dollar lawsuits start adding up.
Considering the track record of health IT as it stands in 2011 regarding reliability, risk/benefit and security, spending hundreds of billions of dollars to put patients at risk of life and limb - and at risk of potentially career-ending or bankrupting loss of privacy - en masse, nationally, is increasingly brainsick.
-- SS
3 comments:
If having an HIT system go down never negatively affects patient care, doesn't that imply that having it up never positively affects care?
Re: Anonymous: a eureka moment - that is why no study to date comparing wired to unwired hospitals has shown any benefit in outcomes or costs, after $billions spent, and thousands of injuries and deaths among the unsuspecting guinea pigs, the patients.
The administrator is lying, using the language created by LIEber, the CEO of HIMSS.
Who monitors the incidence and durations of "unplanned downtime", aka crash?
How is it defined, exactly?
When is it reported and to whom?
Who analyzes the adverse events associated with episodes when all screens go blank and all information on the patient and his/her disease and its treatment suddenly vanishes?
For instance, if the doctor had just put 11 complex orders on the "scratchpad" and had not clicked "send," and there are 5 seconds of downtime that erase the doctor's work; and upon redoing it, the doctor forgets one of the 11 orders, causing delays in care that lead to a cascade of problems that cause death, is that 5 seconds "unplanned downtime"?