Thursday, March 15, 2012

Nancy Finn, author of "e-Patients Live Longer", openly calls for unethical medical experimentation without consent

My construct of the "Ddulite" orientation largely driving health IT is not merely a theoretical construct. Ddulites (derived from the term "Luddite" with first four characters reversed) are:

Hyper-enthusiastic technophiles who either deliberately ignore or are blinded to technology's downsides, ethical issues, and repeated local and mass failures.

Here is an example of this disposition on display:

In a March 14, 2012 Pittsburgh Post-Gazette article, "Digital ease may complicate health care" by Bill Toland, about the recent controversy caused by a Harvard study showing EHRs may actually increase test ordering, thereby raising rather than lowering medical costs, Nancy Finn, a medical consultant and author of "e-Patients Live Longer," is quoted as saying:

... In an ideal world, management would know if a software suite is going to improve health outcomes before it's rolled out, said Nancy Finn, a medical consultant and author of "e-Patients Live Longer." Unfortunately, though, uncertainty is built into the process.

"The only way to know [the systems] are inefficient and flawed is to deploy them, then correct them as we go," she said. [That is, they are experimental - ed.]

"That is the way that all of the new innovative technologies have worked over the years. We have to take the risk, and then improvements get made."


This statement is highly alien to medical ethics.

She is explicitly stating that this technology is experimental - "The only way to know [the systems] are inefficient and flawed is to deploy them" - and then states "We have to take the risk." The "we," however, are unconsenting patients, who are not afforded the opportunity for true informed consent, and the 'investigators' are clinicians who are themselves often coerced into using these systems.

Never mentioned are the downsides of experimental technology such as health IT: patient injury, death, litigation against physicians and other clinicians entrapped into "use error" (errors promoted by the mission hostility common in today's health IT) or led into errors by poor software quality causing data corruption, misidentification, or outright data loss, along with additional issues described by FDA (link) and others. Nor are ethical issues considered.

NO, Ms. Finn: "We" do NOT have to "take the risk."

There are scientific methods for improving experimental technologies, such as controlled clinical trials with informed consent, opt-out provisions, and built-in protections for patients and investigators.

The "trial and error," "learn-as-we-go," "computers' rights supersede patients' rights" approach you suggest, while perhaps appropriate for mercantile computing, is highly inappropriate for healthcare.

Such issues, I had believed, were settled after WW2.

There is nothing to argue, and nothing to discuss.

-- SS

5 comments:

  1. Scott,

    I find this position too extreme. There is much to argue and to discuss. How much trial? How much error? How much learning as we go?

Clinical medicine IS about "trial and error", "learn-as-we-go". For example, the very premise of personalized medicine (e.g., pharmacogenomics) is completely at odds with a practice where one prescribes the same doses of the same drugs to different people who happen to have similar diagnoses. Isn't determining what a patient wants and what works for a particular patient, and treating them accordingly, the very definition of learning-as-one-goes? Each clinical case is an experiment in itself. Of course, we all wish our errors to be as small and insignificant as possible, but we can't change the fact that we do not know it all. All along these lines:

    http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2621040/
    http://mh.bmj.com/content/26/1/18.full

And while I am all for informed consent, I am very doubtful that a "controlled clinical trial," no matter how large-scale, is a good fit or would definitively answer the very general question of whether HIT is worth it.
    What do we control for? Which technology do we assess? What about future technologies? Should we stop creating new tech?

    But maybe it is the role of business in medicine that needs to be reassessed and its push for HIT in the name of increasing efficiencies and administrative management effectiveness that ultimately may impact the clinical side. See some of the comments in this thread:
    http://www.ama-assn.org/ama/pub/education-careers/graduate-medical-education/question-of-month/science-of-medicine.page

Unrelated: the captchas for the comments are almost impossible!

    Stefan V. Pantazi

    ReplyDelete
  2. Stefan V. Pantazi writes:

    I find this position too extreme. There is much to argue and to discuss. How much trial? How much error? How much learning as we go?

In my view, protection of patients' rights in medical experimentation is never "too extreme" when it calls for the long-established principles for the ethical conduct of human subjects experimentation to be followed.

This is especially true if you or a family member are the patient.

    This author can volunteer herself and her own family to be experimental subjects. That's her prerogative. Just don't involve mine or tell me I need to.

As Woolhandler et al. stated in their response to ONC, in the post immediately following this one:

"No drug or new medical device could pass FDA review based on such thin evidence as we have on health IT."

    Let's do the debugging of the technology in small-scale controlled clinical trials, not roll out alpha and beta software on a national scale.

    But maybe it is the role of business in medicine that needs to be reassessed and its push for HIT in the name of increasing efficiencies and administrative management effectiveness that ultimately may impact the clinical side.

    Agreed, but I believe that what most importantly needs to be critically reassessed is the state of commercial health IT and its industry in 2012, and in a candid manner. Jon Patrick's work at U. Sydney, as linked to in numerous posts, is a start and an example.

    Deficient IT cannot "transform" a domain until the IT itself is transformed.

In any case, I've stated my case and you've stated yours. I thank you for your opinion.

    ReplyDelete
In medicine, the FD&C Act was passed to protect US citizens from exactly what is standard operating procedure for the (H)IT industry. The method described by Finn is not even acceptable for non-medical software, but it is life-threatening when applied to medical devices.

    The FDA must act to stop this now.

    ReplyDelete
Can you just imagine the political machinations should the FDA step up and enter the fray? Polarization could go either way depending on who is in the White House at the time.

"I was for it before I was against it" was the most telling political quote in the last half century or more. I don't even recall whether it was said by a Republican or a Democrat, and the specific persuasion doesn't matter, because the thinking is embedded in how decisions are made in both parties.

    Isn't that astounding? Support or non-support for an activity depends on the positioning of political parties for the next election? What have we sunk to?

The problem isn't medicine; it's government.

    ReplyDelete
  5. Afraid said...

The problem isn't medicine; it's government.

It's medicine, too; physicians do nothing while an issue that can severely impact patients is usurped by, of all people, computer technicians.

    The physicians who do nothing perhaps deserve their fate; however, patients injured or killed do not deserve theirs.

    -- SS

    ReplyDelete