Friday, November 29, 2013

There They Go Again, Again... - Johnson and Johnson Loses Two Civil Cases, Makes $2.5 Billion Settlement Based on Claims it Withheld Safety Data on its Products

There has been some talk by US government officials that any day now they will actually get tough on corporate executives whose organizations are involved in multiple unethical actions (perhaps using the legally valid, but massively neglected, responsible corporate officer doctrine, look here).  However, the march of legal settlements by such corporations continues without any hint of negative consequences for the people who might have actually been involved in unethical activity.

So, we note another week, another multi-billion dollar settlement, and another loss of a civil lawsuit by the huge drug, device and biotechnology company Johnson and Johnson.

The Articular Surface Replacement Metal-on-Metal Hip Prosthesis Settlement

After various rumors, the report of the settlement appeared in a New York Times article on November 19, 2013.  The basics were,

Johnson & Johnson and lawyers for patients injured by a flawed hip implant announced a multibillion-dollar deal on Tuesday to settle thousands of lawsuits, but it was not clear whether the deal would satisfy enough claimants.

Under the agreement, the medical products giant would pay nearly $2.5 billion in compensation to an estimated 8,000 patients who have been forced to have the all-metal artificial hip removed and replaced with another device. 

Separately, the company has agreed to pay all medical costs related to such procedures, expenses that could raise the deal’s cost to Johnson & Johnson to $3 billion, people familiar with the proposal said.

Under the plan, the typical patient payment for pain and suffering caused by the device would be about $250,000 before legal fees. Based on standard agreements, plaintiffs’ lawyers would receive about one-third of the overall payout, or more than $800 million, with those who negotiated the plan emerging as big winners.

The proposed settlement, which was submitted on Tuesday to a federal judge in Toledo, Ohio, must receive the support of 94 percent of eligible claimants to go forward.

An earlier NY Times article on a rumored version of the settlement emphasized that relevant litigation had featured strong allegations that Johnson  and Johnson's DePuy subsidiary hid what it knew about the faults of the device,


The A.S.R. hip was sold by DePuy until mid-2010, when the company recalled it amid sharply rising early failure rates. The device, which had a metal ball and a metal cup, sheds metallic debris as it wears, generating particles that have damaged tissue in some patients or caused crippling injuries. 

DePuy officials have long insisted that they acted appropriately in recalling the device when they did. However, internal company documents disclosed during the trial of a patient lawsuit this year showed that DePuy officials were long aware that the hip had a flawed design and was failing prematurely at a high rate.

Many artificial hips last 15 years or more before they wear out and need to be replaced. But by 2008, data from orthopedic databases outside the United States also showed that the A.S.R. was failing at high rates in patients after just a few years.

Internal DePuy projections estimate that it will fail in 40 percent of those patients in five years, a rate eight times higher than for many other hip devices. 

A later NY Times article about plaintiffs' sometimes negative reactions to the settlement added,

 The DePuy Orthopaedics division of Johnson & Johnson estimated in an internal document in 2011 that the device would fail within five years in 40 percent of patients. Traditional artificial hips, which are made of metal and plastic, typically last 15 years or more before replacement. 

 DePuy officials have insisted that they acted properly in handling the device, including waiting until 2010 to recall it. However, internal company documents show that company officials were warned years before by their own consultants that the device was so problematic they would not use it in their patients. 

In January, 2013, the NY Times had reported in more detail about how DePuy executives concealed evidence about safety issues with the hips,

Johnson and Johnson executives knew years before they recalled a troubled artificial hip in 2010 that it had a critical design flaw, but the company concealed that information from physicians and patients, according to internal documents disclosed on Friday during a trial related to the device’s failure.

The company had received complaints from doctors about the device, the Articular Surface Replacement, or A.S.R., even as it started marketing a version of it in the United States in 2005. The A.S.R.’s flaw caused it to shed large quantities of metallic debris after implantation, and the model failed an internal test in 2007 in which engineers compared its performance to that of another of the company’s hip implants, the documents show.

Still, executives in Johnson & Johnson’s DePuy Orthopaedics unit kept selling the A.S.R. even as it was being abandoned by surgeons who worked as consultants to the company. DePuy executives discussed ways of fixing the defect, the records suggest, but they apparently never did so.
Plaintiffs’ lawyers introduced the documents on Friday in Los Angeles Superior Court during opening arguments in the first A.S.R.-related lawsuit to go to trial.

In particular,

In 2007, DePuy engineers tested the A.S.R.’s rate of wear to see if it matched the wear rate of another all-metal hip implant made by the company. It did not.

'The current results for A.S.R. do not meet the set acceptance criteria for this test,' that report stated.

The same year, company officials began discussing ways to fix the problem, like redesigning the cup to eliminate the groove. But at the same time, it was actively marketing the A.S.R. to surgeons in the United States, who were implanting it into tens of thousands of patients.

'We will ultimately need a cup redesign, but the short-term action is manage perceptions,' one top DePuy sales official told a colleague in a 2008 e-mail. A DePuy executive, Andrew Ekdahl, who is now the unit’s president, was also told by a company consultant that the A.S.R. was flawed, according to another document. 

In mid-2008, DePuy apparently abandoned the redesign project, an internal document indicates. A company spokeswoman, Mindy Tinsley, declined to comment on the document. 

In the fall of 2009, the Food and Drug Administration rejected DePuy’s application to sell the resurfacing version of the A.S.R. in the United States, saying it was concerned about, among other things, “high concentration of metal ions” in the blood of patients who received it. 

DePuy executives soon started making financial estimates of when the company should stop selling the A.S.R., based on the time it would take to convert surgeons to another company implant, a document shows.

So the evidence introduced in litigation suggested that top DePuy executives knew the design was faulty, but chose not to disclose that evidence, and not to withdraw the product, but rather to "manage perceptions."

As is typical of most settlements made by big health care corporations in the last 10 years, no one at DePuy or its Johnson and Johnson parent who might have authorized, directed, or implemented the continued sales of the device despite warnings that it might be unsafe will have to suffer any negative consequences.  In particular, Mr Ekdahl apparently will not suffer any such consequences (and was not obviously named in the few news reports of the settlement).  Mr Ekdahl, now Worldwide President, DePuy Synthes Joint Reconstruction, was quoted in the official Johnson and Johnson news release about the settlement.

The Topamax Verdicts

This case, which was much smaller in terms of the monetary amounts involved, got much less coverage, but Bloomberg did report on November 18, 2013,

Johnson & Johnson's Janssen Pharmaceuticals unit was ordered by a Philadelphia jury to pay $11 million in a case claiming its anti-seizure drug Topamax caused birth defects, the second such loss in less than a month.

Again, the case involved claims that the company withheld information about the safety of its product,

Janssen failed to adequately warn doctors for Haley Powell, a stay-at-home mother, of the risks of Topamax before she gave birth to a son with a cleft lip, jurors in state court in Philadelphia found today.

'Janssen has long known that this drug causes debilitating birth defects and yet intentionally kept this information from physicians and patients,' Shelley Hutson, an attorney for Powell, said after the verdict was read.

Furthermore,

 Janssen knew as early as 1997 that animal studies showed an increased risk for birth defects, especially oral clefts, Hutson said during closing arguments on Nov. 15.


Hutson accused Janssen of operating in a culture of secrecy and of intentionally concealing safety reports in 2003 and 2005. She rejected arguments by the company that it presented the information on poster boards, in abstracts and at medical conferences. Those actions “do not keep patients safe,” Hutson said.

'As early as 1997 in admission after admission, this company knew and they didn’t tell the doctors,' Hutson said.

A report in Law360 emphasized,

 Plaintiffs in the Pennsylvania suits alleged the company didn't fully, truthfully or accurately disclose Topamax data to the FDA, to them and to their doctors. As a result, Janssen intentionally and fraudulently misled the medical community, the public and herself about the risks to a fetus associated with the use of Topamax during pregnancy, plaintiffs claimed.

Summary

There they go again....  So Johnson and Johnson has announced two multi-billion dollar settlements in one month (November, 2013, look here for the first).  It also lost a smaller jury verdict involving the marketing of Topamax, which now comes in addition to its 2010 guilty plea for misbranding Topamax (look here).  Note that all the November, 2013 legal actions involved allegations, often backed by seemingly convincing evidence produced in litigation (as noted above), of deceptive, unethical practices.  Both cases above included allegations that the company sold products without fully disclosing those products' harms to patients.  Furthermore, all the month's legal actions are now added to a long list of Johnson and Johnson's legal woes, often involving allegations and evidence of other unethical actions, sometimes involving guilty pleas to charges of such actions (see the compilation of the record through July, 2013 here.)  (By the way, Synthes, which is now another Johnson and Johnson subsidiary, and is now run by the same individual on whose watch the ASR case occurred, has had its own legal and ethical woes, look here.)

Yet despite this lengthy and sorry record, no individual manager or executive at Johnson and Johnson, including its many and confusing subsidiaries, seems to have suffered any negative consequences for authorizing, directing, or implementing any unethical activities, whether they risked harming patients, or whether they resulted in a guilty plea by a corporate entity.  Instead, as we have discussed most recently here, the top executives of the company have grown very rich. 

So since the US government seems to continue to recycle its policy of allowing corporate managers and executives impunity regardless of how repetitively harmful their actions might be to patients' and the public's health, I will recycle my comments from earlier in November, 2013....

The latest settlement in the parade is another marker of the sort of conduct that big health care organizations have exhibited to increase revenue, and to use that revenue as a rationale for making their top insiders very rich.  The particular conduct alleged here could have put patients at risk, partly by deceiving health care professionals.  Yet in their wisdom, top US law enforcement saw fit not to try to hold any individuals accountable for this conduct, and allowed the company to deny any misconduct other than a single misdemeanor by a subsidiary.  This occurred despite the company's history of multiple legal settlements and findings of guilt in various courtrooms.

Yet none of these actions has resulted in any negative consequences for any individual within the company.  No one who authorized, directed, or implemented bad behavior will pay any penalty, even were the bad behavior to have led to significant personal enrichment.

As we have said ad infinitum, and on the occasion of a previous Johnson and Johnson settlement, many of the largest and once proud health care organizations now have recent records of repeated, egregious ethical lapses. Not only have their leaders nearly all avoided penalties, but they have become extremely rich while their companies have so misbehaved.

These leaders seem to have become like nobility, able to extract money from lesser folk, while remaining entirely unaccountable for the bad results of their reigns. We can see from this case that health care organizations' leadership's nobility overlaps with the supposed "royalty" of the leaders of big financial firms, none of whom have gone to jail after the global financial collapse, great recession, and ongoing international financial disaster (look here). The current fashion of punishing behavior within health care organizations with fines and agreements to behave better in the future appears to be more law enforcement theatre than serious deterrent.  As Massachusetts Governor Deval Patrick exhorted his fellow Democrats, I exhort state, federal (and international, for that matter) law enforcement to "grow a backbone" and go after the people who were responsible for and most profited from the ongoing ethical debacle in health care.

As we have said before, true health care reform would make leaders of health care organizations accountable for their organizations' bad behavior.

Roy M. Poses MD on Health Care Renewal 

Friday, November 22, 2013

Confused Thinking about New Cholesterol Guidelines - Were Conflicts of Interest to Blame?

For years, clinical practice guidelines promulgated by prominent health care organizations have been hailed with accolades as received wisdom.  However, there is increasing reason to be skeptical of such guidelines.  Many guidelines are not based on rigorous application of the principles of evidence-based medicine, and often seem to arise from the personal opinions of their authors.  This is particularly troublesome when those authors  have conflicts of interest, and when the organizations that sponsor guideline development have institutional conflicts of interest.  Back in 2011, an Institute of Medicine panel advocated standards for guideline development, including strict limits on conflicted panel members, to make their results more trustworthy.  However, as we noted here, those standards have been largely ignored.   

Therefore, it is good news that the just released, long awaited guidelines on the treatment of blood cholesterol from the American College of Cardiology (ACC) and American Heart Association (AHA)(1) provoked controversy rather than adulation.  However, connecting some dots reveals that the guideline development process and the developers' defenses of the guidelines were even more confused than they first seemed.  That confusion may be explained by conflicts of interest affecting guideline development of the sort that the IOM report wanted eliminated.


The New Cholesterol Guidelines - Background

A striking feature of the guidelines was a new approach to drug treatment for primary prevention, that is, for patients who do not already have heart disease or other atherosclerosis.   Such drug based primary prevention has been controversial, although drug treatment for people with high cholesterol who also have documented coronary heart disease, secondary prevention, is well-established.

The new cholesterol treatment guideline suggested drug treatment, essentially limited to statin medications, for patients believed to be at elevated risk of developing coronary artery disease or other forms of atherosclerotic disease, even in the absence of elevated cholesterol levels.  

Adults 40 to 75 years of age with LDL–C [so called "bad cholesterol"] 70 to 189 mg/dL, without clinical ASCVD* [atherosclerotic cardiovascular disease] or diabetes and an estimated 10-year ASCVD risk ≥7.5% should be treated with moderate- to high-intensity statin therapy.

Also,

It is reasonable to offer treatment with a moderate intensity statin to adults 40 to 75 years of age, with LDL–C 70 to 189 mg/dL, without clinical ASCVD* or diabetes and an estimated 10-year ASCVD risk of 5% to [less than] ... 7.5% 
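
Taken together, the two quoted recommendations amount to a simple threshold rule.  Here is a minimal sketch of that logic in Python; the function and variable names are mine, not the guideline's, and the numbers simply restate the quoted thresholds:

    # Minimal sketch of the quoted primary prevention thresholds.
    # Names and structure are illustrative, not taken from the guideline itself.
    def primary_prevention_recommendation(age, ldl_c, ten_year_ascvd_risk,
                                          clinical_ascvd=False, diabetes=False):
        if clinical_ascvd or diabetes:
            return "covered by other (less controversial) recommendations"
        if not (40 <= age <= 75 and 70 <= ldl_c <= 189):
            return "outside the population addressed by the quoted recommendations"
        if ten_year_ascvd_risk >= 0.075:
            return "moderate- to high-intensity statin recommended"
        if ten_year_ascvd_risk >= 0.05:
            return "'reasonable to offer' a moderate-intensity statin"
        return "no statin recommended by this rule"

    # Example: a 55 year old with LDL-C of 130 mg/dL and an estimated 8% 10-year risk
    print(primary_prevention_recommendation(age=55, ldl_c=130, ten_year_ascvd_risk=0.08))

Note that everything in this rule hinges on the estimated 10-year ASCVD risk, which is the subject of much of the rest of this post.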

Previously, guidelines and other recommendations for the treatment of cholesterol for primary prevention, that is, for patients without known atherosclerotic cardiovascular disease, suggested treatments according to the level of cholesterol or its components.  

Less controversially, the guidelines recommended treatment for patients with existing ASCVD, very high LDL-C, and diabetes.

The new guidelines raise some questions:
- Given the past controversy, how good is the evidence supporting cholesterol lowering drug treatment in primary prevention?
- What is the evidence supporting deciding on drug treatment for primary prevention on predicted risk of atherosclerotic cardiovascular disease?
- Can physicians make good enough risk predictions to use this approach?

Evidence Supporting Drug Treatment of Cholesterol in Primary Prevention - Do Benefits Outweigh Harms?

The reason that cholesterol lowering drug use for primary prevention has been controversial is the lack of clear evidence that such drug use leads to benefits for patients that outweigh its harms.  A central principle of evidence-based medicine is that only treatments whose benefits clearly outweigh their harms should be prescribed.

A recent commentary by Abramson et al in the British Medical Journal in October, 2013 outlines the issues.(2)  There is no good evidence that statins used in primary prevention increase overall survival, or decrease overall incidence of adverse events, defined as death, hospital admission, prolongation of admission, cancer or permanent disability.

Individual trials and meta-analyses do show that statins lead to a small reduction in the rate of cardiovascular events.  For example, the authors' re-analysis of data from a patient level meta-analysis showed that of 140 low-risk primary prevention patients treated for five years, one patient would avoid a major coronary event or stroke.
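
To make the arithmetic behind that figure explicit, a number needed to treat of 140 corresponds to an absolute risk reduction of roughly 0.7 percentage points over five years.  A trivial sketch, using only the numbers quoted above:

    # Number needed to treat (NNT) and absolute risk reduction (ARR) are reciprocals.
    nnt = 140            # one major coronary event or stroke avoided per 140 treated for five years
    arr = 1.0 / nnt      # absolute risk reduction over those five years
    print(f"ARR is about {arr:.2%} over five years")   # prints: ARR is about 0.71% over five years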

However, it is not clear that this small likelihood of benefit offsets the likelihood of adverse events due to treatment.  The data about the harms of statins in primary prevention are not very clear, partially because the relevant randomized controlled trials featured reporting of adverse events that was "generally poor, with failure to provide details of severity and type of adverse events or to report on health-related quality of life."

In summary, Abramson et al wrote,

statin therapy prevents one serious cardiovascular event per 140 low risk people (five year risk ... [less than] 10%) treated for five years.  Statin therapy in low risk people does not reduce all cause mortality or serious illness and has about an 18% risk of causing side effects that range from minor and reversible to serious and irreversible.  Broadening the recommendations in cholesterol lowering guidelines to include statin therapy for low risk individuals will unnecessarily increase the incidence of adverse effects without providing overall health benefit.

However, the new guideline focused on the ability of statins to prevent ASCVD,

The RCTs identified in the systematic evidence review indicated a consistent reduction in ASCVD events from 3-hydroxy-3-methylglutaryl-coenzyme A reductase inhibitors (statins) therapy in secondary and primary prevention populations....

Yet the new guideline ignored the absence of evidence that statin treatment prolonged life, prevented serious illness overall, or improved health status, function or quality of life.

Furthermore, the guideline seemed unreasonably optimistic about the harms of statins.  In particular, it ignored evidence about harms other than diabetes, myopathy (serious muscle disease), and stroke.  Yet there is evidence suggesting that statins at least might cause "liver dysfunction, acute renal failure, and cataracts; cognitive symptoms, neuropathy, and sexual dysfunction; decreased energy and exertional fatigue; and psychiatric symptoms, including depression, memory loss, confusion, and aggressive reactions," as summarized by Abramson and colleagues.

Thus the new guidelines did not make a new and improved case that statin treatment for primary prevention has benefits that outweigh its harms.  One could argue that this is their fatal flaw and thus all the rest of the guidelines' discussion about which patients should get primary prevention was pointless.

Dr Abramson and his co-author, Dr Rita Redberg, did get to repeat their arguments about why statin use for primary prevention is not justified in a NY Times op-ed, but otherwise the fundamental problem with the guideline's argument for aggressive use of statins went unnoticed.

Evidence Supporting Making Decisions about Statin Treatment as Primary Prevention According to Patients' Risks of Developing Atherosclerotic Cardiovascular Disease

The guidelines suggested statin therapy for patients judged to have at least a 7.5% risk of ASCVD.  As noted above, there is no evidence that statin therapy in primary prevention in general increases survival, reduces serious events overall, improves health status, quality of life, or function, or has benefits that clearly outweigh its harms.  I could not find any reference in the guidelines to clear evidence about the effects of statin treatment in primary prevention for this sub-group of patients. 

In the supplemental guideline on risk assessment,(3) there was this:

After deliberation, the Work Group endorsed the existing and widely employed paradigm of matching the intensity of preventive efforts with the individual’s absolute risk. The Work Group acknowledges that none of the risk assessment tools or novel risk markers examined in the present document have been formally evaluated in randomized controlled trials of screening strategies with clinical events as outcomes.

The wording is confusing, but might be stating that a strategy of using statins only for patients predicted to have a risk greater than 7.5% has not been assessed in a clinical trial.  (It might also, however, be stating that the specific method recommended by the guideline to assess this risk has never been tested in a clinical trial, see below.)

The apparent lack of evidence in support of the specific strategy advocated by the guideline to use statins for primary prevention in patients whose risk of ASCVD exceeded the designated threshold did not otherwise attract any notice.

Evidence Supporting the Ability of Physicians to Assess Risk of ASCVD Accurately Enough to Implement the Recommended Strategy


The supplemental guideline stated that previously published methods of risk prediction would not be suitable for the new proposed approach,

As part of its deliberations, the Work Group considered previously published risk scores with validation in NHLBI cohort data as 1 possible approach. However, a number of persistent concerns with existing risk equations were identified including nonrepresentative or historically dated populations, limited ethnic diversity, narrowly defined endpoints, endpoints influenced by provider preferences (e.g., revascularizations), and endpoints with poor reliability (e.g., angina and heart failure [HF]). 

Since the recommended strategy requires an estimation of risk, the lack of availability of acceptable risk assessment methods would appear to be another fatal flaw.  However, 

Given the inherent limitations of existing scores, the Work Group judged that a new risk score was needed to address some of the deficiencies of existing scores, such as utilizing a population sample that approaches, to the degree possible, the ideal sample for algorithm development and closely represents the U.S. population.


So the question now becomes: did the guideline provide sufficient evidence that the new risk prediction tool developed as part of guideline development predicts sufficiently well to be used to make decisions as recommended by the guidelines?

Note that developing, validating, and publishing a new risk score normally would be considered tasks that are part of research, not guideline development.  Nonetheless, the guideline developers decided to take on these tasks as part of guideline development, which precluded independent publication of the results of this research after peer review.  This makes it more difficult for others to critically evaluate the research on the new risk score.  But let me attempt to do so.

The work group went ahead to develop a new multivariate prediction model for ASCVD.  This means they used statistical methods to find variables, that is, patients' clinical or demographic characteristics, that independently could predict the outcome of interest, and then combined these variables in an equation (or algorithm) to make risk predictions for individual patients.  Such multivariate models to make diagnoses or, as in this case, prognostic predictions have been the subject of considerable research since at least the 1970s.  However, they have not been as useful as their initial advocates hoped.

The issues turn out to be a bit complex.  I must digress into an area in which I previously did research.

It is quite easy to develop multivariate diagnostic or prediction models.  The statistical analytic tools can almost always find multiple patient characteristics that independently correlate with the outcome of interest.  The problem is that such correlations can be based on random associations, or biases produced by idiosyncrasies in the particular data set used for model development.  That a new model can diagnose or predict accurately for patients who were not in the original data set thus is not assured, and should be considered to be merely a hypothesis.

Models developed on one group of patients often do not work when tried prospectively on new patients, probably because the initial development group was somehow not completely representative of all the patients of interest.  For example, a model may have been developed using patients from a particular hospital which attracts different sorts of patients than are found in the other hospitals in which the model might be used.  Therefore, before one has confidence in such a multivariate model, one must verify that the model predicted or diagnosed well not only in the group of patients from which it was derived, but prospectively in other patients like those on whom it would be used in clinical practice.
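
As a concrete illustration of what external validation looks like in practice, here is a small simulated sketch: a generic logistic model is fit on one cohort and then evaluated, without refitting, on a second cohort with different characteristics.  It is purely illustrative and has nothing to do with the guideline's actual risk equations; all the variables, coefficients, and cohorts are made up.

    # Sketch: derive a risk model on one simulated cohort, then check it on a different cohort.
    # Entirely illustrative; not the guideline's risk equations.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(0)

    def make_cohort(n, sbp_shift=0.0):
        """Simulate age, systolic blood pressure, smoking status, and 10-year events."""
        age = rng.uniform(40, 75, n)
        sbp = rng.normal(130 + sbp_shift, 15, n)   # the shift mimics a cohort unlike the derivation sample
        smoker = rng.integers(0, 2, n)
        logit = -12 + 0.12 * age + 0.02 * sbp + 0.6 * smoker
        events = rng.random(n) < 1 / (1 + np.exp(-logit))
        return np.column_stack([age, sbp, smoker]), events.astype(int)

    X_dev, y_dev = make_cohort(5000)                  # derivation cohort
    X_ext, y_ext = make_cohort(5000, sbp_shift=-10)   # external cohort with different characteristics

    model = LogisticRegression().fit(X_dev, y_dev)
    print("C statistic, derivation cohort:", roc_auc_score(y_dev, model.predict_proba(X_dev)[:, 1]))
    print("C statistic, external cohort:  ", roc_auc_score(y_ext, model.predict_proba(X_ext)[:, 1]))

The point is simply that the second number, not the first, is the one that matters for patients who were not in the development data.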

The supplemental guidelines did assert that the new model was prospectively tested,

 The equations were also assessed in external validation studies using data from other available cohorts

However, it did not specify what these cohorts were.

There is some data buried in Appendix 4 of the supplemental guideline on the performance of the model.  The authors did not distinguish results derived from prospective validation from those derived from model derivation.

 In summary, discrimination and calibration of the models were very good. C statistics ranged from a low of 0.713  (African-American men) to a high of 0.818 (African-American women). Calibration chi-square statistics ranged from a low of 4.86 (nonHispanic White men) to a high of 7.25 (African-American women).

These two sentences do not provide strong evidence that the model would predict well when used prospectively.

The C-statistic is an overall measure of the ability of the model to discriminate between patients who will go on to develop the diseases of interest and those who will not.  The statistic is formally equal to the probability that, were two patients selected at random, one who developed disease and one who did not, the model would predict disease more strongly for the patient who actually got it.  Thus, even at best, for nearly 20% of random pairs of African-American women, one who got disease and one who did not, the model would give a more pessimistic prediction for the woman who did not get disease.  This is good, but not great, discrimination ability.  (If this result referred to data from model derivation, not prospective validation, it is likely to be over-optimistic.)
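
For readers who want the pairwise interpretation made concrete, here is a small sketch that computes a C statistic directly from all case-control pairs.  The predictions and outcomes are made up; only the interpretation carries over to the guideline's reported numbers.

    # The pairwise interpretation of the C statistic, on made-up data (illustrative only).
    import numpy as np

    rng = np.random.default_rng(1)
    predicted_risk = rng.random(200)                          # hypothetical model predictions
    developed_disease = rng.random(200) < predicted_risk      # hypothetical outcomes, loosely tied to them

    cases = predicted_risk[developed_disease]       # patients who went on to develop disease
    controls = predicted_risk[~developed_disease]   # patients who did not

    pairs = [(c, k) for c in cases for k in controls]
    concordant = sum(c > k for c, k in pairs)
    ties = sum(c == k for c, k in pairs)
    c_statistic = (concordant + 0.5 * ties) / len(pairs)

    print(f"C statistic = {c_statistic:.3f}; "
          f"about {1 - c_statistic:.0%} of case-control pairs are ordered the wrong way")
    # A C statistic of 0.818, as reported for African-American women, implies roughly 18% of such
    # pairs would get a more pessimistic prediction for the patient who never developed disease.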

Furthermore, since the guideline would be used to make decisions based on the absolute risk, its calibration is also important.  Calibration is the measure of whether the model predictions of risk are close to reality.  To assess calibration, one ought to assess the whole range of predictions made by the model.  Given that the guidelines suggest a 7.5% risk threshold, it would be particularly important to determine whether patients given predictions above and below that value really have risks above and below that value.

Unfortunately, the two sentences above are not helpful in this regard.  The chi-square statistic presented is a measure of overall calibration, but does not show calibration for groups of patients given predictions with particularly interesting values, like those above or below 7.5%.
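
To show what such a threshold-focused calibration check might look like, here is a sketch on simulated data in which the model systematically overestimates risk.  The scenario is hypothetical and of my own making, chosen only to illustrate the kind of table one would want to see around the 7.5% threshold.

    # Sketch of a calibration check by predicted-risk band, on simulated data (illustrative only).
    import numpy as np
    import pandas as pd

    rng = np.random.default_rng(2)
    predicted = rng.uniform(0.01, 0.20, 10_000)       # hypothetical predicted 10-year risks
    observed = rng.random(10_000) < predicted * 0.5   # suppose true risk is only half the prediction

    bands = pd.cut(predicted, [0, 0.05, 0.075, 0.10, 0.20])
    table = (pd.DataFrame({"band": bands, "predicted": predicted, "event": observed})
               .groupby("band", observed=True)
               .agg(mean_predicted=("predicted", "mean"),
                    observed_rate=("event", "mean"),
                    n=("event", "size")))
    print(table)
    # In a well calibrated model, mean_predicted and observed_rate would match within each band,
    # especially in the bands straddling the 7.5% decision threshold.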

Note that the supplemental guideline on risk assessment includes a statement that details about the model validation done "internally and externally" are in a Full Panel Report Data Supplement.  The download of that supplement so far does not seem to work.

Setting that aside for a moment, the published guidelines do not provide good evidence that the risk prediction tool the guidelines recommend should be used to assess ASCVD risk in order to make decisions on statin use for individual patients in primary prevention.  The lack of a sufficiently accurate method to predict risk seems to be a fatal flaw for a strategy that requires risk prediction.

Controversy in the Media

It turns out that I was not the first person to identify this problem. In fact, it appears that during the internal review process for the guidelines, major questions had already been raised about the risk prediction model's calibration. However, these questions did not seem to have been conveyed to the guideline authors, and hence were never addressed.

As reported by the New York Times on November 17, 2013,

The problems were identified by two Harvard Medical School professors whose findings will be published Tuesday in a commentary in The Lancet, a major medical journal. The professors, Dr. Paul M. Ridker and Dr. Nancy Cook, had pointed out the problems a year earlier when the National Institutes of Health’s National Heart, Lung, and Blood Institute, which originally was developing the guidelines, sent a draft to each professor independently to review. Both reported back that the calculator was not working among the populations it was tested on by the guideline makers.

That was unfortunate because the committee thought the researchers had been given the professors’ responses, said Dr. Donald Lloyd-Jones, co-chairman of the guidelines task force and chairman of the department of preventive medicine  at Northwestern University.

The article by Ridker and Cook was indeed published on November 19, 2013.(4)  It suggested major problems with the calibration of the risk assessment model,

Another concern for clinicians is whether the new prediction algorithm created by the ACC/AHA correctly assesses the level of vascular risk. To be useful, prediction models must not only discriminate between individuals with and without disease, but must also calibrate well so that predicted risk estimates match as closely as possible the observed risk in external populations. We calculated predicted 10-year risks of the same atherosclerotic events using the new ACC/AHA risk prediction algorithm and compared these estimates with observed event rates in three large-scale primary prevention cohorts, the Women's Health Study, the Physicians' Health Study, and the Women's Health Initiative Observational Study.

As shown in figure 1, in all three of these primary prevention cohorts, the new ACC/AHA risk prediction algorithm systematically overestimated observed risks by 75–150%, roughly doubling the actual observed risk. As shown in figure 2, similar overestimation of risk was observed in two external validation cohorts used by the guideline developers themselves, an issue readily acknowledged in the report. Thus, on the basis of data from these five external validation cohorts, it is possible that as many as 40–50% of the 33 million middle-aged Americans targeted by the new ACC/AHA guidelines for statin therapy do not actually have risk thresholds that exceed the 7·5% threshold suggested for treatment. Miscalibration to this extent should be reconciled and addressed in additional external validation cohorts before these new prediction models are widely implemented. It is possible, for example, that the five external validation cohorts are more contemporary than the cohorts used in the risk prediction algorithm and thus reflect secular improvements in overall health and lifestyle patterns in the USA over the past 25 years.

Note that Figure 1 of the Ridker and Cook article showed the calibration of the model in the patient cohorts newly tested by these authors, focusing in particular on whether patients predicted to have risks of cardiovascular disease just over the 7.5% threshold actually had rates of such disease greater than 7.5%.  They clearly did not.  Note also that Figure 2 showed the calibration of the model when the guideline authors attempted to test it on new patient cohorts, apparently the data found in the so far inaccessible Full Panel Report Data Supplement.  Again, the model overestimated risk for patients just over the crucial threshold.
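
As a back-of-the-envelope illustration of what overestimation of that magnitude means at the decision threshold (my own arithmetic, using only the 75-150% range quoted above):

    # If predicted risk = observed risk x (1 + overestimation), then a patient predicted to be
    # exactly at the 7.5% treatment threshold actually has a risk well below it.
    predicted = 0.075
    for overestimation in (0.75, 1.0, 1.5):   # the 75-150% range reported by Ridker and Cook
        observed = predicted / (1 + overestimation)
        print(f"{overestimation:.0%} overestimation -> observed risk of about {observed:.1%}")
    # prints roughly 4.3%, 3.8%, and 3.0% -- all below the 7.5% threshold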

So the guideline developers knew that their model overestimated risk, buried this information in supplemental data, did not admit or perhaps appreciate how it threatened the credibility of their guidelines, and somehow were not given the internal review that suggested this was a fatal flaw of the proposed guidelines.

The Guideline Developers' Responses

The media controversy over the accuracy of the prediction model incorporated into the new guidelines provoked responses from the guideline developers, but these responses were at best confused.

First, as reported by the NY Times,

In a response on Sunday, Dr. [Sidney C] Smith of the guidelines committee said the concerns raised by Dr. Cook and Dr. Ridker 'merit attention.'

But, he continued, 'a lot of people put a lot of thought into how can we identify people who can benefit from therapy.' Further, said Dr. Smith, who is also a professor of medicine at the University of North Carolina and a past president of the American Heart Association, 'What we have come forward with represents the best efforts of people who have been working for five years.'

Note that this response includes two logical fallacies.  First, it contained a straw man argument.  It appeared to respond to accusations no one made.  Nobody accused the guideline developers of being lazy, not putting in much effort, or not devoting much thought to the effort.  The response also included an implied appeal to authority: big experts came up with these guidelines so their opinions should be credited, even in the presence of data to the contrary.  

Also, as reported by CNN,

However, 'I can't speak to whether the calculator is valid or not,' Dr. Robert Eckel, co-chair of the American Heart Association committee that wrote the new guidelines and the association's past president, told CNN. 'That needs to be determined.'

'We trusted that the calculator worked,' he said. 'We trusted that the calculator is valid.'

This was confusing.  Maybe Dr Eckel meant it to be another appeal to authority, the authority in question being that of the work group that developed the risk prediction model. 

Furthermore, again according to CNN,

Researchers apparently did not receive the professors' responses, Dr. Donald Lloyd-Jones, chairman of the committee that developed the equation, told the Times.
 
But Lloyd-Jones told reporters Monday, 'There's nothing wrong with these equations.'

Committee members were aware there could be 'overestimation of risk in some populations,' he said.

In addition,

 'Our risk assessment guideline doesn't tell you what to do. ... It just evaluates risk,' he said.

I am not sure this even rises to the level of a logical fallacy.  It appears to be pure denial.  The whole point of the risk assessment tool was to determine whether a patient's risk is above or below the thresholds suggested by the treatment guideline, and thus to tell you what to do.  There clearly is something wrong with "these equations."  Using them appears to vastly overestimate risk, and thus implementation of the guideline would probably lead to vast overtreatment of real people.

Thus, after promulgating guidelines that seemed hardly based on evidence, when challenged, the guideline developers' response was confused and illogical.   

What About the Issue of Benefits versus Harms?

The other problems that I identified above, the lack of evidence that primary prevention provides benefits that clearly outweigh harms, and the lack of evidence that a strategy based on risk assessment would provide benefits that outweigh harms, did not get media coverage, and so did not provoke a response from the developers.  However, I did find that one member of the guideline panel revealed her approach to the benefits versus harms issue in a Medscape news article.   Dr Noel Bairey Merz gave a talk at the 2013 American Heart Association meeting about primary prevention for women, first saying

Although the randomized clinical trial evidence supporting primary prevention with statin therapy in women is not perfect, 'the absence of data means negative data.'

That is the argument formulated by Dr Noel Bairey Merz (Cedars Sinai Medical Center, Los Angeles, CA), who spoke today here at the American Heart Association 2013 Scientific Sessions.  

'How confident are we that statins do not save lives in the week before a heart attack, but they do save lives the week after a heart attack, for women and men?'

Also,

in the overall JUPITER study of 18 000 patients, there was no treatment benefit when women were studied as a subgroup. Merz argues that JUPITER is powered for the total sample size only, not for women alone. In addition, the statistical test for heterogeneity revealed the interaction by sex was not statistically significant.

'Pretty much all the subgroups fall beyond the statistically significant range,' said Merz. 'So should we withhold treatment for women, who now are the majority of victims of cardiovascular disease, because of low precision and a trial that was not designed to address or answer this question?'

So Dr Merz seemed to assume that statins work for particular patients in the absence of evidence that they do not work, maybe implying a general assumption that all treatments are beneficial until proven otherwise.  This stands a central precept of evidence-based medicine, and perhaps the ancient dictum to physicians to do no harm, on their heads.


Unwarranted Enthusiasm for (Over) Treatment and Conflicts of Interest

The new cholesterol guidelines, and those who developed them, seem enthused about the treatment of cholesterol with statin drugs for primary prevention in the absence of evidence that such treatment produces benefits that outweigh its harms.  They also seem enthused about basing treatment decisions on a statistical prognostic model that has not been shown to be accurate, and in fact which appears to be biased towards promoting excess treatment in the context in which it would be used.   The excess enthusiasm occurred in spite of evidence, and at times in spite of logic.

One possible reason that the guideline developers became so enthusiastic that they seemed unable to think straight appears to be their own conflicts of interest, as first publicly noted in a post on Pharmalot. Reviewing the disclosure forms provided with the guidelines revealed more detail.

Of the 13 people on the main treatment guideline panel who were not NHLBI staffers serving ex-officio, 7 had financial relationships with pharmaceutical companies that manufacture statins:

- Jennifer Robinson, co-chair, research funded by AstraZeneca and Merck;
- C Noel Bairey Merz, consulting for Abbott, Bristol-Myers Squibb, Novartis, and Pfizer;
- Robert H Eckel, consulting for Merck, Pfizer and Abbott;
- Anne Carol Goldberg, consulting for Abbott and Merck, research funded by Abbott, Merck, and Novartis;
- J Sanford Schwartz, consulting for Abbott, Merck and Pfizer, research funded by Pfizer;
- Karol Watson, consulting for Abbott, AstraZeneca, Merck and Pfizer, research funded by Merck;
- Peter W F Wilson, consulting for and research funded by Merck.

Of the 10 expert reviewers for this panel, 3 had financial relationships with pharmaceutical companies that manufacture statins:
- William Virgin Brown, consultant for Abbott, Bristol-Myers Squibb, and Pfizer;
- Matthew Ito, consultant for Kowa;
- Robert S Rosenson, consultant for Novartis and Pfizer.

Of the 11 people on the risk prediction panel who were not NHLBI staffers serving ex-officio, 5 had financial relationships with pharmaceutical companies that manufacture statins:
- David C Goff Jr, co-chair, research funded by Merck;
- Raymond Gibbons, consultant for AstraZeneca;
- Jennifer Robinson, research funded by AstraZeneca and Merck;
- J Sanford Schwartz, consulting for Abbott, Merck and Pfizer, research funded by Pfizer [although these relationships were not listed as relevant to this panel, but found in the listing for the panel above];
- Peter W F Wilson, consultant for and research funded by Merck.

Also, in the Abramson and Redberg NY Times op-ed, the authors noted

both the American Heart Association and the American College of Cardiology, while nonprofit entities, are heavily supported by drug companies

As noted on Pharmalot, the prevalence of conflicted panel members did not appear to conform to the standards for the development of trustworthy guidelines recently published by the Institute of Medicine:

whenever possible, guideline development group members should not have conflicts of interest… and the chair or co-chairs should not be a person(s) with conflicts of interest.

Also, noted by the Los Angeles Times was this comment from Dr John Abramson, lead author of the commentary on statins in primary prevention(2),


'There is overtreatment that’s been built into the risk calculator, and this is a warning sign about the overtreatment that’s built into the guidelines themselves and the conflicts of interest in the organizations that are overseeing the production of these guidelines,' said Dr. John Abramson, a Harvard University cardiologist who has argued that statins offer little value for people with a 10-year risk level of heart attack or stroke of less than 20%. 'There aren’t brakes being put on the enthusiasm and overreaching of the experts.'

'There are statin believers, and when you hear these experts talk, they’re talking emotionally, not scientifically,' Abramson added. 'The experts are using emotion, not science.'

As Joe Collier observed, "people who have conflicts of interest often find giving clear advice (or opinions) particularly difficult."(5)

This difficulty giving clear advice, when amplified by a guideline for a common problem supported by prestigious non-profit organizations, and promoted by vigorous public relations, could lead to "more than 45 million middle-aged Americans who do not have cardiovascular disease being recommended for consideration of statin therapy" (per Ridker and Cook[4]) unnecessarily, likely resulting in millions suffering unneeded side effects, and billions in costs.  

Summary

Guidelines for the management of a very common problem, promulgated by a major medical society and a major disease oriented non-profit organization, suggested a strategy that would vastly increase drug treatment of currently healthy patients.  The strategy appears not to have been based on good evidence.  When some of the problems with this evidence were pointed out, the guideline developers responded with illogic.  Apparently many of the guideline developers have financial relationships with the drug companies that would most profit from the increases in drug treatment recommended by the guidelines.  Implementation of the new guidelines might result in millions of people in the US receiving unneeded drugs, with resultant side effects and costs.

Do we need more examples of how conflicts of interest are causing the poor outcomes and excess costs that are wrecking our health care system?  Do we need more excuses not to eliminate conflicts of interest from guideline development?  Do we need more delay implementing the standards provided by the Institute of Medicine report on trustworthy guidelines?  Do we need more excuses not to drastically reduce conflicts of interest affecting academic medicine, medical societies, and disease specific non-profits, specifically starting with the earlier (and so far generally disregarded) Institute of Medicine report on conflicts of interest in medicine?

While we in the US argue incessantly about the details of minor reforms of our supposed free health care market, we ignore the rot at its foundations.  True health care reform would attack the conflicts of interest that have put money, not patients, at the center of health care.


References

1.  Stone NJ, Robinson J, Lichtenstein AH, et al.  2013 ACC/AHA guideline on the treatment of blood cholesterol to reduce atherosclerotic cardiovascular risk in adults: a report of the American College of Cardiology/American Heart Association Task Force on Practice Guidelines.  Circulation 2013.  Link here.
2.  Abramson JD, Rosenberg HG, Jewell N, et al.  Should statins be prescribed to people at low risk of cardiovascular disease?  Brit Med J 2013; 347: 15-17.  Link here.
3.  Goff DC Jr, Lloyd-Jones DM, Bennett G, et al.  2013 ACC/AHA Guideline on the Assessment of Cardiovascular Risk.  Journal of the American College of Cardiology 2013; doi: 10.1016/j.jacc.2013.11.005.  Link here.
4.  Ridker PM, Cook NR.  Statins: new American guidelines for prevention of cardiovascular disease.  Lancet 2013.  Link here.
5.  Collier J.  The price of independence.  Br Med J 2006; 332: 1447-9.  Link here.

Thursday, November 21, 2013

WHEN IS DISCLOSURE NOT DISCLOSURE?

Hint: When it is made by the Chairman of the DSM-5 Task Force.

Here is a case study in conflict of interest (COI). A remarkable confession has just appeared by a group of 5 prominent academics, writing in the journal JAMA Psychiatry. Having been outed to the Editors, they now admit to concealing pertinent financial information. One of the five is David J. Kupfer, MD, chairman of the DSM-5 Task Force and past chairman of the department of psychiatry at The University of Pittsburgh. The others are from Pittsburgh, Minnesota, and Chicago.

With millions in funding from NIMH, these folks have been developing new approaches to testing for anxiety and depression. The technical details are not important for this story. What matters is that they have clearly set their sights on a large market for diagnostic screening, epidemiological research, and clinical practice. They have recently published promotional reports talking up three products and projecting major applications – but before taking care of the nuts and bolts of scale development.

Their major report in JAMA Psychiatry contained a disclosure to the effect that they might at some time in the future consider commercial development of their depression scale. This disclosure was phony. We now know that, in lieu of frankly disclosing a major COI, they opted for a disingenuous, dissembling statement that was economical with the truth.

I thought the content of their report was sub-par, and the journal published a letter from me to that effect. They responded to my criticism with hand waving, and they tried to impugn me for bias related to my own (disclosed) COI. This foolish, ad hominem tactic aimed to divert attention from the substance of my critique – for which they had no adequate response.

As I am not a person who suffers fools or insults gladly, their evasive response caused me to do some checking. I quickly learned that the gang of five are shareholders in a private corporation. Before their paper was accepted by JAMA Psychiatry, the corporation was incorporated in Delaware and soon after registered to do business in Illinois. Those facts were not disclosed in the original report or in the published letter of Reply to me. These omissions were acknowledged in the notice of Failure to Report that appeared on-line today.

It gets worse. Other things that I learned – and that I communicated to the journal – make it clear that the corporate train had left the station in advance of the letter of Reply. For instance, a professional operations and management executive (Mr. Yehuda Cohen) had joined the corporation. He had established the corporate website, where he was featured as a principal, along with the gang of five. The website also displayed a professionally crafted Privacy Policy, dated ahead of the letter of Reply. This document identified what appears to be a commercial business address for the corporation. The notice of Failure to Disclose is silent on these facts.

So, the published notice of Failure to Disclose still withholds pertinent information, which makes a mockery of the weasel words that they have not released any tests for commercial or professional use. Not yet, they haven’t. But they are under way, make no mistake. This prevarication creates the impression of a habitual lack of transparency. Considering that I gave the journal all this information, one has to be surprised that JAMA Psychiatry went along with this prevarication. Plus, would it have killed them to apologize for their foolish attempt to smear me, as I requested? In correspondence with me, the Editor in Chief of JAMA didn’t want to go there, and he refused to publish my letter that detailed the facts, citing the most specious of grounds. The Editor of JAMA Psychiatry has ducked for cover when I faulted him for publishing the ad hominem material in the first place.

This deplorable episode casts a pall on the repeated assurances by Dr. Kupfer that COI issues were under control in the DSM-5 process. If the chairman of the DSM-5 Task Force doesn’t have his own act together, then what are his assurances worth? The new instruments proposed by this corporation bear an eerie resemblance to cross-cutting dimensional measures that were promoted for DSM-5 but that didn’t make the final cut. As far as I know, Dr. Kupfer didn’t declare his interest in this corporation for scrutiny vis à vis DSM-5.

Another lesson is that even Failure to Report notices can be weasel documents. I fault the Editors of JAMA Psychiatry, Joseph Coyle, MD, and of JAMA, Howard Bauchner, MD, for the non-rigorous standard they applied to the notice of Failure to Disclose. They acquiesced in another dissembling response.

Yet another lesson is that all authors are accountable for the accuracy of disclosure statements. Some of these authors might claim they relied on the lead author and president of the corporation, Robert Gibbons, PhD, a statistician at The University of Chicago, to make sure everything was kosher. Big mistake – just like the infamous case of the Nemeroff-led Cyberonics vagus nerve stimulation review that failed to disclose that the academic authors (ahem) were paid consultants of the corporation. When your name is on the byline, be sure you cover the bases for your own protection, and don't squawk when you are called out.

Finally, where is NIMH in all of this? Since when are public NIMH funds to be treated as commercial seed money? Who actually owns the algorithms and data bases on which the Gibbons corporation relies for its commercial aspirations? Why are they not publicly accessible? Is Thomas Insel on top of this?



Sunday, November 17, 2013

Another 'Survey' on EHRs - Affinity Medical Center (Ohio) Nurses Warn That Serious Patient Complications "Only a Matter of Time" in Open Letter

I've written previously about substantial problems nurses at Affinity Medical Center, Ohio (http://www.affinitymedicalcenter.com/) and other organizations are having with EHRs, and how hospital executives were ignoring their complaints.  The complaints have been made openly, I believe, in large part due to the protection afforded by nurses' unions.

See for example my July 2013 post "RNs Say Sutter’s New Electronic System Causing Serious Disruptions to Safe Patient Care at East Bay Hospitals" at http://hcrenewal.blogspot.com/2013/07/rns-say-sutters-new-electronic-system.html (there are links there to still more examples), and my June 2013 post  "Affinity RNs Call for Halt to Flawed Electronic Medical Records System Scheduled to Go Live Friday" at http://hcrenewal.blogspot.com/2013/06/affinity-rns-call-for-halt-to-flawed.html, along with links therein to other similar situations.

Particularly see my July 2013 post "How's this for patient rights? Affinity Medical Center manager: file a safety complaint, and I'll plaster it to your head!" at http://hcrenewal.blogspot.com/2013/07/hows-this-for-patient-rights-affinity.html, where a judge had to intervene in a situation of apparent employee harassment for complaints about patient safety risks.

Here's the latest at Affinity Medical Center - an open letter to the Chief Nursing Officer (CNO) dated August 15, 2013.  Images and text below:


Page 1 - click to enlarge (text is below)




Page 2 - click to enlarge (text is below)

The letter to the CNO states (emphases and comments in red italics are mine):

August 15, 2013

Mr. Osterman,

Nurses at Affinity Medical Center are pleased to see that you have responded to our request and provided additional Cerner education classes, but education was only one of many concerns. [I note that education cannot compensate for the toxic effects of bad health IT that is poorly designed and/or poorly implemented, and that it's legally the responsibility of a hospital to ensure all apparatuses implemented and the environment of care are themselves safe - ed.]

Since the implementation nurses throughout the hospital have brought many serious concerns to the attention of both yourself and other supervisors.  When nurses have reported these concerns they have been either ignored or dismissed. It is distressing that Affinity would so blatantly disregard the concern of their RN staff surrounding issues that concern patient safety.  It is clear to direct-care RNs that many of the problems that exist with Cerner are a direct result of the failure to include nurses in the planning stages. [Exclusion of enduser domain experts from health IT development, in 2013, is grossly negligent IMO - ed.]

Some of the concerns that nurses have brought to the attention of management include:

  • Medication errors/scanning issues - perhaps the biggest concern of all RNs 
  • RNs unable to access patient records for hours at  a time
  • Incorrect descriptors and inaccurate drop-down menus
  • Incorrect calculations in the I&O [fluid input and output, especially important in very ill patients - ed.] and MAP [mean arterial pressure, used to calculate drip rates of potent drugs that raise or lower blood pressure in critical care, among other things - ed.] portions of the chart 
  • Inaccurate medication times and the inability of RNs to ensure medications are scheduled correctly
  • Endless loops of computer prompts that are unable to be dismissed by RNs in an emergency 

[It should be apparent that these 'issues' - some due to fundamental design flaws - are quite serious in terms of the harm they can cause, even with the exceptional and stressful RN hyper-vigilance their presence necessitates- ed.]
 
These threats to patient safety cannot continue.  It is only a matter of time before a communication error or a medication error lead to a serious complication for a patient.  These types of errors have the ability to harm every patient and must be addressed immediately. 

[This is not theoretical or unlikely.  Such an error led to my own mother's crippling injuries and death, and to injuries and deaths in numerous other cases of which I am aware through my legal work - ed.]

We ask that you set up a meeting with a delegation of RNs from our Facility Bargaining Council to discuss the concerns that nurses have documented on 'Technology Despite Objection' forms [complaint forms about use of technology with objections - ed.] and make a plan to fix these life-threatening problems.

You may set up a meeting by contacting Pam Gardner, RN at [redacted] or our National Representative  Michelle Mahon, RN.  [For the National Nurses United labor union, http://www.nationalnursesunited.org/, with close to 185,000 members nationwide - ed.]  We look forward to hearing from you.

The letter is followed by the signatures of about 70 nurses.
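
An aside on the I&O/MAP item in the nurses' list, as flagged above: mean arterial pressure is a simple derived value, which is exactly why a chart that computes it incorrectly is so insidious - clinicians titrate vasopressors against MAP targets. Below is a minimal illustrative sketch in Python, written by me; the "buggy" formula and the patient values are hypothetical and are not taken from Cerner's or Affinity's actual system.

```python
# Illustrative sketch only - not Cerner's or Affinity's actual code.
# Shows why an incorrect MAP (mean arterial pressure) calculation in a chart
# is dangerous: clinicians titrate vasopressors against MAP targets.

def map_textbook(systolic: float, diastolic: float) -> float:
    """Common clinical estimate: MAP ~= diastolic + (systolic - diastolic) / 3."""
    return diastolic + (systolic - diastolic) / 3.0

def map_hypothetical_bug(systolic: float, diastolic: float) -> float:
    """A hypothetical flawed formula (a simple average), for illustration only."""
    return (systolic + diastolic) / 2.0

if __name__ == "__main__":
    sbp, dbp = 85.0, 40.0  # a hypotensive critical-care patient (invented values)
    print(f"Textbook MAP estimate:  {map_textbook(sbp, dbp):.0f} mmHg")          # ~55
    print(f"Hypothetical buggy MAP: {map_hypothetical_bug(sbp, dbp):.0f} mmHg")  # ~63
    # An overestimate of roughly 8 mmHg could delay up-titration of a
    # vasopressor when the true MAP is already below a typical 65 mmHg target.
```

The particular formula is beside the point; the point is that a silent arithmetic error in a chart is precisely the kind of "issue" that forces the stressful hyper-vigilance described above.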

I was informed by a union representative in mid-November 2013, three months after the date of this letter, that (emphasis mine):

Nurses there are continuing to document the problems and concerns that they are experiencing.  I have attached a letter sent to the CNO at Affinity signed by nearly 70 RNs.  This is a pretty significant number of nurses especially in light of the fact that managers stole the circulating letter two times to prevent nurses from signing on.  The response to this letter was…….nothing.  The nurses have been ignored yet again. [Ellipsis in the original, I did not redact - ed.] 

It is clear the administration of this healthcare system has been explicitly put on notice, by multiple qualified experts - its own RNs - of likely if not imminent danger to patients.  If it does not act and patient harm occurs, it is my belief that criminal negligence charges could be merited.

I also highly doubt that patients are informed of these EHR system 'issues,' or that they have been afforded the opportunity to give informed consent to the use of these computer systems in their care - or to go elsewhere for treatment.

These problems are repeating themselves over and over across this country and others, but many clinicians, especially those not protected to some degree by a labor union, do not speak out due to fear of retaliation.

Let's hope the nurses who signed this letter don't get their complaints plastered to their foreheads, as some were threatened in the aforementioned post.

-- SS

Thursday, November 14, 2013

No Dogs to Bark - Failure to Connect the Events Foreshadowing the Fall of the Upstate President

In "Silver Blaze," Sherlock Holmes noted the important clue of the watchdog that did not bark.  In health care nowadays the watchdogs are largely absent, so who is expected to bark?  Thus, when cases of failed leadership of health care organizations come to light, in retrospect they often turn out to have been foreshadowed by events that prompted no reaction at the time.

Last week we posted about the president of the State University of New York (SUNY) - Upstate Medical University, who resigned after revelations that he had been receiving hundreds of thousands of dollars in income from outside organizations, income which he had not properly disclosed and which could have constituted conflicts of interest.

Only days later, on November 10, 2013, James T Mulder, the Syracuse Post-Standard journalist who had done much of the relevant reporting, published an article summarizing previous events that, in retrospect, should have suggested that there were serious ongoing problems within SUNY - Upstate leadership.

Claims that the University Fired Physician Whistle-blowers - the Case of Dr Stewart

According to the summary article,

Critics of [SUNY - Upstate Medical University President Dr David R] Smith ... [said] he ruled with an iron fist and did not tolerate opposing views.

'It became clear once he knew who dissenters were, things would happen,' said Dr. James Holsapple, a former Upstate neurosurgeon who now practices and teaches in Boston, Mass. 'Dissidents would not be tolerated.'

Holsapple cited the case of Dr. William Stewart, a neurosurgeon fired by Upstate in 2011. Stewart contends he was fired because he filed a complaint with the state Health Department, alleging misconduct by two other Upstate doctors. That complaint led to an investigation by the state which issued a report critical of Upstate.

Upstate contends Stewart was let go because he refused to cooperate with an investigation into a breach of confidential information, including patient records.

Holsapple calls the incident a ' ... clear example of Smith taking out a critic.'

In a pending lawsuit in federal court, Holsapple has accused Upstate of retaliating against him for speaking out against what he calls dangerous medical practices and other unethical activities at the Syracuse teaching hospital. Upstate has denied his accusations and called them 'baseless.'

In fact, Mr Mulder's article from April 2011 about Dr Stewart's case noted that Dr Stewart

said his complaint ' ... made me an enemy of the regime.'

Although Upstate said it dismissed Dr Stewart,

 ... because he refused Upstate’s request that he cooperate in an investigation into an intentional breach of confidential information, including private patient care records.

Dr Stewart contended that he knew how to handle confidential information,


Stewart served 20 years as a member of the state Board for Professional Medical Conduct, which takes disciplinary action — including license revocation and suspension — against doctors. He chaired that board for three years in the late 1980s.

Stewart said he never disclosed any private patient information. 'I took the Hippocratic oath when I graduated from medical school,' he said.

Stewart said physicians are required by state law to report suspected cases of medical misconduct. Failure to do so is considered misconduct under the law.

Stewart said that’s why he filed a complaint with the Health Department, alleging that a resident neurosurgeon training at Upstate was allowed to perform complex spine surgery without proper supervision because the neurosurgeon who was supposed to be doing the operation was in another operating room with a different patient.


Furthermore, Dr Stewart alleged that Upstate sought to intimidate him,

Stewart said two private investigators hired by Upstate visited his office in December, but he refused to talk to them.


Claims that the University Fired Physician Whistle-blowers - the Case of Dr Holsapple


The case that caused Dr Holsapple to file a lawsuit against Upstate had some interesting similarities, but also some uniquely colorful aspects, as documented in an article from February 2011 by Mr Mulder.  Dr Holsapple also alleged a major quality-of-care problem at Upstate,

 The suit charges Holsapple’s problems began in 2007 when he objected to a plan to have a single neurosurgeon supervise two spine surgeries in separate operating rooms at the same time. Holsapple thought the plan was risky for patients. At the time death rates following spine surgery at Upstate were five times higher than normal, the suit said.

The suit contends Dr. Ross Moquin, a neurosurgeon no longer at Upstate, was overseeing the two surgeries, according to the suit. While Moquin was with one patient, Dr. Walter Hall, then chair of the department of neurosurgery, and a resident doctor in training, began surgery on the second patient, the suit says. Neither Hall nor the resident were qualified to finish the operation, but Hall told the resident to complete the surgery because Moquin was busy with the other patient, according to the suit. Holsapple’s suit contends the patient suffered complications. It also accused the hospital of creating fraudulent documentation about the operation and billing fraud. 

At least some of these charges appear to have been independently corroborated,

That same two-room spine surgery case was cited in a recent state Health Department investigation. In a 68-page report issued in August, the Health Department said Upstate did not provide surgical services in a way 'that assures protection of the health, safety and rights of patients ... .'

Nonetheless, Dr Holsapple argued that the hospital punished him for his whistleblowing,

The suit says Hall stripped Holsapple of his job as residency coordinator in 2007 without any explanation, cutting his annual pay by $82,500. Hall also removed Holsapple as the department’s quality officer, according to the suit.

Dr Holsapple also contended that the hospital continued to harass him after he left - and, as Dr Stewart also alleged, that it did so using private investigators,

Holsapple contends Upstate continued the retaliation after he left. He said Upstate sent two investigators to his home in South Boston in November to intimidate and harass him and his wife.


The lawsuit suggested that Dr Holsapple was qualified to complain about quality issues,

Upstate hired Holsapple as an assistant professor of neurosurgery in 1994 after he completed his residency training there. In his suit he says he served as the quality improvement officer for the neurosurgery department, on the SUNY faculty senate and on the hospital’s medical executive committee alongside top Upstate executives, department chairs and other officials.

It also suggested that perhaps Dr Hall, the chairman of neurosurgery whose conduct figured in Dr Holsapple's complaint, was not so qualified to sit in judgment of Dr Holsapple,


The following year nursing staff discovered images of Nazi artifacts within the computerized medical records of Hall’s patients, the suit says.

Holsapple said in an interview the images were photographs of tank medals and other Nazi paraphernalia displaying the swastika.

The suit says a hospital investigation confirmed the Nazi images belonged to Hall and he was forced to resign as chair of the neurosurgery department.

Holsapple said in an interview Hall told hospital officials he bought and sold the Nazi memorabilia and stored the images on his computer. 'Some of the Jewish faculty were disturbed these icons were finding their way into the computer system,' Holsapple said.


A Cheating Scandal and Probation by an Accrediting Agency

Mr Mulder's November 10, 2013, summary article also reported,


The medical school was embroiled in a cheating scandal in 2011. An Upstate investigation found more than 100 fourth-year medical students cheated on quizzes in a medical literature course.

Last year the medical school was placed on probation by the Liaison Committee on Medical Education, an accrediting group. That group criticized the school for having an erratic learning environment. It also said the medical school's curriculum was out of sync, student complaints were being ignored and the dean was ineffective. Upstate fixed the problems and was taken off probation earlier this year.

Mr Mulder's article jogged my hazy memory into the recollection that I had seen several other examples of problems at SUNY - Upstate, but since the articles that illustrated them seemed to appear in isolation, I had just dumped them into the pile of documents to be filed. A hurried search then revealed articles on the two whistle-blowers, and on the cheating and probation issues above, and...

A Costly Merger?

I found a story from February 21, 2011, also authored by Mr Mulder, that suggested a merger proposed for Upstate University Hospital would lead to a big increase in health care prices,

 The cost to replace a hip, treat a heart attack and provide many other routine hospital services covered by Medicare is about 46 percent higher at Upstate University Hospital than at Community General Hospital.

Some experts fear a proposal to merge Upstate and Community into one hospital with one payment rate will raise Community’s prices to Upstate’s level and increase overall health care costs in Central New York. 

So,

MVP Healthcare, a health insurer, is worried about potential price increases at Community.

'We need to know if the charges for services provided by Community General Hospital would increase as a result of the merger,' said Gary Hughes, a spokesman for the insurer. 'If the rates increase, we would be concerned how the merger would affect health care costs in the region, how that would affect the business community and individuals.'

At a recent public forum on the proposed acquisition, Dr. Douglas Tucker, of MVP, said blending Upstate’s high rates with Community’s could be 'catastrophic,' especially for patients with high-deductible plans who have to pay big upfront costs before their insurance coverage kicks in.

SEIU Local 1199, the union representing Community General workers, shares that concern. The union wants Community General to remain as a lower cost, private sector community hospital. The union also wants to remain at Community to preserve its members pensions and benefits. It does not want to see Community workers become state employees, who are represented by different unions.


Note that although the merger did occur in 2011, I could not find any documentation of its effects on costs since then.


Summary


In the last two years, SUNY - Upstate Medical University has been accused of seeking a merger mainly to drive up its prices, and of driving out, and attempting to intimidate, two separate physicians for blowing the whistle about health care quality concerns.  The department chairman whose work was criticized by one of those physicians, and who subsequently drove him to leave, was himself forced out of his leadership position over allegations of Nazi photographs found within his computerized patient files.  University students were involved in a big cheating scandal, and the university was placed on probation by its accrediting agency.  These stories generated many concerned and sarcastic comments online.  However, I could find nothing to suggest that they led to any official investigations by SUNY leadership, state government, accrediting agencies (other than as listed above), Medicare, or medical societies.  Nor could I find anything to suggest that any actions were taken by students or faculty at the school, or by independent watchdogs or other members of civil society.


Yet in retrospect the sequence of events suggested major problems with the leadership of SUNY - Upstate Medical University.  The existence of these problems seems to have been confirmed by the revelations that the university president was accepting hundreds of thousands of dollars of outside income which he failed to properly disclose to the university, and which could have constituted major conflicts of interest.  The seriousness of the leadership problems was just underscored by the recent decision of the state Comptroller to audit the institution's contracts (look here for the AP story), and by the sudden resignation of the university's senior vice president for administration and finance, who apparently was receiving pay - which, admittedly, he had disclosed - from the same organizations that were paying the president (see the story by Mr Mulder here).


 One obvious conclusion is that bad leadership leads to lots of bad consequences.

Another is that here in these United States we lack any effective watchdogs who can spot major leadership problems at important health care organizations before they lead to bad outcomes. 

In particular, the leadership of our academic medical organizations does not seem to get any organized scrutiny from student, faculty or alumni groups, from government agencies, from accrediting organizations, from medical societies, or from community-based civil society organizations or health care watchdog groups.  Hoping that overworked local reporters or volunteer bloggers will be able both to spot evidence of trouble in a timely way and to make it visible enough to generate action is whistling past the cemetery.



 
IMHO, concerned citizens - hopefully including those in, and in training for, the health professions - need to set up organized civil society watchdog groups to hold health care leadership accountable, and to push for involvement by government, accrediting organizations, medical societies, etc.

Saturday, November 09, 2013

"We’ve resolved 6,036 issues and have 3,517 open issues": Extolling EPIC EHR Virtues at University of Arizona Health System

The public may believe that, in healthcare, only the Obamacare insurance exchange website has lots of bugs.  On that score, see my Oct. 10, 2013 post "Drudge Report, Oct. 10, 2013, 9 AM EST: All that needs to be said about government, computing and healthcare" at http://hcrenewal.blogspot.com/2013/10/drudge-report-oct-10-2013-9-am-est-all.html.

Electronic medical records - another pillar of the federal health care reform effort, promoted with incentives for adopters and penalties for non-adopters via the HITECH section of the 2009 economic recovery act (ARRA) - are pretty damn bad themselves.  The difference is that those systems don't merely make it hard to find insurance.  Through bugs and other features of bad health IT, they directly interfere with safety and the provision of quality care:

Bad Health IT ("BHIT") is ill-suited to purpose, hard to use, unreliable, loses data or provides incorrect data, is difficult and/or prohibitively expensive to customize to the needs of different medical specialists and subspecialists, causes cognitive overload, slows rather than facilitates users, lacks appropriate alerts, creates the need for hypervigilance (i.e., towards avoiding IT-related mishaps) that increases stress, is lacking in security, compromises patient privacy or otherwise demonstrates suboptimal design and/or implementation. 

At my Oct. 20, 2010 post "Medical center has more than 6000 'issues' with Cerner CPOE system in four months - has patient harm resulted?" (http://hcrenewal.blogspot.com/2010/10/medical-center-has-more-than-6000.html) I observed:

From the October 2010 "News for Physicians affiliated with Munson Medical Center" newsletter, a large medical center in Northern Michigan, about more than six thousand "issues" with their Cerner CPOE.

... One wonders how many of those 6,000, and how many of the 600 remaining "issues" fall into categories of "likely to cause patient harm in short term if uncorrected" or "may cause patient harm in medium or long term."

I note that Cerner CPOE is not a new product, nor are the similar products from other vendors that are also afflicted with long lists of "issues." That there could be more than 6,000 "issues" at a new site suggests deep-rooted, severe problems with CPOE specifically and with health IT design and implementation processes in general.

Here's another such multi-"issue"-laden EHR, this one at the University of Arizona Health Network.  An image of one of its frequent, periodic "EHR Update" notices appears below.



"We’ve resolved 6,036 issues and have 3,517 open issues."

[Ignore the 'kewl dark sunglasses' worn by the hipsters at the top of this announcement.  Not sure if this has something to do with EPIC, but I consider the wearing of dark sunglasses by clinicians or any other staff in a hospital setting - where people are sick and/or dying - to be in exceptionally bad taste.]

The text starts:

ISSUES UPDATE as of 4:00 p.m., Nov. 8
We’ve resolved 6,036 issues and have 3,517 open issues.

That's a total of nearly ten thousand "issues."  As of now, that is.  "Issue" is a euphemism for "glitch," a.k.a. "software defect" and/or "implementation error"; see http://hcrenewal.blogspot.com/search/label/glitch.

These "issues" are in a supposedly "mature" product - one that has undergone "innovation" for several decades now, and for which this organization has spent enormous sums of money - in an environment free from regulation, I might add.

Many of the "issues" reduce patient safety, and could cause - or may already have caused - patient harm.  Such items on this frequently updated listing, seen below, include (a brief sketch of how such "mapping" errors can arise follows):
  • Pharmacy Medication Mapping Errors – Making good progress: watch for further notices.  [Perhaps these should have been tested and fixed before go-live? - ed.]
  • Microbiology Results Mapping Incorrectly [does that mean "mapping" to the wrong patient? - ed.]  – all known errors fixed, monitoring and working on enhancements. [As above, perhaps these should have been tested and fixed before go-live? - ed.]
  • Prescription printing - output for prescription printing has been fixed
  • Refill requests for providers will be routed to the CLIN SUPPORT In Basket pool for the provider’s department.  This was a decision made by UAHN leadership. [Not sure why this is being done; perhaps for approval by managers? - ed.] 
  • Errors transmitting prescriptions will also be sent to the CLIN SUPPORT In Basket.  [Errors transmitting prescriptions? That's not reassuring regarding data integrity.  See ECRI report below  - ed.]

This is not to mention that all of the "reminders" that follow are a distraction to clinical personnel, who cannot be expected to remember all of them.
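
To make the "mapping" items above concrete, here is a minimal, hypothetical sketch of how an interface-to-EHR code mapping can silently mis-file results. The codes, names and behavior shown are invented by me for illustration; they are not Epic's or UAHN's actual tables or logic.

```python
# Illustrative sketch only - invented codes; not Epic's or UAHN's actual design.
# A typical go-live "mapping error": the table that translates interface feed
# codes into internal EHR test codes is incomplete or wrong, so results are
# mis-filed or dropped unless the software refuses to guess.

INTERFACE_TO_EHR_TEST_CODE = {
    "MICRO-BCX": "BLOOD_CULTURE",
    "MICRO-UCX": "URINE_CULTURE",
    # "MICRO-CSF" is missing - the kind of gap a rushed go-live can leave behind
}

def file_result(interface_code: str, value: str) -> str:
    """Map an incoming result to an internal test code, refusing to guess."""
    ehr_code = INTERFACE_TO_EHR_TEST_CODE.get(interface_code)
    if ehr_code is None:
        # Safer behavior: hold the result for human review rather than
        # defaulting to the "closest" test or silently discarding it.
        raise LookupError(f"No mapping for {interface_code}; hold for manual review")
    return f"Filed {value!r} under {ehr_code}"

print(file_result("MICRO-BCX", "No growth at 48 hours"))

try:
    print(file_result("MICRO-CSF", "Gram-positive cocci seen"))
except LookupError as err:
    print(f"Exception queue: {err}")
```

Whether a real system fails loudly (an exception queue) or quietly (mis-filed results) is exactly what should have been verified before go-live, not discovered afterward.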

Bad as this is, at my April 1, 2012 post "University of Arizona Medical Center, $10 million in the red in operations, to spend $100M on new EHR system" (http://hcrenewal.blogspot.com/2012/04/university-of-arizona-medical-center-10.html) I observed that:

... $100 million+ is probably enough to pay for AN ENTIRE NEW HOSPITAL or hospital wing ... or a lot of human medical records professionals.

To add more bitter icing to this cake, I wrote about a campaign for clinicians to speak only in wonderful terms about the new U. Arizona Health System EHR at my Oct. 3, 2013 post "Words that Work: Singing Only Positive - And Often Unsubstantiated - EHR Praise As 'Advised' At The University Of Arizona Health Network."  I observed the following about the "words that work" in the shameless 'suggested' script:

Efficient - see aforementioned links as well as "Common Examples of Healthcare IT Difficulties" at http://cci.drexel.edu/faculty/ssilverstein/cases/

Convenient - as above.  According to whom?  Compared to what?  Pen and paper?

Improves patient safety and quality - see IOM report post at http://hcrenewal.blogspot.com/2011/11/iom-report-on-health-it-safety-nix-fda.html .  We as a nation are only now studying safety of this technology, and the results are not looking entirely convincing, e.g. ECRI Deep Dive Study of health IT safety at http://hcrenewal.blogspot.com/2013/02/peering-underneath-icebergs-water-level.html.  171 health IT mishaps in 36 hospitals, voluntarily reported over 9 weeks, with 8 reported injuries and 3 reported possible deaths is not what I would call something that "improves patient safety and quality" without qualifications.

The Cadillac of its kind - according to whom?

Patients at hospitals using this system love it -  Do most patients even know what it, or any EHR, looks like?  Have they provided informed consent to its use?

Exciting - clinician surveys such as by physicians at http://hcrenewal.blogspot.com/2010/01/honest-physician-survey-on-ehrs.html and by nurses at http://hcrenewal.blogspot.com/2013/07/candid-nurse-opinions-on-ehrs-at.html shed doubt on that assertion.

The best thing for our patients - again, according to whom?

Sophisticated new system - "New"?  Not so much, just new for U. Arizona Health.  "Sophisticated," as if that's a virtue?  Too much "sophistication" is in part what causes clinician stress and burnout, raising risk.

Considering the nearly 10,000 issues, the new ECRI Institute report "Top Ten Technology Hazards in Healthcare," 2014 edition, comes to mind (https://www.ecri.org/Press/Pages/2014_Top_Ten_Hazards.aspx).  Named in that report, as has been the case for the past several years, is healthcare IT.

This year's problem description is:

#4. Data Integrity Failures in EHRs and other Health IT Systems

"Data integrity failures" include "issues" (per the bad health IT description) such as: data loss, data corruption, data attributed to the wrong patient, etc.

ECRI Institute, a nonprofit organization, dedicates itself to bringing the discipline of applied scientific research to healthcare to discover which medical procedures, devices, drugs, and processes are best to enable improved patient care. As pioneers in this science for 45 years, ECRI Institute marries experience and independence with the objectivity of evidence-based research. Strict conflict-of-interest guidelines ensure objectivity. ECRI Institute is designated an Evidence-based Practice Center by the U.S. Agency for Healthcare Research and Quality. ECRI Institute PSO is listed as a federally certified Patient Safety Organization by the U.S. Department of Health and Human Services. For more information, visit www.ecri.org.

ECRI also produced the 2012 Deep Dive Study of Health IT Risk (http://hcrenewal.blogspot.com/2013/02/peering-underneath-icebergs-water-level.html), where, in a volunteer study at 36 member PSO hospitals, 171 health IT "mishaps" were reported in just 9 weeks, 8 of which caused patient injury and 3 of which may have contributed to patient death.
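
Given the "data attributed to the wrong patient" failure mode named above, here is a minimal sketch of the kind of defensive cross-check that can catch it: refuse to file a result unless its identifiers match the originating order. The record shapes, field names and values below are my own invention for illustration; they are not ECRI's recommendation or any particular vendor's actual interface.

```python
# Illustrative sketch only - invented record shapes and field names.
# A defensive cross-check against the "data attributed to the wrong patient"
# failure mode: a result is filed only if it matches the originating order.

from dataclasses import dataclass

@dataclass
class Order:
    order_id: str
    patient_mrn: str  # medical record number

@dataclass
class Result:
    order_id: str
    patient_mrn: str
    value: str

def safe_to_file(order: Order, result: Result) -> bool:
    """File only when both the order ID and the patient identifier agree."""
    return (order.order_id == result.order_id
            and order.patient_mrn == result.patient_mrn)

order = Order(order_id="ORD-1001", patient_mrn="MRN-777")
result = Result(order_id="ORD-1001", patient_mrn="MRN-778", value="K+ 6.1 mmol/L")

if safe_to_file(order, result):
    print("Result filed")
else:
    print("Identifier mismatch: hold result for manual reconciliation")
```

Simple checks of this kind are cheap to write; whether production EHR interfaces perform them consistently, and what happens when they fail, is exactly the sort of question the ECRI findings raise.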

In summary, the University of Arizona Health System, with components in the red, is spending hundreds of millions of dollars on an EHR system that has had decades to mature.  Yet it has already found nearly 10,000 "issues," a number of which reduce patient safety and remain unresolved, with many more likely to be found.

They are also 'advising' their staff to speak in glowing, unsubstantiated terms to patients about an EHR system with nearly 10,000 issues, while not seeking patient consent to its use in mediating and regulating their care - or giving elective patients the information that might allow them to choose another, less "buggy" hospital.

If (when) patient harm results from such cavalier hospital (mis)management, the juries are going to just love the dark sunglasses, I bet.

-- SS