Unfortunately, such manipulation often seems to escape public notice. What skepticism it does generate usually gets little circulation, an example of the anechoic effect. Very rarely do the people responsible for a trial deign to address skeptical criticism.
However, we recently noted that cogent criticism of a newly published trial got some circulation, leading to a dialogue with the trial's principal investigator. The exchange seemed to show why those involved with manipulated sponsored trials so often simply ignore criticism.
Introduction - the PARADIGM-HF Trial of Valsartan-Sacubitril
As we recently posted, based on a new article now online in the New England Journal of Medicine, a combination of a new drug, sacubitril, in a new class, neprilysin inhibitors, with an older drug, valsartan, an angiotensin receptor blocker (ARB), has been hailed as a "game changer" for patients with heart failure. However, although the study (entitled PARADIGM-HF) had many good features, it also had some major problems which made its interpretation difficult, and made the hype about "new hope" seem excessive. Unbeknownst to me when I wrote the post, some pithy overlapping criticisms of PARADIGM-HF by Dr Vinay Prasad were posted on CardioExchange.
Surprisingly, Dr Prasad's post elicited a lengthy comment by Dr Milton Packer, the principal investigator of PARADIGM-HF, defending the study's methods. This resulted in a back-and-forth between him and Dr Prasad. (Available by subscription only.) This seems to be one of those rare instances in which a pillar of the medical establishment was willing to defend the way things are done these days in health care, and in this case, the way commercially sponsored randomized controlled trials are designed.
In my humble opinion, this exchange illustrated one reason that most criticisms about flaws in commercially funded clinical research get the silent treatment: there really are no good explanations for them, other than that they resulted from the intention to increase the likelihood that the sponsors' products would look better than they really are.
Let us consider in detail some of the written comments by Dr Packer addressing two major criticisms by Dr Prasad.
The Question about the Choice of Comparator
Dr Prasad and I both questioned the choice of the drug to which valsartan-sacubitril was compared. Dr Prasad wrote,
In PARADIGM-HF, oral enalapril was dosed up to 10 mg twice daily, whereas LCZ696 was dosed up to 200 mg twice daily (which includes a cumulative 320 mg of valsartan). The problem is that 320 mg is the maximum HF dose of valsartan per drug labeling, but enalapril can be dosed up to 40 mg daily (20 mg twice daily) — double the maximum dose proscribed per protocol.
In effect, drug dosing in PARADIGM-HF was a 'straw man' comparison. The reported outcomes may be entirely a consequence of more ARB versus less ACE inhibitor. That is reason enough to doubt the findings. Sacubutil, the novel drug, could have been a sugar pill, and the results may well have turned out the same. But there are two more good reasons to be skeptical.
Note that in effect Dr Prasad charged that the entire trial was based on a logical fallacy, the "straw man" fallacy.
Dr Packer's Response: Red Herrings, Ad Hominem Fallacies, and Appeals to Authority
Red Herring - Comparison to Trial with a Different Patient Population
Dr Packer made several responses to this criticism. First, he asserted that using the maximum dose of enalapril as a target dose would have been inappropriate,
Dr. Prasad proposes that the dose of enalapril was too low, and we should have used 40 mg daily of enalapril as a comparator. However, when 40 mg of enalapril daily has been used in a clinical trial (CONSENSUS), these extremely high doses were poorly tolerated due to hypotension and renal insufficiency.
However, that appears to be an example of the red herring fallacy. The PARADIGM-HF trial was meant to include patients with mild to severe symptoms of CHF (New York Heart Association classes II - IV), although it actually included a few (about 5%) patients with no symptoms (class I). Indeed, as Dr Prasad pointed out in his later comeback,
Dr. Packer suggests that CONSENSUS trial proves that enalapril 40 cannot be given safely. It is worth noting this trial enrolled only NY Heart Classification IV patients, while these were less than 1% of pts in PARADIGM HF. Many patients in PARADIGM HF might well have been able to tolerate and benefit from enalapril 40mg.
So Dr Packer's argument, based on a trial enrolling only the sickest patients with CHF, seems unlikely to be relevant to a trial of patients with much milder disease.
Red Herring - Physiologic Changes vs Patient-Centered Outcomes
Then, Dr Packer countered Dr Prasad's concern that the design of PARADIGM-HF could not distinguish whether the apparent benefits of valsartan (at maximum dose) plus sacubitril versus enalapril (at a moderate dose) were due to the valsartan alone or to the combination, thus,
Furthermore, Dr. Prasad can provide no evidence whatsoever than valsaratan 160 mg BID produces more blockade of the renin-angiotensin system than enalapril 10 mg BID. It is simply not true.
This seems to be an even better example of the red herring fallacy. The argument is not about the physiological changes the drugs may or may not produce. It is about the design of a clinical trial and how that design could affect interpretation of patient-centered outcomes. Degree of renin-angiotensin system blockade may not directly predict survival, hospitalization, functional status, etc.
Red Herring - References to a Trial of Valsartan in Addition to ACE Inhibitors
Appended to the above, Dr Packer wrote,
In fact, valsartan 160 mg BID does not even have a mortality effect when compared with placebo, whereas enalapril 10 mg BID does have a survival benefit.
It later became apparent that the evidence he felt supported this assertion came from yet another trial with an alphabet soup name, Val-HeFT. But, as Dr Prasad argued, this was yet another red herring,
The VAL-HEFT trial– where Valsartan 160 BID was no better than placebo– occurred in the setting where 92% of patients were already on an ace-inhibitor. As such, it cannot be used to say what the effect of valsartan is among patients not taking an ace-inhibitor, as was the case in PARADIGM-HF.
To explain a bit, the Val-HeFT trial enrolled patients who were nearly all already taking an ACE inhibitor (ACEI), including enalapril. So its data could only speak to whether adding valsartan to an ACEI has an effect, not whether valsartan alone is efficacious in CHF. It does not appear that there has ever been a large, long-term randomized controlled trial testing valsartan versus placebo for CHF. So Dr Packer seemed to have supplied another quite large red herring.
Of course, that raises the question of why PARADIGM-HF only assessed the combination of sacubitril plus valsartan, rather than sacubitril combined with other ARBs. This question was not directly addressed in the exchange between Dr Packer and Dr Prasad. Parenthetically, note that valsartan is sold by Novartis, the sponsor of PARADIGM-HF, as Diovan.
Dr Packer only complicated things later by writing,
if Dr. Prasad dismisses the evidence from Val-HeFT, he eliminates ALL of the evidence that supports the use of valsartan in heart failure. If he sets the Val-HeFT trial aside, what evidence is there that valsartan 160 mg BID does ANYTHING in heart failure?
Note, however, that Dr Packer himself was presumably responsible for the choice of valsartan as the ARB to combine with sacubitril.
In summary thus far, I could not find any instance in the exchange in which Dr Packer logically used evidence to explain why his trial compared valsartan (targeted to maximum dose) plus sacubitril to enalapril (targeted to a moderate dose). Instead, his arguments seemed to consist of multiple examples of the red herring fallacy.
Ad Hominem - Dr Prasad's Degree of Understanding of the Heart Failure Literature
He also threw in some additional general points which appeared rather gratuitously fallacious. To start,
I wish that Dr. Prasad understood the field of heart failure trials better than he does,
I wish Dr Prasad understood the heart failure literature better.
These seem to be examples of the ad hominem fallacy. Rather than addressing the logic and evidence used by Dr Prasad, Dr Packer implied that Dr Prasad simply lacks understanding. Dr Prasad's polite response was,
Dr. Packer could tighten his posts by reducing the number of times he wishes I understood the heart failure literature better.

Appeal to Authority - Dr Packer's and Colleagues' Implied Superior Expertise on the Medical Literature
That did not prevent Dr Packer from coming back with,
I suggested that Dr. Prasad become more familiar with the medical literature because it would save him considerable time in formulating useful arguments.
With this repetition, Dr Packer seems to be not only using the ad hominem fallacy, but implying the fallacy of the appeal to authority. The implication is that Dr Packer clearly is an expert, and Dr Prasad is not, and the expert should be heeded. Just to underline this, Dr Packer later wrote,
Dr. Prasad suggests that others share his concerns. If he were here in Barcelona at the ESC meeting, he would know that that was not the case. However, I realize that It is common for those who seek only to win debates to claim that others agree with them. But Dr. Prasad, wishing that people agree with you does not make it true.
That just makes it worse. The implication is that all the experts in Barcelona agree with Dr Packer, and hence as a group they must be right. By the way, it is obvious from our previous blog post, comments on it, and other comments on the CardioExchange exchange that at least some other people do agree with Dr Prasad.
Appeals to Authority - The New England Journal of Medicine and the US Food and Drug Administration Must Always be Totally Right
Not to leave it there, Dr Packer added as general comments several other appeals to authority. At the end of his first set of comments there was this,
The real lesson of PARADIGM-HF is that combined angiotensin receptor neprilsyin inhibition is superior to inhibition of the renin-angiotensin system alone in patients with chronic heart failure. That is the conclusion of our paper, which passes stringent peer review in the New England Journal of Medicine.
The implication is that no paper published in the New England Journal of Medicine should ever be questioned about anything. Also,
it does not appear that you are aware of the criteria that the FDA uses to evaluate or approve new drugs for cardiovascular disease.
This combined yet another implied ad hominem (about Dr Prasad's supposed lack of awareness) with an appeal to authority: since the trial was designed to meet the FDA's criteria, there must be nothing major wrong with it.
Thus it seemed that Dr Packer's defense of his PARADIGM - HF study's choice of drugs to compare was based almost entirely on a string of logical fallacies, rather than logic and evidence.
The Question of Run-In Period Bias
Dr Prasad's other major criticism of the trial had to do with its use of active run-in periods. He wrote,
The reason why drug run-in periods are problematic is discussed at length in the literature. In short, run-in periods exclude intolerant and nonadherent patients, foster spuriously large treatment effects, and (most troubling) create inclusion criteria that are irreproducible — i.e., that apply to no population we can clearly describe, as reasons for dropout are multifaceted and unique.
Even more concerning is that drug run-in periods test a different question than the one we think we are testing. In PARADIGM-HF, the run-in tested whether sticking with LCZ696 or switching to enalapril is better for HF patients who have taken and tolerated enalapril followed by LCZ696. It effectively turns the trial into a withdrawal study. If stopping LCZ696 is harmful, that counts against enalapril.
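To see why excluding intolerant patients can, in Dr Prasad's words, "foster spuriously large treatment effects," consider a back-of-the-envelope calculation. This is a hypothetical sketch in Python; every number in it is invented for illustration and has nothing to do with the actual PARADIGM-HF data.

```python
# Illustrative sketch of run-in period bias -- all numbers are hypothetical,
# not taken from PARADIGM-HF or any real trial.
arr_tolerant = 0.05     # assumed absolute risk reduction in patients who tolerate the drug
frac_intolerant = 0.12  # assumed fraction of patients who cannot tolerate the drug

# With an active run-in, intolerant patients are screened out before
# randomization, so every randomized patient can benefit from the drug.
effect_with_run_in = arr_tolerant

# Without a run-in (closer to an ordinary clinic population), intolerant
# patients are randomized too; they stop the drug and get no benefit,
# diluting the average (intention-to-treat) effect.
effect_without_run_in = (1 - frac_intolerant) * arr_tolerant

print(f"apparent risk reduction with run-in:    {effect_with_run_in:.3f}")
print(f"apparent risk reduction without run-in: {effect_without_run_in:.3f}")
```

Under these made-up assumptions, the run-in design reports a larger effect (0.050 versus 0.044) simply by trimming the study population, before any property of the drug itself enters the picture. That is the sense in which the randomized population no longer matches any population a clinician could identify in advance.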
Dr Packer's Response: Appeals to Common Practice
Dr Packer's main argument in defense of the run-in period involved yet another logical fallacy, the appeal to common practice, for example,
Dr. Prasad seem ill-informed here. Drug run-in periods are not a controversial study-design choice. In fact, this type of design is strongly preferred because it closely mimics clinical practice.
I wish I understood Dr. Prasad’s arguments against run-in periods. We have used them in many heart failure trials, and it was used in the SOLVD Treatment Trial,...
Dr Prasad ultimately responded so as to underline the essence of the fallacy,
The fact that many (and often industry sponsored) studies use drug run in periods is not a justification for their use.
The recently published paper reporting the results of PARADIGM-HF has already generated considerable media hype (and an uncritical editorial) proclaiming valsartan-sacubitril a new wonder drug for congestive heart failure. While the trial was not without good features, several critics, including Dr Vinay Prasad and yours truly, suggested the study had multiple problems which make its results difficult to interpret. The principal investigator of the study, Dr Milton Packer, chose to publicly defend his trial, yet so far his defense seems built more on logical fallacies than on logic and evidence. After his remarks in defense of the trial, the hype seems no more justified than it did before.
Not only was PARADIGM-HF sponsored by Novartis, but many of its investigators had ties to Novartis and other pharmaceutical companies. Dr Packer should be applauded for clearly disclosing, in his dialogue with Dr Prasad, the number of companies with which he works.
Competing interests: Personal fees from AMAG, Amgen, BioControl, CardioKinetix, CardioMEMS, Cardiorentis, Daiichi, Janssen, Novartis, and Sanofi.
However, not only is it likely that financial relationships with commercial health care firms influence health care professionals to be more favorably disposed to these firms' products, but also such conflicts of interest may cause conflicted, and hence confused thinking. As I have noted before, Dr Joe Collier said, "people who have conflicts of interest often find giving clear advice (or opinions) particularly difficult." [Collier J. The price of independence. Br Med J 2006; 332: 1447-9. Link here.]
This all adds to the argument that society needs to reconsider its delegation of the responsibility for much clinical research to the companies that make the drugs, devices, and other goods and services used in health care. The temptation for them to manipulate the results to improve their marketing is too great. The temptation for the health care professionals involved to go along to get along with the rich sponsors is too great. It may be less profitable for some individuals, but it would be much better for patients' and the public's health if research involving people, particularly experiments (clinical trials) involving patients, were directly funded by, and designed, implemented, and analyzed by people without vested interests in the results turning out in favor of particular commercially produced goods or services.