Friday, March 23, 2012

Pseudo-Evidence Based Medicine Should be a Global Health Concern

We have frequently advocated for evidence-based medicine (EBM), that is, medicine based on judicious use of the best available evidence from clinical research, critically reviewed and derived from systematic search, combined with biomedical knowledge and understanding of patients' values and preferences. However, EBM risks being turned into pseudo-evidence based medicine through systematic manipulation and distortion of the clinical research evidence base. Dr Wally Smith wrote about pseudo-evidence based medicine, which he defined as "the practice of medicine based on falsehoods that are disseminated as truth," in the British journal Clinical Governance (Smith WR. Pseudoevidence-based medicine: what it is, and what to do about it. Clinical Governance 2007; 12: 42-52. Also see this post).

Now it appears that this issue is causing concern in the Cochrane Collaboration, the main voluntary international group promoting EBM. An article from December 2011 in the Indian Journal of Medical Ethics outlined why we need to question the trustworthiness of the global clinical evidence base. (See Tharyan P. Evidence-based medicine: can the evidence be trusted? Ind J Med Ethics 2011; 8: 201-207. Link here.) It merits further review.

The Role of Vested Interests

Dr Tharyan, its author, emphasized that a major threat to the integrity of the clinical research database is the influence on clinical research of those with vested interests in marketing particular products, such as drugs and devices.
The motives for conducting research are often determined by considerations other than the advancement of science or the promotion of better health outcomes. Many research studies are driven by the pressure to obtain post-graduate qualifications, earn promotions, obtain tenured positions, or additional research funding; many others are conducted for financial motives that benefit shareholders, or lead to lucrative patents.

The Importance of Pervasive Conflicts of Interest

Early on, the author discussed how the distortion of the clinical research database arises from the pervasive web of conflicts of interest in medicine and health care:
This hijacked research agenda perpetuates further research of a similar nature that draws more researchers into its lucrative embrace, entrenching the academic direction and position statements of scientific societies and academic associations. Funders and researchers are also deterred from pursuing more relevant research, since the enmeshed relationship between academic institutions and industry determines what research is funded (mostly drugs at the expense of other interventions), and even how research is reported; thus hijacking the research agenda even further away from the interests of science and society.

The article then goes on to show how the influence of vested interests may distort the design, implementation, analysis, and dissemination of research.

Distortion of Research Design and Implementation

The Research Question

Dr Tharyan noted
The majority of clinical trials conducted world-wide are done to obtain regulatory approval and a marketing licence for new drugs. These regulations often require only the demonstration of the superiority of a new drug over placebo and not over other active interventions in use. It is easier and cheaper to conduct these trials in countries with lower wages, lax regulatory requirements, and less than optimal capacity for ethical oversight. It is therefore not surprising that the focus of research does not reflect the actual burden of disease borne by people in the countries that contribute research participants, nor address the leading causes of the global burden of disease. Some 'seeding' trials, conducted purportedly for the purpose of surveillance for adverse effects, are often only a ploy to ensure brand loyalty among participating clinician-researchers.

Insufficient Sample Size

The article stated,
Many trials do not report calculations on which the sample size was estimated, often leading to sample sizes insufficient to detect even important differences between interventions (for primary, let alone secondary outcomes)

I would add that such small trials are particularly bad at detecting important adverse effects of the interventions being promoted.
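To make the sample-size point concrete, here is a back-of-the-envelope sketch using the standard normal-approximation formula for comparing two proportions. The specific event rates are hypothetical illustrations, not figures from the article, but they show why a trial sized only to detect efficacy is usually far too small to detect a doubling of a rare adverse effect:

```python
import math

def sample_size_per_arm(p1, p2, z_alpha=1.96, z_beta=0.84):
    """Approximate patients needed per arm to detect a difference
    between two event proportions, via the normal approximation:
    n = (z_alpha/2 + z_beta)^2 * (p1(1-p1) + p2(1-p2)) / (p1 - p2)^2
    Defaults: two-sided 5% significance (z=1.96), 80% power (z=0.84).
    """
    n = (z_alpha + z_beta) ** 2 * (p1 * (1 - p1) + p2 * (1 - p2)) / (p1 - p2) ** 2
    return math.ceil(n)

# Detecting a plausible efficacy difference (50% vs 40% response)
# takes a few hundred patients per arm...
print(sample_size_per_arm(0.50, 0.40))  # → 385

# ...but detecting a doubling of a rare adverse effect (1% vs 2%)
# takes thousands per arm.
print(sample_size_per_arm(0.01, 0.02))  # → 2313
```

A trial of a few hundred patients per arm can thus be adequately powered for its efficacy endpoint while having essentially no power to detect a serious but uncommon harm.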

Excessively Stringent Enrollment Criteria

As Dr Tharyan wrote,
Most RCTs funded by industry and academia are designed to demonstrate if a new drug works, for licensing and marketing purposes. In order to maximise the potential to demonstrate a 'true' drug effect, homogenous patient populations; placebo controls; very tight control over experimental variables such as monitoring, drug doses, and compliance; outcomes addressing short term efficacy and safety; and methods to minimise bias required by regulatory agencies are used to demonstrate if, and how, the drug works under ideal conditions.

The problem is that very few patients in clinical practice resemble those enrolled in such trials, so the generalizability of the trials' results is actually dubious. Ideally, clinicians practicing evidence-based medicine could refer to trials that include patients similar to those for whom they care.
Practical or pragmatic clinical trials are designed to provide evidence for clinicians to treat patients seen in day-to-day clinical practice, and evaluate their effectiveness under 'real-world' conditions. These trials use few exclusion criteria and include people with co-morbid conditions, and all grades of severity. They compare active interventions that are standard practice, and in the flexible doses and levels of compliance seen in usual practice. They utilise outcomes that clinicians, patients, and their families consider important, such as satisfaction, adverse events, return to work, and quality of life. Recommendations exist on their design and reporting, but such trials are rare.

Comparisons to Placebo, Not to the Other Interventions Clinicians Might Realistically Consider

Per the article,
Industry sponsored trials rarely involve head-to-head comparisons of active interventions, particularly those from other drug companies, thus limiting our ability to understand the relative merits of different interventions for the same condition.

The result may thus be a number of studies showing particular interventions appear to be better than nothing, but few if any studies that would help clinicians decide which intervention would be best for a particular patient.

Inappropriate Comparators

However, when studies are done comparing the intervention of interest to other active interventions, the choice of comparators is often managed so that the comparators are likely to appear worse.
Even if active interventions are compared in industry-sponsored trials, the research agenda has devised ways in which the design of such trials is manipulated to ensure superiority of the sponsor’s drug. If one wants to prove better efficacy, then the comparator drug is a drug that is known to be less effective, or used in doses that are too low, or used in non-standard schedules or duration of treatment. If one wants to show greater safety, then the comparator is a drug with more adverse effects, or one that is used in toxic doses. Follow up, also, is typically too short to judge effectiveness over longer periods of time.

Meaningless Outcomes

A favorite tactic used in the design of trials influenced by vested interests is to choose outcomes that are likely to show results favorable to the product being promoted, but have no meaning for patients or clinicians. There are three ways this is commonly done.

The first is to use rating scales that are very sensitive to small perturbations, but not meaningful in terms of how patients feel or function:
The choice of outcome measures used often ensures statistically significant results in advance, at the expense of clinically relevant or clinically important results. Outcomes likely to yield clinically meaningless results include the use of rating scales (depression, pain, etc.). These scales yield continuous measures usually summarised by means and standard deviations, rather than the dichotomous measures clinicians use such as: clinically improved versus not improved. These rating scales, however extensively validated, are hardly ever used in routine clinical practice. A difference of a few points on these scales results in statistically significant differences (low p values), that have little clinical significance to patients.

Another is to use surrogate outcomes,
Other outcomes commonly used are surrogate outcomes; outcomes that are easy to assess but serve only as proxy indicators of what ought to be assessed, since the real outcome of interest may take a long time to develop. These are mostly continuous measures that require smaller sample sizes (blood sugar levels, blood pressure, lipid levels, CD4 counts, etc.). These measures easily achieve statistical significance but do not result in meaningful improvements (reduction in mortality, reduction in complications, improved quality of life) in patients’ lives, when the interventions are used (often extensively) in clinical practice.

Finally, there are composite outcomes,
The use of composite outcomes, where many outcomes (primary, secondary and surrogate outcomes) are clubbed together (e.g.: mortality, non-fatal stroke, fatal stroke, blood pressure, creatinine values, rates of revascularisation) as a single primary outcome, can also mislead. Such trials also require smaller sample sizes, and increase the likelihood of statistically significant results. However, if the composite outcome includes those of little clinical importance (lowered blood pressure, or creatinine values), the likelihood of real benefit (reduction in mortality, or strokes, or hospitalisation) and the potential for harm (increase in non-fatal strokes or all-cause mortality) are masked.

Distortion of Analysis

Relative Instead of Absolute Risks

A favorite trick is to emphasize relative risks rather than absolute risks. For example, if a treatment reduces the risk of a bad outcome from 2 in 10,000 patients (0.02%) to 1 in 10,000 patients (0.01%), the relative risk reduction is 50%, but only 1 of every 10,000 patients treated experiences a benefit.
The use of estimates of relative effects of interventions, such as relative risks (RR) and odds ratios (OR) with their 95% confidence intervals, provides estimates of relative magnitudes of the differences and whether these exclude chance, as well as if these differences were nominal or likely to be clinically important.

However, even relative risks can be misleading since they ignore the baseline risk of developing the event without the intervention. The absolute risk reduction (ARR) is the difference in risk of the event in the intervention group and the control group, and is more informative since it provides an estimate of the magnitude of the risk reduction, as well the baseline risk (the risk without the intervention, or the risk in the control group). Systematic enquiry demonstrates that on average, people perceive risk reductions to be larger and are more persuaded to adopt a health intervention when its effect is presented as relative risks and relative risk reduction (a proportional reduction) rather than as absolute risk reduction; though this may be misleading.
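The arithmetic behind that distinction is simple enough to sketch, using the hypothetical 2-in-10,000 versus 1-in-10,000 example above:

```python
def risk_summary(risk_control, risk_treated):
    """Summarise a treatment effect three ways: relative risk
    reduction (RRR), absolute risk reduction (ARR), and number
    needed to treat (NNT = 1 / ARR)."""
    arr = risk_control - risk_treated
    rrr = arr / risk_control
    nnt = 1 / arr
    return rrr, arr, nnt

# Bad outcome in 2 of 10,000 controls vs 1 of 10,000 treated patients.
rrr, arr, nnt = risk_summary(0.0002, 0.0001)
print(f"RRR = {rrr:.0%}")   # "50%" -- sounds impressive in an advertisement
print(f"ARR = {arr:.4%}")   # "0.0100%" -- one patient in 10,000
print(f"NNT = {nnt:.0f}")   # about 10,000 treated for each patient who benefits
```

The same trial result can honestly be described as "cuts risk in half" or as "10,000 patients must be treated for one to benefit"; only the second framing conveys the baseline risk.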

Sub-Group Analysis

Another statistical trick used to present favourable outcomes for interventions is the use of spurious subgroup analyses, where observed treatment effects are evaluated for differences across baseline characteristics (such as sex, or age, or in other subpopulations). While they are useful, if limited to a few biologically plausible subgroups, specified in advance, and reported as a hypothesis for confirmation in future trials; they are often used in industry-sponsored trials to present favourable outcomes, when the primary outcome(s) are not statistically significant.
A major statistical point is that the more ways one subdivides the study population, the more likely it is to find a difference between those receiving different interventions by chance alone.

Distortion of Dissemination

Poor Exposition

As noted by Dr Tharyan,
Evidence also shows that the published reports are not always consistent with their protocols, in terms of outcomes, as well as the analysis plan, and this again is determined by the significance of the results. Harms are very poorly reported in trials compared to results for efficacy; and are also often suppressed or minimised.


Also, those with vested interests may try to ensure maximally favorable dissemination by employing ghost-writers,
Other tactics used to influence evidence-informed decision making include ghost-writing, where pharmaceutical companies hire public-relations firms who 'ghost-write' articles, editorials, and commentaries under the names of eminent clinicians; a strategy that was detected by one survey in 75% of industry-sponsored trials, where the ghost author was not named at all, and in 91% when the ghost author was only mentioned in the acknowledgement section. Detecting such conflicts of interest is difficult, since they are rarely acknowledged due to the secrecy that shrouds the nexus between academia and industry in clinical trials.

Industry-sponsored trials often place various constraints on clinical investigators on publication of their results; these publication arrangements are common, allow sponsors control of how, when, and what is published; and are frequently not mentioned in published articles.
Ghost-writers under the direct control of those with vested interests could more efficiently bias their writing in favor of their sponsors than could even academics constrained by contractual obligations to sponsors.

Suppression of Research

When all else fails, the crudest form of distorted dissemination is suppression of studies that, despite all the manipulations noted above, fail to produce favorable results for the product being promoted:
A considerable body of work provides direct empirical evidence that studies that report positive or significant results are more likely to be published; and outcomes that are statistically significant have higher odds of being fully reported, particularly in industry funded trials.

By the way, the latest demonstration of suppression of research was a study by Turner et al (Turner EH, Knoepflmacher D, Shapley L. Publication bias in antipsychotic trials: an analysis of efficacy comparing the published literature to the US Food and Drug Administration database. PLoS Medicine 2012; 9(3): e1001189. Link here.) Of 24 trials of atypical antipsychotics registered in the US FDA database, 20 were published and 4 were not. According to the FDA review, 15 of the 20 published trials (75%) were positive, that is, had results in favor of the sponsors' drugs, but only 1 of the 4 unpublished trials (25%) was positive. In other words, 15 of 16 positive trials were published, but only 5 of 8 non-positive ones were.
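That asymmetry is the signature of publication bias; tabulating the counts from the Turner et al study makes it explicit:

```python
# Counts from the Turner et al 2012 analysis of 24 FDA-registered
# trials of atypical antipsychotics: publication status by whether
# the FDA review judged the trial's results positive.
trials = {
    "positive":     {"published": 15, "unpublished": 1},
    "non_positive": {"published": 5,  "unpublished": 3},
}

for result, counts in trials.items():
    total = counts["published"] + counts["unpublished"]
    rate = counts["published"] / total
    print(f"{result}: {counts['published']}/{total} published ({rate:.0%})")
# positive: 15/16 published (94%)
# non_positive: 5/8 published (62%)
```

A clinician reading only the published literature would therefore see a far rosier picture of these drugs than the full regulatory record supports.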


Thus, Dr Tharyan provided a nice summary of the many ways the clinical research database can be manipulated or distorted to serve vested interests. Such distortions risk transforming evidence-based medicine into pseudo-evidence based medicine.

We have repeatedly discussed (see links above) most of these issues. Because we are located in the US, and speak mainly English, this blog may have given the impression that the lack of trustworthiness of the clinical research evidence base is primarily a US problem. The article by Tharyan emphasizes, however, that it is a global problem.

While this problem now seems to have the attention of the Cochrane Collaboration, it remains relatively anechoic in global health circles. It appears to be no more prominent on the agendas of global health organizations than is health care corruption (look here). Yet the two issues are highly related. Most of the distortions in the global clinical evidence database may be driven by conflicts of interest, and conflicts of interest are risk factors for corruption. Distortions of the global clinical research database that lead to the use of expensive but ineffective or dangerous interventions, when other options would work as well or better, cause needless suffering and death, and, by unnecessarily raising the costs of care, decrease access, especially for the poor.

Also note that conflicts of interest may be one reason that all these problems remain so anechoic (look here).

True global health care reform requires addressing health care corruption, but also conflicts of interest and their role in the distortion and manipulation of the clinical research database. I still live in hope that some academic health care institutions, professional societies, health care charities and donors, and patient advocacy groups will gain enough fortitude to stand up for accountability, integrity, transparency, and honesty in health care.


Steve Lucas said...

You may find this link of interest:

Just remember the number $2 billion.

Steve Lucas

Judy B said...

And my family and friends wonder why I don't trust the medical industry?

EMR said...

Good information here. Ill health is bad, but it is worse if you are not equipped with the right information to handle or control it.

Marilyn Mann said...

Four commentaries in the March 2012 issue of Circulation: Cardiovascular Quality and Outcomes discuss data sharing as a partial remedy for bias in the medical literature. See my post here:

InformaticsMD said...


Harlan is right. What he suggests is quite sound, certainly from the medical informatics perspective.

Can/will it happen in the current medical environment? That remains to be seen.

Years ago at Merck, I was working as the MI domain expert with a small committee that was discussing making clinical trials data publicly available.

You can be certain of my input to that committee.

I suspect I was not the most popular person on that committee.

-- SS

Unknown said...

Is it time medicine got away from this practice and found the epistemologic cornerstone for biology in evolution, and not in an RCT that has cooked data? It is killing millions of patients yearly.

Roy M. Poses MD said...


I don't see randomized controlled trials as a fundamental ill. I see badly done trials done for the wrong reasons as one.

I don't understand what you wrote about the "epistemologic cornerstone for biology in evolution."