Friday, March 24, 2006

More Questions, Few Answers About TGN 1412

The disastrous results of the phase I trial of TGN 1412 (see our most recent post here) have led to a news article in the British Medical Journal, and opinion pieces in the BMJ and the Lancet.

None of these offers important new answers. The Lancet editorial noted that the journal's request for the protocol of the trial was rebuffed by TeGenero, the manufacturer of TGN 1412, and by the UK Medicines and Healthcare products Regulatory Agency (MHRA) as "commercially sensitive."

The BMJ editorial listed the questions raised by this incident, which somewhat parallel the list of things we did not know about the trial at the time of our first post on TGN 1412. The questions were:
  • How were the volunteers recruited and motivated?
  • How much accurate information, based on full risk analysis, do volunteers receive?
  • How much money is too much, and when does money cloud the judgment needed to evaluate risks realistically?
  • Why was the drug tested on healthy volunteers rather than patients?
  • Why were all eight volunteers given the drug at the same time? [Actually, only six were given the drug, while two others got a placebo - Ed]
  • What information did the ethical and regulatory bodies have before the trial?
  • How much do regulatory and ethical bodies have to rely on information from investigators and sponsors, which may be subject to publication bias, rather than truly independent reviews?
  • Finally, what does this trial tell us about the degree of transparency throughout the process of developing new drugs?
The conclusion to the BMJ article,
This tragedy creates one more imperative for an open culture in medical research, a culture that many fear is increasingly losing its way.
and the conclusion of the Lancet article,
the fact that [dreadful events] ... have occurred should lead to maximum transparency to reaffirm trust in clinical trials and their regulation.
mirror the conclusion of our first post about TGN 1412, "research subjects and future patients deserve complete transparency about the drug or device to be tested, and how the testing will be performed and supervised."
Maybe if enough people repeat this sentiment, it will actually have an effect on the increasingly opaque world of drug development and evaluation.

1 comment:

Kevin T. Keith said...

How much money is too much, and when does money cloud the judgment needed to evaluate risks realistically?

This is a common question in research ethics, but it always strikes me as ill-formed.

The question implies that there is some "tipping point" at which money "clouds the judgment" of patients - that, somehow, a modest reimbursement for volunteering is incidental to their evaluation of the risks of the experiment, but above some certain level the reimbursement starts to influence their thinking until it reaches the point that they will begin to accept higher risks than otherwise, or, in the worst case, "do it for the money". This seems to me a very unrealistic picture of human decision-making.

It seems obvious to me that reimbursement is always a factor in the decision to volunteer for a paid experiment, no matter what other circumstances obtain. That is, for almost any patient, a given experiment with some attached reimbursement is more attractive than that same experiment with no reimbursement, no matter what else (whether the patient is healthy or seeking an experimental cure, whether the experiment is high- or low-risk, whether the reimbursement is high or low). The more reimbursement, the more attractive the experiment is - but there is no "tipping point", no level (above zero) at which reimbursement plays no role in the patient's decisions, and no level at which it is the only factor in the patient's decision. In other words, all patients who receive any money at all for their consent to an experiment are doing it "for the money", but none is doing it exclusively for the money - and this remains true at any possible level of reimbursement.

So it makes no sense to talk about "clouding patients' judgment" or about their "evaluating risks realistically". What this seems to imply is that there are some very risky experiments that it would be rational to refuse, and that agreeing to those in return for compensation is an irrational decision, or involves a miscalculation of the risks. But there is no reason to assume that. What is much more likely is that the patients have evaluated the risks as realistically as they themselves need to do in order to make a decision, and have then decided that the risk plus the expected reimbursement are an attractive combination for them - just as every patient does for every experiment, no matter how risky or how well reimbursed.

If patients are simply making rational and coherent risk/reward calculations for themselves, it is hard to see why those decisions become "unrealistic" just because the numbers get bigger (i.e., there is both a higher risk and a higher reward involved). And it is also hard to charge that the expected reward has "unduly influenced" a patient's judgment, if that judgment is structurally identical to the judgment the same patient would be making under a low-risk/low-reward scenario.

I am sensitive to the extreme dangers of abuse by taking advantage of patients' poverty or need (the likelihood that very poorly-off patients will accept very high risks for paltry rewards because they have no better alternatives), and the near-certainty that drug companies would eagerly seek out desperate patients and pay them a pittance for the most extreme abuses if they were allowed to do so. But the problem there lies with the experimenters and the economic structure in which research is conducted - not with some supposed psychological inability of poor people to determine their own best interests.