1 Department of Primary Care and Population Sciences, University College London, London N19 5LW, 2 Department of Primary Care and General Practice, University of Birmingham, Birmingham, 3 Department of Psychology, University College London, London
Correspondence to: T Greenhalgh p.greenhalgh@pcps.ucl.ac.uk
Even when good scientific data are available, people's interpretation of risks and benefits will differ
Introduction
The authorities work at the level of the whole population. But individual patients may believe (rightly in some cases) that a particular regulatory decision is not in their own best interests, and vociferous campaigns sometimes result (box 1). Involvement of patients can be a powerful driver for improving services.5 But both lay people and professionals are susceptible to several biases when making health related decisions (box 2). What can be done to ensure that the care of individual patients is not compromised by regulatory decisions intended to protect the population as a whole, and to encourage objective and dispassionate decision making in the face of cognitive biases?
Sources and selection criteria
Individual need versus population level policy
Known susceptibility to adverse effect
For some drugs it is possible to identify in advance which people are going to be susceptible to the adverse effect. A licence might be granted on condition that the drug is absolutely contraindicated in certain high risk groups (such as the under 16s in the case of aspirin, or women of childbearing potential in the case of retinoids for acne). In practice, however, enforcing such restrictions may be impossible, especially in developing countries (box 1, thalidomide). More speculatively, given the emergence of pharmacogenomics,6 future licences for such drugs might be granted on condition that individual patients are tested for susceptibility before a prescription is issued.
Detection of adverse effect by surveillance
The adverse effect of some drugs can be detected at a reversible stage by surveillance. In this situation, the patient can be offered the option of taking the drug and having regular check ups, or not taking it at all—for example, the combined oral contraceptive (blood pressure every six months), penicillamine (monthly urine analysis), and warfarin (regular blood tests). Examples of surveillance programmes being written into drug licensing decisions include clozapine and alosetron (box 1).
Surveillance of large numbers of individuals for extremely rare side effects is a poor use of clinicians' time. In practice, we balance risks and benefits on a case by case basis and prescribe certain drugs only in patients who are more likely than average to benefit—or less likely than average to develop adverse effects. Increasingly, such complex clinical decisions are effectively written into regulatory decisions, as when a drug licence is granted "only for prescription by a specialist"—for example, acitretin in psoriasis, thioridazine in schizophrenia, and corticosteroid eye drops in anterior segment inflammation.
Unique benefit ("named patient")
For some drugs, the adverse effect is not identifiable in advance but some patients with some conditions are likely to benefit uniquely. The drug may then be given a licence on a "named patient" basis—a bureaucratic hurdle that effectively restricts its prescription to tiny numbers of patients. Examples include tiabendazole for strongyloidiasis, ivermectin for scabies, and quinolone ear drops for chronic otitis media.
Cognitive and social influences on risk decisions
We often assume that, when faced with any decision involving a range of possible outcomes, we should subjectively estimate how nice or nasty each outcome will be, weight these by the probability that each outcome will occur, and intuitively choose the option with the highest weighted score. This line of reasoning (known as subjective expected utility theory) implicitly underpins much research into health related decision making.
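The weighting rule that subjective expected utility theory describes can be made concrete in a few lines. The sketch below is purely illustrative: the utilities and probabilities are invented numbers for a hypothetical "take a drug versus no treatment" choice, not figures from the article.

```python
# Minimal sketch of subjective expected utility (SEU).
# All utilities and probabilities below are hypothetical.

def expected_utility(outcomes):
    """Weight each outcome's subjective utility by its probability and sum."""
    return sum(p * u for p, u in outcomes)

# Hypothetical choice: a drug that usually helps but carries a small
# chance of a serious adverse effect, versus no treatment at all.
take_drug = [(0.95, 0.9), (0.05, -1.0)]   # (probability, utility) pairs
no_treatment = [(1.0, 0.4)]

options = {"take drug": take_drug, "no treatment": no_treatment}
best = max(options, key=lambda name: expected_utility(options[name]))
print(best)  # the theory prescribes the option with the highest weighted score
```

The point of the article is precisely that neither patients nor regulators actually decide this way; the sketch only shows what the "rational" baseline computation would be.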
But in reality, neither patients nor the members of regulatory bodies make choices in this fundamentally rational way. Limits to our capacity to process information, for example, prevent us from considering all options, outcomes, and likelihoods at once; those that we do focus on inevitably influence us more. The anxiety that accompanies decisions made under uncertainty, when the outcome may harm us as patients (or, as professionals, get us sued), both narrows our attention further and draws it towards more threatening potential outcomes. Even when not anxious, we tend to use simplification strategies in our perception of probabilities and potential outcomes. We tend to see things as either safe or risky (and tend to be "risk averse"), use rules of thumb ("heuristics") to judge likelihood, and consider losses more serious than gains ("loss aversion"). When trying to imagine how we may feel in the future, we are influenced mainly by our current health state and fail to consider the multiple aspects of future health states or the adaptation to those states that comes with time.7
The well established cognitive biases listed in box 2 help to explain several non-rational influences on drug regulatory decisions and campaigns to overturn them. How information is framed (a treatment that "saves eight lives out of 10" seems better than one that "fails to save two in every 10") is one reason why even objective evidence can be interpreted differently in different contexts.8 The conflation of "natural" with "risk free" is a widely used framing tactic in the herbal medicines industry (box 1, kava; see also figure). The widely reported (but scientifically unproved) link between the MMR (measles, mumps, and rubella) vaccine and autism9 is partly explained by a combination of "availability bias" (in this case, the emotional impact of a severely brain damaged child) and "illusory correlation" (box 2).
Preference for the status quo and illusory correlation explain why both patients and doctors resist change when a regulatory decision requires adjustment in someone's treatment. Doust and del Mar recently reviewed a host of historical examples, from blood letting to giving insulin for schizophrenia, showing that doctors, too, are remarkably resistant to discontinuing a treatment when evidence emerges of lack of efficacy or even potential harm.10
On the other hand, the way we make decisions might be well adapted to the complex environment in which we operate—a concept known as bounded rationality.11 12 Gigerenzer and colleagues offer some compelling examples of decisions made on the basis of "fast and frugal" rules of thumb that equal or outperform those of more complex analytical procedures.13
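One of the "fast and frugal" rules Gigerenzer's group studied is the "take the best" heuristic: to compare two options, check cues one at a time in order of validity and decide on the first cue that discriminates, ignoring everything else. The sketch below uses Gigerenzer's classic city-size illustration with made-up cue values; the city names and cues are assumptions for demonstration only.

```python
# Sketch of the "take the best" heuristic: decide between two options
# using only the first cue (in validity order) that discriminates.
# Cities and cue values below are invented for illustration.

def take_the_best(a, b, cues):
    """Return the option favoured by the first discriminating cue, else None."""
    for cue in cues:
        if cue(a) != cue(b):
            return a if cue(a) > cue(b) else b
    return None  # no cue discriminates; guess or defer

# Hypothetical question: which of two cities has the larger population?
city_a = {"name": "A", "has_airport": 1, "has_university": 1}
city_b = {"name": "B", "has_airport": 0, "has_university": 1}

# Cues listed from most to least valid; the search stops at the first hit.
cues = [lambda c: c["has_airport"], lambda c: c["has_university"]]
winner = take_the_best(city_a, city_b, cues)
print(winner["name"])  # → A (the airport cue discriminates, so nothing else is checked)
```

The heuristic deliberately ignores most of the available information, yet in Gigerenzer's comparisons such one-reason decision rules often matched or beat full regression-style weighting.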
Patients' decision making about risk and benefit is also influenced by beliefs, attitudes, and perceived control (box A, bmj.com) and may also have psychoanalytical explanations—in terms of repression, denial, and transference (box B, bmj.com). These decisions may be distorted by a host of past experience and social influences.14 Regulatory bodies and campaign groups have their own unwritten codes of behaviour (perhaps respectively summed up as "protect the public—if necessary by erring on the side of caution" and "defend the individual's right to autonomy"), which probably set unconscious parameters for individual behaviour. The influence of accountability and social and institutional contexts on decision making should not be underestimated.15
Narrative influences on decision making
Newman describes two major policy decisions in health care and aviation, both of which went against a rational assessment of the benefit-harm balance.18 The decisions were made on the basis of widely publicised stories in which one child was left seriously disabled (from kernicterus after non-interventional management of a borderline neonatal bilirubin level) and one died (in a runway crash when travelling unrestrained on a parent's lap).18 Why were the stories so persuasive at a national policy making level? Bruner divides reasoning into two categories: logico-deductive (rational) and narrative (storytelling).19 Whereas logico-deductive truth is verified through rigour in experiment and observation, a "good narrative" is defined by such terms as authenticity (the story "rings true" and has plausibility within its genre), moral order (a hero gets his just reward, a villain her come-uppance), and coherence (all loose ends are tied up by the final scene). The policy decisions—that all neonates with jaundice must be admitted to hospital and all infants must have their own seat in planes—held considerable narrative validity but lacked logico-deductive validity.
Conclusion
When drug licensing decisions are overturned (box 1), it is generally not because new scientific evidence emerges but because existing evidence is reinterpreted—especially in the light of context and personal values. In other words, the evidence base for drug regulatory decisions is to some extent socially constructed through active and ongoing negotiation between patients, practitioners, and policy makers.20 Consumer groups, scientists, and the media all have an important role to play in this process, but all parties should recognise that non-rational factors are likely to have a major influence on their perceptions. Greater awareness of affective factors as well as our cognitive biases should help us understand why different stakeholders interpret the benefit-harm balance of medicines differently, and this awareness could provide the basis for strategies to counter such influences.