Nutt's message undermined by his own lazy evidence, say scientists
This post is from Dr Maria Viskaduraki, a biostatistician at the University of Leicester, and Diamanto Mamuneas, a PhD student at the Royal Veterinary College
In October 2009, David Nutt was forced to step down from his position as head of the UK’s Advisory Council on the Misuse of Drugs after counselling the then government against reclassification of Cannabis from Class C to Class B (advice which the government ignored).
Shortly after, the BBC reported Nutt's words: "If scientists are not allowed to engage in the debate at this interface (between scientific advice and policy making) then you devalue their contribution to policy making and undermine a major source of carefully considered and evidence-based advice."
It is unusual to witness scientists defending the value and importance of science in the public sphere – especially when it comes to politics – and Nutt should be credited for daring to do so. Science is the best tool we have for settling disputes over such issues and it is a dangerous mistake for governments to choose the majority opinion in the face of scientific evidence.
However, irrespective of whether David Nutt and the ACMD's original position was evidence-based or accurate, Nutt and colleagues' high-profile paper, published in The Lancet a year later, falls short of settling the controversy and is a poor example of science's potential usefulness.
Nutt presents evidence generated using the “multicriteria decision analysis (MCDA) approach”, and to understand just what the results tell us – and, more importantly, just how these results were arrived at – one must first push past this obscurantist's dream of an acronym and see just what the procedure involved.
Over the course of a single day, a group of “experts” got together to discuss how each of 20 drugs deemed relevant in the UK scored (from 0-100) in terms of 16 criteria. These experts were asked to share their own opinions, and it is clearly stated in the subsequent publication that “scores [were] often changed… as participants shar[ed] their different experiences and revise[d] their views”. It is well documented that an outspoken individual can bias a crowd, so the subjectivity of this approach should be considered – not to mention the extent of the individuals' expertise, if they were so easily swayed.
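For readers unfamiliar with the method, the core of an MCDA exercise of this kind is a weighted sum: each criterion score is multiplied by a weight reflecting its judged importance, and the products are added up into a single overall harm score. A minimal sketch of that aggregation follows; the criteria named, scores and weights are invented for illustration and are not the panel's actual figures.

```python
# Minimal sketch of MCDA weighted-sum aggregation.
# All scores and weights below are hypothetical, not the study's data.

def mcda_score(scores, weights):
    """Combine 0-100 criterion scores into one overall harm score."""
    assert set(scores) == set(weights), "every criterion needs a weight"
    total_weight = sum(weights.values())
    return sum(scores[c] * weights[c] for c in scores) / total_weight

# A hypothetical drug scored on three of the sixteen criteria,
# with integer weights summing to 10 for clean arithmetic.
scores = {"drug-specific damage": 80, "economic cost": 60, "crime": 40}
weights = {"drug-specific damage": 5, "economic cost": 3, "crime": 2}

print(mcda_score(scores, weights))  # (80*5 + 60*3 + 40*2) / 10 = 66.0
```

Note that the arithmetic itself is trivial; everything contentious in the method lives in the inputs – the scores and weights the panel agreed on in a single day.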
The seemingly arbitrary criteria on which the substances were then scored included drug-specific damage, family adversities, economic cost, crime, loss of tangibles, injury, dependence and loss of relationships. It is difficult to imagine a situation where most of these are not interconnected. Statistical analyses are expected to take into account interactions between variables and not simply assume them to be independent. It becomes impossible to weigh up the relative costs when the criteria are so closely linked.
Consider a drug that causes, through crime, an economic cost. Might this drug not also lead to family adversities due to subsequent arrests? And then perhaps a loss of tangibles and loss of relationships too. In another instance, it might be that a given drug results in just a few of these but almost never certain others – perhaps a drug doesn't result in crime as much because it costs less, is readily available and is not illegal (such as alcohol) but can still lead to family adversities and loss of relationships because of the extent to which it alters behaviour.
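The double-counting worry above can be made concrete with a toy calculation (hypothetical numbers, not the study's data). If most of a drug's “economic cost” is itself a consequence of its “crime” score, then adding the two criteria together counts the crime-driven harm twice:

```python
# Hypothetical sketch: when one criterion partly contains another,
# a simple sum counts the shared harm more than once.
crime = 50                                 # harm from drug-related crime
direct_cost = 20                           # economic cost unrelated to crime
economic_cost = 0.8 * crime + direct_cost  # crime drives most economic cost

naive_total = crime + economic_cost        # crime counted 1.8 times over
adjusted_total = crime + direct_cost       # each distinct harm counted once

print(naive_total, adjusted_total)  # 110.0 vs 70
```

The 0.8 overlap factor is invented, but the point stands for any non-zero overlap: without modelling the dependence between criteria, the aggregate score systematically inflates harms that spill across several categories.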
Some of the criteria are also difficult to define and might be understood and weighted differently by another panel of experts, or even by individuals within the group expressing their opinions here (e.g. is economic cost more important than family adversities?). Indeed, when two drugs were tied on a criterion (both with a maximum score of 100), the panel seems to have simply chosen between them.
Nutt and colleagues do consider evidence from other countries that seems to support many of the experts' prior positions (unsurprisingly). Unfortunately, such comparisons cannot validate the poor methodology evident in this study, and are not appropriate to make in the first place due to differences in “availability and legal status” across the locations, which influence each drug's impact in terms of each criterion.
Sadly, it looks like David Nutt might be right about recreational drugs and he might have had an important and valid message for the government at the time of his sacking, but, by presenting lazy evidence, he might have unwittingly devalued and undermined his own contribution.