By Mary Gabb ( [email protected] )

A negative result is one that shows evidence of no effect of an intervention; it is not the same as absence of evidence, i.e., inconclusive results from an underpowered study.

Traditionally, negative clinical results were considered ‘uninteresting’ and so were not published, or were never submitted for publication. In my experience as a biochemist, a result was not ‘good’ or ‘bad’, ‘positive’ or ‘negative’; it simply was. Yet a doctoral student could not obtain a PhD with only negative results, published or not. So, if we strive only for positive results, is this really rigorous science?

Dr Trisha Groves, Deputy Editor of the British Medical Journal, says that ‘we’re keen to publish negative studies when they illustrate important points. Specifically, our policy is that if a research question is important, original, and relevant to the decision making of our general medical readership, and if that question is answered with the right study design and sufficient power, the answer should be published, whether it’s positive or negative. We often publish negative results’. In fact, two of the BMJ’s top 10 research papers published in 2006–2007 were negative studies (‘top’ being defined by a combination of citations, hits, downloads, letters, pick-ups in evidence-based medicine journals, email alerting services, and media coverage).

How do negative clinical trial results affect economic analyses? Are they even used? One assumes that negative clinical trial data affect health economic (HE) outcomes (e.g., assumptions made about the study population or efficacy of the study drug, utilization rates, and rates of adverse effects).

Christopher Carswell MSc, MRPharmS, Editor of PharmacoEconomics, feels that the effect on HE research of limited publication of clinical trials with negative results ‘is a very interesting research question, which to my knowledge has not been investigated and would probably be context specific. It undoubtedly leads to biased estimates of cost effectiveness, but by how much and whether it is enough to affect decision making is an interesting question’.

When asked whether health economists actively look for negative clinical trial results for their economic analyses, Carswell says, ‘If so, I have rarely seen authors make an effort to search for anything other than major published clinical trials, which I find very disappointing. A related point – rarely do authors employ formal methods to combine evidence from disparate sources, e.g., meta-analyses, mixed treatment comparisons, meta-regression – which is also disappointing’.

Both editors agree that negative clinical trial results are an important part of decision making – for both efficacy and cost effectiveness. As Carswell notes, ‘I don’t like the term “negative [result]” as I think such studies should still be viewed in a positive light – they have shown a technology is not cost effective, which helps with future decision making. In other words, we have added to the evidence base, which is not a negative thing’.