By David Woods (firstname.lastname@example.org)
“An opportunity and a challenge”. That’s what Alan Bakst found when he became Director of Health Economics and Outcomes Research at TAP Pharmaceuticals in Chicago in August 2004. Until then, TAP hadn’t had a stable, fully staffed health outcomes group, so the opportunity lay in building one.
That, says Alan, meant hiring staff and educating TAP senior management and others about the precise role of health economics and its value in supporting the company’s products. The challenge came in integrating the group into the corporate culture, which meant being a coach, a mentor, and a public relations presence for the often only vaguely perceived science of health economics.
Before coming to TAP, Alan, trained as a clinical pharmacist, worked at GlaxoSmithKline for 10 years as an outcomes researcher and health economist. During that time he earned an MBA at Philadelphia’s Temple University, with a focus on marketing.
TAP has some 3500 employees and an ancestry that is both Japanese through Takeda and American through Abbott; its main products include Prevacid for gastroesophageal reflux disease, and Lupron, a drug for prostate cancer. The company has several products in late-phase development, including a therapy for gout.
An additional aspect of Alan’s work is in setting up ISPOR’s first US regional chapter in Illinois – and serving as its inaugural president. The chapter’s primary goal, he says, is to bring local health outcomes researchers together to network and share their research. The organisation is in place and its first full meeting was held on 19 April at TAP.
Alan’s wife Karen is a hospital pharmacist and he has a daughter of 22 and a son of 20. When not promoting health economics and outcomes research, he enjoys golf and poker.
“I look forward to receiving HOC,” he says, “because it keeps me in the loop on current activities in the world of health outcomes, and on interesting techniques to keep in mind when communicating health economics information to others.”
By David Woods (email@example.com)
HOC editor David Woods has just returned from attending two healthcare congresses. A full report of both can be found on the Rx website at www.rxcomms.com; here is just a taster.
A panel on competition, moderated by John Iglehart, the founding editor of Health Affairs, included Michael Porter, a Harvard professor and a leading authority on competitive strategy, who said that 21st-century medicine is being delivered with 19th-century organisation and management.
What’s called for, he said, is a fundamental restructuring of health systems rather than incremental improvement, and an emphasis on value and on health outcomes per dollar spent. True competition must be based on measurable results, not process, he said; and while information technology is an enabler, it’s not a solution. Restructuring must come from the bottom up and physicians have to ‘get out of the bunker’ to lead the change…
See the news page on www.rxcomms.com for David’s full report.
“Ensuring integrity in medical publications: conflicts, credibility and collaboration” was the conference theme, and the highlight was a panel discussion that featured all three of those in a sometimes heated but always engaging debate on conflict of interest.
Faith McLellan, North American senior editor of The Lancet, pointed out several examples of scientific fraud, including one perpetrated by a scientist who made up data on 960 patients – but gave them all the same date of birth. The lessons learned about conflict of interest, she said, are: slow down, develop more rigorous peer review and get a better handle on who the authors are and what they actually did…
See the news page on www.rxcomms.com for more.
By Kevin Frick (firstname.lastname@example.org)
Many economists think their profession has something to say about almost everything. This is because nearly everything can be thought of in terms of incentives and constraints.
The list is endless: decisions to marry or have a partner, the number of children people have, parenting styles, attendance at religious services, the way people choose to die. Many of these issues come under the headings of lifestyle and demographics.
Should economists have a say in these areas, given that they are not always obviously about economics?
A more important question is how much say economists should have about these issues, and where we should have that say – particularly outside our own professional journals.
The opportunities for comment are almost limitless – letters to the editor in the popular press, speaking invitations, manuscripts in non-economics professional journals, and even casual conversations with our friends, neighbours, and colleagues.
However, on the question of how much say, we as economists should not get carried away with our sense of unique insights. That would be one way to lose all influence and the ability to affect areas outside our own usual domains.
Non-economists (and perhaps even some economists) could find an overbearing sense of “we know best about everything” off-putting. Many other professions study the same outcomes, and use many of the same things we call incentives and constraints as predictors of behaviour.
We should be careful to make clear arguments about the particular insights that economics adds and not overstate what economics can predict. We should be equally clear about what economics does not predict or explain well.
But we should take every opportunity made available to us to make known what economists have to offer. We should use the power of persuasion to make our case about the particular insights that we add to the discussion. However, we should remember that without some humility about how our arguments fit (or fail to fit) into the bigger picture, those opportunities are likely to fall on deaf ears.
This is the first in a series of four articles about observational studies. It’s our impression that observational studies (also known as “naturalistic studies”) are becoming more and more important to healthcare authorities, and it behoves us all to understand more about the uses and abuses of this type of research.
So we have created this series to 1) describe what they are, 2) show how they differ from randomised controlled trials in the way data are collected, 3) have a brief look at how they should be conducted, and 4) describe how to interpret the results.
“Science is built up of facts, as a house is built of stones; but an accumulation of facts is no more science than a heap of stones is a house.”
— Henri Poincaré (mathematician, 1854–1912), Science and Hypothesis, 1905
While randomised controlled trials (RCTs) are the cornerstone of the drug development process, they cannot replicate actual clinical practice. Observational studies help close the evidence gap by providing insights into real-life situations, and thus aid our understanding of how both patients and their clinicians manage healthcare problems.
Observational studies are characterised by the lack of intervention when treatment decisions are made; the treatments are administered as they would be in normal clinical practice, and information is collected regarding the outcomes of those treatments. This means that switching therapies midway through the treatment can be common – patients are not restricted to a particular drug therapy. Some observational studies are carried out retrospectively using existing databases of patient data, but the most robust and useful type of study is carried out prospectively; i.e. the study design is decided, then the patients are enrolled.
Observational studies can also collect data on outcomes important to patients that may not be included in RCTs, for example:
Patients and their concerns are central to the study, and unlike RCTs, patients are active partners in both their treatments and the study. Typically, patient reported outcomes (PROs) are integral to the study design.
The table below shows where observational studies tend to fit in the hierarchy of evidence established by Bandolier.
[Table: Levels of evidence. Level 1 is the highest, i.e. considered the most robust. Levels-of-evidence system adapted from Bandolier.]
While RCTs answer questions such as “Is a medicine efficacious?” or “Is a treatment safe and tolerable?”, these answers are often provided in highly controlled settings, which may mean that the findings do not translate easily to actual clinical practice.
So while RCTs can provide definitive answers in specific circumstances and populations, other evidence is required to answer more far-reaching questions now posed by healthcare authorities, such as:
Large, well-designed prospective observational studies help provide answers to these all-important questions about medicine use in the real world.