By John F P Bridges

While outcomes for any given treatment differ significantly among patients, national healthcare systems continue to take a top-down population perspective, not only in reviewing epidemiologic data but also in evaluating the effectiveness and cost-effectiveness of new medicines.

This top-down approach, coupled with a growing need for cost containment, has recently caused many governments to institutionalise these practices through health technology assessment institutions. The purpose of these agencies is to promote better quality or value for money in the healthcare system; but this has led to medicines and technologies either being considered good (in other words, good for all) or being deemed bad and blacklisted.

Whether it be data from a randomised controlled trial, a comparative effectiveness study or a cost-effectiveness study, the focus is on the average patient’s health outcomes, where all individuals are treated equally (or, to be more correct, identically). Variation is something that we consider only when it comes to statistical inference, viz. whether the average effect differs from zero.

This is a major oversimplification, since patients do vary: their needs vary; their preferences vary; their circumstances vary and, most importantly, their outcomes vary. Even if you wanted to treat individuals equally on ideological grounds, these top-down approaches ignore the risks and uncertainties in medical decision making. For example, rather than understanding risk in clinical trials, we attempt to make it go away by demanding larger and larger trials (a movement away from the individual, towards the population).

As technology progresses, we are increasingly aware that the variation in benefits and adverse consequences of many healthcare interventions is predictable. As our knowledge of genetics and proteomics expands, our ability to predict these events grows exponentially. This has prompted many healthcare innovators to develop diagnostics to tailor medicines for particular patients – in what is often referred to as personalised medicine. Governments everywhere have been eager to support these start-ups, almost to the point of frenzy and with little accountability.

One key problem exists, however: these new technologies are incompatible with the fundamental principles of many national healthcare systems and with the top-down evaluation that has been implemented. Hence a bottleneck is occurring – ironically, with government playing the dual role of promoting and rationing medical technology. To alleviate this bottleneck, we either have to address the funding crisis through personalised approaches to healthcare finance (The Netherlands and Switzerland have attempted this) or we need to stop wasting money on research and development of personalised medicine technologies that are unlikely to be funded in the future.

Essentially, this means that healthcare systems need to decide whether they want to focus on the mean (the average effects of medicine) or on the gene (by accommodating personalised medicine).

John Bridges, PhD, is Assistant Professor, Department of Health Policy and Management, Johns Hopkins Bloomberg School of Public Health, Senior Fellow at the Center for Medicine in the Public Interest and founding editor, The Patient: Patient-Centered Outcomes Research.