Medicine has seen enormous innovation over the past centuries, driven by rigorous clinical research comparing different treatments to determine which one works best.
One of the first famous trials was carried out in 1747 by a Scottish surgeon called James Lind. He was determined to find a cure for scurvy, a lethal disease among sailors. He selected 12 scurvy patients, placed them in the same quarters, and gave them the same diet. Then, groups of two men each received a different daily “treatment”: oranges and a lemon (containing vitamin C), sea water, vinegar, an elixir of vitriol, and two other remedies common at the time. After six days, when the fresh fruit ran out, only one man was fit for duty: the one receiving the vitamin C treatment.
So did this provide definitive proof that scurvy should be treated with fresh fruit? Perhaps this one man had other characteristics that explain why he was cured and the others were not. Perhaps he was younger and healthier. Or perhaps external factors contributed to his miraculous cure; perhaps he was closer to the kitchen, so he did not suffer as much from gusts of cold sea wind.
These differences among trial subjects are what we, in clinical research, call selection bias: the patients in the different treatment groups are not the same, and these differences could (partly) explain their different outcomes. A related methodological problem is confounding, which means that another factor, one associated with the treatment of interest and also affecting the outcome, could actually explain the patients’ chances of healing. In the above example, perhaps the oranges were prescribed to the person closest to the stock of fruit in the kitchen, but being closer to the kitchen also meant that the patient was more comfortable, so what caused his recovery? For proper clinical research, we would ideally have two identical, parallel universes. Then we could introduce one intervention in universe A and another in universe B, and see what happens. Unfortunately, we have to make do with methodological solutions.
Selection bias and confounding are often detected in research related to the prevention and management of antimicrobial resistance. The emergence of disease-causing bacteria that are resistant to all currently available antibiotic treatments is an important contemporary problem. This has been well covered in the news, with headlines like “Woman killed by superbug.” One central way to slow down the development of superbugs is what we call antimicrobial stewardship programs. This means that within a hospital, a team of specialists governs the use of antibiotics, preventing overuse and misuse. This can be done in many different ways — unfortunately, we do not know which approach is best and should be advocated worldwide to halt the rise of superbugs.
There have been multiple studies evaluating the impact of different antimicrobial stewardship approaches; unfortunately, many of these studies have a before-after design. This means that they measured, for example, the level of antibiotic resistance among bacteria causing hospital infections before the intervention and compared this to the level after the intervention to evaluate its impact. The problem with these observational studies is that many other things could have changed as well; perhaps the before period was in the summer and the after period in the winter, or another hospital policy change started at exactly the same time. Consequently, these types of studies can never provide final proof that the implemented intervention by itself caused a reduction in antibiotic-resistant bacteria.
Fortunately, there are a number of methodological approaches that can improve the quality of studies evaluating antimicrobial stewardship programs. First, we would need a control arm: a group of patients who do not receive the intervention but are followed up over the same period. If the intervention is the real cause of a change, we would expect to see no difference over time among the patients in the control arm. Another important feature is randomization. This means that patients are assigned to the intervention or the control arm at random, by flipping a coin, without considering their personal features (like age) or any other external factors (like being close to the kitchen). Since antimicrobial stewardship interventions are more often aimed at a group than at an individual patient, randomization is often applied at the group level, such as hospital wards. In a cluster-randomized trial, hospital wards are randomly assigned to the intervention or control arm. In this way, the influence of selection bias and confounding can be minimized.
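To make the idea of cluster randomization concrete, here is a minimal sketch in Python. The ward names and the even two-arm split are illustrative assumptions, not taken from any particular study; the key point is that the unit of random assignment is the ward, not the individual patient.

```python
import random

def randomize_wards(wards, seed=42):
    """Randomly split hospital wards into intervention and control arms.

    Sketch of cluster randomization: whole wards (clusters) are assigned
    at random, so every patient in a ward receives the same condition.
    A fixed seed makes the allocation reproducible and auditable.
    """
    rng = random.Random(seed)
    shuffled = list(wards)
    rng.shuffle(shuffled)  # random order, independent of ward characteristics
    half = len(shuffled) // 2
    return {"intervention": shuffled[:half], "control": shuffled[half:]}

# Hypothetical ward names for illustration
arms = randomize_wards(["Ward A", "Ward B", "Ward C", "Ward D"])
```

Because the coin flip ignores every ward characteristic, known and unknown differences between wards are expected to balance out across the two arms as the number of clusters grows.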
Another possibility is to implement a so-called controlled interrupted time series design. In this case, you do not need to apply randomization; you can simply start an intervention for all patients in a certain ward of your hospital and use another ward as the control arm. However, you have to make sure that you have multiple measurements of the level of antimicrobial resistance over time, for example, every month for 12 months, both before and after the intervention. First, you should test whether antimicrobial resistance was already changing in the months before the intervention. Second, you should test whether it changed abruptly in the month the intervention started. Third, you should test whether the level of antimicrobial resistance was still changing after the intervention started. And finally, you can compare the information from all three steps between the intervention and control wards. Together, this information gives you a better understanding of what is going on and how much the intervention contributed to the observed changes.
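The first three steps above correspond to the coefficients of a segmented regression, a standard way to analyze an interrupted time series. The sketch below (a simplified illustration, not the exact model from the article, using made-up monthly resistance numbers) fits a pre-intervention trend, a level change at the intervention, and a change in trend afterwards; fitting it separately for the intervention and control wards and comparing coefficients would complete step four.

```python
import numpy as np

def segmented_fit(y, intervention_month):
    """Segmented regression for an interrupted time series.

    Returns least-squares estimates of:
      b0: baseline level        b1: pre-intervention trend
      b2: abrupt level change   b3: change in trend after the intervention
    """
    t = np.arange(len(y), dtype=float)
    post = (t >= intervention_month).astype(float)          # 1 after the start
    t_since = np.clip(t - intervention_month, 0.0, None)    # months since start
    X = np.column_stack([np.ones_like(t), t, post, t_since])
    coef, *_ = np.linalg.lstsq(X, np.asarray(y, dtype=float), rcond=None)
    return coef

# Hypothetical 24 months of resistance levels, generated so that the
# intervention at month 12 drops the level by 3 and bends the trend by -0.7
t = np.arange(24)
y = 10 + 0.5 * t - 3.0 * (t >= 12) - 0.7 * np.clip(t - 12, 0, None)
coef = segmented_fit(y, intervention_month=12)
```

In practice, one would also account for seasonality and autocorrelation between monthly measurements; this sketch only shows the core decomposition into pre-trend, level change, and trend change.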
It is very important to tackle the rise of antimicrobial resistance, and one of the main measures to halt this development is antimicrobial stewardship. However, we need to know which approach will have the largest positive impact. Therefore, improving the quality of clinical studies in this field is desperately needed.
These findings are described in the article entitled “Good epidemiological practice: a narrative review of appropriate scientific methods to evaluate the impact of antimicrobial stewardship interventions,” recently published in the journal Clinical Microbiology and Infection. This work was conducted by M.E.A. de Kraker, M. Abbas, B. Huttner, and S. Harbarth from Geneva University Hospitals and Faculty of Medicine.