Evaluations should share interim findings only when pieces of research are completed and on terms that are agreed in advance

By Oona Campbell

A global maternal health initiative that could save thousands of lives has highlighted dilemmas for those assessing its performance, says Oona Campbell


It is difficult to overestimate the urgency of improving maternal health in developing countries. Women die in childbirth or from complications during pregnancy, day in and day out. Some 99 per cent of maternal deaths occur in the developing world; maternal mortality is the public health indicator showing the greatest gulf between rich and poor countries. Most of these deaths occur during labour or within 24 hours after delivery, typically because of excessive bleeding.

So it is extremely important to us to have been asked to evaluate MSD’s 10-year, $500 million MSD for Mothers initiative, designed to create a world where no woman dies giving life. As the team chosen to evaluate parts of the initiative, we had to think about which interim findings to share, and when, how and with whom. We appreciate the importance of communicating findings quickly, but it is vital that evaluation is independent and that learning is robust, and we wonder whether communicating too soon or too frequently would undermine that independence. Right now, our view is that we should not wait until the study ends to detail some interim findings, but we plan to share only completed pieces of research with clear protocols and objectives.

Why evaluate?

MSD for Mothers focuses on two leading causes of maternal mortality: post-partum haemorrhage and pre-eclampsia. The initiative has a number of priority countries; among them, we focus on work in India and Uganda. It is a big initiative, with multiple pillars: product innovation; global awareness and advocacy; and numerous projects aiming to improve access to affordable, quality care for women.

Why was the company interested in having an evaluation? It sought an independent assessment of its contribution to reducing maternal mortality and to identifying sustainable solutions. It wanted guidance on its existing strategy and assurance that it was investing in high-impact programmes. From the policy perspective, the aim is to contribute to the evidence base for better decision-making globally and to make robust research available through publications in peer-reviewed journals.

The difficult issue for us was how we might contribute to guiding the existing strategy and ensuring investment in projects with high-impact potential. When do we provide input? What do we do? How do we do it? Does this affect our ability to be independent? If we get that involved in programme design, will it affect our ability to do robust evaluation? How do we work with the implementers who are actually delivering the projects? Will they continue to work with us if we share interim findings? How does all this affect our ability to be relevant?

Trying to be helpful

Our initial thought was that we wanted our evaluation to be used. Too few resources go into maternal health in low-income countries, so we certainly did not want to say, at the end of 10 years: ‘No, it didn’t work.’ We wanted to maintain a dialogue with policy makers, commissioners of research and the implementers. As Tom Woodcock from NIHR CLAHRC Northwest London has explained, this can be very successful. So there was an assumption that we should be responsive, engaged and willing to give feedback. But when should we do this? What does it mean to be ‘maximally responsive’, and should policy makers – or in this case the funder – have sight of the interim findings?

Our approach

We are using a multi-disciplinary, mixed-method approach that tries to capture the scope and range of the activities. Our basic approach is to work with MSD to identify overarching questions and then to work with the implementers, usually non-governmental organisations in specific countries, to understand what they are trying to do. Then we identify projects to evaluate and agree key evaluation questions. Within that, we try to understand exactly what people are doing, the theory of change and how the implementer thinks it is going to work. Where we can, we recommend ways of designing the implementation that allow for evaluation, but typically that is not possible. Then we provide technical support to improve the rigour of monitoring in specific projects, and we aim to use robust, analytical methods for the independent evaluation, including by gathering further data.

Guidance from the global literature on sharing findings tends to be vague, but it does mention sharing interim findings. The Centers for Disease Control and Prevention is probably the most explicit, saying: ‘It’s important to use the findings that you learn all along the way because if we don’t, opportunities are missed, if you wait until the very end of your evaluation to use some of those results. And sometimes those key nuggets of information in terms of interim findings may not necessarily be captured in that final report, so it’s important to use them as you learn about them.’

Clinical trials have formal mechanisms for interim findings. Data monitoring committees look at elements of implementation, such as adequacy of enrolment, as well as trial endpoints and adverse events. Insights from clinical trials tend to focus on the ethical obligation to stop trials early to reduce study participants’ exposure to inferior treatment. But there is also concern that multiple interim analyses of accumulating data can find differences when actually there are none.
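To make that multiplicity concern concrete, the short simulation below is a purely illustrative sketch, not drawn from our evaluation or from any real trial data. It repeatedly tests accumulating data from two arms that, by construction, do not differ. Testing at every interim look with no adjustment flags a ‘significant’ difference far more often than the nominal 5 per cent level, whereas a single final analysis does not. The sample sizes, number of looks and choice of test are arbitrary assumptions made only for the illustration.

```python
# Illustrative sketch: repeated, unadjusted interim analyses of accumulating data
# inflate the false positive rate even when there is no true treatment effect.
# All numbers below are assumptions chosen purely for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

n_trials = 2000                     # simulated trials, each with no true effect
n_per_arm = 500                     # final sample size per arm
looks = [100, 200, 300, 400, 500]   # interim analyses after this many participants per arm
alpha = 0.05

false_positive_any_look = 0
false_positive_final_only = 0

for _ in range(n_trials):
    # Both arms drawn from the same distribution: any 'difference' found is spurious.
    control = rng.normal(0.0, 1.0, n_per_arm)
    treatment = rng.normal(0.0, 1.0, n_per_arm)

    # Strategy 1: test the accumulating data at every interim look, unadjusted.
    significant_at_some_look = any(
        stats.ttest_ind(control[:n], treatment[:n]).pvalue < alpha for n in looks
    )
    false_positive_any_look += significant_at_some_look

    # Strategy 2: test once, at the final analysis only.
    false_positive_final_only += stats.ttest_ind(control, treatment).pvalue < alpha

print(f"False positive rate, testing at every look: {false_positive_any_look / n_trials:.3f}")
print(f"False positive rate, final analysis only:   {false_positive_final_only / n_trials:.3f}")
```

Formal trial methods such as group-sequential stopping rules exist precisely to control this inflation; the point here is simply that looking often, without such rules, is not a neutral act.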

We wish to learn from these approaches, but for our evaluation an important consideration is the multiplicity of interventions underway in this wide-ranging programme. The application of interim findings about simple interventions, such as those usually tested in clinical trials, is more straightforward. Imagine evaluating interventions to reduce the incidence of maternal tetanus. The interventions might be ensuring clean delivery – because an unhygienic birth environment potentially exposes a mother to tetanus spores – plus immunisation with sufficient doses of tetanus toxoid to prevent the onset of maternal tetanus.

But what about a complex intervention where you are trying to change maternity care? A huge range of interventions is required, including, for example, health worker training, changes to ambulance services, accreditation of private providers, behavioural change communication and health insurance. Such a programme might involve a long, complex causal chain with feedback loops and multiple groups of individuals. Is an interim finding on one aspect a solid basis for changing the implementation?

Uses of interim findings

There is also a wide variety of potential purposes for interim evaluation. They might include: stopping a complex intervention that is harmful; proclaiming success and rolling an intervention out elsewhere; improving implementation of an intervention; changing an intervention to bolster failing or problematic elements; keeping politicians and policy makers engaged; and responding to a need for quick results.

We’re trying to better understand how to share interim findings and with whom. Should it be with programme implementers, policy makers, funders or others? Or, perhaps, all of them? Should we share only findings on implementation, or findings on outputs and impacts as well? Our ‘interim conclusion’ on interim findings is that we certainly do not have to wait until the end of the programme, and we do want to communicate some research. But, by and large, we will share only completed pieces of research, with clear protocols and objectives, formally specified before implementation begins.

Dr Oona Campbell is Professor of Epidemiology and Reproductive Health at the London School of Hygiene and Tropical Medicine. This piece is based on a presentation that Professor Campbell gave at the meeting ‘Evaluation – making it timely, useful, independent and rigorous’ on 4 July 2014, organised by PIRU at the London School of Hygiene and Tropical Medicine, in association with the NIHR School for Public Health Research and the Public Health Research Consortium (PHRC).