By Nicholas Mays
Our ambition to co-produce evidence with advisors and officials is fraught with challenges, but remains a worthy goal with valuable benefits, explains PIRU director Nicholas Mays.
When PIRU was set up three and a half years ago, there was a great deal of ambition on all sides. The Department of Health, as funder, wanted us ‘to strengthen the use of evidence in the initial stages of policy making’. That was the distinctive, exciting bit for us. We were to support or undertake evaluation of policy pilots or demonstration initiatives across all aspects of the Department’s policy activity – public health, health services and social care.
We were also brave, seeking to ‘co-produce’ evidence by working closely with policy advisors and officials, aiming to break down the conventional sequence in which evaluation tends to follow policy development. We wanted early involvement, from horizon scanning through innovation design and implementation design, as well as supporting evaluations or conducting them ourselves. It was clear that if we could be engaged, flexible and responsive, officials would be more likely to work with us.
Some researchers prefer planned, longer term work. They see the responsive element as regrettably necessary to pay the mortgage. In fact, our more responsive work has often turned out to be the most interesting: some of it we would probably have planned to do in any case; other parts have led to substantial pieces of research. It can be highly productive, not least because policy advisors are fired up about the findings.
In our first years, we have tried hard to work across all stages of policy development. To support the early stages of policy innovation, we did some rapid evidence syntheses. We have advised on the feasibility of a number of potential evaluations – for example, we looked at the Innovation Health and Wealth Strategy to examine which of the strategy’s 26 actions could credibly be evaluated. We have advised on the commissioning and management of early stage policy evaluations. We have also helped define more precisely what the intervention is in a particular pilot because, in pilot schemes or demonstrations, the ‘what’ is often presumed, but can actually be rather unclear.
We had expected to guide roll-out, using the learning from evaluations, but that’s not always easy for academic evaluators. PIRU often works with different parts of the social care and health policy system, perhaps for quite short periods of time, which is a very different relationship from working, say, with clinicians for an extended period. Also, in policy and management, unlike the clinical world, people change jobs fairly frequently, making it difficult to sustain relationships.
We have also advised on modelling and simulation, which is useful for playing out the possible effects of innovations and for debating potential designs. However, that work typically tends to happen within government rather than through outsiders such as PIRU.
Indeed, we have found it difficult to become involved in the early stages of policy development, partly because health and social policy decision-making in England has been restructured and become more complicated as a result of the Health and Social Care Act 2012. There are new agencies and new people, altering long-established relationships between policy makers and evaluators.
Engaging us early on is also demanding. It requires greater openness and communication within government, so that research managers actually know when an initiative is starting, and a willingness to share early intelligence with outsiders in the research community. Some policy makers also find that the perceived benefits of sharing new thinking with us fail to outweigh the perceived risks of having us at the table early on.
There have been other big issues. How close should evaluators get to those who commission an evaluation? How candid – and sometimes negative – should we be? Should we refuse to do an impact evaluation because we know that too little time will be allowed to elapse to demonstrate a difference? Should we actively create dissonance with customers who are also funders through a process of constructive challenge? Strangely, the researchers are sometimes the ones saying, ‘No, we should not be looking at outcomes. You are better doing a process evaluation or no evaluation at this stage.’ In some cases, the researchers are asking for less evaluation and the policy makers are asking for more.
Can we predict that certain pilots will not realistically lend themselves to evaluation? For example, we conducted a study of a pilot scheme allowing patients to either visit or register with GP practices outside the area in which they live. We highlighted in our report that we couldn’t look at the full range of impacts in the 12 months for which the pilot ran. Nevertheless, critics of the policy were annoyed with the evaluation because it was seen to legitimise what was, in their minds, an inadequate pilot of a wrong-headed policy.
We frequently have to say that a policy pilot will take far longer than expected to be implemented. However, the commissioners of evaluation often have no time to wait and want the results right away. The danger is that a great deal of time is spent interviewing people and looking for implementation effects, only to discover that not very much has happened yet.
So we face many challenges. But that’s hardly surprising. In an ideal world, we would have closer sets of relationships with a defined set of potential users. In reality, we are working across a very wide range of policy issues with an overriding expectation that we should engage at an early stage and speedily. It’s a difficult but rewarding balancing act.
Nicholas Mays is Professor of Health Policy at the London School of Hygiene and Tropical Medicine and Director of PIRU. This piece is based on a presentation that Professor Mays gave at the meeting ‘Evaluation – making it timely, useful, independent and rigorous’ on 4 July 2014, organised by PIRU at the London School of Hygiene and Tropical Medicine, in association with the NIHR School for Public Health Research and the Public Health Research Consortium (PHRC).