‘We need critical friends and robust challenge, not aloofness and separation’

by Anna Dixon

A strong relationship between policy-makers and academic evaluators is vital, particularly to support high quality implementation of change, says Anna Dixon, the Department of Health’s Director of Strategy and Chief Analyst.

There continues to be a view that policy-making is a very neat process. An issue supposedly arises and there’s an option appraisal of how we might address it. Then, following some consultation, an implementation process is designed. After that, as good policy-makers, we always evaluate what we did and how it worked, and those insights feed back very nicely to inform future policy-making.

Alas, it’s all a bit more complicated than that. However, my message is that in health policy – as well as in other areas of government – we are serious about commissioning evaluation, and ambitious about using the results. Evaluation matters to us. The conditions for it, albeit imperfect, are improving. Evaluations can be formative, providing learning and feedback as a policy is rolled out, or summative, focused on learning retrospectively about impact. In practice, many cover both implementation and impact.

Strong support for evaluation

Enthusiasts will be relieved that the aspiration for evidence-based policy is very much alive in government. Sir Jeremy Heywood, the Cabinet Secretary, has said that an excellent civil service should be skilled in high-quality, evidence-based decision-making. The Treasury is a crucial driver, requiring the Department of Health to carry out process, impact and cost-benefit evaluations of policy interventions, particularly where they involve significant public expenditure and regulation.

However, delivering on good intentions can be difficult. The National Audit Office (NAO) recently defined best practice as ‘evaluations that can provide evidence on attribution and causality and whether the policy delivered the intended outcomes and impact and to what extent these were due to the policy’. Doesn’t that sound very simple and easy? If only it were so.

In reality, it is incredibly difficult in the messy world of policy implementation to tease out the impact of a single policy from the layered effects of the many other policies changing as they are implemented. It is far from easy to identify any neat causality between particular policy interventions and outcomes.

The NAO found that much more could be done to use previous evaluations in developing impact assessments of new policies. A survey of central government departments found that plans for evaluation are sometimes not carried out.

Large evaluations commissioned

The Department of Health commissioned a large-scale programme of evaluation of the Labour government’s NHS reforms, coordinated by Nicholas Mays (now director of PIRU). We are now commissioning an evaluation of the Coalition’s reforms of the English NHS and thinking about how to evaluate the impact of policy responses to the Francis Inquiry. These are substantial evaluation programmes tackling many interventions occurring simultaneously, against a background where much else is changing. It will not be easy to tease out the ‘Francis effect’ in the current economic context, with many other policy initiatives taking place at the same time. As well as funding the NIHR and Policy Research Units such as PIRU, the Government recently established the ‘What Works Centres’. These aim to help government departments and local implementers – schools, probation services and others – to access higher-quality evidence of what works.

Policy-making misunderstood

Will all this activity make a difference? I feel confident that it can lead to more successful implementation of particular interventions and can contribute to better policy-making. But it is only one input into the process. Policy is often driven by evidence of a different kind. That may be the personal experience of the Minister, deliberative exercises, practical wisdom and so on. Insights about what can work on the ground – ‘implementability’ – are also rightly important. And there is the more political dimension – what is acceptable to the public? All these elements go into the mix along with more formal research evidence.

Benefits of implementation evaluation

The influence of evaluation on implementation is more compelling than its influence on policy, and it is here that evaluation is demonstrating real value. We have seen this recently with the Care Quality Commission’s new hospital inspection programme. Researchers, led by Kieran Walshe, went out with hospital inspectors on the first wave, and their findings immediately fed into the design of the second wave. That wave has also been evaluated and is now feeding into the approach that will be rolled out for future hospital inspections and in other sectors of health and care. These pragmatic, real-time evaluations can be very useful. They are critical now for the Department of Health because its separation from NHS England means that many people with experience of more operational roles are no longer working directly within the policy-making environment.

Implementation evaluation is beginning to be reflected in the language used by government. The Treasury continues to emphasise summative evaluation, focusing on outcomes and cost-benefit ratios, but the policy implementation ‘gap’ is now recognised as particularly important. We are in a phase where ‘policy pilots’ seem to be out and we have tried ‘demonstrators’. Now we have ‘pioneers’. The language is becoming clearer that the main goal is to understand how to make something work better.

Evaluation can be more effective

What can government and academia do to increase the influence and usefulness of evaluation? We share the challenge of creating engagement at the earliest possible stage – ideally the policy design stage. This means building relationships so that academics understand the policy questions and policy-makers can share their intentions. So, evaluators should make sure that they talk to the relevant officials and find out who is working on what. Success can yield opportunities to help design policy or implementation in ways that will support better evaluation.

Academics should be willing to share interim findings and work in progress, even when it is incomplete. Otherwise, there is a risk that they will miss the boat. On the Government side, we need to be more honest and open about the high-priority evaluation gaps at our end.

In terms of rigour, the Government is trying to provide better access to data. For example, organisations implementing interventions in criminal justice can use the large linked datasets established by the Ministry of Justice, making it much easier to see the impacts of policy changes on reoffending rates. We must make sure that our routine data collections measure the most important outcomes and that these measures are robust. Clearly, one of the challenges for evaluators is to understand the messiness of context.

Independence

The one word I have avoided is ‘independence’ of researchers. If independence means aloofness and separation, I don’t think the relationship works well. We need to know each other: academics need to know the policy world; the policy world needs to understand academia. In government, we need critical friends and robust challenge. The fruitful way forward for both sides is ongoing discussion and engagement, creating good relationships that mean, even in this messy world, we can make greater use of evaluation to inform decision-making.

Dr Anna Dixon is Director of Strategy and Chief Analyst at the Department of Health. This blog is based on a presentation she gave at the meeting, ‘Evaluation – making it timely, useful, independent and rigorous’, on 4 July 2014, organised by PIRU at the London School of Hygiene and Tropical Medicine, in association with the NIHR School for Public Health Research and the Public Health Research Consortium (PHRC).