Modelling lets evaluators test-drive change safely and cheaply, using a diversity of non-RCT evidence

by Sally Brailsford

Enhanced decision-making, blue-skies thinking and quick trials of hypotheses are all much easier if modelling is in your evaluation tool kit, explains Sally Brailsford

Everyone thinks that they know what a model is. But we all have different conceptions. I like the definition from my colleague Mike Pidd, from Lancaster University. He sees a model as ‘an external and explicit representation of a part of reality’. People use it ‘to understand, to change, to manage, and to control that part of reality’.

We tend to acknowledge the limitations that models have, but fail to fully appreciate their potential.  ‘All models are wrong,’ as George Box said, ‘but some are useful’.

I work in Operational Research. It’s a tool kit discipline. In one part, we make use of statistics, mathematics and highly complex algorithmic models. In another, we draw pictures and play games. I use these elements to create simulations – building a computer model that replicates a real system so that we can play ‘what if’ with it.

Models inform decision-making

I use models mainly for informing decision-making. Sometimes, they don’t actually need much data to be very useful. For example, there is a famous model about optimal hospital bed occupancy, created by Adrian Bagust and colleagues at the University of York. It includes some numbers but they are not based on any specific hospital. It shows that if a hospital tried to keep all its beds fully occupied, then some patients would inevitably have to be turned away.

The model increases patient arrivals, and with them occupancy, and demonstrates how often the hospital has to turn emergency patients away. It shows that hospitals deemed inefficient, because they occasionally have empty beds, are actually operating effectively. The finding really influenced policy. It showed that, once a hospital reaches about 85 per cent occupancy, it becomes increasingly likely to have to turn emergency patients away. It is a simple model. It did not involve long-running, expensive randomised controlled trials. Yet it provided vital evidence and was powerful in influencing occupancy targets.
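To make the intuition concrete, here is a minimal sketch of that argument – not Bagust’s published model, just the same idea: beds behave as a ‘loss system’ in which an emergency arriving when every bed is full is turned away. Every parameter value below is an illustrative assumption.

```python
# A minimal loss-system sketch of the bed occupancy argument.
# Not Bagust's published model: all numbers are illustrative assumptions.
import heapq
import random

def simulate(beds, arrivals_per_day, mean_stay_days=5.0,
             n_arrivals=200_000, seed=1):
    """Return (average bed occupancy, fraction of emergencies turned away)."""
    rng = random.Random(seed)
    discharge_times = []       # min-heap: when each occupied bed becomes free
    now = 0.0
    refused = 0
    bed_days_used = 0.0
    for _ in range(n_arrivals):
        now += rng.expovariate(arrivals_per_day)      # next emergency arrives
        while discharge_times and discharge_times[0] <= now:
            heapq.heappop(discharge_times)            # discharge finished stays
        if len(discharge_times) == beds:
            refused += 1                              # hospital full: divert
        else:
            stay = rng.expovariate(1.0 / mean_stay_days)
            heapq.heappush(discharge_times, now + stay)
            bed_days_used += stay
    return bed_days_used / (beds * now), refused / n_arrivals

# Push demand up: refusals are negligible at low occupancy but climb
# steeply once average occupancy passes roughly 85 per cent.
for rate in (30, 34, 36, 38, 40):                     # arrivals per day
    occupancy, p_refused = simulate(beds=200, arrivals_per_day=rate)
    print(f"occupancy {occupancy:5.1%}  turned away {p_refused:6.2%}")
```

Even this toy version reproduces the qualitative finding: apparently ‘spare’ beds are the buffer that keeps emergency refusals rare.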

A 30-year clinical trial in five minutes

In another model, we looked at patients with diabetes at risk of developing retinopathy. Everyone agreed that it was a good idea to screen patients with diabetes to detect retinopathy before it leads to blindness. However, there was a whole range of screening practices. We used data from all over the place, from the US and from the UK. The model followed patients with diabetes through the life course and through different progression stages.

We had to draw data from very early studies because it would be unethical to conduct a clinical trial that did not treat people according to best practice. We then adapted the model for different populations, with varying ethnic mixes and incidences of diabetes. We superimposed on the model a range of different screening policies to see which was most cost-effective. In effect, once we felt confident that the model was valid, we could run a clinical trial on a computer in five minutes rather than running a real clinical trial for 30 years. The findings proved really valuable.

The differences in benefit between the various techniques and screening programmes proved minor compared with the large impact of more people being screened. We realised that raising attendance, perhaps by social marketing, offered much better value than buying expensive equipment.
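As an illustration only, the sketch below runs a crude patient-level state-transition model of this kind. The states, annual progression probabilities, costs and treatment effect are invented placeholders, not the evidence-based parameters the real model used.

```python
# An illustrative patient-level state-transition ("Markov") model of
# retinopathy screening. All probabilities and costs are invented
# placeholders, not the parameters of the real model.
import random

STATES = ["no_retinopathy", "background", "proliferative", "blind"]
ANNUAL_PROGRESSION = {"no_retinopathy": 0.05, "background": 0.08,
                      "proliferative": 0.10}

def lifetime(screen_every, attend_prob, rng, years=30, cost_per_screen=25):
    """Simulate one patient; return (screening cost, years spent blind)."""
    state, cost, blind_years, treated = "no_retinopathy", 0.0, 0, False
    for year in range(years):
        if state == "blind":
            blind_years += 1
            continue
        # A screening round only helps if the patient actually attends.
        if year % screen_every == 0 and rng.random() < attend_prob:
            cost += cost_per_screen
            if state == "proliferative":
                treated = True              # e.g. laser treatment
        p = ANNUAL_PROGRESSION[state]
        if treated:
            p *= 0.1                        # treatment slows progression
        if rng.random() < p:
            state = STATES[STATES.index(state) + 1]
    return cost, blind_years

def run_policy(screen_every, attend_prob, n_patients=20_000, seed=7):
    rng = random.Random(seed)
    runs = [lifetime(screen_every, attend_prob, rng) for _ in range(n_patients)]
    return (sum(c for c, _ in runs) / n_patients,
            sum(b for _, b in runs) / n_patients)

# The whole "30-year trial" of four policies runs in seconds.
for every, attend in [(1, 0.6), (2, 0.6), (1, 0.9), (2, 0.9)]:
    cost, blind = run_policy(every, attend)
    print(f"screen every {every}y, attendance {attend:.0%}: "
          f"cost {cost:6.2f}/patient, {blind:.2f} years blind on average")
```

Swap in proper parameter estimates and the same skeleton can compare screening intervals, technologies and attendance campaigns in a single run.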

Guiding design of hypothetical systems

The next model is even more hypothetical. Three engineers had an exciting, blue-skies idea for patients with bipolar disorder. What if, they asked, different sensors tracked a person’s behavioural patterns and, having established an individual’s ‘activity signature’, could spot small signs of a developing episode that would trigger a message that the person might need help?

We expected, rightly, that success depended on what monitoring individuals could tolerate – perhaps a bedside touch-sensor mat, a light sensor in their sitting room, sound sensors or GPS. We built these different possibilities into the model. We could also check how accurate the algorithms would have to be, if this technology were developed. So we were guiding the design of a hypothetical system.
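Here is a toy version of that question, with invented numbers throughout: simulate a daily ‘activity signature’, inject the gradual drift of an approaching episode, and see how sensor noise trades detection delay against false alarms.

```python
# A toy "activity signature" model: invented noise levels and drift sizes,
# not clinical data. It asks how accurate sensing must be to be useful.
import random
import statistics

def simulate_days(n_days, episode_start, drift_per_day, noise_sd, rng):
    """Daily activity score: stable personal baseline, then gradual drift."""
    return [100 + max(0, d - episode_start) * drift_per_day
            + rng.gauss(0, noise_sd) for d in range(n_days)]

def first_alarm(days, baseline_days=30, threshold_sds=3.0):
    """Alarm when a reading strays several SDs from the learned baseline."""
    base = days[:baseline_days]
    mu, sd = statistics.mean(base), statistics.stdev(base)
    for d, x in enumerate(days[baseline_days:], start=baseline_days):
        if abs(x - mu) > threshold_sds * sd:
            return d
    return None

rng = random.Random(42)
for noise_sd in (2, 5, 10):          # cheaper sensors = noisier signal
    delays, false_alarms = [], 0
    for _ in range(1000):
        days = simulate_days(120, episode_start=60, drift_per_day=1.0,
                             noise_sd=noise_sd, rng=rng)
        alarm = first_alarm(days)
        if alarm is None:
            continue
        if alarm < 60:
            false_alarms += 1        # fired before the episode began
        else:
            delays.append(alarm - 60)
    avg = statistics.mean(delays) if delays else float("nan")
    print(f"noise sd {noise_sd:>2}: false alarms {false_alarms/1000:5.1%}, "
          f"mean detection delay {avg:4.1f} days")
```

The same skeleton lets you ask the designers’ question in reverse: given the false-alarm rate users will tolerate, how good do the sensors and the algorithm have to be?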

Many, particularly those from clinical backgrounds, find it hard to accept that modelling can provide evidence upon which to make a major decision. People often expect the same kind of statistical evidence as from randomised controlled trials. Modelling does not claim to provide that level of certainty. It is a decision-support tool, helping you understand what might happen if you do something.

Appreciate modelling advantages

We should recognise the advantages of models. They are quick and cheap – you can run a clinical trial that could last decades in a matter of minutes. If you lack statistical confidence in your model, expert opinion and judgement can help fill the gaps. A model allows people to talk about issues in a policy setting and to articulate their assumptions. Quite often the conversations along the way are more important than the eventual model; the model is just a means to that end.

As in the bipolar project, you can model innovations that don’t even exist. So I often use modelling with hospitals when they are redesigning a system or a service. The new service does not exist yet, so there are no data – you must gather all the available evidence you can and build it into your model. It lets you explore more than traditional methods allow, because your assumptions can be more flexible.

Collecting primary data is hugely expensive, sometimes impossible.  You can consider all sorts of options that it would be unethical to explore in reality. As the bed occupancy model shows, the findings can be powerful and influential.

There is a saying that, if all you have is a hammer, then every problem is a nail. As researchers, we should avoid being confined by preferred methods, whatever our discipline. Modelling can be a valuable research tool.

Sally Brailsford is Professor of Management Science at the University of Southampton. Her blog is based on her presentation on 4 July 2014 at PIRU’s Conference: ‘Evaluation – making it timely, useful, independent and rigorous’.