
Investors need rigorous assessments of Social Impact Bonds

BY KATY PILLAI

A major investor highlights the vital role that research and evaluation should play in developing this form of outcomes funding.

Evaluation and research into Social Impact Bonds (SIBs) is a hot topic for Big Issue Invest. We are one of the UK’s leading social impact investment firms, having invested in approximately 350 charities and social enterprises since 2005, in our quest to dismantle poverty and create opportunity.

Our investments support areas such as access to housing, financial and social inclusion, mental and physical health and well-being, and employment, education and training.

We have made several SIB investments since 2012 and have watched the market develop. Our first-hand experience is that there is more work to be done to refine the model, but there have been impressive outcomes for programme participants and the charities delivering the contracts in which we have invested.

How we use research and evaluation in SIBs
We aim to address the structural challenges of SIBs and maximise their individual and collective social impact. Research and evaluation can help in this goal. Many commissioners and service delivery providers are unfamiliar with SIBs and I often direct them to impartial, well-informed research to build awareness and understanding.

Discussing Big Issue Invest’s learnings and experiences from individual programmes with evaluators helps us to contextualise the situation and identify emerging trends in this rapidly-developing field. These partners can develop the tools and models to test and critique our theories and insights ‘from the field’ and evaluate the wider market, whereas our frame of reference is often limited to our specific investments.

Research in social investment and SIBs is at a nascent stage. It is crucial that the right foundations are laid today to enable good-quality ‘market’ level analysis in the future. Consistency and rigour of approach will pave the way for systematic reviews and meta-analyses, essential if outcomes-based approaches are to become commonplace in the commissioner and policymaker toolkit.

We also welcome more quantitative approaches and ambitious evaluations that compare SIBs to traditional fee-for-service mechanisms or payment by results more broadly. SIBs are often conflated with outcomes-based approaches in general, which makes it very hard for investors to assess – and improve – their social impact.

Funding is, of course, needed to allow this work to take place. At the moment, evaluations are too often the balancing item in a very limited budget, constraining their ambition. We wholeheartedly support the calls for a ring-fenced fund for SIB evaluations, recognising the value of the output to commissioners, central government and potentially also philanthropic funders who might seed the fund.

Understanding what, why, how
We are an impact investor: outcomes are our reason for being, not just a by-product of our investment activity. Social impact due diligence underpins every investment decision we make.

We look closely at the theory of change for each SIB. There needs to be a coherent and credible hypothesis for how outcomes will be improved for the programme participants and – beyond that – how the programme could help to tackle the underlying issue through, for example, earlier intervention. We interrogate potential perverse incentives in order to mitigate them. Research and evaluation from previous programmes helps us to do this: we rely on it to validate the causal link between inputs, outputs and outcomes and complete our due diligence of the intervention.

It’s important that everyone involved reviews the theory of change periodically after the contract launches. One of the strengths of SIBs is that they shine a light on what works and what doesn’t, enabling real-time improvements and sharing of learnings for future contracts. If we scrimp on monitoring and evaluation, we undermine the programme and indeed the SIB model.

Evaluations therefore need to be robust, relevant and timely. We want to understand not only the results achieved but also why they are (or are not) achieved and how we can replicate and improve on them. That might be a programme evaluation or an impact evaluation, using qualitative or quantitative methods, depending on the context – but we certainly need more than outcomes verification.

We seek insights into the drivers of success so they can be reflected in future projects. The GLA Rough Sleeping SIB in 2012 was divided into two lots awarded to separate providers, one funded by Big Issue Invest. We know the absolute outcomes achieved by the programme but would like to dig deeper into whether different operational or investment approaches had a bearing on success.

We are keen to work with the evaluator community to design the evaluations and contribute to them. It’s important to be confident that Big Issue Invest’s loan has achieved its social objectives – and those of our investors in turn – so we are consumers of evaluations as well as contributors to them.

We are one of very few organisations that has worked across several SIBs in different regions and policy areas. We can contribute data, insight and practical experience and welcome the opportunity to do so. At a practical level, we can coordinate with the evaluator to minimise the data collection burden on the service provider’s staff and the programme participants. If we can bring the evaluator into the design phase early, we can incorporate their evaluation into the delivery model from the outset and avoid duplication or complication down the line.

Using data and analysis to target interventions
Reliable data and analysis are essential to high-quality SIB design. For example, we are involved in an ‘edge of care’ SIB for young children where it is unfeasible to roll out an intensive (in other words, expensive) intervention to all children on social services’ radar. Rather than only work with children on the very cusp of care – when it is often too late to reverse their trajectory or the trauma they have suffered – a researcher is working with commissioner data to identify early risk factors that increase a child’s propensity to enter care. The programme will be targeted towards these high-risk children as well as those on the very cusp of care. This allows the commissioner to fund an early intervention service that is also cost-effective, often a challenge in outcomes-based commissioning. There is huge potential to harness data and analysis in this way to design preventative services.
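
To make the targeting idea concrete, here is a minimal sketch of how commissioner data might be used to rank children by an estimated propensity to enter care. The risk factors, model choice and cohort size are hypothetical assumptions for illustration, not details of the actual SIB.

```python
# Minimal sketch: ranking children by an estimated risk of entering care,
# so an intensive programme can be targeted at the highest-risk group.
# All variables and thresholds here are hypothetical, not taken from the SIB itself.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Stand-in for commissioner data: each row is a child, each column a
# hypothetical early risk factor (e.g. prior referrals, school absence rate,
# household instability).
X = rng.normal(size=(5000, 3))
# Stand-in for the historical outcome: 1 if the child later entered care.
y = (X @ np.array([0.9, 0.6, 0.4]) + rng.normal(size=5000) > 1.5).astype(int)

model = LogisticRegression().fit(X, y)
risk_score = model.predict_proba(X)[:, 1]  # estimated propensity to enter care

# Target the programme at, say, the 200 highest-risk children in addition to
# those already on the very cusp of care.
programme_cohort = np.argsort(risk_score)[::-1][:200]
print(f"Mean risk in targeted cohort: {risk_score[programme_cohort].mean():.2f}")
print(f"Mean risk overall:            {risk_score.mean():.2f}")
```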

The value of timely feedback
Speed of evaluations is a challenge. Evaluations are valuable to SIB stakeholders when developing follow-up programmes and carrying out due diligence. If investors have a good level of confidence in the achievable outcomes, the cost of their risk capital should be lower. That is in everyone’s interest. It doesn’t encourage evidence-based commissioning if the evidence is released after the next programme is launched!

Midline and end-line reviews as part of a formal evaluation are, of course, extremely important but they are not enough. Outcomes-based programmes also need shorter, informal feedback loops, preferably involving the evaluator. Early results and findings can be used to improve programme delivery – but not if they are shared in an end-of-programme evaluation that takes a year to publish. Ideally, we’d like a quarterly or six-monthly check-in with the evaluators to identify and unpick performance and its drivers.

We recognise the tension between this approach and concerns that the evaluation will influence the programme outcomes. A balance needs to be struck. SIBs support people with complex needs who deserve the best possible chance of better life outcomes, so although evaluation rigour is crucial, we owe it to them to make the intervention as effective as it can be.

Importance of counterfactuals
Three inputs are usually needed to assign a value to a SIB outcome: (1) the projected costs to deliver a programme (preferably validated through a competitive procurement process); (2) the costs per outcome achieved under comparable programmes, if known; and (3) the savings case (the projected benefits for the commissioner). If you don’t have an understanding of what would have happened anyway, at least one of these calculations will be flawed. That’s why we can’t afford to disregard the counterfactual.

That doesn’t mean that every SIB needs to link payments to performance compared to a counterfactual, measured by an RCT. There are lots of factors to consider when designing the payment mechanism and there is no single ‘right’ approach. However, the counterfactual can always be taken into account. Under a rate card approach, the rates should be set after considering deadweight – even if the assessment is imperfect, it is better than ignoring it completely. The counterfactual can then be assessed in the programme evaluation and used to inform the pricing of future contracts.
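
As a rough illustration of why deadweight matters when pricing outcomes, the sketch below uses entirely made-up figures to show how the effective price per additional outcome diverges from the headline rate once the counterfactual is taken into account.

```python
# Illustrative arithmetic only: how deadweight changes what a commissioner is
# really paying per *additional* outcome. All figures are made up.

programme_cost = 1_000_000    # projected delivery cost, e.g. validated via procurement
expected_outcomes = 400       # outcomes the programme expects to record
savings_per_outcome = 4_000   # the commissioner's savings case per outcome

deadweight = 0.25             # share of outcomes that would have happened anyway

# Naive rate: ignores the counterfactual entirely.
naive_rate = programme_cost / expected_outcomes

# Deadweight-adjusted view: only (1 - deadweight) of recorded outcomes are additional,
# so the effective price per additional outcome is higher than the headline rate.
additional_outcomes = expected_outcomes * (1 - deadweight)
effective_price = programme_cost / additional_outcomes

print(f"Headline rate per recorded outcome:      £{naive_rate:,.0f}")
print(f"Effective price per additional outcome:  £{effective_price:,.0f}")
print(f"Savings case per outcome:                £{savings_per_outcome:,.0f}")
# A rate card set without considering deadweight overstates value for money whenever
# the effective price exceeds the savings the commissioner actually banks.
```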

I am not saying SIBs should be commissioned only if there is perfect data to value the counterfactual. Rather, I am emphasising the need for new approaches that measure outcomes and cost-per-outcome to allow commissioners to make evidence-based decisions in future. Big Issue Invest is trialling approaches that allow an outcomes-based contract to be launched with imperfect information, while ensuring checks and balances limit windfall gains and losses and include mechanisms to tackle the information gap.

One option is to run an initial ‘discovery phase’ of the contract for one to two years. During the discovery phase, outcomes pricing is based on an estimate of the counterfactual, but parameters are set to ensure that no party makes excess gains or losses. In this way, the partners have the opportunity to implement the SIB delivery model. During this time, outcomes and the counterfactual are measured rigorously. The data and analysis are then incorporated into a revised payment mechanism for the rest of the contract, after which point it operates like a ‘standard’ SIB. This approach bridges the knowledge gap without delaying a potentially high-impact programme or risking inequitable risk and return.
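
A minimal sketch of what a discovery-phase payment rule with such parameters might look like is below. The function, figures and cap/floor values are illustrative assumptions, not the actual mechanism used in any Big Issue Invest contract.

```python
# A minimal sketch of a 'discovery phase' payment rule with parameters that
# bound gains and losses while the counterfactual is still being measured.
# Names and numbers are illustrative assumptions only.

def discovery_phase_payment(outcomes_achieved: int,
                            estimated_deadweight: float,
                            price_per_outcome: float,
                            floor: float,
                            cap: float) -> float:
    """Pay only for outcomes above the estimated counterfactual, then clamp the
    total between a floor and a cap so neither party takes a windfall gain or loss."""
    additional = outcomes_achieved * (1 - estimated_deadweight)
    raw_payment = additional * price_per_outcome
    return min(max(raw_payment, floor), cap)

# Example: 120 recorded outcomes against a provisional 30% deadweight estimate.
payment = discovery_phase_payment(outcomes_achieved=120,
                                  estimated_deadweight=0.30,
                                  price_per_outcome=3_000,
                                  floor=150_000,
                                  cap=400_000)
print(f"Discovery-phase payment: £{payment:,.0f}")
# After the discovery phase, a measured counterfactual replaces the provisional
# estimate and the cap/floor can be relaxed, so the contract then runs like a
# 'standard' SIB.
```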

Where next?
SIBs bring together different worlds. The success of SIBs is dependent on partnerships where the whole is greater than the sum of the parts. They require new ways of working for everyone involved – for investors, providers and commissioners. I expect they can seem strange to evaluators as well. Forging new links and understanding the perspectives of others is crucial.

We are starting to see these worlds come together and collaborate for better outcomes. There is more interaction and understanding between researchers and evaluators, policymakers and budget holders, delivery organisations, and investors. It is early days but the outlook is promising.

Katy Pillai is Investment Director, Big Issue Invest: www.bigissueinvest.com @katyjones | @bigissueinvest

If you wish to receive our weekly blog on SIBs, please email Paula.Fry@lshtm.ac.uk and we will add you to our subscriber list.

Academics can show governments how to evaluate SIBs more rigorously

BY CHRIS FOX

A wide range of approaches can help identify causality and effectiveness even in complex environments.

We can – and we should – improve our evaluations of SIB and Payment By Results (PBR) programmes. They should focus more on causality, rather than simply contract compliance or implementation.

If we don’t focus on attribution, it will become hard to demonstrate that SIBs are more than a series of interesting pilots. We’ll miss the chance to test an alluring proposition – that SIBs could transform the large scale commissioning and delivery of health and social welfare programmes.

Getting evaluation right in this field is not – as some might suggest – intrinsically challenging. SIBs and PBR projects do not create unusual difficulties for evaluation techniques. We have the know-how: sophisticated, diverse and well-developed tools exist that could settle most questions thrown up by SIBs. The real issue is: will those who champion SIBs expose such initiatives to the full rigour of the evaluative tools that exist?

Academic responsibility
The academic community can help ensure that rigour. Management consultants, contracted to perform evaluations, tend to provide what governments specify, which, so far, has been limited and fallen short of what’s required. Academics could set out a wider, more exacting range of evaluation options. We should show policy makers clearly how better evaluations could be achieved, particularly if the case for widespread adoption of SIBs is to be made.

This difficulty in properly assessing the impact of SIBs seems to be a particularly British problem. In the United States, most SIBs have been accompanied by fairly rigorous counterfactual evaluations, including randomised controlled trials (RCTs). There, the credibility of the SIB model among commissioners and investors has required demonstration of its ability to deliver tangible outcomes. This may be because, in the US, more funding has come from wealthy individuals or private foundations with an investment ethos. In Britain, funding tends to spring from philanthropic organisations, which seem more interested in testing concepts than in demonstrating categorical outcomes.

Evaluations are too often based on performance management
Whatever the reasons, SIB pay-outs in the UK typically rely more on performance management information to demonstrate the achievement of outputs. Supporters of this approach say that complicated counterfactual evaluations add to the already high transaction costs associated with SIBs. That’s understandable for individual SIBs. However, cumulatively, this approach hinders the quest to find out whether SIBs really work. It undermines the case for wider roll-out.

Evaluations can and should answer two major questions about SIBs. First, there’s “attribution”: whether SIBs actually achieve the desired outcomes. Second, we need to understand SIBs as a mechanism and establish how effective they are compared with other models of commissioning. This is important because there are less expensive, less complicated methods than SIBs for commissioning services in this field.

The attribution issue has become unnecessarily mired in a polarised debate about whether RCTs are suitable for SIB projects. Opponents contend that RCTs are not particularly useful in this field because SIB interventions tend to take place in highly complex environments. While it’s true that these interventions often occur amid complexity, that actually strengthens the case for RCTs. It becomes even more important to understand whether an intervention is indeed responsible for any of the impacts being observed.

Testing theories of change
Good RCTs would strengthen SIB evaluations because they would be theoretically informed. They would start with a theory of change setting out the potential causal mechanisms that are of interest. In contrast, many SIB evaluations focus on contractual frameworks and on demonstrating whether contractual requirements have been met, rather than testing hypothesised causality. Most good RCTs today are also accompanied by high-quality implementation evaluation. So they have a dual strategy.

Well-organised RCTs avoid a “one-shot” design. They are actually a sequence of evaluations that build by testing, at a granular level, particular moderators of change, rather than simply focussing on the overall social outcome and trying to come to a one-shot conclusion. This is how, in reality, even medical research works. You don’t do a single RCT. You build from small-scale studies through to larger-scale studies.

Sequences of evaluations are good
The wider evaluation world is focussing more on sequencing evaluations and ensuring that the tools employed are appropriate to the point of a programme’s development. This avoids problems that one-shot evaluations can create: that you evaluate too early; that the throughputs you were promised never arrive; that you end up with an evaluation design which is underpowered to identify the changes that you’re looking for; and that you end up with inconclusive findings that have cost a lot of money but don’t provide the hoped-for insights.
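
To illustrate how easily a one-shot design ends up underpowered, here is a rough power calculation of the kind a sequenced evaluation would run before committing to a full effectiveness trial. The baseline and target rates are invented for illustration, not drawn from any particular SIB.

```python
# Rough sketch of a sample-size calculation for detecting a modest reduction
# in a binary outcome (e.g. reconviction). Figures are illustrative only.
from statsmodels.stats.proportion import proportion_effectsize
from statsmodels.stats.power import NormalIndPower

baseline_rate = 0.60   # outcome rate under business-as-usual (assumed)
target_rate = 0.54     # hoped-for rate under the intervention (assumed)

effect_size = proportion_effectsize(baseline_rate, target_rate)  # Cohen's h
n_per_arm = NormalIndPower().solve_power(effect_size=effect_size,
                                         alpha=0.05, power=0.80)

print(f"Participants needed per arm: {n_per_arm:.0f}")
# If a programme only reaches a few hundred people in total, a one-shot RCT of
# an effect this size is underpowered – exactly the situation in which pilots,
# efficacy trials or 'small n' designs make more sense.
```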

I advise against the one-shot model. Instead, we like to start evaluations early without diving straight in with an RCT. We focus on developing a sequence. That’s the strength of the Education Endowment Foundation evaluation model. It begins with small-scale pilot studies that focus on theory of change and early implementation, then efficacy trials that are more like a small RCT, leading up finally to effectiveness trials. Only at that point – when causality has been established – is control finally handed over to implementers.

Building commissioner confidence
This sequential approach gives commissioners confidence. You’re saying to them that this isn’t a “one shot, put all your money on the table up-front” model. It’s about gradually building knowledge and providing gate-keeping points where a commissioner can ultimately say: “This isn’t working, we need to rethink. We may need to reinvest or even disinvest.” That’s helpful to commissioners, especially if they are being asked to back innovation that feels risky.

Small ‘n’ designs
In some cases, RCTs are not possible, but there are many alternative models of impact evaluation that could be considered for SIBs. “Small n” designs provide ways to think about causal attribution where a programme does not have sufficient numbers to allow a traditional impact evaluation design. Process tracing is an example of “small n” design, where one uses theory to identify critical points in a change process that need to be tested. Then one selects cases to test these critical points, using interviews and observations of what’s going on. This Popperian approach acknowledges that there is no absolute objective knowledge. However, it can find ‘smoking gun’ evidence that strongly suggests causality, even if that may not amount to absolute proof.

These process approaches that search out causality would be an improvement on current tests of some SIB or PBR programmes which, if they can’t do an RCT, tend to opt for process/implementation evaluations that are less demanding – usually interviewing stakeholders and writing a report, but lacking a more theory-driven approach.

More rigour is needed
I’ve set out ways in which SIB and PBR evaluations could be improved by RCTs or hybrids that avoid the unnecessarily polarised debate between the pro- and anti-RCT lobbies. Beyond RCTs, there are other approaches to evaluating causality, suitable in instances where there are small numbers of cases. We should learn from this wider discussion of evaluation techniques. Academics owe it to those investing and working in SIBs to ensure that policy makers adopt a rigorous approach to evaluation. We need to know what works and what doesn’t if SIBs are ever to be widely adopted.

Chris Fox is Professor of Evaluation and Policy Analysis and Director of the Policy Evaluation and Research Unit at Manchester Metropolitan University. He is co-author of “Payment by results and social impact bonds: Outcome-based payment systems in the UK and US”, published by Policy Press in February 2018.
https://policypress.co.uk/payment-by-results-and-social-impact-bonds

If you wish to receive our weekly blog on SIBs, please email Paula.Fry@lshtm.ac.uk and we will add you to our subscriber list.

Impact bonds could offer a paradigm shift towards more effective public services

BY EMILY GUSTAFSSON-WRIGHT

Social and Development Impact Bonds require enormous effort from the partners involved, but they have the potential to transform the financing and delivery of social services across the globe.

In winter 2015, Courtney arrived at Frontline Services, a not-for-profit US organisation that helps citizens in Cleveland, Ohio. She was 28, living in a shelter for homeless women, struggling with mental health and substance abuse issues and parenting three young children who were in the custody of the county.

Courtney had just about given up hope that she would ever be able to care for her children on her own. Until that point, the county caseworker assigned to her family had little incentive to reunite Courtney and her children because the caseworker’s primary job was to protect the children.

Living with a birth parent is almost always better for a child’s development than foster care, provided the home environment is safe and healthy. Nevertheless, before entering Frontline Services, Courtney had few ways to change the trajectory of her children’s lives. Fortunately for her children, she was walking that day into a social services experiment, one of only seven similar experiments in the US at the time. In this experiment, a social impact bond (SIB) – designed to “pay for success” – the county’s government had pledged to repay private investors for successful reductions in out-of-home placements for children whose primary caregivers were homeless.

This incentive meant that Courtney was assigned a caseworker dedicated to her – someone who would look at her particular circumstances and tailor a plan to help her turn her life around and unite her with her children. Courtney’s caseworker could work across county service providers to identify the right mix of services for her.

The SIB meant that a dedicated group of stakeholders was meeting regularly across government and non-government entities to focus on one thing – reuniting Courtney with her children, and doing the same for other families in similar circumstances.

SIB contracts focus on outcomes, so service providers tailor their services to what works for the target population. They helped Courtney to address her debts with classes in financial management and offered family counselling.

As a result, Courtney was able to reunite with her children, enrol them in supportive school environments and stop the cycle of dependency on the foster care system. The result was not only a better family outcome, but also the avoidance of the enormous costs the county would have incurred had Courtney’s children remained in its care.

Impact bonds are changing developing countries
Meanwhile, nearly 12,000 km away, in a village in rural Rajasthan, India, lives a 13-year-old girl named Punam. She comes from a poor family – her parents are labourers. Although Punam started school at age seven, she became one of India’s three million out-of-school girls when she was forced to drop out to tend her family’s goats.

In the same year as Courtney arrived at Frontline Services in Cleveland, Ohio, a field co-ordinator, working for an organisation called “Educate Girls”, arrived in Punam’s home in Rajasthan. He spoke with her parents, explained the benefits of educating Punam and tried to convince them to send her back to school.

Even after multiple attempts, the parents didn’t agree to send Punam back to school. The Educate Girls caseworker returned some weeks later with a volunteer from the community but again failed to persuade the parents. “What benefit will it give her or us?” they asked. “She will eventually marry and her responsibilities will revolve around doing household chores, assisting in farming life, raising children and taking care of her family.”

Nevertheless, Educate Girls made a further attempt to encourage her parents to let Punam attend school. This time, they asked the school’s principal to join them in a final visit to her home. With this added influence, Punam’s parents finally agreed to send their daughter back to school.

Why was an impact bond so important in this case? Because the contract was based on the achievement of outputs and outcomes, Educate Girls field-staff were empowered to innovate at the field level, trying to find solutions for getting Punam into school. Now, two years later, Punam, and many girls like her, are enrolled in and enjoying school thanks to Educate Girls and this Development Impact Bond (DIB), based on the same principle as a SIB, but with a third-party outcome funder, instead of the government.

These two stories capture the real human benefit that can emerge from outcome-based contracts such as SIBs or DIBs.

How impact bonds work
Let’s just re-cap for a moment how impact bonds actually work. In an impact bond, private investors supply upfront capital to service providers to deliver an intervention or program to a population in need. Upon the achievement of a set of agreed-upon results, the investors are then repaid by an outcome funder. With a SIB, this outcome funder is the government. With a DIB, outcomes are financed by a third-party organisation, such as a foundation or donor.
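
As a simplified illustration of those cash flows, the sketch below models a single repayment driven by verified outcomes and capped at an agreed maximum return. The figures and the single-metric structure are assumptions for illustration; real impact bonds typically use more complex payment schedules.

```python
# A minimal sketch of impact bond cash flows under simplified assumptions
# (one outcome metric, one repayment at the end). All figures are illustrative.

def investor_repayment(upfront_capital: float,
                       outcomes_achieved: int,
                       price_per_outcome: float,
                       max_return_rate: float) -> float:
    """The outcome funder (government for a SIB, a foundation or donor for a DIB)
    repays investors according to verified outcomes, capped at an agreed maximum return."""
    outcome_payment = outcomes_achieved * price_per_outcome
    repayment_cap = upfront_capital * (1 + max_return_rate)
    return min(outcome_payment, repayment_cap)

# Outcomes fully achieved: investors recover their principal plus a capped return.
print(investor_repayment(1_000_000, outcomes_achieved=500,
                         price_per_outcome=2_400, max_return_rate=0.08))  # 1,080,000

# Outcomes fall well short: investors bear the loss and the outcome funder pays less.
print(investor_repayment(1_000_000, outcomes_achieved=200,
                         price_per_outcome=2_400, max_return_rate=0.08))  # 480,000
```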

Since the launch of the first SIB in the UK in 2010, the impact bond market has grown exponentially. Last year, some 32 new contracts were signed. As of January 2018, there were 108 contracted impact bonds (103 of them SIBs, 5 of them DIBs) across 25 countries, along with many more in design. All but one of the 103 SIBs were in high-income countries: last year marked the contracting of the first SIB in a low- or middle-income country, the Workforce Development SIB in Colombia.

Most (42) SIBs are in the UK, the country that pioneered the impact bond model with the Peterborough SIB in 2010. The results of that SIB – aimed at rehabilitating ex-offenders – were released last year: reoffending of short-sentenced offenders dropped by 9 percent and the investors were repaid in full. The US has also established itself as a player in the field, coming in second with 19 impact bonds.

The five DIBs include Educate Girls in India, one for coffee and cocoa production in Peru, as well as one for physical rehabilitation across three countries in West Africa, a poverty graduation program in Kenya and Uganda, and the recently launched Utkrisht impact bond for Maternal and Newborn Health in Rajasthan, India.

Characteristics of SIBs
Most SIBs contracted globally are in the employment field, followed by the social welfare sector, which includes programmes to reduce rough sleeping among homeless people or to reduce out-of-home placements, as in the case of Courtney and her children. Other areas for SIBs are health, criminal justice, education, the environment and agriculture.

Probably about 30 or 40 impact bonds are in design in high-income countries, while more than 20 are being designed in low- and middle-income countries. We see some differences when comparing high-income with low- and middle-income countries: the majority of impact bonds in the latter are in the health sector, followed by employment and then agriculture.

What do these impact bonds look like in terms of size? The smallest one, in terms of beneficiaries, reaches 22 individuals – that’s in Canada. The largest one reaches 650,000 individuals in Washington DC; it is an environmental impact bond focusing on developing infrastructure. It is perhaps a little unfair to compare that one in terms of size with the rest because it’s a city-wide programme. The next largest in size is the Maternal and Newborn Health DIB in Rajasthan, India, with 600,000 potential beneficiaries. However, the median impact bond reaches about 565 individuals, so they are quite small. Capital commitments range from $80,000 to $25 million – again, that $25 million is the one in DC. The average is about $4 million and the total upfront capital invested across the impact bonds is over $300 million.

Who is benefiting?
The bonds mostly target marginalised populations, including women affected by violence, young migrants and single mothers, with a few targeting ex-convicts, vulnerable young people, people diagnosed with mental health conditions, refugees and individuals with physical disabilities.

What do we know, eight years in, about their performance? There have been some outcomes achieved and payments made, such as in the case of the Peterborough SIB, mentioned earlier. In an Australian SIB, 203 children were reunited with their families and the return to investors was nearly 16.5 per cent over the four years of the scheme.

Shifts in public programme behaviour
However, perhaps the more interesting observation is a real shift among governments and service providers towards thinking about outcomes as opposed to paying for inputs. Impact bonds are also driving a focus on performance management: service providers are introducing or improving performance management systems in their programmes.

Impact bonds are incentivising collaboration, not only between the public and private sectors but also across government, vertically and horizontally. They are building a culture of evaluation because outcomes must be measured and monitored. Most impact bonds so far have been focused on investment in preventive interventions. There has also been some reduction in risk for governments, which have not paid for outcomes that weren’t achieved.

What are we not seeing so far? It had been hoped that impact bonds would lead to an influx of additional private funding. However, given that the government or outcome funder ultimately repays the investors, it is not really additional money for a particular social service. Impact bonds have also yet to achieve change at scale: the majority are reaching very few individuals and are fairly small in terms of investment.

Many thought that impact bonds would focus on experimental interventions. So far, we haven’t seen that: investors have been unwilling to take that risk. We’re seeing SIBs used in the middle phase of development of interventions, rather than at the “seed” or “at scale” ends of the process. However, the flexibility that service providers are allowed in terms of their service delivery has the potential to encourage innovation. It’s also probably too soon to say whether impact bonds can achieve sustainable outcomes in the long run through the systematic change that’s happening, but it does appear that the partners currently involved have indeed shifted their thinking.

Challenges of impact bonds
What are the challenges? This is a new form of government contracting, a new way to do business. Co-ordination of all of the stakeholders is difficult: sometimes they don’t understand each other well; just getting all those people around the table can be really difficult. There can be some political constraints and legal barriers.

Key questions remain. Can impact bonds be used at scale? Are they more effective than input-based financing or traditional payment-by-results? Do the actors in social service provision have the capacity to adapt to the demands of financing tied to results? Can they manage the rigorous focus on performance management that this is likely to entail?

Next steps
It is worth considering what would be needed to expand the use of impact bonds or, more generally, payment by results. The evidence base needs to grow and there is a need to collect more information on services that work and on their costs. Also, potential outcome funders and investors need to be educated about not only the potential of an impact bond approach but also its challenges. There needs to be supporting legislation and regulation to facilitate paying for outcomes at both country and local level, as well as within organisations.

To achieve scale, countries could establish outcome funds for particular sectors. The UK government has launched seven outcomes funds and efforts are underway in the US to develop outcomes rate cards. There are several outcome funds being developed to tackle tough social issues in the developing world as well. These would allow the outcome funder to set prices for desired social outcomes and then to contract with service providers to deliver interventions to achieve those results. Global investment funds would also benefit from contributing to this new financing mode.

In the US, $800 billion is spent annually on social services. Only one per cent of that spending is evaluated for effectiveness. In the UK, £220bn was spent on social and health services (2015/2016), yet we know very little about the effectiveness of that expenditure. Thus we need more empirical research which asks: “What do impact bonds achieve, compared with input-based financing?” It is also important to know how well impact bonds perform compared with traditional results-based financing. These are both hard questions to answer rigorously and will take some time.

Impact bonds and global problems
The social and environmental problems that we face at a global level are enormous. It’s estimated that $1.4 trillion will be needed annually to achieve the Sustainable Development Goals by 2030. There is little evidence that such complex problems can be solved by continuing the same old failed approaches.

Investing in preventive measures can avoid higher costs down the road and make the public and civil society sectors more efficient. By paying for outputs and outcomes rather than paying for inputs that have unknown outcomes, spending should be more effective.

Impact bonds may not be the right solution to every problem. However, they do represent a long-overdue paradigm shift. They’re a means to an end, an opportunity to think about, and hopefully produce, systematic change. At the very least, they may be the stepping stone to establishing the monitoring and evaluation performance standards and output planning that can ensure every individual receives the services that they need to live safe, healthy and productive lives.

Dr Emily Gustafsson-Wright is a Fellow in the Global Economy and Development Program at the Center for Universal Education, Brookings Institution, Washington DC.

If you wish to receive our weekly blog on SIBs, please email Paula.Fry@lshtm.ac.uk and we will add you to our subscriber list.