Partnerships for Global Health: Putting theory into practice

For our latest DEPTH blog we asked LSHTM researcher Kimberley Popple to share her thoughts on NGO-academic collaborations as someone who has recently moved from the former to the latter. Thank you for your insights, Kimberley – lots of food for thought!

Image: Dan Dimmock for Unsplash

Evaluation, Evaluation, Evaluation

Back in January this year I made the move from practitioner in the NGO world to researcher in the academic sector. I wanted to combine the skills I had developed in public health research with my knowledge and experience of programme implementation in the field. It seemed to me that there were obvious synergies and opportunities for practitioners and academics to work together to improve global health. Certainly, from my own experience, the projects I worked on could have benefited from drawing on people with specialist skillsets in data collection and analysis, and with the time to conduct literature reviews, produce evidence maps, and test the change pathways that many of the programmes were built upon.

Before moving into academia, I worked on a large portfolio of grants in Sierra Leone as part of the Ebola response. Most of the data we collected was used solely for routine monitoring and evaluation of interventions at the project level. Its purpose was to track progress against set indicators and to report on spending to funders. As a result, collecting data that could be easily quantified was prioritised, and quantitative data was assumed by funders to show greater impact than qualitative data. Further, qualitative data tended to fall within the remit of the accountability teams – it was used and relied upon, but not as an indicator of impact. In Sierra Leone, the success of an intervention was often measured by a high number of medical consultations or a large number of attendees at a meeting, rather than by data on the quality of services or patient satisfaction. I remember one gender-based violence (GBV) project in Freetown which was categorised by the funder as “underperforming” because the target number of survivors had not been reached. The fact that the women who had been reached had received high-quality support across the GBV spectrum of services was seemingly less valued.

In Uganda, I worked on a maternal health project which introduced a client-exit survey for women to complete at the hospital after receiving maternity care. However, the survey was administered by NGO staff who were working with these marginalised populations, and in close proximity to the medical staff who had provided the care. There was little recognition of the power imbalance between interviewer and interviewee, or of the social desirability bias that might arise from the women’s fear of negative repercussions from medical staff.

Evaluations were often seen as a tick-box exercise for donors, and their design was fairly rudimentary. By the time the evaluation report was written, the programme had already moved on to the next phase to align with strict funding cycles. This left little room to reflect on lessons learned and engage in a process of iterative programme design. A recent systematic review highlighted the lack of evaluations of epidemic responses in humanitarian and low-income settings: only around one tenth of responses were evaluated, with large gaps in the quality, content and coverage of those evaluations, limiting the ability to improve future responses.


Is the landscape changing?

Over recent years, the international development sector has intensified its focus on evidence-based programming and evaluation. Many NGOs have increased their research capacity with dedicated departments and research staff (for example, the Airbel Impact Lab at the International Rescue Committee and the Response Innovation Lab at Save the Children), giving them the expertise and space to test new approaches to implementation and to ensure programming is based on the latest evidence of what works.

New funding streams have emerged for research in the humanitarian field, such as Elrha’s R2HC programme, and there is donor pressure to evidence learning and use data for decision-making. Donors like the UK government’s Foreign, Commonwealth & Development Office (FCDO, formerly DFID) have developed more in-depth guidance on how to develop and use evaluation frameworks to measure impact and ensure accountability, with requests to include qualitative indicators in logframes.

What can academia bring to the table?

So, is there still a role for academics to play in supporting the work of NGOs? I believe there is, particularly in the evaluation of complex interventions. Universities train public health professionals who often go on to work in the NGO sector. Expert knowledge of process and outcome evaluations can be drawn upon to test the change pathways set out in Theories of Change. Systematic reviews can be carried out by academics with fewer time and funding constraints, reducing the need for implementers to reinvent the wheel each time they need the latest evidence. As academics, we can add our voice to campaigns as advocates for change. And the humanitarian health sector can harness specific skill sets in conducting clinical trials and in disease modelling. My sense is that, as both sectors continue to develop and evolve, it will be important to keep reflecting on the value of academic-NGO partnerships for global health.

Image: you-x-ventures for Unsplash