By Susan Igras (Georgetown University’s Institute for Reproductive Health, Center for Child and Human Development), Marina Plesons, and Venkatraman Chandra-Mouli (both of the UNDP-UNFPA-UNICEF-WHO-World Bank Special Programme of Research, Development and Research Training in Human Reproduction (HRP) and the World Health Organization)
Well-designed program evaluation can generate strong evidence that informs policies and future program planning. Yet many of the projects and programs that aim to improve adolescent sexual and reproductive health (ASRH) in low- and middle-income countries are implemented without well-thought-out plans for evaluation. The same is true of other health and social development programming. In the absence of evaluation evidence, the lessons that programs learn while encountering and addressing policy and programmatic challenges are rarely extracted and placed in the public arena. The limited number of evaluations that use pre/post-intervention designs and publish their findings then play an overly dominant role in guiding policy and program development thinking.
Post-project evaluation, designed and carried out after a project has ended, offers a way to generate learning about what works (and what does not). It can complement prospective studies of new or follow-on projects by bringing information and evidence on innovative interventions or approaches that appeared to work well into the public arena.
Guidance from the World Health Organization (WHO)
As outlined in the 2019 WHO Guidance, The project has ended, but we can still learn from it: Practical Guidance for Post-Project Evaluation, evaluators must navigate a range of contextual and methodological challenges when conducting post-project evaluations. Project staff with historical knowledge may have moved on to new projects and no longer be available to provide information and help interpret findings. Project operations have likely ended, so evaluators cannot observe them, and participants may no longer be easy to contact. Establishing a comparison group may be feasible but requires careful forethought with the help of local stakeholders. The ageing of adolescents who were program participants complicates post-project impact assessment; careful consideration is needed to account for maturation and the consequent changes in adolescents’ cognition, capacities, and lived experiences in the time since the project ended. Documentation may be spotty, lacking a theory of change, baseline data, or information on context and other external factors that may have influenced implementation.
What’s an evaluator to do? Our recent Health Policy and Planning article provides an overview of the WHO Guidance, outlining key contextual and methodological challenges in conducting post-project evaluations and illustrative solutions for responding to them. The Guidance is full of case studies and field-tested insights from those who have undertaken such evaluations, addressing a technical gap in this area. Those who engage in post-project evaluation need to anticipate such challenges from the beginning and be ready, equipped, and willing to navigate them. The goal: evaluators who feel ready and willing to conduct the most rigorous evaluation possible given the available resources, project data, and documentation.
Why are so few post-project evaluations conducted on promising ASRH and other projects? We believe the notable lack of post-project evaluations in the field is partly structural: funding agencies that support health and development programming do not typically request this type of evaluation, given their project funding cycles. Even when such evaluations have been carried out, they are often hard to access, so the evaluation experiences and strategies they contain are not distilled for use elsewhere. Beyond structural reasons, there also appears to be a reluctance to conduct post-project evaluations because they often do not lend themselves to the most rigorous experimental designs; as a result, many methodological purists and funding agencies believe they are not worth the investment. Post-project evaluation thus represents one facet of the debate about what counts as credible evidence.
We believe it is time to revisit post-project evaluation as a valuable and worthwhile option. The review conducted to inform the WHO Guidance indicates that such evaluation is possible, can be rigorous, and is useful when more traditional evaluations have not been planned or carried out. Post-project evaluation should be done more frequently. The Guidance proposes a way forward:
- Create spaces and funding for such evaluations. Researchers and evaluators, funding agencies, and project implementers should consider and act on the utility of post-project evaluations.
- Create demand for post-project evaluations. In the absence of planned evaluation, funding agencies and governments should request post-project evaluations when a particular intervention or approach shows promise but lacks sufficient documentation.
- Build and facilitate access to post-project evaluation reports, along with advice on potential solutions when challenges are encountered. Most evaluation reporting remains in the grey literature; posting evaluation reports on existing government/organisational websites and evaluation clearinghouses would greatly aid efforts to compile experiences and lessons learned.