By Gill Walt
A recently published paper by de Jongh et al. reports on a study commissioned by the Global Fund to Fight AIDS, Tuberculosis and Malaria. It offers a robust defence of donors' substantial financial investment in these three diseases, demonstrating that their funds have had a measurable impact on health.
The authors used a systematic review approach to collecting and analysing the data, and ended up with 13 individual papers (reporting on 11 studies) that were sufficiently detailed and rigorous to allow them to make a confident link between external funding and health impact. The great majority of the more than 1,600 papers screened did not meet the selection criteria, so the findings cannot be generalized beyond these 11 studies.
Three points came to mind in reading the paper:
First, one of the basic tenets of doing a systematic review is to identify a question that is relevant and interesting. The question often depends on who is funding the review. In this case, the paper was commissioned by an important donor, and the information produced was useful (and reassuring) for a group of donor policymakers, because it justified expenditure on HIV, TB and malaria. It also provided some valid recommendations for the design of donor-funded studies. However, I did not find it a very exciting question. For example, I would like to know how the external funds were invested. Was it through vertical or integrated programmes? Did one type of programme have more impact than another, or did it not matter how interventions were delivered? How far was the health system strengthened through the extra funds? A systematic review of one of those questions would potentially offer more useful information for understanding what works and what doesn’t.
Second, the authors show clearly how complex it is to undertake a study of this sort. They succeed magnificently – this is a really useful paper for those interested in doing systematic reviews – because it demonstrates the process and the difficulties along the way. But it would be interesting to know how long it took, what resources it entailed, and the inputs of each of the seven authors. Sometimes there is too much obeisance to the method. That is why it is important to identify the question being asked: the amount of time and energy that goes into the search strategy, selection of eligible studies, data extraction and quality appraisal – let alone constructing a ‘causal chain’ – is mammoth, and might be better spent asking different questions using different methods.
Third, there is the paper’s central point: it is surprising how few studies are designed to a rigorous scientific standard. When donors put money into programmes, those programmes should be subject to methodological scrutiny at the outset. Funding should be set aside for designing monitoring and evaluation processes that identify, up front, possible causal pathways and the data most likely to capture or explain links between intervention and health impact.