In an attempt to move away from unhelpfully broad debates about whether aid works or not, scholars have increasingly drawn on aid project performance datasets to study what makes some projects succeed and others fail. The data used in these studies typically come from individual project appraisals produced by aid agencies. In this paper, we test whether outsourcing the validation of project appraisals changes the reported effectiveness of aid projects.
We do this using a dataset of Australian aid appraisals and a natural experiment that occurred when an external contractor was tasked with verifying ratings, a job previously done internally. Using difference-in-differences, contrasting assessments of ongoing projects, which were not sent to the contractor, with assessments of completed projects, which were, we show that outsourcing led to a dramatic fall in how successful projects were deemed to be. We also show that the change probably led to more accurate recording of COVID-19’s impact on Australian aid, as well as more accurate assessments of the quality of Australia’s aid to Papua New Guinea, its largest aid partner. Throughout, we take care to demonstrate that our findings are robust to the types of methodological issues that can afflict difference-in-differences studies.
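The comparison described above is the classic two-group, two-period difference-in-differences design. A minimal sketch of that logic, using entirely hypothetical numbers (not figures from the paper), where "treated" stands for completed-project appraisals sent to the external contractor and "control" for ongoing-project appraisals that stayed in-house:

```python
# Minimal 2x2 difference-in-differences sketch.
# All numbers are hypothetical illustrations, not results from the study.

def did_estimate(treated_pre, treated_post, control_pre, control_post):
    """Change in the treated group's mean outcome minus the change in the
    control group's mean outcome over the same period."""
    return (treated_post - treated_pre) - (control_post - control_pre)

# Hypothetical mean success ratings before and after outsourcing began.
effect = did_estimate(
    treated_pre=4.8, treated_post=4.1,   # completed projects: ratings fall
    control_pre=4.7, control_post=4.6,   # ongoing projects: roughly stable
)
print(effect)  # a negative estimate means outsourcing lowered reported success
```

In practice this estimate is usually obtained by regressing the outcome on group, period, and their interaction, which also yields standard errors; the simple subtraction above just makes the identification logic explicit.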
A Zoom link will be provided once you register for this event.