Photo by U.S. Navy Photo/Joshua Adam Nuzzo on Wikimedia.

A late arrival

18 March 2014

Academia needs to find new ways to keep up with the pace of demand for evidence from policymakers, writes Belinda Lawton.

In the age-old race between the slow and steady, wise old tortoise of academia and the quick and nimble policymaking hare, the tortoise was supposed to come out on top. After all, evidence-based policymaking needs deep consideration; raw data must be mulled over, crunched and weighed against confounding factors before a conclusion can be reached.

But today, the policymaking hares are crossing the finishing line of implementation before academia has even got off the starting blocks. Research is a country mile behind the policy agenda, rendering it a tool for the ‘told-you-so’ chorus after the fact.

The 24-hour news cycle has changed the landscape, not least of all the politics of policymaking. Instant gratification is the new norm. Whatever the problem, we want a solution, and we want it now. If academia does not start adapting to this new reality, it runs the risk of going the way of hardcopy newspapers – on the pulp heap.

While that might leave the academic world shuddering, there is an even greater challenge ahead of the policymaking community.

As the evidence trickles through, policymakers are busily making decisions that affect our collective future. The pressure on them is significant. They have to make those decisions with less time for critical reflection, with less data, more raw emotion, and the baying of the loudest and best organised groups in the community. These pressures end up skewing the balance towards populist policy.

The other great risk to policymakers is that in making decisions without well-thought-out and reviewed research, they get caught up in misunderstandings, misprints and misrepresentations. Or worse: they simply do not know there is another policy path open to them beyond the one suggested by the incomplete information at hand.

My research is focused on not-for-profit, non-government hospitals and large clinics in developing countries. It should be a topic that has inspired a thousand PhDs. It has got all the elements: civic engagement, community responsibility, direct meeting of community needs, not to mention that it’s a sector that saves lives.

In terms of fitting into the aid agenda, it should be front and centre as financial imperatives bring the notion of small government back into vogue. But to position it there takes evidence. And of that, there is precious little.

The hospitals and clinics are not new. There is a long history of operation of both faith-inspired institutions like mission hospitals and secular hospitals such as the Addis Ababa Fistula Hospital. It is a complex and diverse sector, which includes institutions funded through the spectrum of avenues from corporate social responsibility ventures to rattling tins on street corners.

And yet academia as a whole has not seized the opportunity to provide solid data on an area that could strengthen developing countries’ health systems and cause a strategic re-think on whether aid funds are best channelled through governments for optimum impact.

Indeed, as Jill Olivier and Quentin Wodon demonstrated in their study ‘Playing Broken Telephone’, published in the journal Development in Practice in 2012, policymakers have been left to rely on faulty anecdotal data about the proportion of the African health system that is provided by faith-inspired institutions. As Olivier and Wodon note, even the then World Bank President James Wolfensohn repeated the common view that the church does half the work in healthcare and education in Africa.

Yet the research simply does not exist to prove or disprove this for Africa as a continent. There are a few smaller scale surveys looking at individual countries, but these do not use the same parameters so they cannot be validly combined.

So how can policymakers weigh the overarching policy questions, including whether not-for-profit, non-government healthcare facilities should be considered for funding at all? What if their country’s government does not include these facilities in its overall strategic priorities for aid? And do they offer enough value – in the proportion of people they serve, the proportion of the poorest they reach, or simply the raw number of lives saved – to justify aid policymakers considering their applications for funds?

The answers are down to the individual policymaker’s own value judgements and whatever anecdotal evidence they can round up because academia has failed them on these questions by not providing impartial, well-researched data.

Internal reports, news stories and the like often do not include the details of how they got their information. How much is spin? While I am a big proponent of grey literature, when the vast majority of that literature on some issues comes from vested interests, it falls to policymakers’ own experiences, knowledge and cultural context to judge its validity – which should be far too much subjectivity for anyone’s liking.

A significant number of not-for-profit health facilities do not have the deep pockets or the political clout to lobby effectively for their cause. They are competing with other, better-established causes in a noisy, crowded world where everyone says they need more money. They may be the best providers of healthcare; they may be the best placed to reach the most vulnerable communities and to achieve the strategic objectives of aid agencies; and they may be the best institutions to ensure money is not siphoned off for other purposes. But we do not know, because there is no impartial research to support or disprove any of these statements.

So what should policymakers do? My advice is to start screaming, loudly, for data where it does not exist. Call out academia on its shortcomings, including its reluctance to take on fast-moving issues and research them before the dust has settled. Reject the traditional reluctance of journals and academics to publish research that either lacks a firm result or shows a hypothesis did not work in the way the researcher anticipated. Demand that these studies be made available via open access. They can be invaluable, not least because they show you what has already been tried. Sometimes no result is exactly what you need to know when you are shaping sound policy.

Acknowledge good research where it exists and when it has helped frame policy. Be explicit about why it was useful so academics know what works. Do not assume that academics have policy front-of-mind when they are conducting research; but also do not assume they would not be willing to accommodate your information needs. Researchers may have the answers in their raw data but if they do not know there is an interest in a specific aspect it may well end up on the cutting room floor when it is being shaped into a pithy journal article or op-ed piece.

And where all else fails, do as I did and stop trying to push for change without evidence. Sign up for a PhD instead and do the hard yards to provide the impartial research data that is needed for great policy decisions. Test your hypothesis and see what happens. You never know, you might just change the world for the better through your work.

At the very least you will come out with a strengthened appreciation of the value and importance of academic research and its role in policy-making, and perhaps even help the wise, old tortoise of academia to get its nose back in front in the policy race.

Read more about development policy in Asia and the Pacific at Policy Forum. A late arrival was published at Policy Forum, the website of the Asia and the Pacific Policy Society.

This article was also featured in the Autumn 2014 issue of Advance, Crawford School’s quarterly public policy magazine.
