There are six crucial steps to success in delivering collaborative public policy, writes Mark Matthews.
If you’ve ever wondered why tackling major policy challenges is such a slow and laborious process prone to debilitating compromises and political fixes, it’s enlightening to ask the public service and academics their views of each other.
Many in the public service will point to how slowly academics deliver evidence, or to a lack of understanding of the political realities they must contend with in their decision-making. Academics, meanwhile, complain about a culture of ‘quick fixes’ or about their work being ignored by policymakers.
Of course, the system can produce spectacular and successful policy, but more often than not it fails to deliver the ideal outcome.
This was one of the reasons behind the establishment of the HC Coombs Policy Forum, an experiment in building capacity in collaborative public policy: activities that bring together the distinctively different but complementary capabilities that exist in government and in academia, and that jointly deploy government and academic resources to address complex and intractable policy challenges.
Delivering collaborative public policy is different from what traditional policy ‘think tanks’ do.
Think tanks work best when they are independent from government and can offer constructive criticisms and creative suggestions because they are insulated from internal politics and structural constraints.
In contrast, collaborative public policy sets out to be interdependent rather than independent. It focuses on better exploiting the synergies between the enormous breadth and depth of expertise in universities and government's grounding in the political realities of governing. Universities that build capacity in collaborative public policy are able to provide a tangible demonstration of public value via policy impact. How is this achieved? In the experience of the HC Coombs Policy Forum there are six keys to success.
Collaborative public policy requires an appetite for risk: the ability to deliver exploratory and experimental work useful to policy formulation. It is hard for government and (increasingly) for academia to take risks. Hence, a unit with that distinctive mission is well placed to make a difference.
For example, in 2013 a partnership between the Forum and a state government in Australia completed work that developed a faster and more cost-effective methodology for evaluating government spending. This method is based on the use of structured hypothesis testing—as used by the intelligence community. These advances are now attracting attention internationally, including from the OECD.
Achieving those advances required risk-taking; when the opportunity emerged, the state government agreed to a contract variation that allowed an experimental pilot project to take place. Their reward for this risk-taking is sustained long-term cost savings in delivering internal evaluation activities.
A clear focus on maximising the return on investment for governments is critical.
Avoid approaching the return on investment symmetrically by trying to maximise the returns for both government and academia: what constitutes success is not necessarily shared.
The incentives in academia tend to focus attention on peer-reviewed excellence, teaching and income generation (to fund research). That stance is unlikely to give governments what they seek directly.
Maximising the return on investment for government avoids conflicting strategic priorities, and gives a clearer demonstration of the value of academic expertise than if attempts are made to maximise the immediate returns for both government and academia.
An explicit focus on partnerships provides an effective means of building the trust and reciprocity that is critical to collaborative public policy. Bilateral arrangements involving government funding for specific purposes can provide a particularly effective basis for building these partnerships because handling investment risks involves both parties working together to make the arrangement work.
The interface between government and academia is most effective when the mix of specialists and generalists found on the government side is matched by specialists and people with generalist skills on the academic side. Generalists in academia are important because they can perform ‘translation’ in both directions while also mitigating the risk that the specific interests of government fail to match those of the academic research base.
It is very useful for government officials to recognise when available internal information, research capacity and expertise is limited in a particular policy area.
This awareness gives government a clear sense of the ‘value add’ it can expect from specific government-academic collaboration projects. The partnership between the Tasmanian Government and Crawford School of Public Policy that produced the Tasmanian Government’s Tasmania and the Asian Century white paper in 2013 illustrates this.
This was the Tasmanian Government’s first white paper in over a decade and, by their own admission, would not have been possible without an effective partnership with academia, because they did not have the available information, research capacity and expertise to inform policy development.
An exclusive reliance on the concept of ‘evidence-based policy-making’, contrary to what many people assume, is not necessarily the most compelling means of framing the value proposition for collaborative public policy. A reliance on empirical evidence alone can limit governments’ ability to make decisions quickly when there are substantive uncertainties and information limitations with consequent risks to effective policy-making.
We need to move beyond the limitations of evidence-based policy-making by articulating how ‘intelligence-based policy-making’, based on structured hypothesis testing applied to patchy and ambiguous information, and to weak signals of potential future occurrences, can operate at a more general level in government.
As in science in general, the all-important ingredient of creativity in the policy formulation process is achieved by suggesting hypotheses that can be tested empirically. It is far easier to focus attention on the importance of creativity in public policy when there is a more explicit emphasis on generating hypotheses and on selecting between competing hypotheses on the basis of available evidence (even if very limited).
The intelligence-based approach encompasses evidence-based policy-making—but is not limited by the constraints of the latter. Formal hypothesis testing methods may be far better suited to coping with the need to make decisions quickly when there are uncertainties, risks and information limitations.
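The structured hypothesis testing described above can be sketched in miniature. The hypotheses, likelihoods and figures below are invented for illustration (they are not drawn from the Forum's methodology): each fragment of patchy evidence shifts belief between competing explanations via Bayes' rule, so an analyst can select the best-supported hypothesis even when the evidence is very limited.

```python
# Toy sketch of structured hypothesis testing: Bayesian updating over
# competing hypotheses using sparse, ambiguous evidence. All hypotheses
# and numbers here are hypothetical, chosen purely for illustration.

def update(priors, likelihoods):
    """Bayes' rule: posterior = prior * likelihood, renormalised to sum to 1."""
    unnorm = {h: priors[h] * likelihoods[h] for h in priors}
    total = sum(unnorm.values())
    return {h: p / total for h, p in unnorm.items()}

# Three competing explanations for, say, a rise in programme costs,
# starting from equal prior belief in each.
beliefs = {"demand growth": 1/3, "cost inflation": 1/3, "delivery inefficiency": 1/3}

# Each piece of (patchy) evidence is expressed as how likely we would be
# to observe it under each hypothesis.
evidence = [
    {"demand growth": 0.7, "cost inflation": 0.4, "delivery inefficiency": 0.2},
    {"demand growth": 0.3, "cost inflation": 0.6, "delivery inefficiency": 0.5},
]

for likelihoods in evidence:
    beliefs = update(beliefs, likelihoods)

best = max(beliefs, key=beliefs.get)
print(f"Best-supported hypothesis: {best} ({beliefs[best]:.2f})")
```

The point of the sketch is that no single piece of evidence is conclusive; it is the explicit comparison between hypotheses, updated as fragments arrive, that supports a timely decision.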
Universities can contribute to intelligence-based policy-making by making methodological advances in how to deal with the ever-present challenge of making decisions under uncertainty. This results in a broader and richer landscape for useful government-academic collaboration than is implied by the narrower notion of evidence-based policy-making.
The bottom line is that the greatest potential for universities to demonstrate ‘impact’ in public policy lies in developing capacity in collaborative public policy by paying attention to these six key success factors. This results in a stronger emphasis on advancing the technical methods used in public policy than is commonplace in most policy think tanks. It also takes a serious step towards bridging the gap between academic evidence and policy implementation success.
This article was also featured in the Spring 2014 issue of Advance, Crawford School’s quarterly public policy magazine.