Quentin Grafton is Director of the Centre for Water Economics, Environment and Policy (CWEEP) at Crawford School of Public Policy. In April 2010 he was appointed the Chairholder, the UNESCO Chair in Water Economics and Transboundary Water Governance.
Australia’s modellers need to be transparent about their key assumptions, otherwise COVID-19 models are at risk of being used as political cover, instead of for pandemic management, Quentin Grafton, Jason Thompson, Tom Kompas, Tony Blakely, and Emma McBryde write.
Since the start of the COVID-19 pandemic, huge attention has been paid to COVID-19 modelling results. In early 2020, for instance, models were used to project the possible number of COVID-19 cases, hospitalisations, and deaths, along with its economic impacts.
This was not easy given the many uncertainties, especially early in the pandemic, around key parameters – such as the basic reproduction number (R0) and infection fatality rates – and given inadequate data, such as the true, population-wide infection rate.
COVID-19 models can provide, and have provided, invaluable insights into important questions during the pandemic, like what might be the public health and economic consequences of delayed implementation of mandated public health measures? What are the possible outcomes of public health measures of different duration and stringency? And would partial public health measures still be required, even after mass vaccination, to control future COVID-19 outbreaks?
Models were used not solely for prediction, but for explanation, and to help decision-makers to consider the possible trade-offs and consequences of alternative decisions. If model results can assist the prime minister and premiers to make better decisions, as they have done, then they remain useful and fit for purpose.
In March 2020, for instance, a large-scale agent-based model of COVID-19 transmission identified the minimal level of social distancing compliance (at least 70 per cent), required to control the first wave in Australia. At the time, government action and social engagement converged in achieving the required compliance. Consequently, the model accurately predicted the epidemic peak and the total number of cases.
A year later, this model, re-calibrated for the Delta variant, produced a less optimistic conclusion – the level of social distancing attained in Sydney in July 2021 was inadequate for outbreak control. In this instance, the model projected several possible outcomes: a ‘what-if’ outbreak suppression scenario relying on more stringent restrictions (which did not eventuate), and an alternative scenario, underpinned by a rapid vaccination rollout.
In our view, the ultimate value of COVID-19 models is encapsulated by three key questions.
First, do they work for their intended purpose? Second, are they transparent and explainable? And third, are they useful for policy and decision-makers?
Ultimately though, COVID-19 models can only be judged to be fit for purpose if they are transparent – whether it be about the data they have used, how their models have been calibrated to real-world data, the mechanics of their models, the key estimates underpinning the model (such as the R0) or other key assumptions of the modelling scenarios, such as the stringency of lockdown, the effectiveness of testing, tracing, isolation and quarantine, and vaccination rates.
In the absence of transparency, irrespective of the expertise or reputation of the modellers, model results cannot be accepted on trust. This is especially true when model results are used to justify key policy decisions about far-reaching public health measures such as population-wide stay-at-home orders or projecting the needs of intensive care unit capacity.
By this measure, Australia has performed relatively poorly when compared to its peers, such as the United Kingdom and New Zealand, where modelling assumptions and results used by decision-makers have been made publicly accessible by their governments.
That said, the joint release of Victoria’s Roadmap with modelling results on 19 September – with substantial detail provided to judge the model and its results – and the release of source code by the Doherty Institute on 18 September are good examples of what we can, and should, do.
Practices whereby governments seek confidentiality clauses around model descriptions and key assumptions, and whereby modellers comply and agree to such clauses, inhibit transparency and diminish trust in COVID-19 models.
A lack of transparency damages credibility and harms public confidence in decision-making purported to be based on model results. Both policymakers and modellers should ensure that models used for public decision-making are readily available for scrutiny, in real time, when decisions are announced.
Without transparency, model results may end up being used as a shield for decisions that may have been made regardless of the model results, or used to justify an agenda or decision that the model results may not support.
The crux of this problem lies not so much in modelling, or ‘the science’, but rather with the application of modelling or science to deliver a political outcome which may be, to varying degrees, independent of model results.
The response by some British scientists to what some have perceived to be ‘captured science’ in relation to the pandemic has been to form an Independent Scientific Advisory Group of Experts (SAGE).
Their stated belief is that “…openness and transparency leads to better understanding and better decision making.” It also means that opaqueness and misinformation contribute to misunderstanding and poorer decision-making.
Two related initiatives have arisen from Australian academia recently. First is a coming-together of several university modelling teams, several members of which have penned this piece, to establish the Australian COVID-19 Modelling Initiative (AUSCMI).
The aim of AUSCMI is “…to make COVID-19 modelling more transparent and accessible for the public and policymakers”. A second response has been the establishment of Independent OzSAGE, an Australian counterpart to the United Kingdom’s Independent SAGE.
Holding decision-makers accountable for poor decisions, misrepresentation of the facts, or false claims is both a privilege and a responsibility that all Australians bear in a vibrant liberal democracy, but COVID-19 and broader public health modellers have a special responsibility. When engaged by decision-makers they must, at the very least, make their model structure, key assumptions, and parameter values available for peer review.
This is required because models that affect decision-making on important public health measures, such as how people can move, interact, behave, and the freedoms they are granted (or denied) for the public good, must be scrutinised for errors in good faith, but also interrogated for possible misuse.
All Australians who want better public policy must seek openness from and scrutiny of government, including in relation to COVID-19 modelling, and support both scientific and public integrity. Without it, the decisions that saved tens of thousands of Australian lives in 2020 risk being undermined as the country transitions its national response to dealing with COVID-19.