A report released today by the Centre for Independent Studies (CIS) has drawn attention to the lack of quality evaluations being conducted on Indigenous programs.
The report identified 1082 Indigenous-specific programs delivered by government agencies, Indigenous organisations, not-for-profit NGOs and for-profit contractors. It found that 92% of these have never been evaluated to determine whether they are achieving their objectives.
While it oversteps in some regards, this report raises a very important point: we don’t really know what works if we don’t check. That’s a lesson that applies to all areas of public policy spending, not just Indigenous affairs.
A bit of perspective
The report asserts:
Indigenous-specific funding is being wasted on programs that do not achieve results because they are not subject to rigorous evaluation.
This is a contradiction: without rigorous evaluation, how could we know whether the money is wasted? The more accurate point is that we mostly don't know whether those programs are improving outcomes. But the lack of evaluation is indeed a major problem, and we can do better.
The report addresses only Indigenous programs, but it's important to note that the issues it raises are not confined to them. I was not entirely surprised by these findings, because I have seen similar patterns in other sectors, such as education spending.
A recent paper published by the US National Bureau of Economic Research reviewed the evidence from randomised evaluations of the impact of education programs (not confined to Indigenous programs) in developed countries. Of the 196 experiments it identified, only two were conducted in Australia.
If we were to withdraw funding from all programs conducted by Australian governments whose impact has not been verified through rigorous evaluation, then I don’t think we’d have many programs left.
That said, rigorous evaluation may be of particular importance for Indigenous programs in Australia. In other areas (take education, or the design of the income support system), it is perhaps easier to piggyback on rigorous evaluations conducted in other countries, taking evidence "off the shelf" from overseas.
The CIS report is correct to draw attention to the paucity of rigorous evaluations. It feels good to spend money on Indigenous programs, just as it feels good to spend money on all worthy causes. But greater investment in evaluating those programs would almost certainly be money well spent, as long as the evaluations are of high quality.
Not all evaluations are created equal
We need to be aware that not all evaluations are equally compelling. There can be a temptation for government departments to conduct tokenistic, low-quality evaluations that merely tick the box of a program having been evaluated.
Many evaluations rely only on asking program participants or workers whether they believe a program has had a favourable impact. While such work has merit, it doesn't actually measure impact. We don't rely only on such evidence in medicine; nor should we for social policy.
Such evaluations are usually inconclusive, which has the added benefit of not risking embarrassment to the minister championing the program.
We have made tentative steps toward fixing this problem. The Productivity Commission convened a roundtable of experts in 2009 on the topic of Strengthening Evidence-Based Policy in the Australian Federation.
In his submission to the roundtable, Andrew Leigh – then a professor of economics at the Australian National University, now the shadow assistant treasurer – outlined what he called a "hierarchy of evidence" that would help policymakers better understand which social programs were actually worth the money and effort.
Leigh's proposed hierarchy may itself need more scrutiny, debate and refinement. My view is that studies relying only on matching or multiple regression are a lower grade of evidence than genuine quasi-experimental work.
The CIS report recommends:
All programs receiving taxpayer funding should be subject to independent evaluations. At the same time, governments and organisations should cease collecting data that does not make a valuable contribution towards improving the level of knowledge about the effectiveness of programs.
I think we need to go further and ensure that we conduct the best possible evaluations. This includes conducting randomised trials as part of the mix.
Nicholas Biddle, a quantitative social scientist at the Australian National University, has asked whether the challenges facing programs targeting Indigenous people in remote Australia may have similarities to those targeting poverty in developing countries.
If so, then we should consider drawing on the considerable experience of the leaders in such evaluations, such as the Abdul Latif Jameel Poverty Action Lab, a network of professors who argue for policy informed by scientific evidence. Importantly, the Indigenous community must be involved in every step.
The CIS plans to follow up its report with a detailed review of the evaluations that have been conducted of Indigenous programs.
Whatever it finds, it is clear that more prominence should be given to understanding the variation in the quality of evidence.