One former member of Australia’s government review tribunal has described robo-debt as a form of ‘extortion’. Shutterstock

We need human oversight of machine decisions to stop robo-debt drama

Federal MP Amanda Rishworth raised concerns over the weekend that Australia could be headed for another robo-debt ordeal after the government reportedly confirmed the Australian Taxation Office (ATO) will use data matching to audit childcare rebates.

Government agencies increasingly use automated tools to make or facilitate decisions that affect citizens’ lives, but it’s not always appropriate for important decisions to be made by a computer.

In the European Union, the General Data Protection Regulation (GDPR) prohibits certain types of decisions from being solely automated. It also creates rights for individuals who are affected by automated processing.

We need similar safeguards in Australia for high stakes automated decisions made by government agencies.


Read more: Algorithms have already taken over human decision making


The rise of robotic decisions

The trend toward automation of government processes is accelerating in line with the government’s commitment to digital transformation.

Automated tools are now used to make or facilitate decisions in a range of government agencies, including decisions about welfare, tax, health, visas and veterans’ affairs. Centrelink’s employment income confirmation system, known as “robo-debt”, is a high profile example of what can go wrong with automated decision making.

Automation can improve the consistency and efficiency of government processes. But if there is bias or error in the computer program or data set, a flawed decision-making logic will be applied systematically, meaning large numbers of people could be affected.

Guidelines aren’t enforceable

The government has previously published guidelines on automated government decision making, including Best Practice Principles in 2004 and the Better Practice Guide in 2007. Both documents offer important advice on designing automated systems that align with the values of public law.

But the recommendations in these reports aren’t enforceable. They also fail to create legal protections for those affected by automated decisions.

In May, the government opened public consultation on a proposed artificial intelligence (AI) ethics framework for Australia. The draft framework highlighted the need for updated ethical principles to govern new AI technologies, and recommended a range of tools for improving the design of AI systems, including impact and risk assessments.

But, again, these recommendations will not be enforceable, even if they are included in the final framework. The current draft stops short of restricting the use of AI for certain types of decisions.


Read more: We need to know the algorithms the government uses to make important decisions about us


A new legal framework is needed

In contrast to Australia’s non-restrictive approach, the GDPR’s legislative controls on data protection and automated decision making offer an example of best practice.

Article 22 of the GDPR is of particular interest for Australia. Unless specified exemptions apply, it prohibits the use of solely automated processing for decisions that produce legal or other significant effects for individuals.

To fall outside this prohibition, decisions require meaningful human involvement and oversight. Having a human merely “rubber stamp” a decision based on automated outputs is insufficient.

Similar protections are needed in Australia, particularly for government decisions that affect individual rights and interests. Such safeguards would limit the types of government processes that can be fully automated.

‘Robo-debt’ would require meaningful human involvement under the GDPR

Let’s take a closer look at “robo-debt” to see how a prohibition on solely automated decision making might work.

The robo-debt system uses an automated data-matching and assessment process to raise welfare debts against people the system flags as having been overpaid. Someone who receives a debt discrepancy notice can respond by giving income evidence to Centrelink. If no information is provided, an algorithm generates a fortnightly income figure by averaging annual income data from the ATO evenly across the year’s fortnights.

Of course, many welfare recipients have variable income as they are engaged in casual, part-time or seasonal work. It’s not surprising that the reliance on averaged data has led to a high number of reported errors. Receiving incorrect robo-debt notices has contributed to stress, anxiety and depression for many people.
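The averaging problem can be sketched in a few lines of Python. This is a simplified illustration, not Centrelink’s actual implementation: the function names, the worker’s earnings and the income-free threshold are all hypothetical figures chosen to show the mechanism.

```python
# Illustrative sketch of how averaging annual income across fortnights
# can wrongly flag overpayments for people with variable earnings.
# All figures and thresholds below are hypothetical.

FORTNIGHTS_PER_YEAR = 26

def averaged_fortnightly_income(annual_income):
    """Spread a yearly income total evenly across 26 fortnights."""
    return annual_income / FORTNIGHTS_PER_YEAR

def wrongly_flagged_fortnights(actual_fortnightly, annual_income, threshold):
    """Return indices of fortnights the averaging model flags as overpaid
    even though the person's real income that fortnight was under the
    (hypothetical) income-free threshold."""
    avg = averaged_fortnightly_income(annual_income)
    return [i for i, real in enumerate(actual_fortnightly)
            if avg > threshold and real <= threshold]

# A seasonal worker: $2,600 per fortnight for 5 fortnights, then nothing.
actual = [2600.0] * 5 + [0.0] * 21
annual = sum(actual)                              # 13000.0
print(averaged_fortnightly_income(annual))        # 500.0 per fortnight

# With a hypothetical $437 fortnightly income-free area, the averaged
# figure exceeds the threshold every fortnight, so all 21 zero-income
# fortnights look like overpayments.
flagged = wrongly_flagged_fortnights(actual, annual, threshold=437.0)
print(len(flagged))                               # 21
```

The averaged figure ($500 a fortnight) bears no resemblance to the worker’s real pattern of five high-earning fortnights followed by 21 with no income at all, which is exactly the kind of mismatch that generated incorrect debt notices.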

One former member of Australia’s government review tribunal has described the system as a form of “extortion”.

If Australia had GDPR-type protections, meaningful human involvement would be required before an automated debt notice was sent. Manual review by human decision makers is important to ensure that a welfare debt is in fact owed.

There should also be restrictions on fully automating other high stakes decisions by government agencies. Decisions about visas and tax debts, for example, ought to be overseen by humans.


Read more: The new digital divide is between people who opt out of algorithms and people who don't


The private sector needs regulating too

Automated decisions made by private bodies that have significant impacts on individuals require legal safeguards too. Such protections are already included under the GDPR.

Similarly, in the United States, a bill for an Algorithmic Accountability Act has been proposed. If passed, it would require certain companies that use “high-risk automated decision systems” to conduct algorithmic impact assessments.

Australia’s non-binding guidance on automated decision making is a step in the right direction, but it needs to be bolstered by legislation that restricts the types of decisions that can be fully automated. This is particularly important for government decisions with serious consequences for individuals, like robo-debt and auditing of childcare rebates.