Like other governments around the world, the Canadian federal government has turned to technology to improve the quality and efficiency of its public services and programs. Many of these improvements are powered by artificial intelligence (AI), which can raise concerns when introduced to deliver services to vulnerable communities.
To ensure responsible use of AI, the Canadian government developed the “algorithmic impact assessment” tool, which assesses the level of impact of automated decision systems.
The algorithmic impact assessment was introduced in April 2020, and very little is known about how it was developed. But one of the projects that informed its development has drawn media scrutiny: Immigration, Refugees and Citizenship Canada’s (IRCC) AI pilot project.
The AI pilot project introduced by IRCC in 2018 is an analytics-based system that sorts through a portion of temporary resident visa applications from China and India. IRCC has explained that, because its temporary resident visa pilot was one of the most concrete examples of AI in government at the time, it engaged directly with the Treasury Board Secretariat of Canada and provided feedback during the development of the algorithmic impact assessment.
Not much is publicly known about IRCC’s AI pilot project. The Canadian government has been selective about sharing information on how exactly it is using AI to deliver programs and services.
A 2018 report by the Citizen Lab investigated how the Canadian government may be using AI to augment and replace human decision-making in Canada’s immigration and refugee system. During the report’s development, 27 separate access to information requests were submitted to the Government of Canada. By the time the report was published, all remained unanswered.
The case of New Zealand
While the algorithmic impact assessment is a step in the right direction, the government needs to release information about what it claims is one of the most concrete examples of AI. Remaining selectively silent may lead the Canadian government to fall victim to the allure of AI, as happened in New Zealand.
In New Zealand, a country known for its positive immigration policy, reports emerged that Immigration New Zealand had deployed a system to track and deport “undesirable” migrants. The data of 11,000 irregular migrants (people who attempt to enter the country outside of regular immigration channels) was allegedly being used to forecast how much each would cost New Zealand. This information included age, gender, country of origin, visa held upon entering New Zealand, involvement with law enforcement and health service usage. Coupled with other data, this information was reportedly used to identify and deport “likely troublemakers.”
Concerns surrounding Immigration New Zealand’s harm model ultimately drove the New Zealand government to take stock of how algorithms were being used to crunch people’s data. This assessment set the foundation for systematic transparency on the development and use of algorithms, including those introduced to manage migration.
In Canada, by contrast, advanced analytics are used to sort temporary resident visa applications into groups of varying complexity, with each application reviewed for both eligibility and admissibility.
The Canadian pilot is an automated system trained on rules established by experienced officers to identify characteristics in applications that indicate a higher likelihood of ineligibility. For straightforward applications, the system approves eligibility based solely on the model’s determination, while eligibility for more complex applications is decided by an immigration officer. All applications are reviewed by an immigration officer for admissibility.
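Based on IRCC’s public description, the triage logic can be sketched as follows. This is a minimal illustration only: the function name, the “flagged characteristics” field and the routing rule are hypothetical assumptions, not details of IRCC’s actual system.

```python
# Hypothetical sketch of the triage flow described above.
# Field names and the rule are illustrative assumptions, not IRCC's model.

def triage_application(app: dict) -> str:
    """Route a temporary resident visa application through the described pilot."""
    # Rules derived from experienced officers flag characteristics
    # associated with a higher likelihood of ineligibility.
    is_complex = app.get("flagged_characteristics", 0) > 0

    if is_complex:
        eligibility = "officer_review"   # complex: an officer decides eligibility
    else:
        eligibility = "approved"         # straightforward: the system approves

    # Every application still goes to an officer for the admissibility check.
    return f"eligibility={eligibility}; admissibility=officer_review"

print(triage_application({"flagged_characteristics": 0}))
print(triage_application({"flagged_characteristics": 2}))
```

The key design point the article highlights is that the system only ever grants a positive eligibility determination on its own; negative outcomes and all admissibility decisions remain with human officers.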
Levels of review
For New Zealand, publishing information on how, why and where the government was using AI offered the opportunity to provide feedback and make recommendations. These efforts led to the New Zealand government developing an Algorithm Charter on the use of algorithms by government agencies. More importantly, the public can now understand how the government is experimenting with new capabilities and offer their input.
Although IRCC has been careful in deploying AI to manage migration, there is great benefit in being transparent about its endeavours involving AI. By engaging in open innovation and making information about IRCC’s AI pilot project public, the government can start having meaningful conversations, sparking thoughtful innovation and encouraging public trust in its application of emerging technologies.