Alexandra Sims, University of Auckland, Waipapa Taumata Rau
Current laws governing policing don’t account for AI’s capacity to process massive amounts of information quickly, leaving New Zealanders vulnerable to police overreach.
AI systems with deceptive capabilities could be misused by bad actors in numerous ways, or they may behave in ways their creators never intended.
Snapchat’s AI-powered chatbot malfunctioned this week, leading some users to question whether it was “sentient”. As AI becomes increasingly human-like, society must become AI-literate.
Strengthening democratic values in the face of AI will require coordinated international efforts between industry, government and non-governmental organizations.
Companies that want to avoid the harms of AI, such as bias or privacy violations, lack clear-cut guidelines on how to act responsibly. That makes internal management and decision-making critical.
Computer scientists dominate AI news coverage in Canada, while critical voices who could speak to AI’s current and potential adverse effects are largely absent.
Powerful new AI systems could amplify fraud and misinformation, leading to widespread calls for government regulation. But doing so is easier said than done and could have unintended consequences.
When OpenAI claims to be “developing technologies that empower everyone,” who is included in “everyone”? And in what context will this “power” be wielded?