Alexandra Sims, University of Auckland, Waipapa Taumata Rau
Current laws governing policing do not account for AI's capacity to process massive amounts of information quickly, leaving New Zealanders vulnerable to police overreach.
AI systems with deceptive capabilities could be misused by bad actors in numerous ways, or they may behave in ways their creators never intended.
Snapchat’s AI-powered chatbot malfunctioned this week, raising questions of “sentience” among users. As AI becomes increasingly human-like, society must become AI-literate.
Blindly eliminating biases from AI systems can have unintended consequences.
Regardless of the input, AI image generators tend to return certain kinds of results. This is where the potential for bias arises.
OpenAI CEO Sam Altman speaks before a Senate Judiciary Subcommittee on Privacy, Technology and the Law hearing on artificial intelligence in Washington.
Strengthening democratic values in the face of AI will require coordinated international efforts between industry, government and non-governmental organizations.
In the absence of legal guidelines, companies need to establish internal processes for responsible use of AI.
Companies that want to avoid the harms of AI, such as bias or privacy violations, lack clear-cut guidelines on how to act responsibly. That makes internal management and decision-making critical.
Catching a ride for free?
Computer scientists are overwhelmingly present in AI news coverage in Canada, while critical voices who could speak to the current and potential adverse effects of AI are lacking.
You don’t have to see the future to know that AI has ethical baggage.
Powerful new AI systems could amplify fraud and misinformation, leading to widespread calls for government regulation. But doing so is easier said than done and could have unintended consequences.
Over the past decade, a number of companies, think tanks and institutions have developed responsible innovation initiatives to forecast and mitigate the negative consequences of tech development. But how successful have they been?
When OpenAI claims to be "developing technologies that empower everyone," who is included in "everyone"? And in what context will this "power" be wielded?
James P. Jimirro Professor of Media Effects, Co-Director, Media Effects Research Laboratory, & Director, Center for Socially Responsible AI, Penn State