Figuring out how to regulate AI is a difficult challenge, and that’s even before tackling the problem of the small number of big companies that control the technology.
Quantum machine learning models could help us create AI systems that are almost impenetrable by hackers. But in the hands of hackers, the same technology could wreak havoc.
Companies that want to avoid the harms of AI, such as bias or privacy violations, lack clear-cut guidelines on how to act responsibly. That makes internal management and decision-making critical.
In this podcast, Labor MP Julian Hill joins Michelle Grattan to discuss the job market and getting people into work, artificial intelligence, Julian Assange, and TikTok.
Generative AIs may make up the information they serve you, meaning they can spread science misinformation. Here’s how to check the accuracy of what you read in an AI-enhanced media landscape.
Generative AIs, those astonishingly powerful language- and image-generating tools taking the world by storm, come at a price: a big carbon footprint. But not all AIs are equally dirty.
Twitter uses an AI-powered centrally managed algorithm to moderate what you see. On Bluesky, you have control over the algorithm that selects what you see through so-called ‘composable moderation’.
Metaphorical black boxes shield the inner workings of AIs, protecting software developers’ intellectual property. They also make it hard to understand how the AIs work – and why things go wrong.
Antonio Pele, Pontifícia Universidade Católica do Rio de Janeiro (PUC-Rio)
Setting up AI-free ‘sanctuaries’ could allow us to reap the technology’s benefits while offering vital safeguards to our cognitive capacities and privacy.