Generative AI can seem like magic, which makes it both enticing and frightening. Scholars are helping society come to grips with the potential benefits and harms.
Deepfakes and AI, deployed by groups with various interests including governments and media outlets, are the latest and most sophisticated tools in information and disinformation campaigns.
One of the biggest headlines in the gaming community last week involved a deepfake porn scandal. Such material is one example of how generative AI can cause immense harm.
AI-generated voice-alikes can be indistinguishable from the real person’s speech to the human ear. A computer model that gives voice to the dinosaurs turns out to be a good way to tell the difference.
Earlier this year, a deepfake impersonating Ukrainian President Volodymyr Zelenskyy spread on social media – with Zelenskyy supposedly asking Ukrainians to surrender to Russia.
Fake videos generated with sophisticated AI tools are a looming threat. Researchers are racing to build detection methods, which journalists will need to counter disinformation.
A scholar who has reviewed the efforts of nations around the world to protect their citizens from foreign interference says there is no magic solution, but there’s plenty to learn and do.
Images without context or presented with text that misrepresents what they show can be a powerful tool of misinformation, especially since photos make statements seem more believable.
The ability to detect and analyze deepfake videos is of the utmost urgency. Deepfakes are a serious threat to people’s security and our democratic institutions.
Fake videos pose a risk to democratic representation, participation, and discussion. Canadians need to be mindful of their existence as we head toward the federal election.
Assistant Professor, Educational Technology, Chair in Educational Leadership in the Innovative Pedagogical Practices in Digital Contexts - National Bank, Université Laval