Despite their dangers, highly realistic deepfake images and videos are finding positive uses in the creative arts.
AI-generated voice-alikes can be indistinguishable from the real person’s speech to the human ear. A computer model originally built to give voice to dinosaurs turns out to be a good way to tell the difference.
Earlier this year, a deepfake impersonating Ukrainian President Volodymyr Zelenskyy spread on social media – with Zelenskyy supposedly asking Ukrainians to surrender to Russia.
To combat the rise in deepfakes used in fraud, employees and investigators need to be aware of the threats involved.
Deepfakes — manipulated images of people — can be difficult to distinguish from the real thing, and this has terrifying consequences for democracy.
The system behind apps like Facebook, Twitter, YouTube and WhatsApp isn’t neutral. It encodes political communication, influencing what users see.
Deepfakes could strengthen our engagement with history. But there are dangers to the practice, some obvious, others more subtle.
Fake videos generated with sophisticated AI tools are a looming threat. Researchers are racing to build tools that can detect them, tools that are crucial for journalists to counter disinformation.
A scholar who has reviewed the efforts of nations around the world to protect their citizens from foreign interference says there is no magic solution, but there’s plenty to learn and do.
Images without context or presented with text that misrepresents what they show can be a powerful tool of misinformation, especially since photos make statements seem more believable.
They’re associated with fake news and celebrity porn videos but there are some unexpected upsides to these slippery clips.
The ability to detect and analyze deepfake videos is of the utmost urgency. Deepfakes are a serious threat to people’s security and our democratic institutions.
Even established political parties are using a host of tricks to manipulate the news.
It’s a slippery slope from satire to dangerous deepfakes.
Fake videos pose a risk to democratic representation, participation, and discussion. Canadians need to be mindful of their existence as we head towards the federal election.
The Russians won’t be alone in spreading disinformation in 2020. Their most likely imitator will be Iran. Also, Instagram could get even more infected with intentional misinformation than it has been.
A key element of the battle between truth and propaganda has nothing to do with technology. It has to do with how people are much more likely to accept something if it confirms their beliefs.
The law is out of step with technology that means anyone can manipulate your images in hyper-realistic ways.
A new technique for detecting deepfakes conceives of videos as flip-books and looks for changes in successive frames of a sequence.
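The flip-book idea can be illustrated with a minimal sketch (a hypothetical illustration, not the researchers’ actual method): score each transition between successive frames by its average pixel change, so abrupt frame-to-frame jumps, the kind of temporal inconsistency such a detector looks for, stand out against stable footage.

```python
import numpy as np

def frame_jump_scores(frames):
    """Mean absolute pixel difference between successive frames.

    frames: array of shape (n_frames, height, width), grayscale.
    Returns one score per transition; higher scores mark larger
    frame-to-frame changes, the temporal inconsistencies a
    flip-book-style detector hunts for.
    """
    frames = np.asarray(frames, dtype=np.float64)
    diffs = np.abs(np.diff(frames, axis=0))   # change between frame i and i+1
    return diffs.mean(axis=(1, 2))            # average change per transition

# Toy example: three identical blank frames, then one sudden change.
video = np.zeros((4, 8, 8))
video[3] += 1.0
scores = frame_jump_scores(video)
# Only the final transition registers a nonzero score.
```

A real detector would of course use learned features rather than raw pixel differences, but the core intuition is the same: consistent video changes smoothly, while manipulated frames can break that continuity.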
Laws against ‘revenge porn’, upskirting and deepfake pornography are piecemeal, and a full review will take years to conduct. Here are three things government can do now to support victims.