A new study unexpectedly found a way to help people assess social media posts with less bias and more care – pairing them up with partners who have a different perspective.
Although misinformation was rife during the Voice to Parliament campaign, this is not the first time it has been used as a campaign tactic. Would a misinformation bill solve the problem?
A human rights scholar explains how social media users can take charge of what content comes into their feed and reduce the risk of receiving misinformation.
Teaching students how to assess digital content can involve looking for clues about text origins, understanding the process of gathering and assessing evidence and grasping how content is generated.
In some cases, it can be difficult for academics to know which journals are not credible – but other times, people feel pressure to publish in these publications.
Most fact-checking focuses on social media, yet misinformation can also spread quickly through messaging apps like WhatsApp. Personalised push notifications – sent directly to your phone – could help.
Ian Anderson, USC Dornsife College of Letters, Arts and Sciences; Gizem Ceylan, Yale University, and Wendy Wood, USC Dornsife College of Letters, Arts and Sciences
Fighting misinformation doesn’t have to involve restricting content or dampening people’s enthusiasm for sharing it. The key is turning bad habits into good ones.
AI can manipulate a real event or invent one from thin air to create a ‘situation deepfake’. These deepfakes threaten to influence upcoming elections, but you can still protect your vote.
How can fake news be managed without government overreach? Under the draft bill, platforms continue to be responsible for the content on their services – not governments.