Tighter controls are not the answer; the opportunity should be used to think differently about trust and journalism. It is critical to enable audiences to distinguish reliable, verified information from misinformation.
The strong disapproval of the South African government’s handling of the pandemic is a warning that crafting persuasive pro-vaccine messages is not enough.
The majority of those punished under laws meant to combat false information are opposition politicians or journalists.
Bots flooding social media with fake news about politics is bad enough. Muddying the waters in such fields as cybersecurity and health care could put lives at risk.
Users do spend some time thinking about whether information is true; the decision to share it (even if it’s fake news) depends on the topic and the type of message.
A civil rights group is suing Facebook for its failure to stop the spread of anti-Muslim hate speech on the platform.
School teaches us to read a text carefully in order to understand it. But on the web, ignoring information is a survival skill.
Journalists in over half of the 50 nations covered by the International Federation of Journalists survey said coverage of China had become more positive in their national media since the onset of the pandemic.
Though many people are just paying attention to these problems now, they are not new – and they even date back to ancient Rome.
Researchers found that both Kenyans and South Africans have a broadly negative view of China, possibly amplified by the pandemic.
The researchers are working on a way to train people to be better at spotting fake news.
A social psychologist explains how to avoid being misled, and how to prevent yourself – and others – from spreading inaccurate information.
In the digital age, and the COVID era, this is more important than ever.
Cable providers like Comcast carry Fox News and other channels that feed conspiracy theories and lies into Americans’ homes.
Google, Facebook, TikTok and Twitter have all agreed to a voluntary code of conduct targeting misinformation. But the only real commitment is to appear as though they’re taking action.
Rebuilding lost trust in the media will require more commitment and effort than just papering over ethical cracks.
Scholars who study dangerous speech have identified common themes that can lead to violence.
New research suggests tech firms need to improve how they detect abuse in response to the evolving use of coded language.
Banning extremists from social media platforms can reduce hate speech, but the deplatforming process has to be handled with care – and it can have unintended consequences.
Social media censorship in the US fails to tackle the rising trend of ‘disinformation-for-hire’ cyber troops.