When we come across false information on social media, it is only natural to feel the need to call it out or argue with it. But my research suggests this might do more harm than good. It might seem counterintuitive, but the best way to react to fake news – and reduce its impact – may be to do nothing at all.
False information on social media is a big problem. A UK parliament committee said online misinformation was a threat to “the very fabric of our democracy”. It can exploit and exacerbate divisions in society. There are many examples of it leading to social unrest and inciting violence, for example in Myanmar and the United States.
It has often been used to try to influence political processes. One recent report found evidence of organised social media manipulation campaigns in 48 different countries. The UK is one of those countries, as demonstrated by news reports about a local branch of the Conservatives which urged activists to campaign by “weaponising fake news”.
Social media users also regularly encounter harmful misinformation about vaccines and virus outbreaks. This is particularly important with the roll-out of COVID-19 vaccines because the spread of false information online may discourage people from getting vaccinated – making it a life or death matter.
With all these very serious consequences in mind, it can be very tempting to comment on false information when it’s posted online – pointing out that it is untrue, or that we disagree with it. Why would that be a bad thing?
The simple fact is that engaging with false information increases the likelihood that other people will see it. If we comment on it or quote-tweet it – even to disagree – the material is shared to our own networks of social media friends and followers.
Any kind of interaction at all – whether clicking on the link or reacting with an angry face emoji – will make it more likely that the social media platform will show the material to other people. In this way, false information can spread far and fast. So even by arguing with a message, you are spreading it further. This matters, because if more people see it, or see it more often, it will have an even greater effect.
I recently completed a series of experiments with a total of 2,634 participants looking at why people share false material online. In these, people were shown examples of false information under different conditions and asked if they would be likely to share it. They were also asked about whether they had shared false information online in the past.
Some of the findings weren’t particularly surprising. For example, people were more likely to share things they thought were true or were consistent with their beliefs.
But two things stood out. The first was that some people had deliberately shared political information online that they knew at the time was untrue. There may be different reasons for doing this (trying to debunk it, for instance). The second was that people rated themselves as more likely to share material if they thought they had seen it before. In other words, prior exposure to a message makes people more willing to pass it on when they encounter it again.
It has been well established by numerous studies that the more often people see pieces of information, the more likely they are to think they are true. A common maxim of propaganda is that if you repeat a lie often enough, it becomes the truth.
This extends to false information online. A 2018 study found that when people repeatedly saw false headlines on social media, they rated them as being more accurate. This was even the case when the headlines were flagged as being disputed by fact checkers. Other research has shown that repeatedly encountering false information makes people think it is less unethical to spread it (even if they know it is not true, and don’t believe it).
So to reduce the effects of false information, the priority should be reducing its visibility. For social media companies, that means considering removing false information completely, rather than just attaching a warning label. And for individual users, the best thing to do is not to engage with false information at all.