Anti-Muslim supporters with signs saying “truth is the new hate speech” during a pro- and anti-Muslim gathering in March 2017 in Toronto. Shutterstock

U.K. and Australia move to regulate online hate speech, but Canada lags behind

In response to the March attacks on two mosques in New Zealand, Ralph Goodale, Canada’s public safety minister, recently released a statement following a G7 meeting in Paris.

Goodale urged social media platforms “to redouble their efforts to combat the social harms” relating to violent extremist content. While the Australian government rapidly introduced (deeply flawed) legislation to address the online distribution of violent content, the Canadian government issued only a weak plea for industry to “redouble” its efforts.

Alongside this plea, Goodale’s statement also warned that social media platforms like Facebook, Twitter and YouTube “should expect public regulation if they fail to protect the public interest.” After the attacks in Christchurch, the violent march of white supremacists in Charlottesville, Va. in 2017 and the synagogue shooting in Pittsburgh in 2018, the Canadian government still apparently believes social media platforms can be trusted to protect the public interest.

Goodale, left, gestures as he is welcomed by French Interior Minister Christophe Castaner for a G7 meeting in Paris on April 4, 2019. Michel Euler/AP

As my co-author Blayne Haggart and I have noted elsewhere, pressuring platforms to respond rapidly but without accountability to social problems, while also allowing platforms to interpret the rules themselves, is a “worst of both worlds” approach to social media regulation.


Read more: Stop outsourcing the regulation of hate speech to social media


Outsourcing regulation

That governments continue to outsource regulation to commercial platforms is, sadly, nothing new. As I detail in my book Chokepoints: Global Private Regulation on the Internet, governmental reliance on platforms to act as regulators is not confined to social media. Here, “chokepoints” refers to the regulatory capacity of large platforms that can act as gatekeepers, controlling online flows of information and monitoring users’ behaviour.

For instance, the dominant online payment providers — particularly Visa, MasterCard and PayPal — act as a financial chokepoint, determining (with little oversight or transparency) who gets access to financial services. Given the concentration in the online payment industry, once an organization loses access to major payment providers, it can be difficult to secure a viable commercial alternative.

After the Pittsburgh shooting, for example, PayPal removed its services from the right-wing social media platform Gab, on which the man charged in the shooting had posted anti-Semitic messages just prior to the attack. Gab has become the “alt-right” platform of choice for white supremacists, permitting hateful and extremist speech that is typically banned from mainstream social media platforms.

We may applaud PayPal and other platforms for terminating their services to hate groups, and support companies’ efforts to throttle hate groups’ fundraising and recruitment efforts. However, it is deeply problematic to rely on commercial entities to arbitrate behaviour and content considered “acceptable,” or to address problems as serious as violent white supremacy.

Regulation by platforms

In a recent academic article, I argue that public calls for PayPal to remove its services from hate groups reveal serious problems with ceding broad regulatory authority to commercial platforms.

First, payment providers’ efforts against violent hate groups are troublingly reactive, typically coming only after public pressure and negative media coverage. Following the violence in Charlottesville, PayPal explained that it employs “proactive monitoring, screening and scrutiny” to identify hate groups and take action. But the social network Gab had openly promoted violently racist, anti-Semitic and discriminatory speech since its creation in 2016. It was only after the shooting in Pittsburgh that PayPal terminated its services to Gab.

Second, platforms may act in response to negative media coverage to protect their corporate reputations, but also fear angering users who interpret such actions as censorship. For instance, some conservatives argue that social media platforms stifle right-wing speech, while those on the left criticize the platforms for refusing to remove neo-Nazi propaganda.

Third, and arguably most importantly, designating platforms as regulators generally neglects the role of government in online regulation. How we decide to tackle regulatory issues matters.

Rules governing platforms can (and should) vary by country, reflecting each country’s distinctive legal and political frameworks, domestic priorities and values. American-based platforms typically express strong ideological support for free speech that reflects U.S. constitutional values. Other countries, like Canada, may strike a different balance between free expression and regulated speech.

Beginning with public discussion

Regulating the internet is complex, and we must avoid knee-jerk responses to horrific events like Christchurch. A good first step is a serious public discussion of the possible ways to address violent hate speech and other problematic content online. The British government has invited public comments on its newly released white paper outlining possible government-led responses to online harm. While this white paper is being fiercely debated, the U.K. government should be applauded for at least fostering a public debate.

All options should be on the table. This means critically examining whether big platforms should be broken up to address anti-competitive behaviour, as U.S. Sen. Elizabeth Warren has proposed in regard to Facebook and others.

We also need to consider sharply restricting platforms’ latitude in collecting and data-mining their users’ personal data, as well as limiting advertising-based business models. Platforms’ reliance on advertising means that toxic content or harmful conspiracy theories can be highly profitable, thereby giving platforms little economic incentive to regulate harmful content.

Further, as the Cambridge Analytica scandal shows, platforms’ caches of users’ personal data are being misused and are also entrenching large platforms’ market power.

Publicly set rules should replace the current shadowy practice of behind-the-scenes government pressure and platforms’ unaccountable rule-making. An independent regulator, funded by the government, not industry, should monitor platforms’ compliance and impose penalties as necessary.

Canada must begin the debate that the U.K. and Australia are having on regulating harmful online content. When it comes to “redoubling” their efforts to deal with these issues, our government should take the lead, not Facebook.
