[Image: a computer screen reading “Facebook Ads: Create an Ad.”] Facebook has found it challenging to address misinformation on its platform. (Shutterstock)

Facebook’s latest federal election integrity initiative is just another marketing tactic

In February 2021, Facebook Canada announced that Canadians should expect to see less political content on the social media site, claiming that Canadians “don’t want political content to take over their News Feed.” How Facebook reached this conclusion remains a mystery, and it is an interesting development because Facebook has always been political.

Many Facebook ads that ran in Canada and elsewhere contained racist and xenophobic messages that are political in nature. Indeed, the social media company profited from such ads, including some purchased by the recently deleted Facebook page “Old Stock Canadian.”

In Canada, the company also announced that it has begun testing “a variety of ways to rank political content in people’s feeds using different signals, and then decide on the approaches we’ll use going forward.” In effect, Canadians are being used as subjects in the company’s political experiment, one that could worsen the existing problem of echo chambers, or filter bubbles.

Integrity and Facebook

Facebook’s policy preceded the announcement of the federal election by several months. The big questions Facebook has yet to answer are: How will it rank political content? Who will rank it, and whom will that ranking serve? And isn’t everything “political,” including COVID-19 policies?

I want to focus here on Facebook’s other announcement: the launch of its Canadian Election Integrity Initiative (CEII) for the upcoming 2021 election. The website contains several sections including “Working with Experts in Academia,” “Tackling Coordinated Inauthentic Behaviour” and “Combating Misinformation & Promoting Credible Information.”

I think the site contains promotional materials used as a strategy to polish Facebook’s image after the Cambridge Analytica scandal that followed the 2016 election in the United States. For example, the section titled “Working with Experts in Academia” details the connections between Harvard and Carleton universities and Kevin Chan, Facebook’s lobbyist in Ottawa and the head of its public policy; the information there amounts to little more than promotional material.

Kevin Chan, the Head of Public Policy at Facebook, talks about Facebook’s steps to protect election integrity at McGill University in 2018.

Another section, “Tackling Coordinated Inauthentic Behaviour” (CIB), claims the following:

“Over the past four years, we’ve shared our findings about CIB we detect and remove from our platforms. As part of our regular CIB reports, we share info about the networks we take down over the course of a month to make it easier for people to see the progress we’re making in one place.”

What Facebook has overlooked here is that the reports include only aggregate data, a few screenshots and other general claims. We don’t know, for example, the details of the information operations targeting or originating from Canada. In other words, we simply have to take Facebook’s word for these snapshots of activity.

This is in contrast to Twitter, which periodically releases bulk data on state-run trolls as part of its Civic Integrity Initiative. Though Twitter is not completely transparent, it is doing much more than Facebook, whose data and algorithms remain shrouded in mystery. More recently, Twitter flagged one of Chrystia Freeland’s tweets as misleading.
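To give a sense of what such bulk releases make possible, here is a minimal Python sketch of how a researcher might summarize one of Twitter’s information-operations datasets. The file name is a hypothetical placeholder, and the column names (userid, tweet_time) are assumptions based on the format of past releases:

```python
# Minimal sketch: summarizing a bulk release of state-backed troll tweets.
# The file name is a placeholder; the column names follow the format Twitter
# has used in past information-operations releases and may vary.
import pandas as pd

df = pd.read_csv("example_operation_tweets_csv_hashed.csv")

# Tweets per (hashed) account: a first look at how active each troll was.
per_account = df.groupby("userid").size().sort_values(ascending=False)
print(per_account.head(10))

# Posting volume per week, to spot coordinated bursts of activity.
df["tweet_time"] = pd.to_datetime(df["tweet_time"])
print(df.set_index("tweet_time").resample("W").size().head())
```

Nothing comparable is possible with Facebook’s monthly CIB reports, which is precisely the transparency gap described above.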

Even the CrowdTangle tool that Facebook recently offered to academic researchers suffers from many limitations: it covers only public pages and groups, for example, and reports engagement rather than the actual reach of a post. Under the pretext of protecting users’ privacy, Facebook avoids releasing data that is important for identifying and systematically studying individuals and groups for the public good.

Facebook ad dominance

In the section titled “Combating Misinformation & Promoting Credible Information,” Facebook also claims to apply two standards:

“We remove content that violates our Community Standards … such as misinformation about COVID-19 and vaccines. We also remove misinformation that could prevent people from voting, like misrepresenting the dates, location, times and methods for voting.… For false claims that don’t violate our Community Standards, we rely on our global network of more than 80 independent fact-checking partners in over 60 languages to identify, review and rate viral misinformation across our platforms.”

The main issue here is that problematic content must, for the most part, go viral before it is reviewed. What about the millions of other posts that may be problematic but never reach that viral threshold?

To understand whether Facebook is applying the above rules, I searched its Ad Library, an open-access platform that enables any user to view active and inactive ads. I looked at the ads because Facebook directly profits from them, and common sense suggests they should therefore be moderated.

I searched for political ads in Canada that were active as of Aug. 19, 2021, and found about 800 relevant ads. Interestingly, 70 per cent of these (560) were purchased by the Liberal Party of Canada and Justin Trudeau’s Facebook pages, and many more were purchased by other Liberal candidates, bringing Liberal-affiliated ads to more than 80 per cent of the total. In other words, the Liberal Party is the dominant player on Facebook.
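A search like this can also be reproduced programmatically through Meta’s publicly documented Ad Library API. The sketch below is only an illustration: the API version, the field names and the access-token placeholder are assumptions that should be checked against the current documentation:

```python
# Rough sketch: tallying active Canadian political ads by the page that
# ran them, via the Ad Library API's ads_archive endpoint. The API
# version, fields and ACCESS_TOKEN placeholder are assumptions.
from collections import Counter

import requests

ACCESS_TOKEN = "YOUR_ACCESS_TOKEN"  # hypothetical placeholder
URL = "https://graph.facebook.com/v12.0/ads_archive"

params = {
    "access_token": ACCESS_TOKEN,
    "ad_type": "POLITICAL_AND_ISSUE_ADS",
    "ad_reached_countries": "['CA']",  # ads delivered in Canada
    "ad_active_status": "ACTIVE",      # active as of the query date
    "search_terms": "election",        # example keyword
    "fields": "page_name",             # the page that ran the ad
    "limit": 250,
}

counts = Counter()
url, query = URL, params
while url:
    resp = requests.get(url, params=query).json()
    for ad in resp.get("data", []):
        counts[ad.get("page_name", "unknown")] += 1
    # Follow cursor-based pagination until no "next" link remains.
    url = resp.get("paging", {}).get("next")
    query = None  # the "next" URL already embeds the query parameters

total = sum(counts.values())
for page, n in counts.most_common(10):
    print(f"{page}: {n} ads ({100 * n / total:.0f}%)")
```

Counting ads by page name in this way makes it easy to see which parties and candidates dominate the inventory on any given date.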

While the majority of these ads contain only promotional materials, some contain misleading content that should at least be flagged by Facebook, according to its announcement above. For example, one Facebook ad purchased by Andrea Kaiser, a Liberal candidate for Niagara Falls, circulates an editorial cartoon by the Hamilton Spectator’s Graeme MacKay. The cartoon depicts Conservative Leader Erin O'Toole as supportive of anti-vaxxers who have COVID-19; in fact, O'Toole has said he “encourages” Canadians to get vaccinated, but opposes mandatory vaccinations.

Transparency and Facebook

Another of Facebook’s claims relates to “content transparency”: the company states that it “launched a Context Button to provide people with more background information about the publishers and articles they see in News Feed so they can decide what to read, trust and share.”

For example, in an older post related to its Community Standards, Facebook mentions that it does not allow any page to “engage in inauthentic behavior … [in connection to] the identity, purpose or origin of the entity that they represent.” However, this does not seem to be the case when it comes to some Facebook pages like “Canada Proud,” which claims to be a non-profit organization yet has purchased several divisive ads to support the Conservative Party.

In other words, Facebook is still allowing some organizations and groups to spread their polarizing and often misleading political messages on its platform, despite its public integrity and transparency pledges ahead of our upcoming election. We all know that Facebook’s business model is centred on profit, but it would do well to be more transparent with the public. And perhaps it should drop the word “integrity” from its new initiative.
