A lack of transparency by Facebook Canada officials about how the Facebook News Feed works means upcoming elections in Canada could be influenced by fake news. (AP Photo/Thibault Camus)

What Facebook could do to stop fake news about Canadian elections

It was the Canadian version of Mark Zuckerberg’s recent appearance before the U.S. Congress: Two Facebook executives showed a lot of contrition when they testified April 19 before a parliamentary committee.

And like their American counterparts, Canadian lawmakers spent a lot of time berating Facebook for its past laxness with its users’ data. But few asked forward-thinking questions about another important problem with Facebook: How can the social media giant prevent bots and fake news from disrupting elections in Canada?

The same committee heard earlier testimony from Canada’s Privacy Commissioner Daniel Therrien, who warned data from Facebook users in Canada could be used to influence the electoral process here, just as it has in the U.S., the U.K. and elsewhere.


Read more: Preventing social media from interfering in Canadian elections


A federal election is slated for October 2019. But before that, at least four provinces (Ontario, Québec, Alberta and New Brunswick, and possibly British Columbia, which has a minority government) will also hold votes.

Computer-generated propaganda

The role of political bots in Canada has been examined by the University of Ottawa’s Elizabeth Dubois and Concordia University’s Fenwick McKelvey. In their groundbreaking report, they show that computational propaganda could be “creating the conditions for a voter suppression campaign resembling the Robocalling Scandal.”

Privacy commissioner Daniel Therrien appears before a Commons privacy and ethics committee on the breach of personal information involving Cambridge Analytica and Facebook. THE CANADIAN PRESS/Justin Tang

During his testimony before the parliamentary committee, Kevin Chan, Facebook Canada’s head of public policy, said his company is using artificial intelligence to spot the fake accounts that have been used in the past to flood the social network with politically dubious posts. He also said Facebook is rolling out advertising transparency measures that will verify the identity of political advertisers on its platform.

But these efforts address only part of the problem. As the NDP’s Charlie Angus pointed out during Chan’s testimony: “Facebook is the news for the vast majority of people.”

The leading source of news

Indeed, for several years, it’s been clear Canadians get most of their information through Facebook. As a journalism professor, I’ve been curious to know exactly what information reaches people on Facebook.

With print, radio, television and even online news sites, researchers can easily measure the space or airtime devoted to politics, sports, arts or any subject. This type of media content analysis has been done for decades.
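The arithmetic behind such an analysis is simple. Here is a minimal sketch in Python of the basic tallying step, using entirely made-up coded data (the topics and sizes below are hypothetical illustrations, not figures from any real study):

```python
# Minimal sketch of traditional content-analysis arithmetic.
# Input: items already coded by human coders as (topic, size) pairs,
# where size is column inches for print or seconds of airtime for broadcast.
# All data below is hypothetical, for illustration only.
from collections import defaultdict

coded_items = [
    ("politics", 120.0), ("sports", 80.0),
    ("arts", 40.0), ("politics", 60.0),
]

totals = defaultdict(float)
for topic, size in coded_items:
    totals[topic] += size

grand_total = sum(totals.values())
for topic, size in sorted(totals.items(), key=lambda kv: -kv[1]):
    # Share of total space or airtime devoted to each topic.
    print(f"{topic}: {size / grand_total:.1%}")
```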

In the early 20th century, German sociologist Max Weber saw a link between media content and the “cultural temperature” of society. American political scientist Harold Lasswell later constructed the communication model at the heart of most media content analysis in the 20th century, with a formula coined in 1948:

“Who, said what, in which channel, to whom, with what effect?”

But this sort of content analysis is impossible to do on Facebook. Only Facebook knows who said what to whom. And only Facebook has the power to measure to what effect.

In 2014, I tried to figure out who said what to whom on Facebook. I devised a research project to determine what kind of information Facebook selects for people to see on their news feeds.

Where does Facebook news come from?

My plan was simple. I wanted to ask a representative sample of Facebook users in my home province of Québec to let me record, three or four times a day, the Top 25 items appearing on their news feeds. I then wanted to examine what kind of news people get when they connect to Facebook: which media outlets it came from, what kind of content was posted and what proportion of the Top 25 items was actually news.
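Had Facebook agreed, analyzing each snapshot would have been straightforward. A minimal sketch, assuming each captured Top 25 list had already been coded by hand as (source, is_news) pairs (the data and labels below are hypothetical illustrations, not anything drawn from Facebook’s systems):

```python
# Minimal sketch of the proposed news-feed analysis.
# Each snapshot is one capture of a participant's Top 25 feed items,
# coded by hand as (source, is_news) pairs. All data here is hypothetical.
from collections import Counter

snapshots = [
    [("Radio-Canada", True), ("friend's photo", False), ("La Presse", True)],
    [("meme page", False), ("Le Devoir", True), ("friend's status", False)],
]

items = [item for snapshot in snapshots for item in snapshot]

# What proportion of feed items is actually news?
news_share = sum(1 for _, is_news in items if is_news) / len(items)

# Which outlets does the news come from?
outlet_counts = Counter(source for source, is_news in items if is_news)

print(f"News items: {news_share:.1%} of the feed")
print("By outlet:", outlet_counts.most_common())
```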

Of course, I would have collected data only with the participants’ consent. But that wasn’t enough. I also needed Facebook’s permission. So I asked the company.

I spoke with Kent Foster, who was just starting as Senior Program Manager of Academic Relations with Facebook. He asked me to send a detailed research proposal, which I did.

A couple of weeks later, Foster replied:

“Your proposal was circulated and thoroughly reviewed internally by several teams. The subject of your research touches on a very sensitive topic of data privacy. While I explained that you are proposing to obtain consent from participants in your study, there is no internal group that is interested in supporting this type of collaboration.”

Back then, I didn’t make much of this refusal. But the Cambridge Analytica scandal shed new light on my aborted research project.

Canadian Chris Wylie, who once worked for the U.K.-based political consulting firm Cambridge Analytica, was the whistleblower who exposed how Facebook data was used to build psychological profiles so voters in the U.S. could be targeted with ads and stories. (AP Photo/Matt Dunham)

The app at the heart of the scandal, developed by University of Cambridge researcher Aleksandr Kogan, was very different from what I had in mind in 2014. “This Is Your Digital Life” collected data about you through your friends: even if you never installed the app, it could still harvest your personal data if one of your friends did.

In contrast, apart from participants’ demographic information (age, gender, location, education) to help me build a representative sample, my proposed project didn’t tap into the rich set of personal and sensitive data made available by Facebook. The type of data I wanted to collect was limited to what appears on one’s news feed.

Facebook protecting itself

After the Cambridge Analytica fiasco hit, I reopened Foster’s email and read it with fresh eyes. The “very sensitive topic of data privacy” he was writing about now seems bogus. In fact, it appears Facebook was not protecting its users. It was protecting itself.

I wasn’t interested in users’ personal data. I was interested in Facebook’s data. I now believe Facebook refused my request because accessing this data would have allowed me to understand how its news feed algorithm worked.

Facebook recently announced it was open for research. It wants to let scholars assess social media’s impact on elections. An independent commission is to be set up to review research proposals and act as a trusted third party between scholars and the social media company.

Great news! I will resubmit my proposal, and maybe enrich it by looking at links between a user’s activity on Facebook and the content of their news feed. Facebook’s answer will reveal how transparent it has truly become. In my opinion, it should let academics examine the raw data it sends to Canadian voters.

Lisa Eadicicco argues in Time magazine that in order to protect its users, Facebook needs to be more transparent. I couldn’t agree more.

Transparency breeds trust. And its users’ trust, in the long run, benefits Facebook. But if Facebook doesn’t open its doors to researchers, Canadian politicians should force the door open through legislation.

It might be too late to do it for the Ontario election in June, but it surely can be worked out for the Québec election in October. At the very least, it should be possible for the next federal election in October 2019.

Only then will we really be able to measure who says what to whom and to what effect in Canada.
