tag:theconversation.com,2011:/us/topics/frances-haugen-111137/articlesFrances Haugen – The Conversation2022-04-25T21:07:54Ztag:theconversation.com,2011:article/1819232022-04-25T21:07:54Z2022-04-25T21:07:54ZElon Musk’s plans for Twitter could make its misinformation problems worse<figure><img src="https://images.theconversation.com/files/459590/original/file-20220425-13-feqjsz.jpg?ixlib=rb-1.1.0&rect=0%2C0%2C6000%2C4004&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Elon Musk's moment of triumph is a moment of uncertainty for the future of one of the world's leading social media platforms.</span> <span class="attribution"><a class="source" href="https://newsroom.ap.org/detail/USMuskTwitter/360b354555564c63931e87a4eee568c6/photo">AP Photo/John Raoux</a></span></figcaption></figure><p>Elon Musk, the world’s richest person, <a href="https://www.wsj.com/articles/twitter-and-elon-musk-strike-deal-for-takeover-11650912837">acquired Twitter</a> in a US$44 billion deal on April 25, 2022, 11 days after announcing his bid for the company. Twitter announced that the public company will become <a href="https://www.prnewswire.com/news-releases/elon-musk-to-acquire-twitter-301532245.html">privately held after the acquisition is complete</a>. </p>
<p>In a <a href="https://www.sec.gov/Archives/edgar/data/0001418091/000110465922045641/tm2212748d1_sc13da.htm">filing with the Securities and Exchange Commission</a> for his initial bid for the company, Musk stated, “I invested in Twitter as I believe in its potential to be the platform for free speech around the globe, and I believe free speech is a societal imperative for a functioning democracy.”</p>
<p>As a <a href="https://scholar.google.com/citations?hl=en&user=JpFHYKcAAAAJ">researcher of social media platforms</a>, I find that Musk’s ownership of Twitter and his stated reasons for buying the company raise important issues. Those issues stem from the nature of the social media platform and what sets it apart from others.</p>
<h2>What makes Twitter unique</h2>
<p>Twitter occupies a unique niche. Its short chunks of text and threading foster real-time conversations among thousands of people, which makes it popular with celebrities, media personalities and politicians alike.</p>
<p>Social media analysts talk about the half-life of content on a platform, meaning the time it takes for a piece of content to reach 50% of its total lifetime engagement, usually measured in views or other popularity-based metrics. The average half-life of a tweet is <a href="https://www.business2community.com/social-media-articles/how-your-contents-half-life-should-drastically-impact-your-social-media-strategy-in-2020-02290478">about 20 minutes</a>, compared to five hours for Facebook posts, 20 hours for Instagram posts, 24 hours for LinkedIn posts and 20 days for YouTube videos. This much shorter half-life illustrates the central role Twitter has come to occupy in driving real-time conversations as events unfold.</p>
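<p>To make the definition concrete, here is a minimal sketch of how an analyst might compute a piece of content’s half-life from a log of timestamped engagement events. The function and the toy numbers are illustrative assumptions, not data from any platform.</p>
<pre><code>from datetime import datetime, timedelta

def content_half_life(engagement_events):
    """Return the time from the first event until 50% of total engagement is reached.

    engagement_events: list of (timestamp, count) tuples, e.g. views or likes
    recorded over the content's lifetime. Toy illustration only.
    """
    events = sorted(engagement_events)
    total = sum(count for _, count in events)
    running = 0
    start = events[0][0]
    for timestamp, count in events:
        running += count
        if running >= total / 2:
            return timestamp - start
    return None

# Invented data: a tweet whose engagement is heavily front-loaded.
t0 = datetime(2022, 4, 25, 12, 0)
events = [(t0 + timedelta(minutes=m), c) for m, c in
          [(0, 400), (10, 300), (20, 150), (60, 100), (300, 50)]]
print(content_half_life(events))  # -> 0:10:00 for this toy tweet
</code></pre>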
<p>Twitter’s ability to shape real-time discourse, as well as the ease with which data, including geo-tagged data, can be gathered from the platform, has made it a gold mine for researchers analyzing a variety of societal phenomena, ranging from public health to politics. Twitter data has been used to predict <a href="https://ieeexplore.ieee.org/abstract/document/7045443">asthma-related emergency department visits</a>, measure <a href="https://www.cs.jhu.edu/%7Emdredze/publications/2016_ossm.pdf">public epidemic awareness</a>, and model <a href="https://doi.org/10.1080/1369118X.2016.1218528">wildfire smoke dispersion</a>. </p>
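<p>As an illustration of how such data can be gathered, the sketch below queries Twitter’s v2 recent-search endpoint for geo-tagged tweets. It assumes a valid API bearer token; the query string and requested fields are hypothetical examples, and access rules and rate limits vary by account tier.</p>
<pre><code>import os
import requests

# Hypothetical example: fetch recent geo-tagged tweets about wildfire smoke.
# Assumes a Twitter API v2 bearer token is available in the environment.
BEARER_TOKEN = os.environ["TWITTER_BEARER_TOKEN"]
SEARCH_URL = "https://api.twitter.com/2/tweets/search/recent"

params = {
    "query": "wildfire smoke has:geo -is:retweet",   # geo-tagged, original tweets only
    "max_results": 100,
    "tweet.fields": "created_at,geo,public_metrics",
}
headers = {"Authorization": f"Bearer {BEARER_TOKEN}"}

response = requests.get(SEARCH_URL, headers=headers, params=params, timeout=30)
response.raise_for_status()
tweets = response.json().get("data", [])

for tweet in tweets:
    print(tweet["created_at"], tweet["id"], tweet.get("geo"))
</code></pre>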
<p>Tweets that are part of a conversation are <a href="https://blog.twitter.com/en_us/a/2013/keep-up-with-conversations-on-twitter">shown in chronological order</a>, and, even though much of a tweet’s engagement is frontloaded, the Twitter archive <a href="https://blog.twitter.com/en_us/a/2015/full-archive-search-api">provides instant and complete access to every public Tweet</a>. This positions Twitter as a <a href="https://twitter.com/sarahkendzior/status/1514590065674047488">historical chronicler of record</a> and a de facto fact checker.</p>
<h2>Changes on Musk’s mind</h2>
<p>A crucial issue is how Musk’s ownership of Twitter, and private control of social media platforms generally, affect the broader public well-being. In a series of deleted tweets, Musk made several <a href="https://www.bloombergquint.com/business/twitter-shares-fall-after-musk-ditches-potential-board-role">suggestions about how to change Twitter</a>, including adding an edit button for tweets and granting automatic verification marks to premium users. </p>
<p><div data-react-class="Tweet" data-react-props="{"tweetId":"1511143607385874434"}"></div></p>
<p>There is no experimental evidence about how an edit button would change information transmission on Twitter. However, it’s possible to extrapolate from previous research that analyzed deleted tweets. </p>
<p>There are numerous ways to <a href="https://www.tweettabs.com/find-deleted-tweets/">retrieve deleted tweets</a>, which allows researchers to study them. Some studies have found <a href="https://www.aaai.org/ocs/index.php/ICWSM/ICWSM16/paper/viewPaper/13133">significant personality differences</a> between users who delete their tweets and those who don’t, and the findings suggest that deleting tweets is a <a href="https://doi.org/10.1080/1369118X.2016.1257041">way for people to manage their online identities</a>.</p>
<p>Analyzing deleting behavior can also yield valuable clues about <a href="https://ojs.aaai.org/index.php/ICWSM/article/view/14874">online credibility and disinformation</a>. Similarly, if Twitter adds an edit button, analyzing the patterns of editing behavior could provide insights into Twitter users’ motivations and how they present themselves.</p>
<p>Studies of bot-generated activity on Twitter have concluded that <a href="https://www.npr.org/sections/coronavirus-live-updates/2020/05/20/859814085/researchers-nearly-half-of-accounts-tweeting-about-coronavirus-are-likely-bots">nearly half of accounts tweeting about COVID-19 are likely bots</a>. Given <a href="https://doi.org/10.1073/pnas.1804840115">partisanship and political polarization in online spaces</a>, allowing users – whether they are automated bots or actual people – the option to edit their tweets could become another weapon in the disinformation arsenal used by bots and propagandists. Editing tweets could allow users to selectively distort what they said, or deny making inflammatory remarks, which could complicate efforts to trace misinformation.</p>
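<p>If tweets do become editable, one way to trace how a message has been reshaped is to diff an archived copy against the current text. The sketch below is a hypothetical illustration using Python’s standard difflib module; the similarity threshold and example tweets are invented, and Twitter offered no edit-history API at the time of writing.</p>
<pre><code>import difflib

def classify_edit(archived_text, edited_text, threshold=0.6):
    """Compare an archived tweet with its edited version.

    Returns a similarity score between 0 and 1, a rough verdict and a
    word-level diff. The threshold for calling an edit "substantive" is
    an arbitrary choice for this example.
    """
    similarity = difflib.SequenceMatcher(None, archived_text, edited_text).ratio()
    word_diff = list(difflib.unified_diff(
        archived_text.split(), edited_text.split(), lineterm=""))
    if similarity >= threshold:
        verdict = "minor edit"
    else:
        verdict = "substantive rewrite"
    return similarity, verdict, word_diff

# Invented example: an innocuous tweet later rewritten into an inflammatory claim.
archived = "Watching the count come in, looks like a close race tonight"
edited = "The count was rigged, do not trust anything they announce tonight"

score, verdict, diff = classify_edit(archived, edited)
print(f"similarity {score:.2f}: {verdict}")
</code></pre>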
<p><div data-react-class="Tweet" data-react-props="{"tweetId":"1514590065674047488"}"></div></p>
<p>Musk has also indicated his intention to combat Twitter bots, or automated accounts that post rapidly and repeatedly in the guise of people. He has called for <a href="https://twitter.com/elonmusk/status/1517215736606957573">authenticating users as real human beings</a>. </p>
<p>Given <a href="https://doi.org/10.1145/3131365.3131385">challenges such as doxxing</a> and other malicious personal harms online, it’s important for user authentication methods to preserve privacy. This is particularly important for activists, dissidents and whistleblowers who face threats for their online activities. Mechanisms such as <a href="https://www.ijert.org/decentralized-access-control-technique-with-anonymous-authentication">decentralized protocols</a> can enable authentication without sacrificing anonymity. </p>
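<p>The sketch below is a deliberately simplified illustration of that idea, not the decentralized protocol cited above: a hypothetical verifier attests that an account belongs to a real person and hands the platform only a keyed pseudonym, so duplicate or bot accounts can be detected without the platform learning anyone’s identity.</p>
<pre><code>import hashlib
import hmac
import secrets

# Simplified, hypothetical illustration of privacy-preserving authentication.
# A trusted verifier checks a user's identity document, then issues only a
# keyed pseudonym. The platform sees the pseudonym, never the identity.
VERIFIER_KEY = secrets.token_bytes(32)  # held by the verifier, not the platform

def issue_pseudonym(verified_identity: str) -> str:
    """Derive a stable pseudonym from a verified identity.

    The same person always maps to the same pseudonym (so duplicate accounts
    can be detected), but the mapping cannot be reversed without the key.
    """
    digest = hmac.new(VERIFIER_KEY, verified_identity.encode(), hashlib.sha256)
    return digest.hexdigest()

def platform_check(pseudonym: str, registered: set) -> bool:
    """Platform-side check: has this verified human already registered?"""
    return pseudonym in registered

registered_humans = set()
alias = issue_pseudonym("passport:XY1234567")
print(platform_check(alias, registered_humans))  # False: first registration
registered_humans.add(alias)
print(platform_check(alias, registered_humans))  # True: duplicate blocked
</code></pre>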
<h2>Twitter’s content moderation and revenue model</h2>
<p>To understand Musk’s motivations and what lies next for social media platforms such as Twitter, it’s important to consider the gargantuan – and opaque – <a href="https://warzel.substack.com/p/the-internets-original-sin?s=r">online advertising ecosystem</a> involving multiple technologies wielded by ad networks, social media companies and publishers. Advertising is the <a href="https://www.wsj.com/articles/social-media-may-have-to-embrace-the-musk-11649691208">primary revenue source for Twitter</a>. </p>
<p>Musk’s vision is to <a href="https://finance.yahoo.com/news/musk-proposes-twitter-blue-subscription-024424750.html">generate revenue for Twitter from subscriptions</a> rather than advertising. Without having to worry about attracting and retaining advertisers, Twitter would have less pressure to focus on content moderation. This could make Twitter a sort of freewheeling opinion site for paying subscribers. In contrast, until now Twitter has been <a href="https://www.techdirt.com/2021/02/10/content-moderation-case-study-twitter-attempts-to-tackle-covid-related-vaccine-misinformation-2020/">aggressive in using content moderation</a> in its attempts to address disinformation.</p>
<p>Musk’s description of a <a href="https://qz.com/2155098/elon-musks-twitter-bid-isnt-about-free-speech/">platform free from content moderation issues</a> is troubling in light of the algorithmic harms caused by social media platforms. Research has shown a host of these harms, such as <a href="https://doi.org/10.1145/3468507.3468512">algorithms that assign gender</a> to users, <a href="https://doi.org/10.1145/3287560.3287587">potential inaccuracies and biases in algorithms</a> used to glean information from these platforms, and the impact on those <a href="https://theconversation.com/biases-in-algorithms-hurt-those-looking-for-information-on-health-140616">looking for health information online</a>. </p>
<p>Testimony by Facebook whistleblower <a href="https://www.technologyreview.com/2021/10/05/1036519/facebook-whistleblower-frances-haugen-algorithms/">Frances Haugen</a> and recent regulatory efforts such as the <a href="https://www.theguardian.com/technology/2022/apr/14/how-free-speech-absolutist-elon-musk-would-transform-twitter">online safety bill unveiled in the U.K.</a> show there is broad public concern about the role played by technology platforms in shaping popular discourse and public opinion. Musk’s acquisition of Twitter <a href="https://www.theguardian.com/technology/2022/apr/14/how-free-speech-absolutist-elon-musk-would-transform-twitter">highlights a whole host of regulatory concerns</a>. </p>
<p>Because of Musk’s other businesses, Twitter’s <a href="https://www.nasdaq.com/articles/how-does-social-media-influence-financial-markets-2019-10-14">ability to influence public opinion</a> in the sensitive aviation and automobile industries automatically creates a conflict of interest, and it also affects the disclosure of <a href="https://www.investopedia.com/terms/m/materialinsiderinformation.asp">material information</a> that shareholders need. Musk has already been accused of <a href="https://www.cbsnews.com/news/elon-musk-twitter-shareholder-lawsuit/">delaying disclosure of his ownership stake in Twitter</a>.</p>
<p>Twitter’s own <a href="https://blog.twitter.com/engineering/en_us/topics/insights/2021/learnings-from-the-first-algorithmic-bias-bounty-challenge">algorithmic bias bounty challenge</a> concluded that there needs to be a community-led approach to build better algorithms. A very creative exercise developed by the MIT Media Lab asks middle schoolers to <a href="https://www.media.mit.edu/galleries/youtube-redesign/">re-imagine the YouTube platform with ethics in mind</a>. Perhaps it’s time to ask Musk to do the same with Twitter.</p>
<p><em>This is an updated version of <a href="https://theconversation.com/elon-musks-bid-spotlights-twitters-unique-role-in-public-discourse-and-what-changes-might-be-in-store-181374">an article</a> originally published on April 15, 2022.</em></p>
<p class="fine-print"><em><span>Anjana Susarla receives funding from the National Institute of Health and from the Omura-Saxena Professorship in Responsible AI. </span></em></p>Twitter, more than other social media platforms, fosters real-time discussion about events as they unfold. That could change now that Musk has gained control of the company.Anjana Susarla, Professor of Information Systems, Michigan State UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1813742022-04-15T14:42:22Z2022-04-15T14:42:22ZElon Musk’s bid spotlights Twitter’s unique role in public discourse – and what changes might be in store<figure><img src="https://images.theconversation.com/files/458321/original/file-20220415-22-vd2ph3.jpg?ixlib=rb-1.1.0&rect=0%2C0%2C5760%2C3828&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Twitter may not be a darling of Wall Street, but it occupies a unique place in the social media landscape.</span> <span class="attribution"><a class="source" href="https://newsroom.ap.org/detail/CapitolRiotInvestigationTech/d85dc445f8e84d0c9d08c8402a0d300a/photo">AP Photo/Richard Drew</a></span></figcaption></figure><p>Twitter has been in the news a lot lately, albeit for the wrong reasons. Its stock growth has languished and the platform itself has <a href="https://www.npr.org/2021/11/29/1059756077/jack-dorsey-steps-down-as-twitter-ceo">largely remained the same since its founding</a> in 2006. On April 14, 2022, Elon Musk, the world’s richest person, <a href="https://www.bloomberg.com/news/articles/2022-04-14/elon-musk-launches-43-billion-hostile-takeover-of-twitter">made an offer to buy Twitter</a> and take the public company private. </p>
<p>In a <a href="https://www.sec.gov/Archives/edgar/data/0001418091/000110465922045641/tm2212748d1_sc13da.htm">filing with the Securities and Exchange Commission</a>, Musk stated, “I invested in Twitter as I believe in its potential to be the platform for free speech around the globe, and I believe free speech is a societal imperative for a functioning democracy.”</p>
<p>As a <a href="https://scholar.google.com/citations?hl=en&user=JpFHYKcAAAAJ">researcher of social media platforms</a>, I find that Musk’s potential ownership of Twitter and his stated reasons for buying the company raise important issues. Those issues stem from the nature of the social media platform and what sets it apart from others.</p>
<h2>What makes Twitter unique</h2>
<p>Twitter occupies a unique niche. Its short chunks of text and threading foster real-time conversations among thousands of people, which makes it popular with celebrities, media personalities and politicians alike.</p>
<p>Social media analysts talk about the half-life of content on a platform, meaning the time it takes for a piece of content to reach 50% of its total lifetime engagement, usually measured in views or other popularity-based metrics. The average half-life of a tweet is <a href="https://www.business2community.com/social-media-articles/how-your-contents-half-life-should-drastically-impact-your-social-media-strategy-in-2020-02290478">about 20 minutes</a>, compared to five hours for Facebook posts, 20 hours for Instagram posts, 24 hours for LinkedIn posts and 20 days for YouTube videos. This much shorter half-life illustrates the central role Twitter has come to occupy in driving real-time conversations as events unfold.</p>
<p>Twitter’s ability to shape real-time discourse, as well as the ease with which data, including geo-tagged data, can be gathered from the platform, has made it a gold mine for researchers analyzing a variety of societal phenomena, ranging from public health to politics. Twitter data has been used to predict <a href="https://ieeexplore.ieee.org/abstract/document/7045443">asthma-related emergency department visits</a>, measure <a href="https://www.cs.jhu.edu/%7Emdredze/publications/2016_ossm.pdf">public epidemic awareness</a>, and model <a href="https://doi.org/10.1080/1369118X.2016.1218528">wildfire smoke dispersion</a>. </p>
<p>Tweets that are part of a conversation are <a href="https://blog.twitter.com/en_us/a/2013/keep-up-with-conversations-on-twitter">shown in chronological order</a>, and, even though much of a tweet’s engagement is frontloaded, the Twitter archive <a href="https://blog.twitter.com/en_us/a/2015/full-archive-search-api">provides instant and complete access to every public Tweet</a>. This positions Twitter as a <a href="https://twitter.com/sarahkendzior/status/1514590065674047488">historical chronicler of record</a> and a de facto fact checker.</p>
<h2>Changes on Musk’s mind</h2>
<p>A crucial issue is how Musk’s ownership of Twitter, and private control of social media platforms generally, affect the broader public well-being. In a series of deleted tweets, Musk made several <a href="https://www.bloombergquint.com/business/twitter-shares-fall-after-musk-ditches-potential-board-role">suggestions about how to change Twitter</a>, including adding an edit button for tweets and granting automatic verification marks to premium users. </p>
<p><div data-react-class="Tweet" data-react-props="{"tweetId":"1511143607385874434"}"></div></p>
<p>There is no experimental evidence about how an edit button would change information transmission on Twitter. However, it’s possible to extrapolate from previous research that analyzed deleted tweets. </p>
<p>There are numerous ways to <a href="https://www.tweettabs.com/find-deleted-tweets/">retrieve deleted tweets</a>, which allows researchers to study them. Some studies have found <a href="https://www.aaai.org/ocs/index.php/ICWSM/ICWSM16/paper/viewPaper/13133">significant personality differences</a> between users who delete their tweets and those who don’t, and the findings suggest that deleting tweets is a <a href="https://doi.org/10.1080/1369118X.2016.1257041">way for people to manage their online identities</a>.</p>
<p>Analyzing deleting behavior can also yield valuable clues about <a href="https://ojs.aaai.org/index.php/ICWSM/article/view/14874">online credibility and disinformation</a>. Similarly, if Twitter adds an edit button, analyzing the patterns of editing behavior could provide insights into Twitter users’ motivations and how they present themselves.</p>
<p>Studies of bot-generated activity on Twitter have concluded that <a href="https://www.npr.org/sections/coronavirus-live-updates/2020/05/20/859814085/researchers-nearly-half-of-accounts-tweeting-about-coronavirus-are-likely-bots">nearly half of accounts tweeting about COVID-19 are likely bots</a>. Given <a href="https://doi.org/10.1073/pnas.1804840115">partisanship and political polarization in online spaces</a>, allowing users – whether they are automated bots or actual people – the option to edit their tweets could become another weapon in the disinformation arsenal used by bots and propagandists. Editing tweets could allow users to selectively distort what they said, or deny making inflammatory remarks, which could complicate efforts to trace misinformation.</p>
<p><div data-react-class="Tweet" data-react-props="{"tweetId":"1514590065674047488"}"></div></p>
<h2>Twitter’s content moderation and revenue model</h2>
<p>To understand Musk’s motivations and what lies next for social media platforms such as Twitter, it’s important to consider the gargantuan – and opaque – <a href="https://warzel.substack.com/p/the-internets-original-sin?s=r">online advertising ecosystem</a> involving multiple technologies wielded by ad networks, social media companies and publishers. Advertising is the <a href="https://www.wsj.com/articles/social-media-may-have-to-embrace-the-musk-11649691208">primary revenue source for Twitter</a>. </p>
<p>Musk’s vision is to generate revenue for Twitter from subscriptions rather than advertising. Without having to worry about attracting and retaining advertisers, Twitter would have less pressure to focus on content moderation. This would make Twitter a sort of freewheeling opinion site for paying subscribers. Until now, Twitter has been <a href="https://www.techdirt.com/2021/02/10/content-moderation-case-study-twitter-attempts-to-tackle-covid-related-vaccine-misinformation-2020/">aggressive in using content moderation</a> in its attempts to address disinformation.</p>
<p>Musk’s description of a <a href="https://qz.com/2155098/elon-musks-twitter-bid-isnt-about-free-speech/">platform free from content moderation issues</a> is troubling in light of the algorithmic harms caused by social media platforms. Research has shown a host of these harms, such as <a href="https://doi.org/10.1145/3468507.3468512">algorithms that assign gender</a> to users, <a href="https://doi.org/10.1145/3287560.3287587">potential inaccuracies and biases in algorithms</a> used to glean information from these platforms, and the impact on those <a href="https://theconversation.com/biases-in-algorithms-hurt-those-looking-for-information-on-health-140616">looking for health information online</a>. </p>
<p>Testimony by Facebook whistleblower <a href="https://www.technologyreview.com/2021/10/05/1036519/facebook-whistleblower-frances-haugen-algorithms/">Frances Haugen</a> and recent regulatory efforts such as the <a href="https://www.theguardian.com/technology/2022/apr/14/how-free-speech-absolutist-elon-musk-would-transform-twitter">online safety bill unveiled in the U.K.</a> show there is broad public concern about the role played by technology platforms in shaping popular discourse and public opinion. Musk’s bid for Twitter <a href="https://www.theguardian.com/technology/2022/apr/14/how-free-speech-absolutist-elon-musk-would-transform-twitter">highlights a whole host of regulatory concerns</a>. </p>
<p>Because of Musk’s other businesses, Twitter’s <a href="https://www.nasdaq.com/articles/how-does-social-media-influence-financial-markets-2019-10-14">ability to influence public opinion</a> in the sensitive aviation and automobile industries would automatically create a conflict of interest, and it would also affect the disclosure of <a href="https://www.investopedia.com/terms/m/materialinsiderinformation.asp">material information</a> that shareholders need. Musk has already been accused of <a href="https://www.cbsnews.com/news/elon-musk-twitter-shareholder-lawsuit/">delaying disclosure of his ownership stake in Twitter</a>.</p>
<p>Twitter’s own <a href="https://blog.twitter.com/engineering/en_us/topics/insights/2021/learnings-from-the-first-algorithmic-bias-bounty-challenge">algorithmic bias bounty challenge</a> concluded that there needs to be a community-led approach to build better algorithms. A very creative exercise developed by the MIT Media Lab asks middle schoolers to <a href="https://www.media.mit.edu/galleries/youtube-redesign/">re-imagine the YouTube platform with ethics in mind</a>. Perhaps it’s time to ask Twitter to do the same, whoever owns and manages the company.</p>
<img src="https://counter.theconversation.com/content/181374/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Anjana Susarla receives funding from the National Institute of Health and from the Omura-Saxena Professorship in Responsible AI. </span></em></p>Twitter, more than other social media platforms, fosters real-time discussion about events as they unfold. That could change if Musk gains control of the company.Anjana Susarla, Professor of Information Systems, Michigan State UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1734172021-12-20T13:15:06Z2021-12-20T13:15:06ZFacebook became Meta – and the company’s dangerous behavior came into sharp focus in 2021: 4 essential reads<figure><img src="https://images.theconversation.com/files/438279/original/file-20211217-23365-1bmrls4.jpg?ixlib=rb-1.1.0&rect=0%2C8%2C5290%2C3547&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Facebook renamed itself Meta in 2021, but the year was more notable for revelations about the company's bad behavior.</span> <span class="attribution"><a class="source" href="https://newsroom.ap.org/detail/VirusOutbreak-ReturntoOffice/8669bf4970514cc780bfcdc6c17fa6f2/photo">AP Photo/Tony Avelar</a></span></figcaption></figure><p>Meta, née Facebook, had a rough year in 2021, in <a href="https://www.cnn.com/2021/11/10/business/cnn-poll-facebook/index.html">public opinion</a> if not <a href="https://investor.fb.com/investor-news/press-release-details/2021/Facebook-Reports-Third-Quarter-2021-Results/default.aspx">financially</a>. Revelations from whistleblower Frances Haugen, first detailed in a Wall Street Journal <a href="https://www.wsj.com/articles/the-facebook-files-11631713039">investigative series</a> and then presented in <a href="https://www.c-span.org/video/?515042-1/whistleblower-frances-haugen-calls-congress-regulate-facebook">congressional testimony</a>, show that the company was aware of the harm it was causing.</p>
<p>Growing concerns about misinformation, emotional manipulation and psychological harm came to a head this year when Haugen released internal company documents showing that the company’s own research confirmed the societal and individual harm its Facebook, Instagram and WhatsApp platforms cause.</p>
<p>The Conversation gathered four articles from our archives that delve into research that explains Meta’s problematic behavior. </p>
<h2>1. Addicted to engagement</h2>
<p>At the root of Meta’s harmfulness is its set of algorithms, the rules the company uses to choose what content you see. The algorithms are designed to boost the company’s profits, but they also allow misinformation to thrive.</p>
<p>The algorithms work by increasing engagement – in other words, by provoking a response from the company’s users. Indiana University’s <a href="https://scholar.google.com/citations?user=f_kGJwkAAAAJ&hl=en">Filippo Menczer</a>, who studies the spread of information and misinformation in social networks, explains that engagement plays into people’s tendency to favor posts that seem popular. “When social media tells people an item is going viral, <a href="https://theconversation.com/facebook-whistleblower-frances-haugen-testified-that-the-companys-algorithms-are-dangerous-heres-how-they-can-manipulate-you-169420">their cognitive biases kick in</a> and translate into the irresistible urge to pay attention to it and share it,” he wrote.</p>
<p>One result is that low-quality information that gets an initial boost can garner more attention than it otherwise deserves. Worse, this dynamic can be gamed by people aiming to spread misinformation.</p>
<p>“People aiming to manipulate the information market have created fake accounts, like trolls and social bots, and organized fake networks,” Menczer wrote. “They have flooded the network to create the appearance that a conspiracy theory or a political candidate is popular, tricking both platform algorithms and people’s cognitive biases at once.”</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/facebook-whistleblower-frances-haugen-testified-that-the-companys-algorithms-are-dangerous-heres-how-they-can-manipulate-you-169420">Facebook whistleblower Frances Haugen testified that the company's algorithms are dangerous – here's how they can manipulate you</a>
</strong>
</em>
</p>
<hr>
<h2>2. Kneecapping teen girls’ self-esteem</h2>
<p>Some of the most disturbing revelations concern the harm Meta’s Instagram social media platform causes adolescents, particularly teen girls. University of Kentucky psychologist <a href="https://scholar.google.com/citations?user=tuYEhtgAAAAJ&hl=en">Christia Spears Brown</a> explains that Instagram can lead teens to objectify themselves by focusing on how their bodies appear to others. It also can lead them to make unrealistic comparisons of themselves with celebrities and filtered and retouched images of their peers.</p>
<p>Even when teens know the comparisons are unrealistic, they end up feeling worse about themselves. “Even in studies in which participants knew the photos they were shown on Instagram were retouched and reshaped, <a href="https://theconversation.com/facebook-has-known-for-a-year-and-a-half-that-instagram-is-bad-for-teens-despite-claiming-otherwise-here-are-the-harms-researchers-have-been-documenting-for-years-168043">adolescent girls still felt worse about their bodies after viewing them</a>,” she wrote.</p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/rd2yC63DMBE?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
<figcaption><span class="caption">“The choices being made inside of Facebook are disastrous for our children,” whistleblower Frances Haugen told Congress.</span></figcaption>
</figure>
<p>The problem is widespread because Instagram is where teens tend to hang out online. “Teens are more likely to log on to Instagram than any other social media site. It is a ubiquitous part of adolescent life,” Brown writes. “Yet studies consistently show that the more often teens use Instagram, the worse their overall well-being, self-esteem, life satisfaction, mood and body image.”</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/states-sue-meta-for-knowingly-hurting-teens-with-facebook-and-instagram-here-are-the-harms-researchers-have-documented-168043">States sue Meta for knowingly hurting teens with Facebook and Instagram − here are the harms researchers have documented</a>
</strong>
</em>
</p>
<hr>
<h2>3. Fudging the numbers on harm</h2>
<p>Meta has, not surprisingly, pushed back against claims of harm despite the revelations in the leaked internal documents. The company has pointed to research that it says shows <a href="https://about.fb.com/news/2021/09/what-the-wall-street-journal-got-wrong/">its platforms do not cause harm</a> in the way many researchers describe, and claims that the overall picture from research on harm is unclear.</p>
<p>University of Washington computational social scientist <a href="https://scholar.google.com/citations?user=Y5000VQAAAAJ&hl=en">Joseph Bak-Coleman</a> explains that Meta’s research can be both accurate and misleading. The explanation lies in averages. Meta’s studies look at effects on the average user. Given that Meta’s social media platforms have billions of users, <a href="https://theconversation.com/the-thousands-of-vulnerable-people-harmed-by-facebook-and-instagram-are-lost-in-metas-average-user-data-172119">harm to many thousands of people can be lost</a> when all of the users’ experiences are averaged together.</p>
<p>“The inability of this type of research to capture the smaller but still significant numbers of people at risk – the tail of the distribution – is made worse by the need to measure a range of human experiences in discrete increments,” he wrote.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/the-thousands-of-vulnerable-people-harmed-by-facebook-and-instagram-are-lost-in-metas-average-user-data-172119">The thousands of vulnerable people harmed by Facebook and Instagram are lost in Meta's 'average user' data</a>
</strong>
</em>
</p>
<hr>
<h2>4. Hiding the numbers on misinformation</h2>
<p>Just as evidence of emotional and psychological harm can be lost in averages, evidence of the spread of misinformation can be lost without the context of another type of math: fractions. Despite substantial efforts to track misinformation on social media, it’s impossible to know the scope of the problem without knowing the number of overall posts social media users see each day. And that’s information Meta doesn’t make available to researchers.</p>
<p>The overall number of posts is the denominator to the misinformation numerator in the fraction that tells you how bad the misinformation problem is, explains UMass Amherst’s <a href="https://scholar.google.com/citations?user=1lvJXKQAAAAJ&hl=en">Ethan Zuckerman</a>, who studies social and civic media.</p>
<p>The denominator problem is compounded by the distribution problem, which is the need to figure out where misinformation is concentrated. “Simply counting instances of misinformation found on a social media platform <a href="https://theconversation.com/facebook-has-a-misinformation-problem-and-is-blocking-access-to-data-about-how-much-there-is-and-who-is-affected-164838">leaves two key questions unanswered</a>: How likely are users to encounter misinformation, and are certain users especially likely to be affected by misinformation?” he wrote.</p>
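<p>A toy calculation, with entirely invented numbers, shows why both the denominator and the distribution matter; nothing below reflects real platform figures.</p>
<pre><code># Toy numbers only: illustrates why counts of misinformation are hard to
# interpret without the denominator (total posts viewed) and the distribution
# (how exposure is concentrated among users).
misinformation_posts_found = 1_000_000

for total_posts_viewed in (1e9, 1e11):
    prevalence = misinformation_posts_found / total_posts_viewed
    print(f"total views {total_posts_viewed:.0e}: "
          f"{prevalence:.4%} of viewed posts are misinformation")

# Distribution problem: the same total exposure looks very different if it is
# spread evenly versus concentrated in a small group of users.
users = 100_000_000
evenly_spread = misinformation_posts_found / users
concentrated_group = 0.01 * users  # suppose 1% of users see almost all of it
concentrated = misinformation_posts_found / concentrated_group
print(f"even spread: {evenly_spread:.2f} posts per user")
print(f"concentrated: {concentrated:.0f} posts per affected user")
</code></pre>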
<p>This lack of information isn’t unique to Meta. “No social media platform makes it possible for researchers to accurately calculate how prominent a particular piece of content is across its platform,” Zuckerman wrote.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/facebook-has-a-misinformation-problem-and-is-blocking-access-to-data-about-how-much-there-is-and-who-is-affected-164838">Facebook has a misinformation problem, and is blocking access to data about how much there is and who is affected</a>
</strong>
</em>
</p>
<hr>
<p><em>Editor’s note: This story is a roundup of articles from The Conversation’s archives.</em></p>
Meta felt the heat in 2021 as whistleblower revelations, congressional ire and demands for data knocked the company back on its heels. Here’s a look at research into the problems Meta poses for society.Eric Smalley, Science + Technology EditorLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1731002021-12-03T16:17:43Z2021-12-03T16:17:43ZFacebook: latest court case shows how Europe is clamping down on big tech<p>Facebook’s approach to users’ data has just been dealt <a href="https://curia.europa.eu/jcms/jcms/p1_3584224/en/">a major blow</a> by the European Court of Justice (ECJ). In an answer to a question from Germany’s highest court, the ECJ’s advocate general – whose opinion is not binding but is <a href="https://www.elgaronline.com/view/journals/cilj/5-1/cilj.2016.01.05.xml">generally followed</a> by the court – has made an essential clarification to Europe’s data protection law to confirm that consumer associations can bring actions on behalf of individuals. </p>
<p>If followed by the ECJ, this will make it much easier for people to defend their rights against tech giants in future. Coming on the back of <a href="https://theconversation.com/google-loses-appeal-against-2-4-billion-fine-tech-giants-might-now-have-to-re-think-their-entire-business-models-171628">a decision</a> by the European General Court against Google several weeks ago for using its platform power to restrict competitors, it is the latest example of European regulators making the business climate increasingly chilly for the companies that control our data – in sharp contrast to the US.</p>
<h2>Facebook and consent</h2>
<p>The current case is about the way that Facebook, now known as Meta, in its early years encouraged users to play quizzes and games such as FarmVille, before sharing the results with all their friends. In an <a href="https://www.taylorwessing.com/en/insights-and-events/insights/2020/05/bgh-legt-in-sachen-vzbv-gegen-facebook">action brought</a> by the Federation of German Consumer Organisations (VZBV), first heard in 2014, the federation claimed that Facebook’s data protection notice did <a href="https://www.reuters.com/article/us-facebook-germany-idUSKBN2341BZ">not clearly explain</a> to users how their data could be shared. It wants the company to be forbidden from using similar consent forms in future. </p>
<p>VZBV won the original case and on appeal, before it was <a href="https://curia.europa.eu/juris/showPdf.jsf?text=&docid=230961&pageIndex=0&doclang=EN&mode=req&dir=&occ=first&part=1&cid=248605">heard by</a> Germany’s highest court in May 2020. The judges agreed that Facebook had misled users with the notice, but <a href="https://curia.europa.eu/juris/showPdf.jsf?text=&docid=230961&pageIndex=0&doclang=EN&mode=req&dir=&occ=first&part=1&cid=2486053">sought an opinion</a> from the ECJ on Facebook’s argument that only individuals and not consumer organisations can bring complaints under the EU’s General Data Protection Regulation (GDPR), which governs this area. </p>
<p>The advocate general’s recommendation, ahead of a final ECJ decision in 2022, reflects the fact that individuals do not typically start legal proceedings against large companies for a small breach of a rather technical regulation. Suing big firms on behalf of society is what consumers’ organisations do, so it would limit people’s protection if this was disallowed. </p>
<p>Facebook’s approach to games is not the only time there have been questions about how it obtained users’ consent over data. It <a href="https://www.vzbv.de/urteile/ordnungsgeld-facebook-muss-100000-euro-zahlen">famously sent</a> unsolicited emails to users’ contacts when they joined the social network. It also placed “like” buttons on third party websites and harvested the data without seeking users’ consent. </p>
<p>One by one, national European regulators have ruled these practices illegal, but always long after the fact. When Facebook was ordered <a href="https://www.vzbv.de/urteile/ordnungsgeld-facebook-muss-100000-euro-zahlen">to pay €100,000</a> (£85,138) by German regulators in 2016 for sending unsolicited emails, for instance, it was clearly too late to affect the company’s behaviour on that individual issue. </p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/435595/original/file-20211203-15-2fk72q.jpeg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="A picture of phone apps with FarmVille in the middle" src="https://images.theconversation.com/files/435595/original/file-20211203-15-2fk72q.jpeg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/435595/original/file-20211203-15-2fk72q.jpeg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/435595/original/file-20211203-15-2fk72q.jpeg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/435595/original/file-20211203-15-2fk72q.jpeg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/435595/original/file-20211203-15-2fk72q.jpeg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/435595/original/file-20211203-15-2fk72q.jpeg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/435595/original/file-20211203-15-2fk72q.jpeg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Harvesting time …</span>
<span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/london-united-kingdom-october-01-2018-1195534930">OpturaDesign</a></span>
</figcaption>
</figure>
<p>VZBV has been at the forefront of fighting to make tech giants accountable for customer data since the early 2010s, though not always successfully. It <a href="https://www.cbsnews.com/news/germany-facebook-court-case-privacy-settings-terms-of-use-brought-vzbz/">failed in an attempt</a> to stop Facebook claiming its platform is “free and will always be”, while making users pay with their private data. It was also unable to require the company to allow users to <a href="https://edition.cnn.com/2014/09/16/living/facebook-name-policy/">adopt a pseudonym</a>. Facebook had resisted citing safety concerns, but perhaps also because data on identifiable consumers <a href="https://www.theguardian.com/technology/2012/aug/02/facebook-share-price-slumps-20-dollars">is more valuable</a> than anonymous ones.</p>
<h2>The GDPR and future regulations</h2>
<p>As Facebook and other social media companies have continued to develop new techniques to <a href="https://www.reuters.com/article/us-facebook-privacy-tracking-idUSKBN1HM0DR">harvest consumer data</a>, the GDPR was adopted by the EU in 2018 as a general framework to clarify the rules. It gives users more control and rights over their own data, requiring clear consent before it can be used. </p>
<p>Pending a decision on consumer organisations, the ECJ has <a href="https://curia.europa.eu/jcms/upload/docs/application/pdf/2021-06/cp210103en.pdf">already recently decided</a> that national privacy watchdogs can directly fine tech firms under the GDPR for breaches affecting their citizens. Facebook had claimed only the Irish authority was competent, since its EU headquarters are there. A forthcoming <a href="https://www.internetjustsociety.org/one-way-ticket-to-luxembourg-facebook-v-bundeskartellamt-at-the-ecj">ECJ case</a> will look at giving similar powers to antitrust authorities.</p>
<p>The EU rules around big tech are also set <a href="https://www.politico.eu/article/europe-digital-markets-act-dma-digital-services-act-dsa-regulation-platforms-google-amazon-facebook-apple-microsoft/">to be strengthened</a> in 2022 with the <a href="https://digital-strategy.ec.europa.eu/en/policies/digital-services-act-package">Digital Services Act</a> and <a href="https://www.euractiv.com/section/digital/news/eu-parliaments-key-committee-adopts-digital-markets-act/">Digital Markets Act</a>. This package of extra restrictions is set to include curbing the uncontrolled spread of unverified and often hateful content, with the potential for penalties of 10% of a company’s annual revenue. </p>
<p>And for <a href="https://www.bbc.co.uk/news/technology-58340333">all the talk</a> of a bonfire of EU data protection rules after Brexit, the forthcoming UK Online Safety Bill goes <a href="https://www.politico.com/news/agenda/2021/11/02/facebook-europe-privacy-content-laws-518514">arguably even further</a> in the same direction, with not only similar fines but potential prison sentences for executives over breaches. The bill may <a href="https://www.thetimes.co.uk/article/online-safety-bill-to-make-tech-giants-tackle-scams-5v3d85q9v">even make</a> Facebook responsible for scams by other companies advertising on the platform. </p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/435597/original/file-20211203-27-1517pqm.jpeg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="Facebook icon next to a virus" src="https://images.theconversation.com/files/435597/original/file-20211203-27-1517pqm.jpeg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/435597/original/file-20211203-27-1517pqm.jpeg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=380&fit=crop&dpr=1 600w, https://images.theconversation.com/files/435597/original/file-20211203-27-1517pqm.jpeg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=380&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/435597/original/file-20211203-27-1517pqm.jpeg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=380&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/435597/original/file-20211203-27-1517pqm.jpeg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=478&fit=crop&dpr=1 754w, https://images.theconversation.com/files/435597/original/file-20211203-27-1517pqm.jpeg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=478&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/435597/original/file-20211203-27-1517pqm.jpeg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=478&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Tougher rules on extreme content are around the corner.</span>
<span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/stone-united-kingdom-april-4-2020-1693209826">Ascannio</a></span>
</figcaption>
</figure>
<p>Major EU countries such as Germany, France and the Netherlands <a href="https://www.ft.com/content/e0248106-e6d5-4b2a-aaef-b52d464dcc03">also want</a> the Digital Services Act to block what has become big tech’s major strategy to attract new users: identifying non-profitable but successful internet companies, and buying their technology and user base. The UK is now decisively on the same path, as the Competition and Markets Authority <a href="https://www.ft.com/content/af93369a-56fe-4d79-ad68-42e40404291f">just ordered</a> Facebook/Meta to sell Giphy, the largest repository of GIFs on the internet, which <a href="https://slate.com/technology/2021/11/meta-told-to-sell-giphy-in-first-major-antitrust-move-against-facebooks-parent-company.html">it bought</a> in 2020 for US$400 million (£301 million).</p>
<p>European regulators are therefore unravelling tech giants’ business models one decision <a href="https://theconversation.com/google-loses-appeal-against-2-4-billion-fine-tech-giants-might-now-have-to-re-think-their-entire-business-models-171628">after the other</a>. European data regulation is also becoming the de facto <a href="https://www.cambridge.org/core/journals/american-journal-of-international-law/article/gdpr-as-global-data-protection-regulation/CB416FF11457C21B02C0D1DA7BE8E688">global standard</a> because to be allowed to operate in Europe (which generates <a href="https://www.politico.com/news/agenda/2021/11/02/facebook-europe-privacy-content-laws-518514">a quarter</a> of Facebook’s annual profits), global tech often has to obey the stricter European rules across the board.</p>
<p>The European logic is that harvesting private data is often a rip-off. People care about privacy but <a href="https://curia.europa.eu/jcms/upload/docs/application/pdf/2021-06/cp210103en.pdf">give away</a> their data in exchange for almost nothing, and the government should protect them. American regulators consider this patronising, with the Supreme Court ruling almost 20 years ago that <a href="https://supreme.justia.com/cases/federal/us/540/02-682/">a dominant firm</a> is free to exploit its consumers. Recent whistleblower Frances Haugen has provoked some soul searching in the US, but will probably <a href="https://www.politico.com/news/agenda/2021/11/02/facebook-europe-privacy-content-laws-518514">ultimately struggle</a> to secure meaningful changes to the rules around data and content. </p>
<p>With the likes of the UK now strongly following the path of the EU, the US is becoming increasingly isolated in this area. Meta is still free to make money out of its existing Facebook users in Europe. But as <a href="https://www.theverge.com/22743744/facebook-teen-usage-decline-frances-haugen-leaks">younger generations</a> leave Facebook for the likes of TikTok and Snapchat, it faces increasing difficulties in reaching them and gathering the necessary information to sell their profiles to advertisers. It may therefore be time for companies like Facebook to find new sources of revenue.</p>
<p class="fine-print"><em><span>Renaud Foucart does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>Social media firms in Europe are well on the way to a thousand cuts.Renaud Foucart, Senior Lecturer in Economics, Lancaster University Management School, Lancaster UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1721192021-11-24T13:42:27Z2021-11-24T13:42:27ZThe thousands of vulnerable people harmed by Facebook and Instagram are lost in Meta’s ‘average user’ data<figure><img src="https://images.theconversation.com/files/433195/original/file-20211122-27-1p8hg0b.jpg?ixlib=rb-1.1.0&rect=0%2C0%2C6000%2C3997&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Mark Zuckerberg's company says the kids are all right, but the data it presents is only about how the average social media user is doing.</span> <span class="attribution"><a class="source" href="https://newsroom.ap.org/detail/FacebookFalloutReininginBigTech/c3b8869675664f0abbbe61d214d5cf96/photo">AP Photo/Eric Risberg</a></span></figcaption></figure><p>Fall 2021 has been filled with a steady stream of media coverage arguing that Meta’s Facebook, WhatsApp and Instagram social media platforms pose a threat to <a href="https://www.wsj.com/articles/facebook-knows-instagram-is-toxic-for-teen-girls-company-documents-show-11631620739">users’ mental health</a> and well-being, <a href="https://www.nbcnews.com/tech/tech-news/facebook-knew-radicalized-users-rcna3581">radicalize</a>, <a href="https://www.cbsnews.com/news/facebook-whistleblower-frances-haugen-60-minutes-polarizing-divisive-content/">polarize</a> users and <a href="https://www.cbsnews.com/news/facebook-whistleblower-frances-haugen-documents-misinformation-spread/">spread misinformation</a>. </p>
<p>Are these technologies – <a href="https://investor.fb.com/investor-news/press-release-details/2021/Facebook-Reports-Third-Quarter-2021-Results/default.aspx">embraced by billions</a> – killing people and eroding democracy? Or is this just another moral panic? </p>
<p>According to <a href="https://about.fb.com/news/2021/09/what-the-wall-street-journal-got-wrong/">Meta’s PR team</a> and a handful of <a href="https://www.nytimes.com/2021/10/10/opinion/instagram-facebook-mental-health-study.html">contrarian academics</a> and <a href="https://www.nytimes.com/2021/10/13/opinion/instagram-teenagers.html">journalists</a>, there is evidence that social media does not cause harm and the overall picture is unclear. They cite apparently conflicting studies, imperfect access to data and the difficulty of establishing causality to support this position.</p>
<p>Some of these researchers have surveyed social media users and found that social media use appears to have at most <a href="https://doi.org/10.1073/pnas.1902058116">minor negative consequences</a> on individuals. These results seem inconsistent with years of <a href="https://www.usnews.com/news/healthiest-communities/articles/2019-09-12/social-media-use-may-increase-teens-risk-of-mental-health-issues">journalistic reporting</a>, Meta’s <a href="https://www.wsj.com/articles/the-facebook-files-11631713039">leaked internal data</a>, common sense intuition and <a href="https://www.nytimes.com/2021/10/05/technology/teenage-girls-instagram.html">people’s lived experience</a>.</p>
<p>Teens struggle with self-esteem, and it doesn’t seem far-fetched to suggest that browsing Instagram could make that worse. Similarly, it’s hard to imagine so many people refusing to get vaccinated, becoming hyperpartisan or succumbing to conspiracy theories in the days before social media.</p>
<p>So who is right? As a researcher who <a href="https://scholar.google.com/citations?user=Y5000VQAAAAJ&hl=en">studies collective behavior</a>, I see no conflict between the research (methodological quibbles aside), leaks and people’s intuition. Social media can have catastrophic effects, even if the average user only experiences minimal consequences.</p>
<h2>Averaging’s blind spot</h2>
<p>To see how this works, consider a world in which Instagram has a rich-get-richer and poor-get-poorer effect on the well-being of users. A majority, those already doing well to begin with, find Instagram provides social affirmation and helps them stay connected to friends. A minority, those who are struggling with depression and loneliness, see these posts and wind up feeling worse. </p>
<p>If you average them together in a study, you might not see much of a change over time. This could explain why surveys and panel studies report minimal impact on average. More generally, small groups in a larger sample have a hard time moving the average.</p>
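<p>A small simulation, with invented numbers, makes the point: give most users a tiny boost in well-being and a small minority a large drop, and the population average barely moves even though thousands of people in the vulnerable group are much worse off.</p>
<pre><code>import random

random.seed(1)

# Invented numbers for illustration: 95% of users get a small lift in
# well-being, 5% (the vulnerable group) take a large hit.
population = 100_000
changes = []
for _ in range(population):
    if random.random() > 0.05:
        changes.append(random.gauss(0.05, 0.1))   # slight improvement
    else:
        changes.append(random.gauss(-1.5, 0.5))   # serious decline

average_change = sum(changes) / population
severely_harmed = sum(1 for c in changes if c < -1.0)

print(f"average change in well-being: {average_change:+.3f}")   # close to zero
print(f"users with a severe decline:  {severely_harmed:,}")     # thousands
</code></pre>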
<p>Yet if we zoom in on the most at-risk people, many of them may have moved from occasionally sad to mildly depressed or from mildly depressed to dangerously so. This is precisely what Facebook whistleblower Frances Haugen reported in her congressional testimony: Instagram creates a <a href="https://www.theguardian.com/technology/2021/oct/12/instagram-eating-disorders-teen-girls-parents">downward spiraling feedback loop</a> among the most vulnerable teens.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/433184/original/file-20211122-25-kauafv.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="A teen watches an Instagram post of a young woman applying makeup" src="https://images.theconversation.com/files/433184/original/file-20211122-25-kauafv.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/433184/original/file-20211122-25-kauafv.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=449&fit=crop&dpr=1 600w, https://images.theconversation.com/files/433184/original/file-20211122-25-kauafv.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=449&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/433184/original/file-20211122-25-kauafv.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=449&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/433184/original/file-20211122-25-kauafv.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=565&fit=crop&dpr=1 754w, https://images.theconversation.com/files/433184/original/file-20211122-25-kauafv.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=565&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/433184/original/file-20211122-25-kauafv.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=565&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Large-scale population studies can miss effects experienced by a subset of people; for example, vulnerable teen girls on Instagram.</span>
<span class="attribution"><a class="source" href="https://newsroom.ap.org/detail/GrowingUpDigitalSmartphonePsychiatry/23cc607736124a00ae50dcac5fc634f5/photo">AP Photo/Haven Daley</a></span>
</figcaption>
</figure>
<p>The inability of this type of research to capture the smaller but still significant numbers of people at risk – the <a href="https://www.statisticshowto.com/upper-tail-and-lower-tail/">tail of the distribution</a> – is made worse by the need to measure a range of human experiences in discrete increments. When people rate their well-being from a low point of one to a high point of five, “one” can mean anything from breaking up with a partner who they weren’t that into in the first place to urgently needing crisis intervention to stay alive. These nuances are buried in the context of population averages. </p>
<h2>A history of averaging out harm</h2>
<p>The tendency to ignore harm on the margins isn’t unique to mental health or even the consequences of social media. Allowing the bulk of experience to obscure the fate of smaller groups is a common mistake, and I’d argue that these are often the people society should be most concerned about. </p>
<p>It can also be <a href="https://doi.org/10.1038/465686a">a pernicious tactic</a>. Tobacco companies and scientists alike once argued that premature death among some smokers was not a serious concern because most people who have smoked a cigarette do not die of <a href="https://global.oup.com/academic/product/doubt-is-their-product-9780195300673?cc=us&lang=en&">lung cancer</a>. </p>
<p>Pharmaceutical companies have defended their aggressive marketing tactics by claiming that the vast majority of people treated with opioids <a href="https://www.vox.com/policy-and-politics/2017/6/7/15724054/opioid-epidemic-lawsuits-purdue-oxycontin">get relief from pain without dying of an overdose</a>. In doing so, they’ve swapped the vulnerable for the average and steered the conversation toward benefits, often measured in a way that obscures the very real damage to a minority – but still substantial – group of people.</p>
<p>[<em>Get our best science, health and technology stories.</em> <a href="https://theconversation.com/us/newsletters/science-editors-picks-71/?utm_source=TCUS&utm_medium=inline-link&utm_campaign=newsletter-text&utm_content=science-best">Sign up for The Conversation’s science newsletter</a>.]</p>
<p>The lack of harm to many is not inconsistent with severe harm caused to a few. With most of the world now using some form of social media, I believe it’s important to listen to the voices of concerned parents and struggling teenagers when they point to Instagram as a source of distress. Similarly, it’s important to acknowledge that the COVID-19 pandemic has been prolonged because <a href="http://dx.doi.org/10.1136/bmjgh-2020-004206">misinformation on social media has made some people afraid</a> to take a safe and effective vaccine. These lived experiences are important pieces of evidence about the harm caused by social media.</p>
<h2>Does Meta have the answer?</h2>
<p>Establishing causality from observational data is challenging, so challenging that progress on this front garnered the <a href="https://www.nobelprize.org/prizes/economic-sciences/2021/popular-information/">2021 Nobel in economics</a>. And social scientists are not well positioned to run randomized controlled trials to definitively establish causality, particularly for social media platform design choices such as altering how content is filtered and displayed. </p>
<p>But Meta is. The company has petabytes of data on human behavior, many social scientists on its payroll and the ability to run randomized controlled trials in parallel with <a href="https://www.theguardian.com/technology/2014/jun/29/facebook-users-emotions-news-feeds">millions of users</a>. It runs such experiments all the time to understand how best to <a href="https://www.washingtonpost.com/technology/2021/10/26/facebook-angry-emoji-algorithm/">capture users’ attention</a>, down to every button’s color, shape and size. </p>
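<p>To make that experimental logic concrete, here is a minimal sketch of a two-arm randomized experiment, written in Python. The feed variants, outcome rates and sample size are assumptions chosen purely for illustration; nothing below reflects Meta’s actual systems or data.</p>
<pre><code># Minimal sketch of a two-arm randomized experiment ("A/B test"),
# using simulated data only -- not Meta's methods or numbers.
import random

random.seed(0)
n = 100_000  # users per arm; platforms can run far larger trials

# Assumed probability that a user reports a negative outcome under the
# current feed (control) and under a hypothetical modified feed (treatment).
p_control, p_treatment = 0.050, 0.055

control = random.choices([1, 0], weights=[p_control, 1 - p_control], k=n)
treatment = random.choices([1, 0], weights=[p_treatment, 1 - p_treatment], k=n)

rate_c = sum(control) / n
rate_t = sum(treatment) / n
print(f"control: {rate_c:.4f}  treatment: {rate_t:.4f}  diff: {rate_t - rate_c:+.4f}")

# Because users are assigned to the two arms at random, the difference in
# rates estimates the causal effect of the design change -- the kind of
# evidence that observational studies of social media struggle to provide.
</code></pre>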
<p>Meta could come forward with irrefutable and transparent evidence that its products are harmless, even to the vulnerable, if such evidence exists. Has the company chosen not to run such experiments, or has it run them and decided not to share the results? </p>
<p>Either way, Meta’s decision to instead release and emphasize data about average effects is telling.</p><img src="https://counter.theconversation.com/content/172119/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Joseph Bak-Coleman receives funding from the University of Washington Center for an Informed Public, University of Washington eScience Institute, and the Knight Foundation. </span></em></p>Research from Meta and some scientists shows no harm from social media, but other research and whistleblower testimony show otherwise. Seemingly contradictory, both can be right.Joseph Bak-Coleman, Postdoctoral Fellow at the Center for an Informed Public, University of WashingtonLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1698112021-11-07T13:11:14Z2021-11-07T13:11:14ZAs a global infrastructure giant, Facebook must uphold human rights<figure><img src="https://images.theconversation.com/files/429967/original/file-20211103-23-e238vw.jpg?ixlib=rb-1.1.0&rect=0%2C26%2C6000%2C3961&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Seen on the screen of a device in Sausalito, Calif., Facebook CEO Mark Zuckerberg announces the company's new corporate name, Meta, during a virtual event.</span> <span class="attribution"><span class="source"> (AP Photo/Eric Risberg) </span></span></figcaption></figure><p>Facebook — <a href="https://www.cnn.com/2021/10/28/tech/facebook-mark-zuckerberg-keynote-announcements/index.html">its new corporate name is Meta</a> — has always wanted to get to know you. Its public goal has ostensibly been to connect people. It’s been wildly successful in doing so by building out what can only be called everyday infrastructure around the world. </p>
<p>There are <a href="https://www.statista.com/statistics/947869/facebook-product-mau/">3.5 billion people</a> worldwide using Facebook’s suite of products, which includes Messenger, Instagram and WhatsApp. As the infrastructure provider, Facebook knows a lot about who its users are, and what they do.</p>
<p>The company recently announced a US$10 billion investment in the “metaverse” — an immersive version of the internet that can only increase Facebook’s hold on citizens via the data it collects about us.</p>
<figure class="align-left zoomable">
<a href="https://images.theconversation.com/files/430036/original/file-20211103-23-13jvm2u.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="A woman with blonde hair speaks into a microphone with one arm raised." src="https://images.theconversation.com/files/430036/original/file-20211103-23-13jvm2u.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=237&fit=clip" srcset="https://images.theconversation.com/files/430036/original/file-20211103-23-13jvm2u.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/430036/original/file-20211103-23-13jvm2u.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/430036/original/file-20211103-23-13jvm2u.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/430036/original/file-20211103-23-13jvm2u.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/430036/original/file-20211103-23-13jvm2u.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/430036/original/file-20211103-23-13jvm2u.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Former Facebook employee Frances Haugen speaks during a Senate hearing in Washington, D.C.</span>
<span class="attribution"><span class="source">(AP Photo/Alex Brandon)</span></span>
</figcaption>
</figure>
<p>This announcement comes at a time when everyone wants to do something about Facebook. Recent reporting on corporate ethics, fuelled by whistle-blower Frances Haugen’s <a href="https://www.washingtonpost.com/context/facebook-whistleblower-frances-haugen-s-senate-testimony/8d324185-d725-4d99-9160-9ce9e13f58a3/">document dump and testimony in the United States Senate</a> — along with a <a href="https://theconversation.com/what-caused-the-unprecedented-facebook-outage-the-few-clues-point-to-a-problem-from-within-169249">six-hour blackout</a> of its services worldwide in October — demonstrates both the scale of Facebook’s reach and the consequences of letting the status quo persist. </p>
<p>But before we fix anything, we need to consider the logic behind determining what ought to be fixed.</p>
<h2>A human rights focus</h2>
<p>In order to effectively regulate data-intensive, privately held global infrastructure like Facebook, we need to prioritize human rights concerns. Upholding human rights can act as the underlying logic for any regulatory framework, and in doing so, provide it with an established, universal ethical heft.</p>
<p>Focusing on human rights means prioritizing the <a href="https://www.open.edu/openlearn/ocw/mod/oucontent/view.php?id=68382&section=2">basic values</a> embodied in the United Nations’ <a href="https://www.un.org/en/about-us/universal-declaration-of-human-rights">Universal Declaration of Human Rights</a>: protecting human dignity, and ensuring autonomy, equality and “brotherhood” (or, in 2020s parlance, community).
It means understanding that these rights are <a href="https://www.ohchr.org/en/issues/pages/whatarehumanrights.aspx">indivisible and interdependent</a>.</p>
<figure class="align-right zoomable">
<a href="https://images.theconversation.com/files/430066/original/file-20211103-19-vb9poh.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="A cell phone user thumbs through the privacy settings on a Facebook account" src="https://images.theconversation.com/files/430066/original/file-20211103-19-vb9poh.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=237&fit=clip" srcset="https://images.theconversation.com/files/430066/original/file-20211103-19-vb9poh.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=392&fit=crop&dpr=1 600w, https://images.theconversation.com/files/430066/original/file-20211103-19-vb9poh.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=392&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/430066/original/file-20211103-19-vb9poh.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=392&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/430066/original/file-20211103-19-vb9poh.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=492&fit=crop&dpr=1 754w, https://images.theconversation.com/files/430066/original/file-20211103-19-vb9poh.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=492&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/430066/original/file-20211103-19-vb9poh.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=492&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Facebook has changed our lives by morphing into a global infrastructure platform.</span>
<span class="attribution"><span class="source">THE CANADIAN PRESS/Sean Kilpatrick</span></span>
</figcaption>
</figure>
<p>The benefits and harms of social media affect human beings — the subjects for whom human rights are intended. Facebook, and other companies like it, have changed our lives by becoming global infrastructure, affecting how, when and if we engage with others. Through this process, our lives have become “datafied.”</p>
<p>We need to think more purposefully about how to embed human rights in our digital policies as we increasingly live and find meaning within online environments and contexts. As the UN’s <a href="https://www.ohchr.org/documents/publications/guidingprinciplesbusinesshr_en.pdf">Guiding Principles</a> on Business and Human Rights affirm, states have a duty to protect human rights. Businesses, however, also have the responsibility to respect human rights.</p>
<h2>A global communications giant</h2>
<p>Calls for reform to date, including Haugen’s explosive Senate testimony, have centred on content on the social network Facebook built and is best known for. But Facebook is much more than that. </p>
<p>The blackout showed that Facebook is an essential piece of global communications infrastructure. The corporation formerly known as Facebook, together with its properties Instagram and WhatsApp, facilitates <a href="https://www.theguardian.com/technology/2021/oct/05/facebook-outage-highlights-global-over-reliance-on-its-services">small business and informal economies</a> around the world. <a href="https://www.theguardian.com/technology/2021/oct/05/facebook-outage-highlights-global-over-reliance-on-its-services">It provides login</a> <a href="https://www.marketplace.org/2018/10/04/heres-how-see-what-you-log-facebook/">credentials</a> to <a href="https://www.seattletimes.com/business/why-you-shouldnt-use-facebook-to-log-in-to-other-sites-and-apps/">thousands of other apps</a>. </p>
<figure class="align-center ">
<img alt="A woman with long hair in a crowd of other women takes a photo with her phone." src="https://images.theconversation.com/files/430062/original/file-20211103-23-36ba4y.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/430062/original/file-20211103-23-36ba4y.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/430062/original/file-20211103-23-36ba4y.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/430062/original/file-20211103-23-36ba4y.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/430062/original/file-20211103-23-36ba4y.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/430062/original/file-20211103-23-36ba4y.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/430062/original/file-20211103-23-36ba4y.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">A small business owner attends a Facebook event in March 2018 in St. Louis, Mo., aimed at helping small businesses and job seekers gain additional digital skills.</span>
<span class="attribution"><span class="source">(Sarah Conard/AP Images for Facebook)</span></span>
</figcaption>
</figure>
<p>Some developing countries in Africa even rely on Facebook <a href="https://globalmedia.mit.edu/2020/04/21/the-rise-and-fall-and-rise-again-of-facebooks-free-basics-civil-and-the-challenge-of-resistance-to-corporate-connectivity-projects/">as a portal</a> to the internet <a href="https://www.bbc.com/news/world-asia-55929654">for significant</a> portions of <a href="https://social.techcrunch.com/2018/04/25/internet-org-100-million/">their populations</a>. </p>
<p>And in the very near future, Meta intends to bring another <a href="https://www.wired.com/story/facebook-renews-ambitions-connect-world/">billion people online</a> through various internet infrastructure projects.</p>
<p>So how do we regulate a tech giant like Facebook to ensure human rights are upheld? Many cases for regulation have focused on the right of freedom of expression, because that’s how most of us consciously experience it. However, a focus on content moderation is <a href="https://www.theverge.com/2019/2/25/18229714/cognizant-facebook-content-moderator-interviews-trauma-working-conditions-arizona">a losing game</a> at best. </p>
<h2>Human rights tied to freedom of expression</h2>
<p>I’ve written previously about how Facebook has stepped into the void on adjudicating freedom of expression on its network through the <a href="https://oversightboard.com/">Facebook Oversight Board</a>.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/facebook-is-stepping-in-where-governments-wont-on-free-expression-156189">Facebook is stepping in where governments won't on free expression</a>
</strong>
</em>
</p>
<hr>
<p>But freedom of expression is not independent of other rights. The Oversight Board’s <a href="https://www.lawfareblog.com/empirical-look-facebook-oversight-board">own docket</a> shows that deciding on cases involving freedom of expression does not happen in a vacuum. Other rights — such as the right to non-discrimination, the right to security of the person and the right to life — need to be considered.</p>
<p>Various proposals for how to regulate Facebook and social media are already out there, advocating for <a href="https://www.theglobeandmail.com/opinion/article-we-have-the-regulatory-tools-we-need-to-fix-facebook/">transparency and accountability</a>, <a href="https://www.nytimes.com/2021/10/06/opinion/facebook-whistleblower-section-230.html">changes to U.S. regulations</a> that currently provide immunity to social media platforms and creating “<a href="https://venturebeat.com/2021/02/06/from-the-election-lie-to-gamestop-how-to-stop-social-media-algorithms-from-hurting-us/">toxicity taxes</a>” in order to tackle the dilemma of content moderation. </p>
<p>The Canadian government now <a href="https://www.theglobeandmail.com/business/article-feds-have-chance-to-protect-canadians-from-digital-platform-harms/">has a chance to fix</a> problematic legislation it had previously proposed to curb social media content, which has the potential to erode <a href="https://www.cbc.ca/news/opinion/opinion-online-harms-proposed-legislation-threatens-human-rights-1.6198800">other human rights</a> in the process. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/planned-social-media-regulations-set-a-dangerous-precedent-155844">Planned social media regulations set a dangerous precedent</a>
</strong>
</em>
</p>
<hr>
<p>Meanwhile, the U.S. Federal Trade Commission and many states are following the trust-busting strategy, an approach that is <a href="https://apnews.com/article/technology-business-facebook-inc-federal-trade-commission-district-of-columbia-4533fd62e9dea3c7c858f46ac4bc7026">currently stalled</a> in the courts.</p>
<h2>Global assent</h2>
<p>Part of the problem is that people around the world continue searching for <a href="https://doi.org/10.1038/s42256-019-0088-2">ethical frameworks</a> to manage the relationship between technology and society when we already have a successful model readily available to us: international human rights. It’s one of the few global ethical frameworks in existence that has overwhelming assent.</p>
<p>The other part of the problem is that we have mostly <a href="https://ssrn.com/abstract=3700267">assumed that rights in the analogue world should apply online</a>. This means that territorial states are places of relevance and enforcement. But Facebook’s infrastructure is global — it’s not a state. UN Special Rapporteurs are pointing out how the analogue and digital don’t always align in terms of <a href="https://undocs.org/A/HRC/37/62">privacy</a> and <a href="https://undocs.org/A/HRC/47/25">expression</a>, but this is just the beginning.</p>
<p>Anything that happens in the online world has a global impact, as we’ve seen with the European Union’s <a href="https://www.economist.com/europe/2021/04/24/the-eu-wants-to-become-the-worlds-super-regulator-in-ai">General Data Protection Regulation</a>. It’s clear that the impetus for protecting human rights is critical, no matter who is potentially violating them. But how to go about designing human rights protections in the name of autonomy, dignity, equality and community is not currently being contemplated when it comes to our digital spaces.</p>
<p>We must acknowledge the global and everyday reach of Facebook’s infrastructure. We need to understand how Facebook, and other tech companies like it, are dramatically shaping our experiences in ways that are both visible and invisible. </p>
<p>Understanding Facebook as a form of public infrastructure simply means acknowledging that it provides us with something essential: services that enable other services and activities, services we cannot get in the same way elsewhere. </p>
<p>Some have suggested that we <a href="https://www.theatlantic.com/magazine/archive/2021/11/facebook-authoritarian-hostile-foreign-power/620168/">treat Facebook as a hostile country</a> to properly contain it. This seems unnecessary. Facebook is an example of a new type of global infrastructure that needs to protect and respect human rights.</p><img src="https://counter.theconversation.com/content/169811/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>This research is supported by the Schwartz Reisman Institute for Technology and Society at the University of Toronto.</span></em></p>In order to effectively regulate data-intensive, privately held global infrastructure like Facebook, human rights needs to be a primary focal point.Wendy H. Wong, Professor of Political Science and Canada Research Chair in Global Governance and Civil Society, University of TorontoLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1702562021-10-20T14:31:01Z2021-10-20T14:31:01ZTo protect our privacy and free speech, Canada needs to overhaul its approach to regulating online harms<figure><img src="https://images.theconversation.com/files/427367/original/file-20211019-19039-1jyybcs.jpg?ixlib=rb-1.1.0&rect=0%2C0%2C7571%2C5032&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Canada's proposed internet regulation measures focus almost exclusively on speech.</span> <span class="attribution"><span class="source">(Shutterstock)</span></span></figcaption></figure><p>In the wake of the leaks by <a href="https://apnews.com/article/facebook-frances-haugen-congress-testimony-af86188337d25b179153b973754b71a4">Facebook whistleblower Frances Haugen</a>, at least one thing remains clear: social media companies cannot be left to their own devices for addressing harmful content online.</p>
<p>But Canada is currently on a path to <a href="https://www.canada.ca/en/canadian-heritage/campaigns/harmful-online-content.html">regulating “online harms”</a> that global experts — like the <a href="https://globalnetworkinitiative.org/canada-online-harms-proposal/">Global Network Initiative</a>, <a href="https://rankingdigitalrights.org/2021/09/27/public-submission-to-the-canadian-governments-proposed-approach-to-regulating-online-harms/">Ranking Digital Rights</a>, <a href="https://techpolicy.press/five-big-problems-with-canadas-proposed-regulatory-framework-for-harmful-online-content/">internet scholar Daphne Keller</a>, <a href="https://www.michaelgeist.ca/2021/09/onlineharmsconsult/">legal scholar Michael Geist</a> and others — have decried as among the worst in the world.</p>
<p>Why was this law proposed in Canada, and why now? Immediately after the storming of the U.S. Capitol on Jan. 6, Justin Trudeau’s Liberal government <a href="https://www.theglobeandmail.com/politics/article-federal-officials-revising-plan-to-regulate-social-media-in-light-of/">began to make good</a> on an election promise from 2019 to introduce a law modelled after the <a href="https://germanlawarchive.iuscomp.org/?p=1245">German Network Enforcement Act</a> — commonly known as NetzDG.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/planned-social-media-regulations-set-a-dangerous-precedent-155844">Planned social media regulations set a dangerous precedent</a>
</strong>
</em>
</p>
<hr>
<p>Despite Canada’s longstanding role as a champion of <a href="https://www.canada.ca/en/canadian-heritage/services/canada-united-nations-system/reports-united-nations-treaties.html">human rights</a> and <a href="https://www.canada.ca/en/global-affairs/news/2020/11/media-freedom-coalition-ministerial-communique.html">internet freedom</a>, the law proposed has numerous flaws that call the country’s reputation into question.</p>
<h2>A lack of nuance</h2>
<p>The Canadian law would have 24-hour content blocking requirements for illegal content just like the German law, which has <a href="https://globalfreedomofexpression.columbia.edu/publications/the-digital-berlin-wall-how-germany-accidentally-created-a-prototype-for-global-online-censorship/">provided a blueprint for online censorship by authoritarian regimes</a>.</p>
<p>But the law would go much further than Germany’s NetzDG, and not in a good way. NetzDG requires removal of “manifestly unlawful” content within 24 hours but gives platforms <a href="https://www.loc.gov/item/global-legal-monitor/2021-07-06/germany-network-enforcement-act-amended-to-better-fight-online-hate-speech/">seven days</a> to assess content that falls in legally gray areas. There is no nuance like this in Canada’s proposed blocking requirements, and that’s a problem.</p>
<p>Canada’s requirement is bound to lead to over-removal and the censorship of legitimate speech, especially given that companies can face massive fines of up to five per cent of gross global revenues or $25 million under the proposed law. There is also mounting evidence that automated removal decisions by platforms are biased <a href="http://hrlr.law.columbia.edu/hrlr/fosta-in-legal-context/">against marginalized</a> and <a href="https://www.vox.com/recode/2019/8/15/20806384/social-media-hate-speech-bias-black-african-american-facebook-twitter">racialized communities</a>, causing further harms to the very people that this law aims to protect.</p>
<h2>Intrusive obligations</h2>
<p>The proposed law could well require websites and social media companies to proactively monitor and filter <a href="https://www.canada.ca/en/canadian-heritage/campaigns/harmful-online-content/discussion-guide.html#a4a">five types of content</a> posted online ranging from “terrorist” content to intimate images shared without consent. It would also force websites to disclose personally identifying information to law enforcement and intelligence agencies.</p>
<figure class="align-right ">
<img alt="" src="https://images.theconversation.com/files/381818/original/file-20210201-13-1g0n3ld.png?ixlib=rb-1.1.0&q=45&auto=format&w=237&fit=clip" srcset="https://images.theconversation.com/files/381818/original/file-20210201-13-1g0n3ld.png?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=600&fit=crop&dpr=1 600w, https://images.theconversation.com/files/381818/original/file-20210201-13-1g0n3ld.png?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=600&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/381818/original/file-20210201-13-1g0n3ld.png?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=600&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/381818/original/file-20210201-13-1g0n3ld.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=754&fit=crop&dpr=1 754w, https://images.theconversation.com/files/381818/original/file-20210201-13-1g0n3ld.png?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=754&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/381818/original/file-20210201-13-1g0n3ld.png?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=754&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption"></span>
<span class="attribution"><a class="source" href="https://theconversation.com/ca/podcasts">Click here to listen to Don’t Call Me Resilient</a></span>
</figcaption>
</figure>
<p>Entire websites could be blocked in Canada, with enormous implications for the rights to free expression and access to information in Canada and beyond.</p>
<p>But requiring websites and social media platforms to proactively monitor content and feed data on their users to the police is tantamount to <a href="https://www.undocs.org/A/HRC/38/35">pre-publication censorship</a>, according to David Kaye, former special rapporteur on the promotion and protection of the right to freedom of opinion and expression.</p>
<p>It also effectively transforms online service providers into an investigative tool and “<a href="https://www.euractiv.com/section/data-protection/news/german-online-hate-speech-reform-criticised-for-allowing-backdoor-data-collection/">suspicion database</a>” for law enforcement. </p>
<p>When combined, these intrusive obligations pose an unacceptable risk to the privacy of Canadians and have no place in the laws of a free and democratic society.</p>
<h2>What happens in Canada won’t stay in Canada</h2>
<p>The <a href="https://cippic.ca/en/news/CIPPIC_calls_on_government_to_reconsider_online_harms_legislation">Canadian Internet Policy and Public Interest Clinic at the University of Ottawa</a>, and many other non-governmental organizations ranging from <a href="https://citizenlab.ca/2021/09/comments-on-the-federal-governments-proposed-approach-to-address-harmful-content-online/">Citizen Lab</a> to the <a href="https://internetsociety.ca/submission-to-the-department-of-canadian-heritage-consultation-on-internet-harms/">Internet Society of Canada</a> and the <a href="https://ccla.org/fundamental-freedoms/cclas-submission-on-canadas-proposed-approach-to-addressing-harmful-content-online/">Canadian Civil Liberties Association</a>, have all filed comments describing the problems with the law.</p>
<p>What happens in Canada won’t stay in Canada. Just as with the <a href="https://www.wired.com/story/google-fights-canada-order-global-search-results/">landmark ruling in <em>Google Inc vs. Equustek Solutions Inc</em></a>, which enabled worldwide online takedowns and <a href="https://www.techdirt.com/articles/20210506/12170946745/wireds-big-230-piece-has-narrative-to-tell.shtml">spawned international imitators</a>, other countries will leap on Canada’s example to pass similar laws that advance their own governmental interests.</p>
<p>Canada needs a new approach to regulating online harms that respects human rights. We must change course before authoritarian regimes replicate Canada’s approach for intrusive surveillance, censorship and other human rights abuses.</p>
<h2>Harmful content cannot be addressed in isolation</h2>
<figure class="align-right zoomable">
<a href="https://images.theconversation.com/files/427342/original/file-20211019-23-15u8k8m.jpeg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="A man holds a sign in German" src="https://images.theconversation.com/files/427342/original/file-20211019-23-15u8k8m.jpeg?ixlib=rb-1.1.0&q=45&auto=format&w=237&fit=clip" srcset="https://images.theconversation.com/files/427342/original/file-20211019-23-15u8k8m.jpeg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=900&fit=crop&dpr=1 600w, https://images.theconversation.com/files/427342/original/file-20211019-23-15u8k8m.jpeg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=900&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/427342/original/file-20211019-23-15u8k8m.jpeg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=900&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/427342/original/file-20211019-23-15u8k8m.jpeg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=1131&fit=crop&dpr=1 754w, https://images.theconversation.com/files/427342/original/file-20211019-23-15u8k8m.jpeg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=1131&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/427342/original/file-20211019-23-15u8k8m.jpeg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=1131&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">A sign from a protest against content filtering laws in Germany reads ‘Only totalitarian states need upload-filtering.’</span>
<span class="attribution"><span class="source">(Markus Spiske/Unsplash)</span>, <a class="license" href="http://creativecommons.org/licenses/by/4.0/">CC BY</a></span>
</figcaption>
</figure>
<p>A fundamental problem with the Canadian online harms legislation is that it deals with the most controversial aspect of internet governance — the issue of online speech regulation — in isolation.</p>
<p>Unlike its global peers in <a href="https://www.theguardian.com/technology/2021/aug/19/facebook-antitrust-case-ftc-monopoly">the United States</a> and <a href="https://ec.europa.eu/info/strategy/priorities-2019-2024/europe-fit-digital-age/digital-markets-act-ensuring-fair-and-open-digital-markets_en">the European Union</a>, Canada has had no conversation about the bigger picture of big tech regulation. </p>
<p>Canada hasn’t reckoned with the business models of behemoth social media platforms <a href="https://news.harvard.edu/gazette/story/2019/03/harvard-professor-says-surveillance-capitalism-is-undermining-democracy/">premised on surveillance capitalism</a> and the problems of <a href="https://www.mediatechdemocracy.com/work/the-state-of-competition-policy-in-canada">anti-competitive actions</a> by technology companies. </p>
<p>Nor has the government devoted a fraction of the political energy it is spending on online harms to reforming Canada’s outdated online privacy laws.</p>
<h2>Human digital rights</h2>
<p>After Trudeau’s Liberal government called for a snap election, his party promised to <a href="https://www.theglobeandmail.com/politics/article-liberals-parliamentary-agenda-lists-three-internet-regulation-bills-as/">introduce legislation to regulate online harms within 100 days</a>.</p>
<p>Some promises are best not kept. This is one of them.</p>
<p>The digital rights community needs to hold Canada to account and urge it to slow down, think things through, and come up with a model of internet regulation that should be emulated — and not avoided — around the world.</p>
<iframe height="200px" width="100%" frameborder="no" scrolling="no" seamless="" src="https://player.simplecast.com/8e5484a0-56c5-49c4-b0c7-cf7458c63316?dark=true"></iframe>
<p><a href="https://theconversation.com/being-watched-mass-surveillance-amplifies-racist-policing-and-threatens-the-right-to-protest-dont-call-me-resilient-ep-10-167522">Episode 10: Being Watched: How surveillance amplifies racist policing and threatens the right to protest</a>.</p>
<p><iframe id="tc-infographic-572" class="tc-infographic" height="100" src="https://cdn.theconversation.com/infographics/572/661898416fdc21fc4fdef6a5379efd7cac19d9d5/site/index.html" width="100%" style="border: none" frameborder="0"></iframe></p><img src="https://counter.theconversation.com/content/170256/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Yuan Stevens does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>Canada needs to overhaul its approach to addressing online harms if it wants to remain a human rights leader and champion of internet freedom.Yuan Stevens, Legal Researcher at the Samuelson-Glushko Canadian Internet Policy & Public Interest Clinic (CIPPIC), L’Université d’Ottawa/University of OttawaLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1699472021-10-19T19:12:59Z2021-10-19T19:12:59ZIs it even possible to regulate Facebook effectively? Time and again, attempts have led to the same outcome<figure><img src="https://images.theconversation.com/files/427161/original/file-20211019-19-1l9g2o3.jpg?ixlib=rb-1.1.0&rect=6%2C26%2C4367%2C2885&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">
</span> <span class="attribution"><span class="source">Matt Rourke/AP</span></span></figcaption></figure><p>The Australian government’s <a href="https://theconversation.com/this-is-why-australia-may-be-powerless-to-force-tech-giants-to-regulate-harmful-content-169826">recent warning</a> to Facebook over misinformation is just the latest salvo in the seemingly constant battle to hold the social media giant to account for the content posted on its platform.</p>
<p>It came in the same week as the US Senate heard <a href="https://www.bbc.com/news/world-us-canada-58805965">whistleblowing testimony</a> in which former Facebook executive Frances Haugen alleged the company knew of harmful consequences for its users but chose not to act.</p>
<p>Governments all over the world have been pushing for years to make social media giants more accountable, both in terms of the quality of information they host, and their use of users’ data as part of their business models. </p>
<p>The Australian government’s <a href="https://www.aph.gov.au/Parliamentary_Business/Bills_LEGislation/Bills_Search_Results/Result?bId=r6680">Online Safety Act</a> will <a href="https://perma.cc/95A5-T79H">come into effect in January 2022</a>, giving the eSafety Commissioner unprecedented powers to crack down on abusive or violent content, or sexual images posted without consent.</p>
<p>But even if successful, this legislation will only deal with a small proportion of the issues that require regulation. On many such issues, social media platforms have attempted to regulate themselves rather than submit to legislation. But whether we are talking about legislation or self-regulation, past experiences do not engender much confidence that tech platforms can be successfully regulated, or that regulation can easily be put into action.</p>
<p>Our <a href="https://aisel.aisnet.org/ecis2021_rip/35">research</a> has examined previous attempts to regulate tech giants in Australia. We analysed 269 media articles and 282 policy documents and industry reports published from 2015 to 2021. Let’s discuss a couple of relevant case studies. </p>
<h2>1. Ads and news</h2>
<p>In 2019, the Australian Competition and Consumer Commission (ACCC) <a href="https://www.accc.gov.au/publications/digital-platforms-inquiry-final-report">inquiry into digital platforms</a> described Facebook’s algorithms, particularly those that determine the positioning of advertising on Facebook pages, as “opaque”. It concluded media companies needed more assurance about the use of their content.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/consumer-watchdog-calls-for-new-measures-to-combat-facebook-and-googles-digital-dominance-120077">Consumer watchdog calls for new measures to combat Facebook and Google's digital dominance</a>
</strong>
</em>
</p>
<hr>
<p>Facebook initially welcomed the inquiry, but then <a href="https://www.accc.gov.au/system/files/Facebook_0.pdf">publicly opposed it</a> when the government argued the problems related to Facebook’s substantial market power in display advertising, and Facebook and Google’s dominance of news content generated by media companies, were too important to be left to the companies themselves. </p>
<p>Facebook argued there was <a href="https://www.accc.gov.au/system/files/Facebook.pdf">no evidence of an imbalance of bargaining power</a> between it and news media companies, adding it would have no choice but to withdraw news services in Australia if forced to pay publishers for hosting their content. The standoff resulted in Facebook’s <a href="https://theconversation.com/facebook-has-pulled-the-trigger-on-news-content-and-possibly-shot-itself-in-the-foot-155547">infamous week-long embargo on Australian news</a>.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/the-easy-way-to-rein-in-facebook-and-google-stop-them-gobbling-up-competitors-170104">The easy way to rein in Facebook and Google: stop them gobbling up competitors</a>
</strong>
</em>
</p>
<hr>
<p>The revised and amended News Media Bargaining Code was <a href="https://www.accc.gov.au/system/files/Final%20legislation%20as%20passed%20by%20both%20houses.pdf">passed by the parliament in February</a>. Both the government and Facebook declared victory, the former having managed to pass its legislation, and the latter ending up striking its own bargains with news publishers without having to be held legally to the code.</p>
<h2>2. Hate speech and terrorism</h2>
<p>In 2015, to counter violent extremism on social media, the Australian government initially worked with the tech giant to develop joint AI solutions that would improve the technical processes of content identification.</p>
<p>This voluntary solution worked brilliantly, until it did not. In March 2019, mass shootings at mosques in Christchurch were live-streamed on Facebook by an Australian-born white supremacist terrorist, and the recordings subsequently circulated on the internet. </p>
<p>This brought to light <a href="https://www.stuff.co.nz/national/christchurch-shooting/111473473/facebook-ai-failed-to-detect-christchurch-shooting-video">the inability of Facebook’s artificial intelligence algorithms</a> to detect and remove the live footage of the shooting, and how fast it was shared on the platform. </p>
<p>The Australian government responded in 2019 by <a href="https://www.ag.gov.au/crime/abhorrent-violent-material">amending the Criminal Code</a> to require social media platforms to remove abhorrent or violent material “in reasonable time” and, where relevant, refer it to the Australian Federal Police. </p>
<h2>What have we learned?</h2>
<p>These two examples, while strikingly different, unfolded in a similar way: an initial dialogue in which Facebook proposes an in-house solution involving its own algorithms; a subsequent shift towards mandatory government regulation, which is met with resistance or bargaining (or both) from Facebook; and a final upshot of piecemeal legislation that is either watered down or covers only a subset of specific types of harm. </p>
<p>There are several obvious problems with this. The first is that only the tech giants themselves know how their algorithms work, so it is difficult for regulators to oversee them properly. </p>
<p>Then there’s the fact that legislation typically applies at a national level, yet Facebook is a global company with billions of users across the world and a platform that is incorporated into our daily lives in all sorts of ways.</p>
<p>How do we resolve the impasse? One option is for regulations to be drawn up by independent bodies appointed by governments and tech giants to drive the co-regulation agenda globally. But relying on regulation alone to guide tech giants’ behaviour against potential abuses might not be sufficient. There is also the need for self-discipline and appropriate corporate governance, potentially enforced by these independent bodies. </p>
<hr>
<p><em>This article originally stated Google publicly opposed the ACCC Digital Platforms Inquiry. A Google spokesperson told The Conversation that while Google <a href="https://about.google/intl/ALL_au/google-in-australia/aug-17-letter/">raised concerns</a> about specific aspects of the first draft of the News Media Bargaining Code that arose from the inquiry’s recommendations, it did not oppose the inquiry.</em></p><img src="https://counter.theconversation.com/content/169947/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Olga Kokshagina is affiliated with the French Digital Council (CNNUM): <a href="https://cnnumerique.fr/">https://cnnumerique.fr/</a></span></em></p><p class="fine-print"><em><span>Stan Karanasios does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>Efforts to rein in the social media giant’s power have followed the same script: dialogue, then attempts at self-regulation, then a bitter dispute over legislation, followed by compromise.Olga Kokshagina, Researcher - Innovation & Entrepreneurship, RMIT UniversityStan Karanasios, Associate professor, The University of QueenslandLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1695202021-10-18T14:40:23Z2021-10-18T14:40:23ZWhy Facebook and other social media companies need to be reined in<figure><img src="https://images.theconversation.com/files/426552/original/file-20211014-26-yxj8cq.jpg?ixlib=rb-1.1.0&rect=22%2C0%2C2968%2C1989&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">
</span> <span class="attribution"><span class="source">(Shutterstock)</span></span></figcaption></figure><p>In September, the <em>Wall Street Journal</em> released <a href="https://www.wsj.com/articles/the-facebook-files-11631713039">the Facebook Files</a>. Drawing on thousands of documents leaked by whistle blower and former employee Frances Haugen, the Facebook Files show that the company knows their practices harm young people, but fails to act, choosing corporate profit over public good. </p>
<p>The Facebook Files are damning for the company, which also owns Instagram and WhatsApp. However, it isn’t the only social media company that compromises young people’s internationally protected rights and well-being by prioritizing profits. </p>
<p>As researchers and experts on <a href="http://www.equalityproject.ca/research/research-publications/">children’s rights</a>, <a href="https://techlaw.uottawa.ca/initiatives/equality">online privacy and equality</a> and the <a href="https://le.ac.uk/media/research/featured-projects/digital-sexual-cultures-feminist-research-and-engagement-consortium">online risks, harms and rewards</a> that young people face, the news over the past few weeks didn’t surprise us.</p>
<h2>Harvested personal data</h2>
<p>Harvesting and commodifying personal data (including children’s data) underpins the <a href="https://www.atlantis-press.com/article/25885539.pdf">internet financial model</a> — a model that social psychologist and philosopher Shoshana Zuboff has dubbed <a href="https://www.theguardian.com/books/2019/oct/04/shoshana-zuboff-surveillance-capitalism-assault-human-automomy-digital-privacy">surveillance capitalism </a>. </p>
<p>Social media companies make money under this model by collecting, analyzing and selling the personal information of users. To increase the flow of this valuable data they work to engage more people, for more time, through more interactions. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/explainer-what-is-surveillance-capitalism-and-how-does-it-shape-our-economy-119158">Explainer: what is surveillance capitalism and how does it shape our economy?</a>
</strong>
</em>
</p>
<hr>
<p>Ultimately, the value in harvested personal data lies in the detailed personal profiles the data supports — profiles that are used to feed the algorithms that <a href="https://www.theatlantic.com/technology/archive/2010/10/how-the-facebook-news-feed-algorithm-shapes-your-friendships/64996/">shape our newsfeeds</a>, <a href="https://www.theverge.com/2018/12/4/18124718/google-search-results-personalized-unique-duckduckgo-filter-bubble">personalize our search results</a>, help <a href="https://www.vice.com/en/article/pkekvb/cost-cutting-algorithms-are-making-your-job-search-a-living-hell">us get a job</a> (or hinder us) and <a href="https://www.wired.com/story/facebooks-targeted-ads-are-more-complex-than-it-lets-on/">determine the advertisements we receive</a>. </p>
<p>In a self-reinforcing turn, these same data are used to shape our online environments to encourage disclosure of even more data — and the process repeats. </p>
<figure class="align-center ">
<img alt="A desktop shows a man with binoculars that have the Facebook F on them." src="https://images.theconversation.com/files/426551/original/file-20211014-21-1qjm5w8.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/426551/original/file-20211014-21-1qjm5w8.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/426551/original/file-20211014-21-1qjm5w8.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/426551/original/file-20211014-21-1qjm5w8.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/426551/original/file-20211014-21-1qjm5w8.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/426551/original/file-20211014-21-1qjm5w8.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/426551/original/file-20211014-21-1qjm5w8.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">Social media companies monitor young people to bombard them with unsolicited content in service of corporate profits.</span>
<span class="attribution"><span class="source">(Glen Carrie/Unsplash)</span></span>
</figcaption>
</figure>
<h2>Surveillance capitalism</h2>
<p>Recent <a href="https://5rightsfoundation.com/uploads/Pathways-how-digital-design-puts-children-at-risk.pdf">research confirms that the deliberate design, algorithmic and policy choices</a> made by social media companies (that lie at the heart of surveillance capitalism) directly expose young people to harmful content. However, the harms of <a href="https://news.harvard.edu/gazette/story/2019/03/harvard-professor-says-surveillance-capitalism-is-undermining-democracy/">surveillance capitalism</a> extend well beyond this.</p>
<p>Our research in both Canada and the United Kingdom has repeatedly uncovered young people’s concern with how social media companies and policy-makers are failing them. Rather than respecting young people’s rights to expression, to be free from discrimination and to participate in decisions that affect them, social media companies monitor young people to bombard them with unsolicited content in service of corporate profits. </p>
<p>As a result, <a href="https://press.uottawa.ca/egirls-ecitizens.html">young people have often reported to us</a> that they feel pressured to conform to stereotypical profiles used to steer their behaviour and shape their environment for profit.</p>
<p>For example, teen girls have told us that even though using Instagram and Snapchat created anxiety and insecurity about their bodies, they found it almost impossible to “switch off” the platforms. They also told us how the limited protection provided by default privacy settings leaves them vulnerable to unwanted “dick pics” and requests to send intimate images to men they don’t know. </p>
<p>Several girls and their parents told us that this can sometimes lead to extreme outcomes, including <a href="https://www.psychologytoday.com/us/blog/when-your-adult-child-breaks-your-heart/201709/understanding-school-refusal">school refusal</a>, self harm and, in a few cases, <a href="https://doi.org/10.1097/YCO.0000000000000547">attempting suicide</a>. </p>
<p>The surveillance capitalism financial model that underlies social media ensures that companies do everything they can to keep young people engaged. </p>
<p>Young people have told us that they want more freedom and control when using these spaces — so they are as public or private as they like, without fear of being monitored or profiled, or that their data are being farmed out to corporations.</p>
<figure class="align-center ">
<img alt="Young girl lays in bed, sad on her phone" src="https://images.theconversation.com/files/426569/original/file-20211014-28-1ro5v9f.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/426569/original/file-20211014-28-1ro5v9f.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/426569/original/file-20211014-28-1ro5v9f.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/426569/original/file-20211014-28-1ro5v9f.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/426569/original/file-20211014-28-1ro5v9f.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/426569/original/file-20211014-28-1ro5v9f.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/426569/original/file-20211014-28-1ro5v9f.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">Teen girls have told us that even though using Instagram and Snapchat created anxiety and insecurity about their bodies, they found it almost impossible to ‘switch off’ the platforms.</span>
<span class="attribution"><span class="source">(Shutterstock)</span></span>
</figcaption>
</figure>
<p>Teenagers also told us how they rarely bother to report harmful content to the platforms. This isn’t because they don’t know how, but instead because they <a href="http://www.lco-cdo.org/wp-content/uploads/2017/07/DIA-Commissioned-Paper-eQuality.pdf">have learned from experience that it doesn’t help</a>. Some platforms were too slow to respond, others didn’t respond at all and some said that what was reported didn’t breach community standards, so they weren’t willing to help. </p>
<h2>Removing toxic content hurts the bottom line</h2>
<p>These responses aren’t surprising. For years, we have known about the <a href="https://www.theguardian.com/news/2017/may/25/facebook-moderator-underpaid-overburdened-extreme-content">lack of resources</a> to moderate content and deal with online harassment. </p>
<p>Haugen’s recent testimony at a <a href="https://www.commerce.senate.gov/2021/10/protecting%20kids%20online:%20testimony%20from%20a%20facebook%20whistleblower">Senate Committee on Commerce, Science and Transportation</a> hearing and <a href="https://www.bloomberg.com/news/features/2019-04-02/youtube-executives-ignored-warnings-letting-toxic-videos-run-rampant">earlier reports about other social media platforms</a> highlight an even deeper profit motivation. Profit depends on meaningful social engagement, and harmful, toxic and divisive content drives engagement. </p>
<p>Basically, removing toxic content would hurt the corporate bottom line. </p>
<h2>Guiding principles that centre children’s rights</h2>
<p>So, what should be done in light of the recent, though not unprecedented, revelations in the Facebook Files? The issues are undoubtedly complex, but we have come up with a list of guiding principles that centre children’s rights and prioritize what young people have told us about what they need:</p>
<ol>
<li><p>Young people must be directly engaged in the development of relevant policy. </p></li>
<li><p>All related policy initiatives should be evaluated on an ongoing basis using a children’s rights assessment framework.</p></li>
<li><p>Social media companies should be stopped from launching products for children and from collecting their data for profiling purposes.</p></li>
<li><p>Governments should invest more resources into providing fast, free, easy-to-access informal responses and support for those targeted by online harms (learning from existing models like Australia’s <a href="https://www.esafety.gov.au/">eSafety Commissioner</a> and Nova Scotia’s <a href="https://novascotia.ca/cyberscan/">CyberScan unit</a>).</p></li>
<li><p>We need laws that ensure that social media companies are both transparent and accountable, especially when it comes to content moderation.</p></li>
<li><p>Government agencies (including police) should enforce existing laws against hateful, sexually violent and harassing content. Thought should be given to expanding platform liability for provoking and perpetuating these kinds of content. </p></li>
<li><p>Educational initiatives should prioritize familiarizing young people, the adults who support them and corporations with children’s rights, rather than focusing on a “safety” discourse that makes young people responsible for their own protection. This way, we can work together to disrupt the surveillance capitalism model that endangers them in the first place.</p></li>
</ol><img src="https://counter.theconversation.com/content/169520/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Kaitlynn Mendes receives funding from the UK's Arts and Humanities Research Council (AH/W000423/1)</span></em></p><p class="fine-print"><em><span>Jacquelyn Burkell receives funding from the Social Sciences and Humanities Research Council of Canada. </span></em></p><p class="fine-print"><em><span>Jane Bailey receives funding from the Social Sciences and Humanities Research Council of Canada.</span></em></p><p class="fine-print"><em><span>Valerie Steeves receives funding from the Social Sciences and Humanities Research Council of Canada and the Canadian Institutes of Health Research.</span></em></p>What can and should be done in light of response to the Facebook Files? The issues are undoubtedly complex, but solutions need to centre on children’s rights and prioritize what young people need.Kaitlynn Mendes, Professor of Gender, Media and Sociology, Western UniversityJacquelyn Burkell, Associate Professor, Information and Media Studies, Western UniversityJane Bailey, Professor of Law and Co-Leader of The eQuality Project, L’Université d’Ottawa/University of OttawaValerie Steeves, Full Professor, Department of Criminology, L’Université d’Ottawa/University of OttawaLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1694202021-10-07T12:23:43Z2021-10-07T12:23:43ZFacebook whistleblower Frances Haugen testified that the company’s algorithms are dangerous – here’s how they can manipulate you<figure><img src="https://images.theconversation.com/files/425065/original/file-20211006-19-17853wo.jpg?ixlib=rb-1.1.0&rect=0%2C0%2C4200%2C2797&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Whistleblower Frances Haugen called Facebook's algorithm dangerous.</span> <span class="attribution"><a class="source" href="https://newsroom.ap.org/detail/CongressFacebookWhistleblower/94e0a7294c644f869f79e7b96f7844b2/photo">Matt McClain/The Washington Post via AP</a></span></figcaption></figure><p>Former Facebook product manager Frances Haugen testified before the U.S. Senate on Oct. 5, 2021, that the company’s social media platforms “<a href="https://www.youtube.com/watch?v=rd2yC63DMBE">harm children, stoke division and weaken our democracy</a>.” </p>
<p>Haugen was the primary source for a <a href="https://www.wsj.com/articles/the-facebook-files-11631713039">Wall Street Journal exposé</a> on the company. She called Facebook’s algorithms dangerous, said Facebook executives were aware of the threat but put profits before people, and called on Congress to regulate the company.</p>
<p>Social media platforms rely heavily on people’s behavior to decide on the content that you see. In particular, they watch for content that people respond to or “engage” with by liking, commenting and sharing. <a href="https://www.axios.com/trolls-misinformation-facebook-twitter-iran-dd1a13b4-de1f-48cd-91a6-cac66202344b.html">Troll farms</a>, organizations that spread provocative content, exploit this by copying high-engagement content and <a href="https://www.technologyreview.com/2021/09/16/1035851/facebook-troll-farms-report-us-2020-election/">posting it as their own</a>, which helps them reach a wide audience.</p>
<p>As a <a href="https://scholar.google.com/citations?user=f_kGJwkAAAAJ&hl=en">computer scientist</a> who studies the ways large numbers of people interact using technology, I understand the logic of using the <a href="https://www.penguinrandomhouse.com/books/175380/the-wisdom-of-crowds-by-james-surowiecki/">wisdom of the crowds</a> in these algorithms. I also see substantial pitfalls in how the social media companies do so in practice.</p>
<h2>From lions on the savanna to likes on Facebook</h2>
<p>The concept of the wisdom of crowds assumes that using signals from others’ actions, opinions and preferences as a guide will lead to sound decisions. For example, <a href="https://www.investopedia.com/terms/p/prediction-market.asp">collective predictions</a> are normally more accurate than individual ones. Collective intelligence is used to predict <a href="https://augur.net/">financial markets, sports</a>, <a href="https://iemweb.biz.uiowa.edu/">elections</a> and even <a href="https://www.centerforhealthsecurity.org/our-work/Center-projects/disease-prediction-project.html">disease outbreaks</a>. </p>
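<p>To make that first point concrete, here is a minimal sketch (illustrative numbers only, not data from any of the studies linked above) of why averaging many independent, unbiased guesses tends to beat a typical individual guess:</p>
<pre><code>
import random
import statistics

random.seed(0)

true_value = 100.0
# 1,000 simulated people each make an independent, unbiased but noisy guess.
guesses = [random.gauss(true_value, 20) for _ in range(1000)]

# Compare the typical error of one guess with the error of the crowd's average.
individual_error = statistics.mean(abs(g - true_value) for g in guesses)
crowd_error = abs(statistics.mean(guesses) - true_value)

print(f"typical individual error: {individual_error:.1f}")  # roughly 16 for this noise level
print(f"error of the crowd's average: {crowd_error:.1f}")   # typically far smaller
</code></pre>
<p>The improvement rests on the guesses being independent and unbiased – the very assumptions examined later in this article.</p>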
<p>Throughout millions of years of evolution, these principles have been coded into the human brain in the form of cognitive biases that come with names like <a href="https://doi.org/10.1086/208859">familiarity</a>, <a href="https://dictionary.apa.org/mere-exposure-effect">mere exposure</a> and <a href="https://www.psychologytoday.com/us/blog/stronger-the-broken-places/201708/the-bandwagon-effect">bandwagon effect</a>. If everyone starts running, you should also start running; maybe someone saw a lion coming and running could save your life. You may not know why, but it’s wiser to ask questions later. </p>
<p>Your brain picks up clues from the environment – including your peers – and uses <a href="https://global.oup.com/academic/product/simple-heuristics-that-make-us-smart-9780195143812">simple rules</a> to quickly translate those signals into decisions: Go with the winner, follow the majority, copy your neighbor. These rules work remarkably well in typical situations because they are based on sound assumptions. For example, they assume that people often act rationally, it is unlikely that many are wrong, the past predicts the future, and so on.</p>
<p>Technology allows people to access signals from much larger numbers of other people, most of whom they do not know. Artificial intelligence applications make heavy use of these popularity or “engagement” signals, from selecting search engine results to recommending music and videos, and from suggesting friends to ranking posts on news feeds. </p>
<h2>Not everything viral deserves to be</h2>
<p>Our research shows that virtually all web technology platforms, such as social media and news recommendation systems, have a strong <a href="http://doi.org/10.1002/asi.24121">popularity bias</a>. When applications are driven by cues like engagement rather than explicit search engine queries, popularity bias can lead to harmful unintended consequences. </p>
<p>Social media like Facebook, Instagram, Twitter, YouTube and TikTok rely heavily on AI algorithms to rank and recommend content. These algorithms take as input what you like, comment on and share – in other words, content you engage with. The goal of the algorithms is to maximize engagement by finding out what people like and ranking it at the top of their feeds. </p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/doWZHFnVPQ8?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
<figcaption><span class="caption">A primer on the Facebook algorithm.</span></figcaption>
</figure>
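<p>Stripped to its essentials, an engagement-driven ranker scores each post by its interaction counts and sorts the feed by that score. The sketch below is purely illustrative – the weights are invented for the example and do not represent any platform's actual algorithm:</p>
<pre><code>
from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    likes: int
    comments: int
    shares: int

def engagement_score(post):
    # Hypothetical weights: comments and shares are treated as stronger
    # engagement signals than likes.
    return 1.0 * post.likes + 2.0 * post.comments + 3.0 * post.shares

def rank_feed(posts):
    # Highest-engagement posts rise to the top of the feed,
    # regardless of their quality.
    return sorted(posts, key=engagement_score, reverse=True)

feed = [
    Post("credible-news", likes=40, comments=5, shares=3),
    Post("funny-video", likes=300, comments=60, shares=90),
    Post("troll-farm-copy", likes=500, comments=120, shares=200),
]
for post in rank_feed(feed):
    print(post.post_id, engagement_score(post))
</code></pre>
<p>Nothing in the score reflects whether a post is true or useful – only how strongly people reacted to it.</p>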
<p>On the surface this seems reasonable. If people like credible news, expert opinions and fun videos, these algorithms should identify such high-quality content. But the wisdom of the crowds makes a key assumption here: that recommending what is popular will help high-quality content “bubble up.” </p>
<p>We <a href="http://doi.org/10.1038/s41598-018-34203-2">tested this assumption</a> by studying an algorithm that ranks items using a mix of quality and popularity. We found that in general, popularity bias is more likely to lower the overall quality of content. The reason is that engagement is not a reliable indicator of quality when few people have been exposed to an item. In these cases, engagement generates a noisy signal, and the algorithm is likely to amplify this initial noise. Once the popularity of a low-quality item is large enough, it will keep getting amplified. </p>
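<p>The dynamic can be reproduced in a toy simulation (not the exact model from the study linked above; the parameters are invented): exposure follows the current ranking weight, the ranking weight mixes accumulated engagement with a small quality term, and engagement is a chance event tied to quality. A few lucky early engagements on a low-quality item can then snowball:</p>
<pre><code>
import random

random.seed(1)

quality = {"high_quality": 0.9, "low_quality": 0.3}   # true (hidden) quality
engagements = {name: 1 for name in quality}           # start each count at 1

def rank_weight(name, alpha=0.9):
    # Ranking signal: mostly accumulated popularity, plus a small quality term.
    return alpha * engagements[name] + (1 - alpha) * quality[name]

names = list(quality)
for _ in range(5000):
    # Exposure is proportional to the current ranking weight.
    shown = random.choices(names, weights=[rank_weight(n) for n in names])[0]
    # Engagement is a noisy reflection of quality: with few exposures,
    # chance dominates the signal.
    if random.random() < quality[shown]:
        engagements[shown] += 1

print(engagements)
# Depending on the early random draws, the low-quality item can capture
# most of the exposure and keep getting amplified.
</code></pre>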
<p>Algorithms aren’t the only thing affected by engagement bias – it can <a href="https://www.scientificamerican.com/article/information-overload-helps-fake-news-spread-and-social-media-knows-it/">affect people</a> too. Evidence shows that information is transmitted via “<a href="https://doi.org/10.1371/journal.pone.0184148">complex contagion</a>,” meaning the more times people are exposed to an idea online, the more likely they are to adopt and reshare it. When social media tells people an item is going viral, their cognitive biases kick in and translate into the irresistible urge to pay attention to it and share it.</p>
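<p>The contrast with ordinary ("simple") contagion can be sketched in a few lines. The curve used for complex contagion below is invented for illustration; the qualitative point is that accumulated exposures matter more than any single one:</p>
<pre><code>
def simple_contagion(exposures, p=0.1):
    # Each exposure is an independent chance to adopt the idea.
    return 1 - (1 - p) ** exposures

def complex_contagion(exposures):
    # Hypothetical curve: adoption stays unlikely until exposures accumulate.
    return min(1.0, (exposures / 6) ** 2)

for k in range(1, 7):
    print(k, round(simple_contagion(k), 2), round(complex_contagion(k), 2))
</code></pre>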
<h2>Not-so-wise crowds</h2>
<p>We recently ran an experiment using <a href="https://fakey.iuni.iu.edu/">a news literacy app called Fakey</a>. It is a game developed by our lab that simulates a news feed like those of Facebook and Twitter. Players see a mix of current articles from fake news, junk science, hyperpartisan and conspiratorial sources, as well as mainstream sources. They get points for sharing or liking news from reliable sources and for flagging low-credibility articles for fact-checking. </p>
<p>We found that players are <a href="https://doi.org/10.37016/mr-2020-033">more likely to like or share and less likely to flag</a> articles from low-credibility sources when they can see that many other users have engaged with those articles. Exposure to the engagement metrics thus creates a vulnerability.</p>
<p><iframe id="HoqGE" class="tc-infographic-datawrapper" src="https://datawrapper.dwcdn.net/HoqGE/5/" height="400px" width="100%" style="border: none" frameborder="0"></iframe></p>
<p>The wisdom of the crowds fails because it is built on the false assumption that the crowd is made up of diverse, independent sources. There may be several reasons this is not the case. </p>
<p>First, because of people’s tendency to associate with similar people, their online neighborhoods are not very diverse. The ease with which social media users can unfriend those with whom they disagree pushes people into homogeneous communities, often referred to as <a href="https://doi.org/10.1007/s42001-020-00084-7">echo chambers</a>. </p>
<p>Second, because many people’s friends are friends of one another, they influence one another. A <a href="https://doi.org/10.1126/science.1121066">famous experiment</a> demonstrated that knowing what music your friends like affects your own stated preferences. Your social desire to conform distorts your independent judgment. </p>
<p>Third, popularity signals can be gamed. Over the years, search engines have developed sophisticated techniques to counter so-called “<a href="https://www.webopedia.com/TERM/L/link_farming.html">link farms</a>” and other schemes to manipulate search algorithms. Social media platforms, on the other hand, are just beginning to learn about their own <a href="https://theconversation.com/misinformation-on-social-media-can-technology-save-us-69264">vulnerabilities</a>. </p>
<p>People aiming to manipulate the information market have created <a href="https://www.washingtonpost.com/technology/2020/10/13/black-fake-twitter-accounts-for-trump/">fake accounts</a>, like trolls and <a href="https://cacm.acm.org/magazines/2016/7/204021-the-rise-of-social-bots/fulltext">social bots</a>, and <a href="https://ojs.aaai.org/index.php/ICWSM/article/view/18075">organized</a> <a href="https://www.washingtonpost.com/politics/turning-point-teens-disinformation-trump/2020/09/15/c84091ae-f20a-11ea-b796-2dd09962649c_story.html">fake networks</a>. They have <a href="http://doi.org/10.1038/s41467-018-06930-7">flooded the network</a> to create the appearance that a <a href="https://www.newsweek.com/2020/10/23/qanon-conspiracy-theories-draw-new-believers-scientists-take-aim-misinformation-pandemic-1538901.html">conspiracy theory</a> or a <a href="https://ojs.aaai.org/index.php/ICWSM/article/view/14127">political candidate</a> is popular, tricking both platform algorithms and people’s cognitive biases at once. They have even <a href="https://doi.org/10.1038/s41586-019-1507-6">altered the structure of social networks</a> to create <a href="https://doi.org/10.1371/journal.pone.0147617">illusions about majority opinions</a>. </p>
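<p>The independence problem is easy to see in a short sketch (invented numbers, not data from these studies): when most of the crowd copies one influential voice instead of forming its own estimate, averaging no longer cancels out individual errors.</p>
<pre><code>
import random
import statistics

random.seed(0)
true_value = 100.0

# Independent crowd: everyone forms their own noisy estimate.
independent = [random.gauss(true_value, 20) for _ in range(1000)]

# Conforming crowd: one influencer guesses, everyone else copies that guess
# with only a small personal tweak.
influencer = random.gauss(true_value, 20)
conforming = [influencer + random.gauss(0, 2) for _ in range(1000)]

print(f"independent crowd error: {abs(statistics.mean(independent) - true_value):.1f}")
print(f"conforming crowd error:  {abs(statistics.mean(conforming) - true_value):.1f}")
# The conforming crowd's average is typically about as far off as the
# single influencer it copied.
</code></pre>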
<h2>Dialing down engagement</h2>
<p>What to do? Technology platforms are currently on the defensive. They are becoming more <a href="https://www.nytimes.com/2020/10/09/technology/twitter-election-ban-features.html">aggressive</a> in <a href="https://www.socialmediatoday.com/news/facebook-outlines-its-evolving-efforts-to-combat-misinformation-ahead-of-ne/597129/">taking down fake accounts and harmful misinformation</a> during elections. But these efforts can be akin to a game of <a href="https://www.marketplace.org/shows/marketplace-tech/facebook-plays-whack-a-mole-with-foreign-election-interference/">whack-a-mole</a>.</p>
<p>A different, preventive approach would be to add <a href="https://www.theguardian.com/commentisfree/2020/jun/29/social-distancing-social-media-facebook-misinformation">friction</a>. In other words, to slow down the process of spreading information. High-frequency behaviors such as automated liking and sharing could be inhibited by <a href="https://www.cloudflare.com/learning/bots/how-captchas-work/">CAPTCHA</a> tests, which require a human to respond, or fees. Not only would this decrease opportunities for manipulation, but with less information people would be able to pay more attention to what they see. It would leave less room for engagement bias to affect people’s decisions.</p>
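<p>As one concrete – and purely hypothetical – form such friction could take, a platform might enforce a small hourly sharing budget and require an extra step, such as a CAPTCHA, once a user exceeds it:</p>
<pre><code>
import time
from collections import defaultdict, deque

SHARE_LIMIT = 5        # shares allowed per window before friction kicks in
WINDOW_SECONDS = 3600  # one-hour rolling window

share_log = defaultdict(deque)

def needs_verification(user_id, now=None):
    """Return True if this share attempt should trigger a verification step."""
    now = time.time() if now is None else now
    log = share_log[user_id]
    # Forget shares that fall outside the rolling window.
    while log and now - log[0] > WINDOW_SECONDS:
        log.popleft()
    if len(log) >= SHARE_LIMIT:
        return True    # over budget: slow the user down with a CAPTCHA or fee
    log.append(now)
    return False
</code></pre>
<p>Within the budget, shares go through as usual; the sixth attempt inside the same hour is where the extra step – and the slowdown – would come in.</p>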
<p>It would also help if social media companies adjusted their algorithms to rely less on engagement signals and more on quality signals to determine the content they serve you. Perhaps the whistleblower revelations will provide the necessary impetus.</p>
<p><em>This is an updated version of an <a href="https://theconversation.com/facebooks-algorithms-fueled-massive-foreign-propaganda-campaigns-during-the-2020-election-heres-how-algorithms-can-manipulate-you-168229">article originally published on Sept. 20, 2021</a>.</em></p><img src="https://counter.theconversation.com/content/169420/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Filippo Menczer receives funding from Knight Foundation, Craig Newmark Philanthropies, DARPA and AFOSR.</span></em></p>You have evolved to tap into the wisdom of the crowds. But on social media, your cognitive biases can lead you astray, something organized disinformation campaigns count on.Filippo Menczer, Luddy Distinguished Professor of Informatics and Computer Science, Indiana UniversityLicensed as Creative Commons – attribution, no derivatives.