Facebook – The Conversation
2024-03-28T05:59:11Z
Instagram and Threads are limiting political content. This is terrible for democracy<figure><img src="https://images.theconversation.com/files/584705/original/file-20240327-24-b0sz75.jpg?ixlib=rb-1.1.0&rect=556%2C440%2C4940%2C3476&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">
</span> <span class="attribution"><a class="source" href="https://unsplash.com/photos/neon-signage-xv7-GlvBLFw">Prateek Katyal/Unsplash</a></span></figcaption></figure><p>Meta’s Instagram and Threads apps are “slowly” rolling out a change that will <a href="https://about.instagram.com/blog/announcements/continuing-our-approach-to-political-content-on-instagram-and-threads">no longer recommend political content</a> by default. The company defines political content broadly as being “potentially related to things like laws, elections, or social topics”.</p>
<p>Users who follow accounts that post political content will still see such content in the normal, algorithmically sorted ways. But by default, users will not see any political content in their feeds, stories or other places where <em>new</em> content is recommended to them. </p>
<p>Users who want political recommendations to remain can turn them back on with a new Instagram setting, making this an “opt-in” feature.</p>
<p>This change not only signals Meta’s retreat from politics and news more broadly, but also challenges any sense of these platforms being good for democracy at all. It’s also likely to have a chilling effect, stopping content creators from engaging politically altogether.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/from-curry-nights-to-coal-kills-dresses-how-social-media-drives-politicians-to-behave-like-influencers-190246">From curry nights to ‘coal kills’ dresses: how social media drives politicians to behave like influencers</a>
</strong>
</em>
</p>
<hr>
<h2>Politics: dislike</h2>
<p>Meta has a problem with politics, but it wasn’t always this way.</p>
<p>In 2008 and 2012, political campaigning <a href="https://www.tandfonline.com/doi/full/10.1080/19331681.2016.1163519">embraced social media</a>, and Facebook was seen as especially important in Barack Obama’s success. The Arab Spring was painted as a social-media-led “Facebook Revolution”, although Facebook’s role in these events was <a href="https://www.pewresearch.org/journalism/2012/11/28/role-social-media-arab-uprisings/">widely overstated</a>.</p>
<p>Since then, however, the spectre of political manipulation – most notably the 2018 Cambridge Analytica scandal – has soured social media users on politics on these platforms.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/cambridge-analytica-scandal-facebooks-user-engagement-and-trust-decline-93814">Cambridge Analytica scandal: Facebook's user engagement and trust decline</a>
</strong>
</em>
</p>
<hr>
<p>Increasingly polarised politics, vastly increased mis- and disinformation online, and Donald Trump’s preference for social media over policy, or truth, have all taken a toll. In that context, Meta has already been reducing <a href="https://about.fb.com/news/2021/02/reducing-political-content-in-news-feed/">political content recommendations</a> on their main Facebook platform since 2021. </p>
<p>Instagram and Threads hadn’t been limited in the same way, but also ran into problems. Most recently, Human Rights Watch <a href="https://www.hrw.org/news/2023/12/20/meta-systemic-censorship-palestine-content">accused Instagram</a> in December last year of systematically censoring pro-Palestinian content. With the new content recommendation change, Meta’s response to that accusation today would likely be that it is applying its political content policies consistently.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/584952/original/file-20240328-30-jfkoff.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="A person holding a smartphone displaying an instagram profile at a high angle against a city backdrop." src="https://images.theconversation.com/files/584952/original/file-20240328-30-jfkoff.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/584952/original/file-20240328-30-jfkoff.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/584952/original/file-20240328-30-jfkoff.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/584952/original/file-20240328-30-jfkoff.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/584952/original/file-20240328-30-jfkoff.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/584952/original/file-20240328-30-jfkoff.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/584952/original/file-20240328-30-jfkoff.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Instagram has no shortage of political content from advocacy and media organisations.</span>
<span class="attribution"><a class="source" href="https://unsplash.com/photos/high-angle-photo-of-person-holding-turned-on-smartphone-with-tall-buildings-background-WUmb_eBrpjs">Jakob Owens/Unsplash</a></span>
</figcaption>
</figure>
<h2>How the change will play out in Australia</h2>
<p>Notably, many Australians, especially in younger age groups, <a href="https://www.canberra.edu.au/about-uc/media/newsroom/2023/june/digital-news-report-australia-2023-tiktok-and-instagram-increase-in-popularity-for-news-consumption,-but-australians-dont-trust-algorithms">find news on Instagram</a> and other social media platforms. Sometimes they are specifically seeking out news, but often not. </p>
<p>Not all news is political. But now, none of the news recommended to Instagram users by default will be political. The serendipity of discovering political stories that motivate people to think or act will be lost.</p>
<p>Combined with Meta’s <a href="https://www.theguardian.com/australia-news/2024/mar/01/facebook-news-tab-shut-down-end-australia-journalism-funding-deals">recent announcement</a> that it will no longer pay to support the Australian news and journalism shared on its platforms, it’s fair to say Meta is seeking to be as apolitical as possible.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/how-will-metas-refusal-to-pay-for-news-affect-australian-journalism-and-our-democracy-224872">How will Meta's refusal to pay for news affect Australian journalism – and our democracy?</a>
</strong>
</em>
</p>
<hr>
<h2>The social media landscape is fracturing</h2>
<p>With Elon Musk’s <a href="https://theconversation.com/why-elon-musk-is-obsessed-with-casting-x-as-the-most-authentic-social-media-platform-210956">disastrous Twitter rebranding to X</a>, and TikTok <a href="https://theconversation.com/if-tiktok-is-banned-in-the-us-or-australia-how-might-the-company-or-china-respond-225889">facing the possibility of being banned</a> altogether in the United States, Meta appears as the most stable of the big social media giants.</p>
<p>But with Meta positioning Threads as a potential new town square while Twitter/X burns down, it’s hard to see what a town square looks like without politics. </p>
<p>The lack of political news, combined with a lack of any news on Facebook, may well mean young people see even less news than before, and have less chance to engage politically. </p>
<p>In a Threads discussion, Instagram Head Adam Mosseri made the <a href="https://www.threads.net/@mosseri/post/CuZ6opKtHva">platform’s position clear</a>:</p>
<blockquote>
<p>Politics and hard news are important, I don’t want to imply otherwise. But my take is, from a platform’s perspective, any incremental engagement or revenue they might drive is not at all worth the scrutiny, negativity (let’s be honest), or integrity risks that come along with them.</p>
</blockquote>
<p>As with Facebook, politics has simply become too hard for Instagram and Threads. The political process and democracy can be pretty hard too, but it’s now clear that’s not Meta’s problem.</p>
<h2>A chilling effect on creators</h2>
<p>Instagram’s <a href="https://about.instagram.com/blog/announcements/continuing-our-approach-to-political-content-on-instagram-and-threads">announcement</a> also reminded content creators their accounts may no longer be recommended due to posting political content.</p>
<p>If political posts are preventing an account from being recommended, creators can see which posts are responsible and choose to remove them. Content creators <a href="https://yalebooks.yale.edu/book/9780300264753/not-getting-paid-to-do-what-you-love/">live or die by the platform’s recommendations</a>, so the implication is clear: avoid politics. </p>
<p>Creators already spend considerable time trying to interpret what content platforms prefer, building <a href="https://doi.org/10.1177/1461444819854731">algorithmic</a> <a href="https://doi.org/10.1177/01634437221077174">folklore</a> about which posts do best.</p>
<p>While that folklore is sometimes flawed, Meta couldn’t be clearer on this one: political posts will prevent audience growth, and thus make an already precarious living harder. That’s the definition of a political chilling effect.</p>
<p>For the audiences who turn to creators because they are <a href="https://scholarsbank.uoregon.edu/xmlui/bitstream/handle/1794/26365/ada08-commu-abi-2015.pdf?sequence=1&isAllowed=y">perceived to be relatable and authentic</a>, the absence of political posts or positions will likely stifle political issues, discussion and thus ultimately democracy. </p>
<h2>How do I opt back in?</h2>
<p>For Instagram and Threads users who want these platforms to still share political content recommendations, follow these steps:</p>
<ul>
<li>go to your Instagram profile and click the three lines to access your settings.</li>
<li>click on Suggested Content (or Content Preferences for some).</li>
<li>click on Political content, and then select “Don’t limit political content from people that you don’t follow”.</li>
</ul>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/social-media-apps-have-billions-of-active-users-but-what-does-that-really-mean-226021">Social media apps have billions of 'active users'. But what does that really mean?</a>
</strong>
</em>
</p>
<hr>
<img src="https://counter.theconversation.com/content/226756/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Tama Leaver receives funding from the Australian Research Council. He is a Chief Investigator in the ARC Centre of Excellence for the Digital Child.</span></em></p>A new change to Meta’s apps will see users no longer recommended political content by default. The ramifications of this will be far-reaching.Tama Leaver, Professor of Internet Studies, Curtin UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/2261182024-03-22T02:10:41Z2024-03-22T02:10:41ZConspiracy theorist tactics show it’s too easy to get around Facebook’s content policies<figure><img src="https://images.theconversation.com/files/583342/original/file-20240321-26-joql1y.jpg?ixlib=rb-1.1.0&rect=40%2C148%2C4257%2C2849&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">
</span> <span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/kuala-lumpur-malaysia-august-25-2013-1168328122">MavardiBahar/Shutterstock</a></span></figcaption></figure><p>During the COVID pandemic, social media platforms were swarmed by far-right and anti-vaccination communities that spread dangerous conspiracy theories.</p>
<p>These included the false claims that <a href="https://www.bbc.com/news/54893437">vaccines are a form of population control</a>, and that the virus was a <a href="https://theconversation.com/qanon-conspiracy-theories-about-the-coronavirus-pandemic-are-a-public-health-threat-135515">“deep state” plot</a>. Governments and the World Health Organization redirected precious resources from vaccination campaigns to debunk these falsehoods. </p>
<p>As the tide of misinformation grew, platforms were accused of not doing enough to stop the spread. To address these concerns, Meta, the parent company of Facebook, made several policy announcements in 2020–21. However, it hesitated to remove “<a href="https://www.facebook.com/notes/751449002072082/?hc_location=ufi">borderline</a>” content, or content that didn’t cause direct physical harm, save for one <a href="https://about.fb.com/news/2020/04/covid-19-misinfo-update/">policy change</a> in February 2021 that expanded the content removal lists.</p>
<p>To stem the tide, Meta continued to rely more heavily on algorithmic moderation techniques to reduce the visibility of misinformation in users’ feeds, search and recommendations – known as shadowbanning. They also used fact-checkers to label misinformation.</p>
<p>While shadowbanning is widely seen as a <a href="https://theconversation.com/what-is-shadowbanning-how-do-i-know-if-it-has-happened-to-me-and-what-can-i-do-about-it-192735">concerningly opaque technique</a>, our <a href="https://journals.sagepub.com/doi/10.1177/1329878X241236984">new research</a>, published in the journal Media International Australia, instead asks: was it effective?</p>
<h2>What did we investigate?</h2>
<p>We used two measures to answer this question. First, after identifying 18 Australian far-right and anti-vaccination accounts that consistently shared misinformation between January 2019 and July 2021, we analysed the performance of these accounts using key metrics.</p>
<p>Second, we mapped this performance against five content moderation policy announcements for Meta’s flagship platform, Facebook.</p>
<p>The findings revealed two divergent trends. After March 2020, the <em>overall</em> performance of the accounts – that is, their <em>median</em> performance – declined. And yet their <em>mean</em> performance increased after October 2020. </p>
<p>This is because, while the majority of the monitored accounts underperformed, a few accounts strongly overperformed. In fact, they continued to overperform and attract new followers even after the February 2021 policy change.</p>
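<p>The divergence between the two averages is easy to reproduce. As a rough sketch (with made-up engagement numbers, not the study’s data), a handful of strongly overperforming accounts can pull the mean up even while the typical, median account declines:</p>

```python
import statistics

# Hypothetical engagement scores for 18 monitored accounts
# (illustrative numbers only, not the study's data)
before = [100] * 18                    # fairly even engagement
after = [60] * 15 + [500, 800, 1200]   # most accounts decline, three surge

# The median falls (100 -> 60), yet the mean rises (100 -> ~189),
# because the three outliers dominate the average.
print(statistics.median(before), statistics.mean(before))
print(statistics.median(after), statistics.mean(after))
```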
<hr>
<p><iframe id="85UaE" class="tc-infographic-datawrapper" src="https://datawrapper.dwcdn.net/85UaE/" height="400px" width="100%" style="border: none" frameborder="0"></iframe></p>
<hr>
<h2>Shadowbanning as a badge of pride</h2>
<p>To examine why, we scraped and thematically analysed comments and user reactions from posts on these accounts. We found users had a high motivation to stay engaged with problematic content. Labelling and shadowbanning were viewed as motivating challenges.</p>
<p>Specifically, users frequently used “<a href="https://doi.org/10.1177/01634437221111923">social steganography</a>” – using deliberate typos or code words for key terms – to evade algorithmic detection. We also saw <a href="https://www.tandfonline.com/doi/full/10.1080/21670811.2021.1938165">conspiracy “seeding”</a> where users add links to archiving sites or less moderated sites in comments to re-distribute content Facebook labelled as misinformation, and to avoid detection.</p>
<p>In one example, a user added a link to a <a href="https://www.pewresearch.org/short-reads/2023/02/17/key-facts-about-bitchute/">BitChute</a> video with keywords that dog-whistled support for QAnon style conspiracies. As terms such as “vaccine” were believed to trigger algorithmic detection, emoji or other code names were used in their place:</p>
<blockquote>
<p>A friend sent me this link, it’s [sic.] refers to over 4000 deaths of individuals after getting 💉 The true number will not come out, it’s not in the public’s interest to disclose the amount of people that have died within day’s [sic.] of jab.</p>
</blockquote>
<p>While many conspiracy theories were targeted at government and public health authorities, platform suppression of content fuelled further conspiracies regarding big tech and their complicity with “Big Pharma” and governments.</p>
<p>This was evident in the use of keywords such as MSM (“mainstream media”) to reference QAnon style agendas: </p>
<blockquote>
<p>MSM are in on this whole thing, only report on what the elites tell them to. Clearly you are not doing any research but listening to msm […] This is a completely experimental ‘vaccine’.</p>
</blockquote>
<p>Another comment thread showed reactions to Meta’s <a href="https://about.fb.com/news/2020/08/addressing-movements-and-organizations-tied-to-violence/">dangerous organisations policy update</a>, where accounts that regularly shared QAnon-content were labelled “extremist”. In the reactions, MSM and “the agenda” appeared frequently. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/qanon-is-spreading-outside-the-us-a-conspiracy-theory-expert-explains-what-that-could-mean-198272">QAnon is spreading outside the US – a conspiracy theory expert explains what that could mean</a>
</strong>
</em>
</p>
<hr>
<p>Some users recommended that sensitive content be moved to alternative platforms. We observed one anti-vaccination influencer complain that their page was being shadowbanned by Facebook and call on their followers to recommend a “good, censorship free, livestreaming platform”.</p>
<p>The replies suggested moderation-lite sites such as <a href="https://rumble.com/">Rumble</a>. Similar recommendations were made for Twitch, a livestreaming site popular with gamers which has since attracted <a href="https://www.nytimes.com/2021/04/27/us/politics/twitch-trump-extremism.html">far-right political influencers</a>.</p>
<p>As one user said:</p>
<blockquote>
<p>I know so many people who get censored on so many apps especially Facebook and Twitch seems to work for them. </p>
</blockquote>
<h2>How can content moderation fix the problem?</h2>
<p>These tactics of coordination to detect shadowbans, resist labelling and fight the algorithm provide some insight into why engagement didn’t dim on some of these “overperforming” accounts despite all the policies Meta put in place. </p>
<p>This shows that Meta’s suppression techniques, while partially effective in containing the spread, do nothing to prevent those invested in sharing (and finding) misinformation from doing so.</p>
<p>Firmer policies on content removal and user banning would help address the problem. However, <a href="https://about.fb.com/news/2022/07/oversight-board-advise-covid-19-misinformation-measures/">Meta’s announcement last year suggests</a> the company has little appetite for this. Any further loosening of these policies will all but ensure this misinformation playground continues to thrive.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/a-researcher-asked-covid-anti-vaxxers-how-they-avoid-facebook-moderation-heres-what-they-found-186406">A researcher asked COVID anti-vaxxers how they avoid Facebook moderation. Here's what they found</a>
</strong>
</em>
</p>
<hr>
<img src="https://counter.theconversation.com/content/226118/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Amelia Johns has received funding from Meta content policy award for some of the research presented in this article. She has also received funding from the Australian Research Council.</span></em></p><p class="fine-print"><em><span>Emily Booth is supported by funding from the Australian Department of Home Affairs and the Defence Innovation Network.</span></em></p><p class="fine-print"><em><span>Francesco Bailo has received funding from Meta content policy award for some of the research presented in this article. He receives funding from the Defence Innovation Network. </span></em></p><p class="fine-print"><em><span>Marian-Andrei Rizoiu receives funding from the Australian Department of Home Affairs, the Defence Science and Technology Group, the Defence Innovation Network and the Australian Academy of Science.</span></em></p>New research shows that even after Facebook made changes to stem the tide of dangerous pandemic misinformation, some accounts continued to thrive.Amelia Johns, Associate Professor, Digital and Social Media, School of Communication, University of Technology SydneyEmily Booth, Research assistant, University of Technology SydneyFrancesco Bailo, Lecturer, Digital and Social Media, University of SydneyMarian-Andrei Rizoiu, Associate Professor in Behavioral Data Science, University of Technology SydneyLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/2260212024-03-21T06:12:11Z2024-03-21T06:12:11ZSocial media apps have billions of ‘active users’. But what does that really mean?<figure><img src="https://images.theconversation.com/files/583295/original/file-20240321-26-3vpdrd.jpg?ixlib=rb-1.1.0&rect=628%2C519%2C4539%2C2925&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">
</span> <span class="attribution"><a class="source" href="https://unsplash.com/photos/group-of-people-standing-on-brown-floor-HN6uXG7GzTE">Creative Christians/Unsplash</a></span></figcaption></figure><p>Our digital world is bigger and more connected than ever. Social media isn’t just a daily habit – <a href="https://wearesocial.com/au/blog/2024/01/digital-2024-5-billion-social-media-users/">with more than 5 billion users globally</a>, it’s woven into the very fabric of our existence.</p>
<p>These platforms offer entertainment, connection, information and support, but they’re also battlegrounds for misinformation and online harassment. </p>
<p>Platforms like Facebook, YouTube, Instagram and TikTok vie for our attention, each boasting user counts in the billions. But what do these numbers actually tell us, and should we care?</p>
<h2>What is an active user or a unique user?</h2>
<p>Behind the impressive statistics lies a complex reality. While global social media usership has hit the 5 billion mark, representing <a href="https://datareportal.com/reports/digital-2024-global-overview-report">about 62% of the world’s population</a>, these figures mask the intricacies of online participation.</p>
<p>In Australia, the average person juggles <a href="https://www.genroe.com/blog/social-media-statistics-australia/13492">nearly seven social media accounts</a> across multiple platforms. This challenges the assumption that user counts equate to unique individuals.</p>
<p>It is also important to differentiate between accounts and active users. Not all accounts represent actual engagement in the platform’s community.</p>
<p>An “active user” is typically someone who has logged into a platform within a specific timeframe, such as the past month, indicating engagement with the platform’s content and features. They’re measured with analytics tools provided by the platform itself, or with third-party software. </p>
<p>The tools track the number of unique users – that is, individual accounts – who have interacted with or been exposed to specific content, whether a post, story or advertising campaign. </p>
<p>Social media companies use these metrics to showcase the potential reach of their platform to marketers. It’s key to their business model, as advertising revenue is typically their main source of income. </p>
<p>However, the reliability of these statistics is debatable. Factors such as <a href="https://www.dw.com/en/fact-check-how-do-i-spot-fake-social-media-accounts-bots-and-trolls/a-60313035">bot accounts</a>, inactive accounts and duplicates can inflate numbers, offering a distorted view of a platform’s user base.</p>
<p>Moreover, the criteria for an “active user” vary across platforms. This makes it difficult to make comparisons between user bases and to truly understand online audiences.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/583298/original/file-20240321-22-ifb91e.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="A person holding up a smartphone at a busy nightclub." src="https://images.theconversation.com/files/583298/original/file-20240321-22-ifb91e.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/583298/original/file-20240321-22-ifb91e.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/583298/original/file-20240321-22-ifb91e.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/583298/original/file-20240321-22-ifb91e.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/583298/original/file-20240321-22-ifb91e.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/583298/original/file-20240321-22-ifb91e.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/583298/original/file-20240321-22-ifb91e.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Sheer user numbers can make a social media platform influential, but there’s nuance in how we measure impact.</span>
<span class="attribution"><a class="source" href="https://unsplash.com/photos/a-person-taking-a-picture-with-a-cell-phone-D4kALj_9CEE">Michael Effendy/Unsplash</a></span>
</figcaption>
</figure>
<h2>User count isn’t always relevance</h2>
<p><a href="https://datareportal.com/reports/digital-2024-global-overview-report">TikTok boasts a staggering 1.5 billion users globally</a>. This doesn’t even include users on its Chinese counterpart, Douyin. It is also often at the centre of <a href="https://theconversation.com/tiktok-has-a-startling-amount-of-sexual-content-and-its-way-too-easy-for-children-to-access-216114">controversies</a> and <a href="https://medium.com/datasociety-points/the-politics-and-optioncs-of-the-tiktok-ban-d88bdcb532d">geopolitical tensions</a>.</p>
<p>For example, <a href="https://theconversation.com/attempts-to-ban-tiktok-reveal-the-hypocrisy-of-politicians-already-struggling-to-relate-to-voters-225870">TikTok has repeatedly faced threats of bans</a> in significant markets such as the United States, raising questions about future access. But with such a vast user base, TikTok’s impact on culture and trends – particularly among young people – is clear and far-reaching.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/if-tiktok-is-banned-in-the-us-or-australia-how-might-the-company-or-china-respond-225889">If TikTok is banned in the US or Australia, how might the company – or China – respond?</a>
</strong>
</em>
</p>
<hr>
<p>However, the true impact of platforms is further muddied by algorithms – the complex formulas that dictate the content we see and engage with. Designed to keep us scrolling and interacting, they significantly shape our online experiences.</p>
<p>They also complicate how “active” a user might appear. Someone could seem more engaged simply because the algorithm promotes content they interact with more often.</p>
<p>So, while a high active-user count might indicate a platform’s popularity and reach, it doesn’t fully capture its influence or social relevance. True engagement goes beyond numbers, delving into the depth of user interaction, the quality of the content, and the cultural impact these platforms wield.</p>
<h2>Different strokes for different ages</h2>
<p>When we look at the users’ demographics, we see <a href="https://wearesocial.com/au/blog/2024/01/digital-2024-5-billion-social-media-users/">distinct preferences across age groups</a>. </p>
<p>Among the younger crowd, specifically Gen Z, <a href="https://wearesocial.com/au/blog/2024/01/digital-2024-5-billion-social-media-users/">TikTok vastly outpaces Instagram</a> with <a href="https://explodingtopics.com/blog/tiktok-demographics">one in four users under the age of 20</a>. </p>
<p>Meanwhile, <a href="https://sproutsocial.com/insights/new-social-media-demographics/">Snapchat and Instagram</a> are the preferred platforms for people aged 18–29. </p>
<p>Facebook, with its massive user base of more than 3 billion and a <a href="https://datareportal.com/essential-facebook-stats">median user age of 32</a>, is the platform of choice for millennials, Gen X and boomers.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/ok-boomer-how-a-tiktok-meme-traces-the-rise-of-gen-z-political-consciousness-165811">'OK Boomer': how a TikTok meme traces the rise of Gen Z political consciousness</a>
</strong>
</em>
</p>
<hr>
<p>People in their 30s and older <a href="https://datareportal.com/reports/digital-2024-global-overview-report">tend to use LinkedIn</a> and X (formerly Twitter) more than platforms like Snapchat.</p>
<p>But all these social media platforms tend to vary in their primary focus, from news and professional connections (like LinkedIn) to predominantly serving entertainment (like TikTok).</p>
<p>This means demographic trends also reveal how each platform impacts users differently, catering to varied content preferences – whether it’s for entertainment, staying updated on news and events, or connecting with friends and family. </p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/583296/original/file-20240321-30-s182sm.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="A group of women at a nice restaurant taking a selfie together." src="https://images.theconversation.com/files/583296/original/file-20240321-30-s182sm.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/583296/original/file-20240321-30-s182sm.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/583296/original/file-20240321-30-s182sm.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/583296/original/file-20240321-30-s182sm.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/583296/original/file-20240321-30-s182sm.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/583296/original/file-20240321-30-s182sm.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/583296/original/file-20240321-30-s182sm.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Ultimately, social media really is about community, not global relevance.</span>
<span class="attribution"><a class="source" href="https://unsplash.com/photos/3-women-smiling-and-standing-near-table-_3Pyr85zcE8">Rendy Novantino/Unsplash</a></span>
</figcaption>
</figure>
<h2>User count isn’t what matters</h2>
<p>For content creators and news media, delving into user statistics is crucial if they want to reach their target audiences.</p>
<p>However, despite headlines often focusing on vast user numbers, do these figures actually matter to the everyday social media user? <a href="https://apo.org.au/node/322860">Research I’ve done with colleagues</a> suggests they don’t.</p>
<p>For individuals navigating these digital spaces, it’s not about which platform boasts the highest user count and is therefore deemed “important”.</p>
<p>Instead, the focus is on maintaining connections within their social circles. This preference is rooted in cultural practices, meaning it aligns with the habits, preferences and values of their own community or cultural group.</p>
<p>In other words, people are drawn to social media platforms that are popular or widely accepted among their family, friends, social allies and broader cultural community. This suggests the essence of social media lies in the quality of interactions rather than the platform’s global standing.</p>
<p>Whether for staying informed, being entertained, or nurturing relationships, people gravitate to spaces where their community or “tribe” gathers. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/its-hard-to-imagine-better-social-media-alternatives-but-scuttlebutt-shows-change-is-possible-190351">It's hard to imagine better social media alternatives, but Scuttlebutt shows change is possible</a>
</strong>
</em>
</p>
<hr>
<img src="https://counter.theconversation.com/content/226021/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Milovan Savic receives funding from Australian Research Council </span></em></p>Platforms like Facebook, Instagram and TikTok vie for our attention and boast billions of users. Ultimately, what matters is connection.Milovan Savic, Research Fellow, ARC Centre of Excellence for Automated Decision-Making and Society, Swinburne University of TechnologyLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/2262192024-03-20T04:06:40Z2024-03-20T04:06:40ZTerrorist content lurks all over the internet – regulating only 6 major platforms won’t be nearly enough<figure><img src="https://images.theconversation.com/files/583026/original/file-20240320-17-wn83c.jpg?ixlib=rb-1.1.0&rect=4%2C241%2C2619%2C1761&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">
</span> <span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/burning-car-unrest-antigovernment-crime-581564755">Bumble Dee/Shutterstock</a></span></figcaption></figure><p>Australia’s eSafety commissioner <a href="https://www.abc.net.au/news/2024-03-19/social-media-esafety-commissioner-terrorist-violent-extremist/103603518">has sent legal notices</a> to Google, Meta, Telegram, WhatsApp, Reddit and X (formerly Twitter) asking them to show what they’re doing to protect Australians from online extremism. The six companies <a href="https://www.esafety.gov.au/newsroom/media-releases/tech-companies-grilled-on-how-they-are-tackling-terror-and-violent-extremism">have 49 days to respond</a>.</p>
<p>The notice comes at a time when governments are increasingly cracking down on major tech companies to address online harms like <a href="https://theconversation.com/australia-has-fined-x-australia-over-child-sex-abuse-material-concerns-how-severe-is-the-issue-and-what-happens-now-215696">child sexual abuse material</a> or <a href="https://www.cbsnews.com/news/mark-zuckerberg-apologizes-parents-victims-online-exploitation-senate-hearing/">bullying</a>.</p>
<p>Combating online extremism presents unique challenges, distinct from other content moderation problems. Regulators wanting to establish effective and meaningful change must take into account what research has shown us about extremism and terrorism.</p>
<h2>Extremists are everywhere</h2>
<p>Online extremism and terrorism have been pressing concerns for some time. A stand-out example was the 2019 Christchurch terrorist attack on two mosques in Aotearoa New Zealand, which was live streamed on Facebook. It led to the <a href="https://www.beehive.govt.nz/release/nz-and-france-seek-end-use-social-media-acts-terrorism">“Christchurch Call” to action</a>, aimed at countering extremism through collaborations between countries and tech companies.</p>
<p>But despite such efforts, <a href="https://www.rand.org/pubs/perspectives/PEA1458-2.html">extremists still use online platforms</a> for networking and coordination, recruitment and radicalisation, knowledge transfer, financing and mobilisation to action.</p>
<p>In fact, extremists use the same online infrastructure as everyday users: marketplaces, dating platforms, gaming sites, music streaming sites and social networks. Therefore, all regulation to counter extremism needs to consider the rights of regular users, as well.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/christchurch-attacks-5-years-on-terrorists-online-history-gives-clues-to-preventing-future-atrocities-225273">Christchurch attacks 5 years on: terrorist’s online history gives clues to preventing future atrocities</a>
</strong>
</em>
</p>
<hr>
<h2>The rise of ‘swarmcasting’</h2>
<p>Tech companies have responded with initiatives like the <a href="https://gifct.org/membership">Global Internet Forum to Counter Terrorism</a>. It shares information on terrorist online content among its members (such as Facebook, Microsoft, YouTube, X and others) so they can take it down on their platforms. These approaches aim to <a href="https://gifct.org/hsdb/">automatically identify and remove</a> terrorist or extremist content.</p>
<p>However, a moderation policy focused on individual pieces of content on individual platforms fails to capture much of what’s out there.</p>
<p>Terrorist groups commonly use a <a href="https://static.rusi.org/20190716_grntt_paper_06.pdf">“swarmcasting” multiplatform approach</a>, leveraging more than 700 platforms to distribute their content.</p>
<p>Swarmcasting involves using “beacons” on major platforms such as Facebook, Twitter and Telegram to direct people to locations with terrorist material. This beacon can be a hyperlink to a blog post on a website like Wordpress or Tumblr that then contains further links to the content, perhaps hosted on Google Drive, JustPaste.It, BitChute and other places where users can download it.</p>
<p>So, while extremist content may be flagged and removed from social media, it remains accessible online thanks to swarmcasting. </p>
<h2>Putting up filters isn’t enough</h2>
<p>The process of identifying and removing extremist content is far from simple. For example, at a recent US Supreme Court hearing over internet regulations, <a href="https://law.stanford.edu/podcasts/the-netchoice-cases-reach-the-supreme-court/">a lawyer argued</a> platforms could moderate terrorist content by simply removing anything that mentioned “al Qaeda”.</p>
<p>However, internationally recognised terrorist organisations, their members and supporters do not solely distribute policy-violating extremist content. Some may discuss non-terrorist activities, such as humanitarian efforts.</p>
<p>Other times their content is borderline (awful but lawful), such as misogynistic dog whistles, or even “hidden” <a href="https://onlinelibrary.wiley.com/doi/full/10.1111/isj.12454">in a different format</a>, such as memes.</p>
<p>Accordingly, platforms can’t always cite policy violations and are compelled to use other methods to counter such content. They report using various content moderation techniques such as redirecting users, <a href="https://www.pbs.org/newshour/politics/google-to-expand-misinformation-prebunking-initiative-in-europe">pre-bunking misinformation</a>, promoting counterspeech and <a href="https://www.bbc.com/news/technology-57697779">offering warnings</a>, or <a href="https://theconversation.com/what-is-shadowbanning-how-do-i-know-if-it-has-happened-to-me-and-what-can-i-do-about-it-192735">implementing shadow bans</a>. Despite these efforts, online extremism continues to persist.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/disinformation-threatens-global-elections-heres-how-to-fight-back-223392">Disinformation threatens global elections – here's how to fight back</a>
</strong>
</em>
</p>
<hr>
<h2>What is extremism, anyway?</h2>
<p>All these problems are further compounded by the fact that we lack a <a href="https://www.unodc.org/e4j/en/terrorism/module-4/key-issues/defining-terrorism.html">commonly accepted definition</a> of terrorism or extremism. All definitions currently in place are contentious.</p>
<p>Academics attempt to seek clarity by using <a href="https://www.ijcv.org/index.php/ijcv/article/view/3809">relativistic definitions</a>, such as</p>
<blockquote>
<p>extremism itself is context-dependent in the sense that it is an inherently relative term that describes a deviation from something that is (more) ‘ordinary’, ‘mainstream’ or ‘normal’. </p>
</blockquote>
<p>But what can we accept as a universal norm? Democracy is not the global norm, nor are equal rights. Not even our understanding of <a href="https://blogs.lse.ac.uk/humanrights/2016/09/14/are-human-rights-really-universal-inalienable-and-indivisible/">central tenets of human rights</a> is globally established.</p>
<h2>What should regulators do, then?</h2>
<p>As the eSafety commissioner attempts to shed light on how major platforms counter terrorism, we offer several recommendations for the commissioner to consider.</p>
<p>1. Extremists rely on more than just the major platforms to disseminate information. This highlights the importance of expanding the current inquiries beyond just the major tech players.</p>
<p>2. Regulators need to consider the differences between platforms that resist compliance, those that comply halfheartedly, and those that struggle to comply, such as small content storage providers. Each type of platform <a href="https://ksp.techagainstterrorism.org/">requires different regulatory approaches</a> or assistance. </p>
<p>3. Future regulations should encourage platforms to transparently collaborate with academia. The global research community is well positioned <a href="https://gifct.org/wp-content/uploads/2021/07/GIFCT-TaxonomyReport-2021.pdf">to address these challenges</a>, such as by developing actionable definitions of extremism and novel countermeasures.</p><img src="https://counter.theconversation.com/content/226219/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Marten Risius is the recipient of an Australian Research Council Australian Discovery Early Career Award funded by the Australian Government. Marten Risius has received project funding from the Global Internet Forum to Counter Terrorism (GIFCT). </span></em></p><p class="fine-print"><em><span>Stan Karanasios has received funding from Emergency Management Victoria, Asia-Pacific Telecommunity, and the International Telecommunications Union. Stan is a Distinguished Member of the Association for Information Systems.</span></em></p>Online extremism is a unique challenge – terrorists use methods that can’t be captured by standard content moderation. So, what can we do about it?Marten Risius, Senior Lecturer in Business Information Systems, The University of QueenslandStan Karanasios, Associate Professor, The University of QueenslandLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/2257742024-03-14T05:54:48Z2024-03-14T05:54:48ZThe Jacqui Lambie Network is the latest victim of ‘cybersquatting’. It’s the tip of the iceberg of negative political ads online<p>Firebrand senator Jacqui Lambie is furious. Amid the Tasmanian election campaign (in which she’s running candidates), her party, the Jacqui Lambie Network, has fallen victim to one of the many pitfalls in the world of online political advertising.</p>
<p>Her party’s website is lambienetwork.com.au. You might understand her anger, then, after <a href="https://www.abc.net.au/news/2024-03-14/jacqui-lambie-slams-liberals-over-website/103581992">finding out</a> the Tasmanian Liberal party created a website to campaign against her, called lambienetwork.com. It’s a blink-and-you’ll-miss-it difference.</p>
<p>This is a textbook example of what’s known as cybersquatting. It’s when internet domain names that are similar to existing trademarked material or the names of people or organisations are bought up by competitors to use against the original. In fact, the major parties have purchased <a href="https://www.crikey.com.au/2022/04/08/crikeys-australian-political-party-domain-register/">a heap</a> of domain names.</p>
<p>As political parties desperately battle for voters’ attention in a world full of distractions and <a href="https://www.abc.net.au/news/2023-02-08/trust-slump-as-division-rules/101939406">dwindling trust in government</a>, cybersquatting is one of many online tools in the toolkit. But the toolkit is full of blunt instruments that may only be effective on a minority of people. The true damage is being done to the majority, who have less and less faith in politics and its institutions.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/all-governments-are-guilty-of-running-political-ads-on-the-public-purse-heres-how-to-stop-it-191766">All governments are guilty of running political ads on the public purse. Here's how to stop it</a>
</strong>
</em>
</p>
<hr>
<h2>A crowded, manufactured landscape</h2>
<p>In commercial marketing, there’s a focus on long-term brand building. In political marketing, there’s just one goal: winning.</p>
<p>With such high pressure, and little time to hit objectives, parties and candidates use highly emotive messaging and narratives to drive rapid attention and engagement, and hopefully convince people to vote for them.</p>
<p>With markets splintered into ever-smaller segments, based at times on very specific needs, <a href="https://theconversation.com/facebook-videos-targeted-texts-and-clive-palmer-memes-how-digital-advertising-is-shaping-this-election-campaign-115629">social media</a> has helped move voters quickly and develop narratives around leaders’ personal brands. </p>
<p>Instagram was used successfully by former prime minister Scott Morrison with <a href="https://www.sbs.com.au/language/punjabi/en/article/prime-minister-scott-morrison-makes-scomosas-says-would-have-liked-to-share-them-with-narendra-modi/fzx9zmmkg">his Scomosas</a> and attempt at Bunnings DIY. </p>
<p><div data-react-class="Tweet" data-react-props="{&quot;tweetId&quot;:&quot;1266952463464071171&quot;}"></div></p>
<p>His successor, Anthony Albanese, has replicated that strategy, letting us get a glimpse of who he really is, even having a <a href="https://twitter.com/TotoAlbanese">Twitter/X account for his dog Toto</a>. This is aimed at developing resonance and building up likeability for his brand. </p>
<p><div data-react-class="Tweet" data-react-props="{&quot;tweetId&quot;:&quot;1531395641582047232&quot;}"></div></p>
<p>Of course, as any royal watcher or user of social media can tell you, <a href="https://theconversation.com/yes-kate-middletons-photo-was-doctored-but-so-are-a-lot-of-images-we-see-today-225553">curated images are exactly that</a>: manufactured, for us. So we are trusting this method less and less. This will only get worse the longer voters are exposed to it.</p>
<p>Stories like the one from the 2022 federal election about Labor-aligned groups <a href="https://www.abc.net.au/news/2022-04-08/aec-investigating-union-tiktok-accounts-ahead-of-election/100969896">considering paying influencers</a> to post friendly content don’t help either. </p>
<p>As a result, when we see content posted by an influencer, we’re now more likely to be sceptical. Do they really like this product, or are they just being paid to say they do?</p>
<h2>‘Angertainment’ is highly effective</h2>
<p>So it’s back to square one. Enter negativity, or “angertainment”.</p>
<p>Reality shows are full of it. One example is <a href="https://www.girlmuseum.org/media-analysis-the-villain-edit/">the villain edit</a>, where certain contestants are framed to be the antagonist for the sake of drama. There’s also the cued music to make us feel this is the “season-defining moment”. </p>
<p>They do this for the same reasons politicians have done it for decades. It works. It gets our attention. We get engaged. We change our vote. Ratings of these shows don’t lie. </p>
<p>In the past, this was called “wedge politics”, as it wedged one group of voters against others. A party or candidate could then become that group’s champion, and hello election victory. Simple narrative construction. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/we-tracked-election-ad-spending-for-4-000-facebook-pages-heres-what-theyre-posting-about-and-why-cybersecurity-is-the-bigger-concern-182286">We tracked election ad spending for 4,000 Facebook pages. Here's what they're posting about – and why cybersecurity is the bigger concern</a>
</strong>
</em>
</p>
<hr>
<p>This was easy when competition for our attention was less fierce. John Howard’s 2001 election-opening “<a href="https://theconversation.com/issues-that-swung-elections-tampa-and-the-national-security-election-of-2001-115143">we decide</a>” statement about immigration was pure wedge politics. </p>
<p>The aim is still the same now, but in a competitive environment for our attention and retention, modern methods have opened new ways to reach the average voter. Because people haven’t seen these methods before, they are <a href="https://theconversation.com/why-scare-campaigns-like-mediscare-work-even-if-voters-hate-them-62279">more susceptible to believing</a> them. </p>
<p>Clive Palmer has used <a href="https://www.theguardian.com/australia-news/2021/sep/08/clive-palmer-and-craig-kelly-using-spam-text-messages-to-capture-rightwing-vote-ahead-of-election-expert-says">spam text messages</a> over the years to grab some attention, although it hasn’t necessarily translated into electoral success.</p>
<p>A more inventive use of the internet to campaign was Pauline Hanson’s <a href="https://www.onenation.org.au/please-explain">cartoon series</a>. The first three episodes racked up <a href="https://www.smh.com.au/culture/tv-and-radio/pauline-hanson-as-a-superhero-these-cartoons-could-be-the-future-20211123-p59b9u.html">750,000 views</a> in two weeks on YouTube. </p>
<p>Both Labor and Liberal have had a strong presence on Snapchat. In 2016, the Liberals were among the first to <a href="https://www.marketingmag.com.au/social-digital/liberal-party-makes-world-history-first-sponsored-snapchat-lens-political-advertising/">make a filter</a> on the app. Labor was the <a href="https://www.smh.com.au/technology/how-are-politicians-using-social-media-to-campaign-20220418-p5ae6q.html">only major party</a> to use it during the 2022 federal election campaign.</p>
<p>These are all new ways of communicating a party’s key messages, including scare or smear campaigns. </p>
<p>Think “Mediscare”, so well done by Labor in 2016 via SMS, and then the revenge sequel of <a href="https://www.theguardian.com/australia-news/2019/jun/08/it-felt-like-a-big-tide-how-the-death-tax-lie-infected-australias-election-campaign">death taxes</a> in 2019 by the Coalition. They used Facebook groups very well. </p>
<p>Angertainment is now seen as being more likely to get the message across, and thereby victory, than anything else. </p>
<p>A significant aspect of these campaigns was disinformation, including the misrepresentation or impersonation of candidates. Senator David Pocock was a key target in the ACT in 2022, but <a href="https://www.abc.net.au/news/2022-04-27/david-pocock-lodges-complaint-over-advance-australia-corflutes/101016990">successfully ran a challenge</a> through the Australian Electoral Commission. </p>
<p>But this is 2024, and two years is an aeon in social media. The Jacqui Lambie Network (JLN) website trick we saw this week is an old-school one. Unlike some of the other strategies, it’s not effective. It is, however, childish. </p>
<p>So why bother? The attacking party would be obvious to most, if only from the authorisation name required by electoral laws. This dilutes the effect and likely reinforces the reasons to vote for the JLN. </p>
<p>But political parties do it to capitalise on those who don’t realise they’re receiving a message in bad faith. Even if it’s a minority, it’s someone. In a tight political climate, it might be enough to tip the scales in their favour.</p>
<p>The collateral damage, of course, is the spread of misinformation and public disillusionment with politics and elections.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/few-restrictions-no-spending-limit-and-almost-no-oversight-welcome-to-political-advertising-in-australia-181248">Few restrictions, no spending limit, and almost no oversight: welcome to political advertising in Australia</a>
</strong>
</em>
</p>
<hr>
<h2>Can we stop this?</h2>
<p>We can, easily. </p>
<p>Cybersquatting is in a grey area legally. There are gaps in the relevant legislation that make it very difficult for those affected to get websites taken down. They’re often managed by international organisations with laborious processes.</p>
<p>But the government can ban cyber hijacking or squatting of politicians or parties’ web addresses or social channels. It can restrict negative advertising, and bring in green ticks to verify truthful advertising. </p>
<p>Government can also ensure social media companies take more responsibility for content, and tolerate fewer excuses for poor behaviour. This isn’t restricting freedom of speech, only restricting disinformation. Some independents <a href="https://mumbrella.com.au/new-bill-tabled-to-bring-much-needed-accountability-to-political-advertising-806487">have already</a> introduced bills in parliament on this issue.</p>
<p>If it’s so easy, why hasn’t it been done? Because that requires political support. Considering politicians are the ones who benefit most from the existing framework, we don’t need a negative ad to tell us how unlikely they are to do anything about it anytime soon.</p><img src="https://counter.theconversation.com/content/225774/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Andrew Hughes does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>As political parties desperately battle for voters’ attention, cybersquatting is one of many online tools in the toolkit. It’s only effective at further diminishing trust in government.Andrew Hughes, Lecturer, Research School of Management, Australian National UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/2253492024-03-08T03:59:22Z2024-03-08T03:59:22ZMeta’s lost revenue is a huge hit for public interest journalism, which was already reeling from cutbacks<p>Public interest journalism was already under significant stress in Australia. And now the pressure is ratcheting up even further.</p>
<p>While still experiencing the pandemic’s aftershocks, the industry has simultaneously been hit with the increasing cost of doing business, rising costs of living and declining advertising spend. All of this has made it harder to report the news that matters, educates and informs.</p>
<p>With Meta announcing last week it will not renew its commercial agreements with news outlets in Australia – <a href="https://www.afr.com/companies/media-and-marketing/meta-refuses-to-pay-for-news-setting-up-war-with-publishers-20240301-p5f93l">worth an estimated A$70 million per year</a> – it’s no exaggeration to say it’s been a bad time for journalism.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/facebook-wont-keep-paying-australian-media-outlets-for-their-content-are-we-about-to-get-another-news-ban-224857">Facebook won't keep paying Australian media outlets for their content. Are we about to get another news ban?</a>
</strong>
</em>
</p>
<hr>
<h2>Where news in Australia is vanishing</h2>
<p>Public interest journalism is a vital service in a healthy democratic society. It creates social cohesion, informs decision making and strengthens democracy. </p>
<p>The funding provided by tech giants under the agreements made as part of the landmark News Media Bargaining Code in 2021 provided a significant source of revenue for media companies. </p>
<p>One <a href="https://www.aph.gov.au/DocumentStore.ashx?id=8a3c8ae4-f0f4-43b4-a9d0-a03b11a5fe06&subId=719769">regional news company estimated</a> in its submission to the Regional Newspapers Inquiry that once the agreements were fully implemented, the revenue would fund up to 30% of its editorial wages.</p>
<p>But as that money dries up, it’s clear Australia’s public interest journalism sector must find a new way to survive and thrive. And that method must be supported by data that clearly identifies the areas of Australia most lacking in comprehensive, accurate journalism.</p>
<p>The Public Interest Journalism Initiative (PIJI) has been <a href="https://theconversation.com/local-news-sources-are-closing-across-australia-we-are-tracking-the-devastation-and-some-reasons-for-hope-139756">tracking public interest news production in Australia since 2019</a>, and <a href="https://piji.com.au/news-mapping/reports-analysis/">our research</a> reveals a clear divide across metropolitan and regional audiences and markets. Regional and remote areas of Australia have fewer news outlets generally, compared to areas along the east coast and around capital cities.</p>
<hr>
<p><strong>Density of print, digital and radio local news producers by local government area</strong></p>
<p><iframe id="GRTJK" class="tc-infographic-datawrapper" src="https://datawrapper.dwcdn.net/GRTJK/1/" height="400px" width="100%" style="border: none" frameborder="0"></iframe></p>
<hr>
<p>Overall, PIJI has identified almost 500 changes in news production around Australia since 2019, with the majority of these being contractions. This includes media outlets closing, shrinking their services or ending their print editions.</p>
<p>But the decline is not limited to rural and regional areas. Our data also identifies thinning in metropolitan markets, with 135 contractions compared to 61 expansions. However, the data also suggests the nature of the changes in metropolitan markets differs from that of regional markets.</p>
<hr>
<figure class="align-center ">
<img alt="" src="https://images.theconversation.com/files/580633/original/file-20240308-26-6nsuxr.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/580633/original/file-20240308-26-6nsuxr.png?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=358&fit=crop&dpr=1 600w, https://images.theconversation.com/files/580633/original/file-20240308-26-6nsuxr.png?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=358&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/580633/original/file-20240308-26-6nsuxr.png?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=358&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/580633/original/file-20240308-26-6nsuxr.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=450&fit=crop&dpr=1 754w, https://images.theconversation.com/files/580633/original/file-20240308-26-6nsuxr.png?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=450&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/580633/original/file-20240308-26-6nsuxr.png?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=450&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">The changing Australian news landscape since 2019. The first column represents the total changes from 2019 to date; the second column reflects how many changes have occurred in the last year; and the third column reflects how many changes have occurred in the last quarter.</span>
<span class="attribution"><span class="source">Author provided</span></span>
</figcaption>
</figure>
<hr>
<p>Fifty-three percent of contractions in major cities were local suburban newspapers ending their print editions and shifting to digital-only delivery. And just over a third of contractions were outlets that ceased operations altogether, a share that has been steadily increasing. </p>
<p>In regional areas, we’ve seen more substantial changes with outlets closing (51% of regional contractions) or decreasing their service by cutting the frequency of publications or the level of output (21%). The shifting of content from print to digital represented just 16% of the changes seen in the regions.</p>
<p>Concerningly, we have also identified areas where news is completely lacking – so-called “news deserts”. According to our <a href="https://piji.com.au/news-mapping/reports-analysis/">latest quarterly data</a>, there are no print, digital or radio local news producers in five Australian local government areas. </p>
<p>Excluding radio, we could not identify any print or digital local news outlets in 29 local government areas. </p>
<p>Many of those areas are regional and remote areas – highlighting once again the discrepancy between metropolitan and regional news coverage.</p>
<hr>
<p><strong>Net change in local news producers by local government area</strong></p>
<p><iframe id="upx2E" class="tc-infographic-datawrapper" src="https://datawrapper.dwcdn.net/upx2E/1/" height="400px" width="100%" style="border: none" frameborder="0"></iframe></p>
<hr>
<h2>More data on the industry is vital</h2>
<p>This data also underscores where future support should be directed.</p>
<p>Local and especially regional news urgently needs support in the face of significant industry upheaval and transformation. There is a clear need for long-term engagement and collaboration between government and researchers – both independent and government-based – given the complexity of issues facing the industry.</p>
<p><a href="https://piji.com.au/news-mapping/australian-news-data-project/">Longitudinal data and independent analysis</a> will be of the utmost importance in this. Analysis must be at arm’s length from both government and industry, but should engage with each side, informed by daily practice and policy. </p>
<p>Impartial, third-party research will also assist with understanding and assessing the impact of any policy interventions, as well as tracking and informing industry transformation, whether that be changing business models or new start-ups. </p>
<p>We have known this for some time. In April 2022, the <a href="https://www.aph.gov.au/Parliamentary_Business/Committees/House/Communications/Regionalnewspapers/Report/section?id=committees/reportrep/024888/78985">Regional Newspapers Inquiry</a> pointed to the need for core, longitudinal industry data.</p>
<p>This is why PIJI <a href="https://piji.com.au/news-mapping/australian-news-index/">has gathered timely data on market changes</a> in news production across Australia, the location of these outlets and how they are connected with one another. This assists communities, researchers, industry leaders and policymakers to better understand the health of Australia’s news media landscape.</p>
<p>Such data can provide the impetus for policy decisions that will support news businesses and producers. Innovation is sorely needed in this area to address journalism’s broken business model.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/how-will-metas-refusal-to-pay-for-news-affect-australian-journalism-and-our-democracy-224872">How will Meta's refusal to pay for news affect Australian journalism – and our democracy?</a>
</strong>
</em>
</p>
<hr>
<h2>What could help?</h2>
<p>One potential new avenue of revenue would be the development of a not-for-profit journalism sector in Australia. </p>
<p>This has been <a href="https://piji.com.au/wp-content/uploads/2023/06/dickson-g-2021.-proposals-to-provide-news-organisations-tax-deductible-gifts.pdf">repeatedly recommended</a> in parliamentary and regulatory inquiries over the past decade. There is evidence from overseas, particularly the United States, to suggest a not-for-profit news sector would increase media diversity and address the lack of commercially viable options in investigative journalism or less-represented geographical, cultural and linguistic markets.</p>
<p>The <a href="https://www.pc.gov.au/inquiries/current/philanthropy">Productivity Commission’s inquiry into philanthropy</a> appears to be giving this option some consideration; its draft report, released last year, proposed extending deductible gift recipient status to public interest journalism. PIJI would welcome the support this could offer news producers and outlets.</p>
<p>There is also potential in commercial measures like research and development tax rebates for public interest journalism. Again, we can be guided by success overseas – Canada implemented a similar rebate system a few years ago.</p>
<p>Evidence and a clear focus on the role of news as a public good must lead the way in identifying paths forward to service our communities.</p><img src="https://counter.theconversation.com/content/225349/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Anna Draffin is the chief executive of the Public Interest Journalism Initiative, a neutral, independent think tank focused on the diversity and sustainability of public interest journalism in Australia. PIJI’s activities are funded by philanthropy. Its news mapping work is also currently supported by the federal government through the Department of Infrastructure, Transport, Regional Development, Communications and the Arts.</span></em></p><p class="fine-print"><em><span>Gary Dickson works for the Public Interest Journalism Initiative, which has received funding from the Department of Infrastructure, Transport, Regional Development, Communications and the Arts for news data research. </span></em></p><p class="fine-print"><em><span>Maia Germano does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>Australia needs to prioritise finding new funding to support journalism, particularly in remote and regional areas.Anna Draffin, Professional associate, University of CanberraGary Dickson, Research fellow, University of Technology SydneyMaia Germano, Sessional tutor, The University of MelbourneLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/2249662024-03-07T03:15:46Z2024-03-07T03:15:46ZFirst Newshub, now TVNZ: the news funding model is broken – but this would fix it<p>The announcement last week that <a href="https://www.rnz.co.nz/news/national/510398/newshub-to-shut-down-in-june">Newshub would be shut down</a> was not the “canary in the coalmine” some suggested – it was the explosion. If it is not to be the first of many, then New Zealand needs a new model for its fourth estate.</p>
<p>The fate of Newshub and today’s <a href="https://www.rnz.co.nz/news/national/511058/live-tvnz-to-cut-up-to-68-jobs-in-proposed-restructure">projected newsroom cuts at TVNZ</a> threaten to leave a significant gap in the news sector, particularly television. But beyond that, the causes and solutions are very much up for debate.</p>
<p>There are both specific institutional factors and deeper structural trends at play within the television and news sectors. And Newshub’s <a href="https://newsroom.co.nz/2024/02/29/a-plan-to-rescue-newshub-on-a-beer-budget/">tangled financial history</a> serves as a reminder of the dangers of foreign ownership of strategic media assets. </p>
<p>Beyond the shifting fortunes of one company, however, the local news ecology has faced wider structural problems. The imminent loss of so many working news producers and journalists makes finding workable solutions even more urgent.</p>
<p><div data-react-class="Tweet" data-react-props="{&quot;tweetId&quot;:&quot;1765445020255912375&quot;}"></div></p>
<h2>Fragmenting audiences</h2>
<p>Over the past 25 years, the TV sector’s share of the advertising market has <a href="https://www.asa.co.nz/industry/asa-advertising-turnover-report/">roughly halved</a>, from 34.3% in 1999 to just 17.7% by 2022.</p>
<p>The capture of advertising revenue by Google and Meta (the parent of Facebook and Instagram) has played a key role. Google alone now accounts for almost <a href="https://www.rnz.co.nz/news/in-depth/510750/bailout-warning-went-to-minister-melissa-lee-s-office-before-newshub-s-collapse">two-thirds</a> of the roughly NZ$1.8 billion digital advertising spend in New Zealand.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/with-the-end-of-newshub-the-slippery-slope-just-got-steeper-for-nz-journalism-and-democracy-224625">With the end of Newshub, the slippery slope just got steeper for NZ journalism and democracy</a>
</strong>
</em>
</p>
<hr>
<p>But the decline in TV revenues is also related to the fragmentation of audiences, as viewers shift to new on-demand services. TV3’s daily audience reach for its linear services <a href="https://www.nzonair.govt.nz/research/where-are-the-audiences-2023/">declined by almost 50%</a>, from 35% in 2014 to 17% in 2023.</p>
<p>Perhaps not surprisingly, Newshub’s demise has amplified calls from the news sector to expedite the <a href="https://www.legislation.govt.nz/bill/government/2023/0278/latest/whole.html">Fair News Digital Bargaining Bill</a>. This would require the online platforms to negotiate payments to news providers for hosting, linking and sharing news content. </p>
<p>Some estimates suggest this could be worth $30–50 million annually to the news sector. On the face of it, this may appear to be a logical solution – but it’s not that simple.</p>
<h2>A flawed bill</h2>
<p>There are a number of <a href="https://www.parliament.nz/resource/en-NZ/54SCEDSI_EVI_fc7faac0-2ec0-4e47-7ab5-08db9ebb2302_EDSI122/f9a94645093fe85c6e9450a7c377e42daeb7da04">problems with the proposed bill</a>. Fundamentally, it misdiagnoses the market relationship between the platforms and the news media.</p>
<p>The tech platforms’ capture of digital advertising stems not from their co-option of news content, but from the mass harvesting of audience data (enabling targeted advertising) and their algorithmic influence over content discovery.</p>
<p>The bill also provides no fixed benchmarks for payments. And the arbitration process in the event of non-agreement is potentially very complex, because different media outlets will have varying relationships with each platform.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/facebook-wont-keep-paying-australian-media-outlets-for-their-content-are-we-about-to-get-another-news-ban-224857">Facebook won't keep paying Australian media outlets for their content. Are we about to get another news ban?</a>
</strong>
</em>
</p>
<hr>
<p>Making those agreements will depend on the goodwill of the platforms. But arbitration could well determine that the advantages the platforms confer on news providers (increasing their visibility and directing traffic to their websites) outweigh the commercial benefits to the platforms of hosting or sharing news content.</p>
<p>Indeed, Meta’s resistance to the news bargaining frameworks in <a href="https://www.theguardian.com/australia-news/2024/mar/01/facebook-news-tab-shut-down-end-australia-journalism-funding-deals">Australia</a> and <a href="https://www.bbc.com/news/world-us-canada-67755133">Canada</a> underlines the risk of a platform exempting itself from bargaining obligations by prohibiting the hosting and sharing of news. </p>
<p>News media depending on platform payments might also be motivated to provide content that maximises value to the platforms – for example, populist or controversial content more likely to be shared. Or they may be less inclined to critically investigate issues involving their benefactors.</p>
<p>Ultimately, there is no guarantee any platform payments will actually be reinvested in news production, let alone commercially unattractive genres such as local government or regional reporting.</p>
<h2>A new form of funding</h2>
<p>There is no realistic possibility of the government bailing out Newshub or any other individual news outlet.</p>
<p>And while the news media’s function in upholding democratic processes and holding power to account remains vital, it doesn’t follow that market competition and plurality are sufficient to sustain it.</p>
<p>Indeed, it was the introduction of commercial competition for eyeballs and advertising that drove <a href="https://www.researchgate.net/publication/279455908_The_State_the_Media_and_Thin_Democracy">measurable declines</a> in the length and substance of television news through the 1990s.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/breaking-news-making-google-and-facebook-pay-nz-media-for-content-could-deliver-less-than-bargained-for-196030">Breaking news: making Google and Facebook pay NZ media for content could deliver less than bargained for</a>
</strong>
</em>
</p>
<hr>
<p>Democracy cannot thrive if the fourth estate is in a commercial race to the bottom. It requires diversity of perspectives and competition for substance that treats the audience as citizens, not just fodder for advertisers.</p>
<p>This requires a new form of funding and a new institutional arrangement. One way to achieve this would be through a small levy on digital advertising expenditure, and potentially other commercial revenues such as internet and streaming services. The revenue would be reinvested in news content through an independent agency on a contestable basis.</p>
<p>There are different possible mechanisms, but an initial model could apply a levy to digital advertising spend across the media sector. This would mean the advertising spend currently going to Google and Meta would generate the majority of the revenue. </p>
<p>Although the spend going to other media would, in principle, also incur the levy, there could be rebates for local content producers. News operators would, in any case, be the recipients of the journalism funding which the levy makes possible.</p>
<p>Even a 1% levy on the $1.8 billion digital advertising spend would generate as much revenue as the (now defunct) <a href="https://www.stuff.co.nz/national/300932677/public-interest-journalism-fund-closes">Public Interest Journalism Fund</a>. A 3% levy would equal the higher estimates of what the proposed Fair News Digital Bargaining Bill would deliver.</p>
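The arithmetic behind those figures can be checked directly. This is a back-of-envelope sketch using only the roughly NZ$1.8 billion digital advertising spend cited above; it is illustrative, not a revenue forecast:

```python
# Back-of-envelope check of the levy figures cited in the article.
ad_spend_nzd = 1.8e9  # roughly NZ$1.8 billion digital advertising spend

for rate in (0.01, 0.03):
    revenue_millions = rate * ad_spend_nzd / 1e6
    print(f"{rate:.0%} levy \u2248 NZ${revenue_millions:.0f} million per year")
```

A 1% levy yields about NZ$18 million a year and a 3% levy about NZ$54 million, which is how the comparison with the Public Interest Journalism Fund and the bargaining-bill estimates is reached.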
<h2>Collaborative news sharing</h2>
<p>Being administered by an independent agency (perhaps NZ On Air) would help ensure the levy supported news based on public service principles – including investigative, local government, regional and minority coverage – and that a wide range of news operations received support.</p>
<p>There is also a need for some form of collaborative news-sharing model. RNZ already shares its news content, and there have been proposals for a <a href="https://www.rnn.co.nz/">regional news network</a> to cover local issues often overlooked by the mainstream. </p>
<p>An independent, multi-platform news publisher model could underpin such an initiative. It would operate across broadcasting, print and online media, and allow members to make use of any pooled content on their own channels or websites. </p>
<p>A levy mechanism and public news publisher model would be a far better basis for rescuing New Zealand’s fourth estate than throwing the news media some crumbs from Big Tech’s table.</p><img src="https://counter.theconversation.com/content/224966/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Peter Thompson is a founding member and chair of the Better Public Media Trust. He has previously undertaken commissioned research for the Canadian Department of Heritage, the Department of Internal Affairs, the Ministry for Culture and Heritage, and NZ On Air. </span></em></p>Calls for the Fair News Digital Bargaining Bill to be fast-tracked are misguided. A better solution would be a straight levy on digital advertising to fund public interest news production.Peter Thompson, Associate Professor of Media Studies, Te Herenga Waka — Victoria University of WellingtonLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/2248572024-03-01T07:07:04Z2024-03-01T07:07:04ZFacebook won’t keep paying Australian media outlets for their content. Are we about to get another news ban?<p>Facebook’s parent company, Meta, <a href="https://about.fb.com/news/2024/02/update-on-facebook-news-us-australia/">has announced</a> it will stop paying for news content in Australia when the current deals it has expire. Meta will also cease news aggregation on the site.</p>
<p>Three years ago, the company signed deals with Australian news outlets after the government introduced laws requiring tech companies to pay for the news on their platforms. The law only comes into effect if no commercial deal is struck.</p>
<p>Meta has now decided that the cost of providing news in Australia is too high. Its reason for the change is to “better align our investments to our products and services people value the most”. That is, it saves money. </p>
<p>So what does this mean for news on Facebook? What can users expect to find on the platform?</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/the-news-is-fading-from-sight-on-big-social-media-platforms-where-does-that-leave-journalism-218522">The news is fading from sight on big social media platforms – where does that leave journalism?</a>
</strong>
</em>
</p>
<hr>
<h2>An unsurprising manoeuvre</h2>
<p>This decision was largely predictable, as it’s consistent with Meta’s actions in the UK, France, and Germany in December 2023. The same “deprecation” will occur simultaneously in the US. </p>
<p>Meta’s rationale is that news is “a small part of the Facebook experience for the vast majority of people” and is not a major reason people use the platform, as it “makes up less than 3% of what people around the world see in their Facebook feed”. It does not comment on the percentage in Australia.</p>
<p>Meta says “this does not impact our commitment to connecting people to reliable information on our platforms”. However, this “reliable information” is a reference to fact-checking in the context of misinformation. </p>
<p>Meta does not see a link between reliable information and Australian news. It has not addressed the issue of the sustainability of news journalism in Australia.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/facebooks-news-blockade-in-australia-shows-how-tech-giants-are-swallowing-the-web-155832">Facebook's news blockade in Australia shows how tech giants are swallowing the web</a>
</strong>
</em>
</p>
<hr>
<h2>So what will Facebook look like?</h2>
<p>Facebook says that it will simply remove the <a href="https://www.facebook.com/business/help/417376132287321?id=204021664031159">dedicated tab</a> on the site for news content. </p>
<p>For many users, this will make little difference. However, for those who use Facebook as a news aggregator, access to links to news publishers will disappear. </p>
<p>Facebook users will need to go to the Facebook page of their favourite news publishers in order to be able to keep up with events. This means having to “follow” all of the news publishers with which Facebook currently has a commercial agreement.</p>
<p>Unlike the approach in 2021, Facebook is not going to <a href="https://www.abc.net.au/news/2021-02-18/facebook-to-restrict-sharing-or-viewing-news-in-australia/13166208">shut down</a> all of the pages that its systems thought were “media pages” (including emergency services and helplines such as 1-800-RESPECT). </p>
<p>Instead, Meta is encouraging news publishers to buy the tech giant’s services to increase their own traffic. </p>
<p>However, this means Meta expects that the flow of funds will be from news publishers to Meta, rather than the other way around.</p>
<h2>What does this mean for news?</h2>
<p>There is already a concern that social media is replacing legacy news sources.</p>
<p>Meta has consistently argued that news is not a driver of its business. In <a href="https://about.fb.com/wp-content/uploads/2020/08/Facebooks-response-to-Australias-proposed-News-Media-and-Digital-Platforms-Mandatory-Bargaining-Code.pdf">submissions to government</a>, it has sought to differentiate itself from Google. In fact, news publishers often report having their <a href="https://theconversation.com/the-news-is-fading-from-sight-on-big-social-media-platforms-where-does-that-leave-journalism-218522">content buried</a> by algorithms over which they have no control. </p>
<p>Meta contends that news is so unimportant that it would rather not have news options than pay news publishers for content. </p>
<p>The Facebook news ban of 2021 was largely in response to the government’s <a href="https://www.acma.gov.au/news-media-bargaining-code">News Media Bargaining Code</a> – an arrangement in which news organisations could negotiate with big tech companies over payment and inclusion of their content on digital platforms. </p>
<p>In contrast, Google has previously been willing to enter into commercial deals or to launch news aggregator services rather than having a code imposed on it. </p>
<p>It is not clear whether Google will change its view in Australia as a result of the Meta decision. The News Media Bargaining Code has the potential to apply to both businesses. However, Google relies more on news content than Meta. </p>
<h2>Can the government do anything?</h2>
<p>The relevant ministers, Stephen Jones and Michelle Rowland, have already <a href="https://minister.infrastructure.gov.au/rowland/media-release/metas-news-content-announcement">referred to</a> the decision as a “dereliction of its commitment to the sustainability of Australian news media.” </p>
<p>As a practical matter, the News Media Bargaining Code is only triggered if there is no commercial deal in play. The current commercial deals with news outlets are <a href="https://www.abc.net.au/news/2024-03-01/meta-won-t-renew-deal-with-australian-news-media/103533874">due to expire</a> in a few months. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/this-weeks-changes-are-a-win-for-facebook-google-and-the-government-but-what-was-lost-along-the-way-155865">This week's changes are a win for Facebook, Google and the government — but what was lost along the way?</a>
</strong>
</em>
</p>
<hr>
<p>Meta has said that it “will not offer new Facebook products specifically for news publishers in the future”. It will let the existing commercial agreements lapse in Australia, France, and Germany as they already have in the UK and the US.</p>
<p>The treasurer is now faced with a tough decision. He can “designate” Meta under the code and force it to the bargaining table, or he can agree that news is not a driver of Facebook use. This decision will need to take into account the issue of news journalism sustainability. </p>
<p>Designating Meta, however, risks a repeat of the 2021 shutdown in Australia and a similar one in <a href="https://www.bbc.com/news/world-us-canada-67755133">Canada</a> last year.</p><img src="https://counter.theconversation.com/content/224857/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Rob Nicholls received funding from the Australian Research Council. He has previously received funding from Google (at the University of New South Wales). </span></em></p>The news page on Facebook will go, and with it, the flow of money to some Australian media outlets. But will the news content disappear too?Rob Nicholls, Visiting Fellow, University of Technology SydneyLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/2230862024-02-09T00:56:45Z2024-02-09T00:56:45ZDesperate for Taylor Swift tickets? Here are cybersecurity tips to stay safe from scams<p>The global superstar Taylor Swift is bringing her Eras tour to Australia later this month, with sold-out shows in Sydney and Melbourne. With Swifties numbering in the thousands, fans who didn’t initially secure tickets are understandably desperate to find some. </p>
<p>Enter the many fraudsters seizing this opportunity. Sadly, the Australian Competition and Consumer Commission (ACCC) <a href="https://www.accc.gov.au/media-release/swifties-beware-scammers-are-in-their-cruel-summer-era">has reported over A$135,000</a> already lost to ticket fraud for the Swift concerts. The actual losses are likely to be much higher. </p>
<p>Hackers are also targeting the accounts of ticket holders in order to steal and resell legitimate tickets.</p>
<p>So how can you protect yourself if you are looking to buy or sell Eras tickets, or just want to keep your Ticketek account safe?</p>
<h2>The problem is ticket fraud</h2>
<p>In recent years, there has been a shift to electronic ticketing for events. This uses a unique barcode (or QR code) which can be dynamic. In the case of Ticketek, electronic tickets are linked to the purchaser’s phone number to reduce fraud.</p>
<p>Electronic ticketing aims to overcome a range of problems, such as counterfeit tickets, duplicate tickets and ticket scalping. Unsurprisingly, scammers have updated their techniques, too. </p>
<p>When purchasing tickets, it can be difficult to know if it is an authentic website, a genuine ticket and a legitimate transaction. </p>
<p>For example, scammers are selling <a href="https://www.scamwatch.gov.au/news-alerts/scam-alert-taylor-swift-tickets">non-existent tickets</a> across a range of social media platforms. They are also creating fake, legitimate-looking websites that lure in unsuspecting victims to hand over their personal details and money in return for heartache. </p>
<p>Many fraudsters are also tricking people with ticket sales on Facebook. Excited fans send the requested payment (usually a cash transfer), but will not receive their promised tickets and are not likely to recover the money.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/574515/original/file-20240208-26-e030ed.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="An example Facebook post advertising a " src="https://images.theconversation.com/files/574515/original/file-20240208-26-e030ed.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/574515/original/file-20240208-26-e030ed.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=486&fit=crop&dpr=1 600w, https://images.theconversation.com/files/574515/original/file-20240208-26-e030ed.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=486&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/574515/original/file-20240208-26-e030ed.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=486&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/574515/original/file-20240208-26-e030ed.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=610&fit=crop&dpr=1 754w, https://images.theconversation.com/files/574515/original/file-20240208-26-e030ed.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=610&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/574515/original/file-20240208-26-e030ed.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=610&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Facebook has many groups where Taylor Swift fans are on the lookout for tickets, making them vulnerable to scammers.</span>
<span class="attribution"><span class="source">Facebook</span></span>
</figcaption>
</figure>
<h2>Hacked accounts</h2>
<p>The prevalence of hacking drives a lot of the ticket fraud. This is particularly evident through the only official reseller of Eras tickets (and many other events) – Ticketek Marketplace. </p>
<p>Some people have had their Ticketek accounts <a href="https://au.news.yahoo.com/taylor-swift-fans-see-tickets-disappear-ticketek-works-to-curb-scammers-203020815.html">hacked</a>, and offenders have been able to make transactions without the owner’s consent. By the time they realise, it is too late – the owner may have lost their tickets with nothing in return. </p>
<p>There are also many <a href="https://www.9news.com.au/national/taylor-swift-ticket-scammers-hunt-victims-on-facebook-for-australia-eras-tour/d1776810-154e-4f52-aa40-6375eb4285d8">reports</a> of victims whose known contacts (family or friends) message them on social media offering the chance to buy tickets. This approach reduces red flags or suspicions, as it uses existing trust and relationships to get a payment.</p>
<p>However, victims soon find their family member or friend has had their account hacked. Again, there is no ticket and no chance of recovering funds. </p>
<p>Hacking genuine accounts to perpetrate fraud is common. Recently, <a href="https://www.abc.net.au/news/2024-01-31/booking-com-scams-surge-phishing-australians-thousands-dollars/103390292">hackers gained unauthorised access</a> to hotel provider accounts on the popular accommodation website Booking.com. They then communicated with guests to gain direct payments and financial details. </p>
<h2>If I’d only played it safe</h2>
<p>There are no foolproof guarantees when trying to buy resold tickets. But you can look out for warning signs and take steps to reduce the risk of fraud or being hacked.</p>
<p><strong>Only buy tickets through the authorised seller website.</strong> In the case of Swift, that’s Ticketek Marketplace. While customers are reporting <a href="https://www.smh.com.au/culture/music/look-what-you-made-me-do-desperate-swifties-abandon-ticketek-in-risky-hunt-for-tickets-20240118-p5ey6b.html">long wait times</a> and less than satisfactory user experiences right now, it is still the most likely place to have genuine tickets. </p>
<hr>
<p><strong>Do not, under any circumstances, buy tickets on social media such as Facebook.</strong> This includes from known contacts. There is no guarantee that the ticket exists or the person is genuine. There is also no recourse for lost payment. </p>
<p><strong>Never provide or confirm your payment details outside of Ticketek.</strong> Do not transfer any cash via a bank transfer to a seller. There are no seller fees on Ticketek Marketplace, and no reason to pay outside of the regulated system. </p>
<p><strong>Ensure you have strong passwords on all your accounts.</strong> Do not use the same password on several accounts. This is vitally important to protect yourself against many types of harm, not just ticket fraud. </p>
<p><strong>Enable two-factor authentication on any accounts you can.</strong> This provides an additional layer of protection should your password be compromised.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/what-is-multi-factor-authentication-and-how-should-i-be-using-it-191591">What is multi-factor authentication, and how should I be using it?</a>
</strong>
</em>
</p>
<hr>
<p><strong>Use a credit card where possible</strong> rather than debit card or cash transfers. You may be able to dispute a transaction or charge if you have used your credit card and may be able to recover any lost funds.</p>
<p><strong>Take screenshots of any communications and transactions</strong> when purchasing tickets online. While this will not prevent fraud, it does make it easier to report an incident or figure out what happened. </p>
<p><strong>Always confirm in person or over the phone with any known contacts</strong> who have messaged an offer or requested funds. With the prevalence of hacking into accounts, you may not be communicating with the person you think you are. </p>
<h2>No one teaches you what to do</h2>
<p>If you think you have been a victim of ticket fraud, contact your bank or financial institution immediately. The quicker you can do this, the better. </p>
<p>You should also contact the platform through which you made the transaction (such as Ticketek Marketplace). </p>
<p>You can report any financial losses to <a href="https://www.cyber.gov.au/report-and-recover/report">ReportCyber</a>, which is an online police reporting portal for cyber incidents, as well as <a href="https://www.scamwatch.gov.au/report-a-scam">Scamwatch</a>, to assist with education and awareness activities.</p>
<p>If you need support or assistance for any compromise of your identity, contact <a href="https://www.idcare.org/">iDcare</a>.</p><img src="https://counter.theconversation.com/content/223086/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Cassandra Cross has previously received funding from the Australian Institute of Criminology and the Cybersecurity Cooperative Research Centre.</span></em></p>Australian fans who didn’t manage to snag Eras tickets are on the hunt – and scammers are capitalising on this. Here’s everything you need to know to protect yourself.Cassandra Cross, Associate Dean (Learning & Teaching) Faculty of Creative Industries, Education and Social Justice, Queensland University of TechnologyLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/2224082024-02-07T12:03:02Z2024-02-07T12:03:02ZUsing AI to monitor the internet for terror content is inescapable – but also fraught with pitfalls<figure><img src="https://images.theconversation.com/files/573450/original/file-20240205-17-4tssh6.jpg?ixlib=rb-1.1.0&rect=33%2C0%2C3693%2C2460&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">shutterstock</span> <span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/technology-security-concept-personal-authentication-system-709257292">metamorworks/Shutterstock</a></span></figcaption></figure><p>Every minute, millions of social media posts, photos and videos flood the internet. <a href="https://www.socialpilot.co/blog/social-media-statistics">On average</a>, Facebook users share 694,000 stories, X (formerly Twitter) users post 360,000 posts, Snapchat users send 2.7 million snaps and YouTube users upload more than 500 hours of video. </p>
<p>This vast ocean of online material needs to be constantly monitored for harmful or illegal content, like promoting terrorism and violence. </p>
<p>The sheer volume of content means that it’s not possible for people to inspect and check all of it manually, which is why automated tools, including artificial intelligence (AI), are essential. But such tools also have their limitations. </p>
<p>The concerted effort in recent years to <a href="https://www.tandfonline.com/doi/full/10.1080/1057610X.2023.2222901">develop tools</a> for the identification and removal of online terrorist content has, in part, been fuelled by the emergence of new laws and regulations. This includes the EU’s terrorist content online <a href="https://eur-lex.europa.eu/legal-content/EN/TXT/HTML/?uri=CELEX%3A32021R0784">regulation</a>, which requires hosting service providers to remove terrorist content from their platform within one hour of receiving a removal order from a competent national authority.</p>
<h2>Behaviour and content-based tools</h2>
<p>In broad terms, there are two types of tools used to root out terrorist content. The first looks at certain account and message behaviour. This includes how old the account is, the use of trending or unrelated hashtags and abnormal posting volume. </p>
<p>In many ways, this is similar to spam detection, in that it does not pay attention to content, and is <a href="https://www.resolvenet.org/research/remove-impede-disrupt-redirect-understanding-combating-pro-islamic-state-use-file-sharing">valuable for detecting</a> the rapid dissemination of large volumes of content, which are often bot-driven. </p>
<p>The second type of tool is content-based. It focuses on linguistic characteristics, word use, images and web addresses. Automated content-based tools take <a href="https://tate.techagainstterrorism.org/news/tcoaireport">one of two approaches</a>. </p>
<p><strong>1. Matching</strong></p>
<p>The first approach is based on comparing new images or videos to an existing database of images and videos that have previously been identified as terrorist in nature. One challenge here is that terror groups are known to try to evade such methods by producing subtle variants of the same piece of content. </p>
<p>After the Christchurch terror attack in New Zealand in 2019, for example, hundreds of visually distinct versions of the livestream video of the atrocity <a href="https://about.fb.com/news/2019/03/technical-update-on-new-zealand/">were in circulation</a>. </p>
<p>So, to combat this, matching-based tools generally use <a href="https://about.fb.com/news/2019/08/open-source-photo-video-matching/">perceptual hashing</a> rather than cryptographic hashing. Hashes are a bit like digital fingerprints, and cryptographic hashing acts like a secure, unique identity tag. Even changing a single pixel in an image drastically alters its fingerprint, preventing false matches. </p>
<p>Perceptual hashing, on the other hand, focuses on similarity. It overlooks minor changes like pixel colour adjustments, but identifies images with the same core content. This makes perceptual hashing more resilient to tiny alterations to a piece of content. But it also means that the hashes are not entirely random, and so could potentially be used to try to <a href="https://towardsdatascience.com/black-box-attacks-on-perceptual-image-hashes-with-gans-cc1be11f277">recreate</a> the original image.</p>
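The contrast between the two kinds of hashing can be sketched in a few lines of Python. This is an illustrative toy only: the 8x8 "average hash" below is one of the simplest perceptual-hashing schemes, and the pixel data is invented for demonstration; production systems (such as Meta's PDQ or Microsoft's PhotoDNA) use far more robust transforms.

```python
import hashlib

def average_hash(pixels):
    """Perceptual 'average hash' of an 8x8 grayscale image: one bit per
    pixel, set if that pixel is brighter than the image's mean brightness."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return [1 if p > mean else 0 for p in flat]

def hamming(h1, h2):
    """Number of differing bits; a small distance means similar images."""
    return sum(a != b for a, b in zip(h1, h2))

# A synthetic 8x8 "image" and a near-duplicate with one pixel tweaked.
original = [[(r * 8 + c) * 4 for c in range(8)] for r in range(8)]
variant = [row[:] for row in original]
variant[0][0] += 3  # a tiny alteration, like re-encoding noise

# The perceptual hashes stay close (here, identical), so the variant
# still matches the known image.
print(hamming(average_hash(original), average_hash(variant)))  # → 0

# By contrast, the cryptographic hashes differ completely after a
# one-pixel change, so a match-by-equality check would miss the variant.
orig_bytes = bytes(p for row in original for p in row)
var_bytes = bytes(p for row in variant for p in row)
print(hashlib.sha256(orig_bytes).hexdigest() ==
      hashlib.sha256(var_bytes).hexdigest())  # → False
```

This is also why perceptual hashes leak some information about the image: similar inputs produce similar outputs by design, which is exactly the property a cryptographic hash is built to avoid.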
<figure class="align-center ">
<img alt="A close up of a mobile phone screen displaying several social media apps." src="https://images.theconversation.com/files/573540/original/file-20240205-25-jovm4l.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/573540/original/file-20240205-25-jovm4l.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/573540/original/file-20240205-25-jovm4l.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/573540/original/file-20240205-25-jovm4l.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/573540/original/file-20240205-25-jovm4l.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/573540/original/file-20240205-25-jovm4l.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/573540/original/file-20240205-25-jovm4l.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">Millions of posts, images and videos are uploaded to social media platforms every minute.</span>
<span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/moscow-russia-29072023-new-elon-musks-2339442245">Viktollio/Shutterstock</a></span>
</figcaption>
</figure>
<p><strong>2. Classification</strong></p>
<p>The second approach relies on classifying content. It <a href="https://www.cambridgeconsultants.com/insights/whitepaper/ofcom-use-ai-online-content-moderation">uses</a> machine learning and other forms of AI, such as natural language processing. To achieve this, the AI needs many examples, such as texts labelled as terrorist content (or not) by human content moderators. By analysing these examples, the AI learns which features distinguish different types of content, allowing it to categorise new content on its own. </p>
<p>Once trained, the algorithms are then able to predict whether a new item of content belongs to one of the specified categories. These items may then be removed or flagged for human review. </p>
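As a rough illustration of this train-then-predict pattern, here is a minimal naive Bayes text classifier in Python. The tiny training set and the "flagged"/"benign" labels are invented for demonstration; real moderation systems train on very large moderator-labelled datasets and use far richer models than word counts.

```python
from collections import Counter
import math

def train(examples):
    """examples: list of (text, label) pairs labelled by human moderators.
    Returns per-label word counts and label frequencies."""
    counts, labels = {}, Counter()
    for text, label in examples:
        labels[label] += 1
        counts.setdefault(label, Counter()).update(text.lower().split())
    return counts, labels

def predict(counts, labels, text):
    """Return the label with the highest log-posterior, using add-one
    smoothing so unseen words don't zero out a score."""
    vocab = {w for c in counts.values() for w in c}
    best, best_score = None, -math.inf
    for label, freq in labels.items():
        total = sum(counts[label].values())
        score = math.log(freq / sum(labels.values()))
        for w in text.lower().split():
            score += math.log((counts[label][w] + 1) / (total + len(vocab)))
        if score > best_score:
            best, best_score = label, score
    return best

training = [
    ("join our cause attack now", "flagged"),
    ("recruitment video attack propaganda", "flagged"),
    ("holiday photos with the family", "benign"),
    ("new recipe video for dinner", "benign"),
]
counts, labels = train(training)
print(predict(counts, labels, "propaganda video join now"))  # → flagged
```

New items scored as "flagged" would then be removed or routed to human review, as described above.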
<p>This approach also <a href="https://tate.techagainstterrorism.org/news/tcoaireport">faces challenges</a>, however. Collecting and preparing a large dataset of terrorist content to train the algorithms is time-consuming and <a href="https://oro.open.ac.uk/69799/">resource-intensive</a>. </p>
<p>The training data may also become dated quickly, as terrorists make use of new terms and discuss new world events and current affairs. Algorithms also have difficulty understanding context, including <a href="https://doi.org/10.1177/2053951719897945">subtlety and irony</a>. They also <a href="https://cdt.org/wp-content/uploads/2017/11/Mixed-Messages-Paper.pdf">lack</a> cultural sensitivity, including variations in dialect and language use across different groups. </p>
<p>These limitations can have important offline effects. There have been documented failures to remove hate speech in countries such as <a href="https://restofworld.org/2021/why-facebook-keeps-failing-in-ethiopia/">Ethiopia</a> and <a href="https://www.newamerica.org/the-thread/facebooks-content-moderation-language-barrier/">Romania</a>, while free speech activists in countries such as <a href="https://www.middleeasteye.net/news/revealed-seven-years-later-how-facebook-shuts-down-free-speech-egypt">Egypt</a>, <a href="https://syrianobserver.com/news/58430/facebook-deletes-accounts-of-assad-opponents.html">Syria</a> and <a href="https://www.accessnow.org/transparency-required-is-facebooks-effort-to-clean-up-operation-carthage-damaging-free-expression-in-tunisia/">Tunisia</a> have reported having their content removed.</p>
<h2>We still need human moderators</h2>
<p>So, in spite of advances in AI, human input remains essential. It is important for maintaining databases and datasets, assessing content flagged for review and operating appeals processes for when decisions are challenged. </p>
<p>But this is demanding and draining work, and there have been <a href="https://www.wired.co.uk/article/facebook-content-moderators-ireland">damning reports</a> regarding the working conditions of moderators, with many tech companies such as Meta <a href="https://www.stern.nyu.edu/experience-stern/faculty-research/who-moderates-social-media-giants-call-end-outsourcing">outsourcing</a> this work to third-party vendors. </p>
<p>To address this, we <a href="https://tate.techagainstterrorism.org/news/tcoaireport">recommend</a> the development of a set of minimum standards for those employing content moderators, including mental health provision. There is also potential to develop AI tools to safeguard the wellbeing of moderators. This would work, for example, by blurring out areas of images so that moderators can reach a decision without viewing disturbing content directly. </p>
<p>But at the same time, few, if any, platforms have the resources needed to develop automated content moderation tools and employ a sufficient number of human reviewers with the required expertise. </p>
<p>Many platforms have turned to off-the-shelf products. It is estimated that the content moderation solutions market will be <a href="https://www.prnewswire.com/news-releases/content-moderation-solutions-market-to-cross-us-32-bn-by-2031-tmr-report-301514155.html">worth $32bn by 2031</a>. </p>
<p>But caution is needed here. Third-party providers are not currently subject to the same level of oversight as tech platforms themselves. They may rely disproportionately on automated tools, with insufficient human input and a lack of transparency regarding the datasets used to train their algorithms.</p>
<p>So, collaborative initiatives between governments and the private sector are essential. For example, the EU-funded <a href="https://tate.techagainstterrorism.org/">Tech Against Terrorism Europe</a> project has developed valuable resources for tech companies. There are also examples of automated content moderation tools being made openly available like Meta’s <a href="https://about.fb.com/news/2022/12/meta-launches-new-content-moderation-tool/">Hasher-Matcher-Actioner</a>, which companies can use to build their own database of hashed terrorist content. </p>
<p>International organisations, governments and tech platforms must prioritise the development of such collaborative resources. Without this, effectively addressing online terror content will remain elusive.</p><img src="https://counter.theconversation.com/content/222408/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Stuart Macdonald receives funding from the EU Internal Security Fund for the project Tech Against Terrorism Europe (ISF-2021-AG-TCO-101080101). </span></em></p><p class="fine-print"><em><span>Ashley A. Mattheis receives funding from the EU Internal Security Fund for the project Tech Against Terrorism Europe (ISF-2021-AG-TCO-101080101).</span></em></p><p class="fine-print"><em><span>David Wells receives funding from the Council of Europe to conduct an analysis of emerging patterns of misuse of technology by terrorist actors (ongoing)</span></em></p>The complex task of tackling online terror needs human eyes as well as artificial intelligence.Stuart Macdonald, Professor of Law, Swansea UniversityAshley A. Mattheis, Postdoctoral Researcher, School of Law and Government, Dublin City UniversityDavid Wells, Honorary Research Associate at the Cyber Threats Research Centre, Swansea UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/2222562024-02-01T13:33:05Z2024-02-01T13:33:05ZAre social media apps ‘dangerous products’? 2 scholars explain how the companies rely on young users but fail to protect them<figure><img src="https://images.theconversation.com/files/572539/original/file-20240131-19-ltvgx5.jpg?ixlib=rb-1.1.0&rect=0%2C0%2C4929%2C3283&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">The CEOs of Discord, Snap, TikTok, X and Meta prepare to testify before the Senate Judiciary Committee on Jan. 31, 2024.</span> <span class="attribution"><a class="source" href="https://www.gettyimages.com/detail/news-photo/jason-citron-ceo-of-discord-evan-spiegel-ceo-of-snap-shou-news-photo/1975356383">Alex Wong/Getty Images</a></span></figcaption></figure><p>“You have blood on your hands.”</p>
<p>“I’m sorry for everything you have all been through.”</p>
<p>These quotes, the first from Sen. Lindsey Graham, R-S.C., speaking to Meta CEO Mark Zuckerberg, and the second from Zuckerberg to families of victims of online child abuse in the audience, are highlights from an extraordinary day of <a href="https://www.judiciary.senate.gov/committee-activity/hearings/big-tech-and-the-online-child-sexual-exploitation-crisis">testimony before the Senate Judiciary Committee </a>about protecting children online. </p>
<p>But perhaps the most telling quote from the Jan. 31, 2024, hearing came not from the CEOs of Meta, TikTok, X, Discord or Snap but from Sen. Graham in his opening statement: Social media platforms “as they are currently designed and operate are dangerous products.”</p>
<p>We are <a href="https://scholar.google.com/citations?hl=en&user=yu4Ew7gAAAAJ&view_op=list_works&sortby=pubdate">university researchers</a> <a href="https://scholar.google.com/citations?hl=en&user=AkbGPz4AAAAJ&view_op=list_works&sortby=pubdate">who study</a> how social media organizes news, information and communities. Whether or not social media apps meet the legal definition of “<a href="https://dictionary.findlaw.com/definition/unreasonably-dangerous.html">unreasonably dangerous products</a>,” the social media companies’ business models do rely on having millions of young users. At the same time, we believe that the companies have not invested sufficient resources to effectively protect those users.</p>
<p>Mobile device use by children and teens <a href="https://www.edweek.org/leadership/kids-screen-time-rose-during-the-pandemic-and-stayed-high-thats-a-problem/2023/02">skyrocketed during the pandemic and has stayed high</a>. Naturally, teens want to be where their friends are, be it the skate park or on social media. In 2022, there were an estimated 49.8 million users age 17 and under of YouTube, 19 million of TikTok, 18 million of Snapchat, 16.7 million of Instagram, 9.9 million of Facebook and 7 million of Twitter, <a href="https://doi.org/10.1371/journal.pone.0295337">according to a recent study</a> by researchers at Harvard’s Chan School of Public Health. </p>
<p>Teens are a significant revenue source for social media companies. Revenue from users 17 and under of social media <a href="https://doi.org/10.1371/journal.pone.0295337">was US$11 billion</a> in 2022, according to the Chan School study. Instagram netted nearly $5 billion, while TikTok and YouTube each accrued over $2 billion. Teens mean green.</p>
<p>Social media poses a <a href="https://www.psychiatrist.com/news/surgeon-general-advisory-social-media-poses-profound-risk-of-harm-to-kids/">range of risks for teens</a>, from exposing them to harassment, bullying and sexual exploitation to encouraging eating disorders and suicidal ideation. For Congress to take meaningful action on protecting children online, we identify three issues that need to be accounted for: age, business model and content moderation.</p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/yUAfRod2xgI?wmode=transparent&start=261" frameborder="0" allowfullscreen=""></iframe>
<figcaption><span class="caption">Following vigorous prompting from Sen. Josh Hawley, R-Mo., Meta CEO Mark Zuckerberg apologized to families of victims of online child abuse.</span></figcaption>
</figure>
<h2>How old are you?</h2>
<p>Social media companies have an incentive to look the other way in terms of their users’ ages. Otherwise they would have to spend the resources to moderate their content appropriately. Millions of underage users – those under 13 – are an “<a href="https://www.nytimes.com/2023/11/25/technology/instagram-meta-children-privacy.html">open secret</a>” at Meta. Meta has <a href="https://about.fb.com/news/2022/06/new-ways-to-verify-age-on-instagram/">described some potential strategies</a> to verify user ages, like requiring identification or video selfies, and using AI to guess their age based on “Happy Birthday” messages. </p>
<p>However, the accuracy of these methods is not publicly open to scrutiny, so it’s difficult to audit them independently.</p>
<p>Meta has stated that <a href="https://about.fb.com/news/2023/11/online-teen-safety-legislation-is-needed/">online teen safety legislation is needed</a> to prevent harm, but the company points to app stores, currently dominated by Apple and Google, as the place where age verification should happen. However, these guardrails can be easily circumvented by accessing a social media platform’s website rather than its app.</p>
<h2>New generations of customers</h2>
<p>Teen adoption is crucial for continued growth of all social media platforms. The <a href="https://www.wsj.com/articles/the-facebook-files-11631713039?mod=bigtop-breadcrumb">Facebook Files</a>, an investigation based on a review of company documents, showed that Instagram’s growth strategy relies on teens helping family members, particularly younger siblings, get on the platform. Meta claims it optimizes for “meaningful social interaction,” prioritizing family and friends’ content over other interests. However, Instagram allows pseudonymity and multiple accounts, which makes parental oversight even more difficult.</p>
<p>On Nov. 7, 2023, <a href="https://www.judiciary.senate.gov/imo/media/doc/2023-11-07_-_testimony_-_bejar.pdf">Arturo Bejar</a>, a former senior engineer at Facebook, testified before Congress. At Meta he surveyed teen Instagram users and found 24% of 13- to 15-year-olds said they had received unwanted advances within the past seven days, a fact he characterizes as “likely the largest-scale sexual harassment of teens to have ever happened.” Meta has since <a href="https://about.fb.com/news/2024/01/introducing-stricter-message-settings-for-teens-on-instagram-and-facebook/">implemented restrictions</a> on direct messaging in its products for underage users.</p>
<p>But to be clear, widespread harassment, bullying and solicitation are part of the landscape of social media, and it’s going to take more than parents and app stores to rein them in.</p>
<p>Meta recently announced that it is aiming to provide teens with “<a href="https://about.fb.com/news/2024/01/teen-protections-age-appropriate-experiences-on-our-apps/">age-appropriate experiences</a>,” in part by prohibiting searches for terms related to suicide, self-harm and eating disorders. However, these steps don’t stop online communities that promote these harmful behaviors from flourishing on the company’s social media platforms. It takes a carefully trained team of human moderators to monitor and enforce terms of service violations for dangerous groups.</p>
<h2>Content moderation</h2>
<p>Social media companies point to the promise of artificial intelligence to moderate content and provide safety on their platforms, but AI is not a silver bullet for managing human behavior. Communities adapt quickly to AI moderation, augmenting banned words with purposeful misspellings and creating backup accounts to prevent getting kicked off a platform.</p>
<p>Human content moderation is also problematic, given social media companies’ business models and practices. Since 2022, <a href="https://techcrunch.com/2024/01/25/tech-layoffs-2023-list/">social media companies</a> have implemented massive layoffs that struck at the heart of their trust and safety operations and weakened content moderation across the industry. </p>
<p>Congress will need hard data from the social media companies – data the companies have not provided to date – to assess the appropriate ratio of moderators to users.</p>
<h2>The way forward</h2>
<p>In health care, professionals have a duty to warn if they believe something dangerous might happen. Tech companies operate under no such duty: when uncomfortable truths surface in corporate research, little is done to inform the public of threats to safety. Congress could mandate reporting when internal studies reveal damaging outcomes. </p>
<p>Helping teens today will require social media companies to invest in human content moderation and meaningful age verification. But even that is not likely to fix the problem. The challenge is facing the reality that social media as it exists today thrives on having legions of young users spending significant time in environments that put them at risk. These dangers for young users are baked into the design of contemporary social media, which requires much clearer statutes about who polices social media and when intervention is needed.</p>
<p>One of the motives for tech companies not to segment their user base by age, which would better protect children, is how it would affect advertising revenue. Congress has limited tools available to enact change, such as enforcing laws about advertising transparency, including “know your customer” rules. Especially as AI accelerates targeted marketing, social media companies are going to continue making it easy for advertisers to reach users of any age. But if advertisers knew what proportion of ads were seen by children, rather than adults, they may think twice about where they place ads in the future.</p>
<p>Despite a number of high-profile hearings on the harms of social media, Congress has not yet passed legislation to protect children or make social media platforms liable for the content published on their platforms. But with so many young people online post-pandemic, it’s up to Congress to implement guardrails that ultimately put privacy and community safety at the center of social media design.</p><img src="https://counter.theconversation.com/content/222256/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Joan Donovan is on the board of Free Press and the founder of the Critical Internet Studies Institute.</span></em></p><p class="fine-print"><em><span>Sara Parker works for the Media Ecosystem Observatory at McGill University. Their work is largely funded by the Government of Canada. </span></em></p>As legislators rail against social media companies, the companies continue to put millions of young people at risk. Here’s how − and what can be done about it.Joan Donovan, Assistant Professor of Journalism and Emerging Media Studies, Boston UniversitySara Parker, Research Analyst at the Media Ecosystem Observatory, McGill UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/2033642024-01-11T17:16:09Z2024-01-11T17:16:09ZRedundancies have unintended consequences for all employees, even those who keep their jobs<figure><img src="https://images.theconversation.com/files/519774/original/file-20230406-16-wkkmb9.jpg?ixlib=rb-1.1.0&rect=226%2C197%2C6248%2C3420&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">
</span> <span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/portrait-shot-asian-sad-jobless-businesswoman-2179277135">Bangkok Click Studio/Shutterstock</a></span></figcaption></figure><p>Tech giants including X (then known as <a href="https://www.cnbc.com/2022/11/02/twitter-reportedly-ready-to-cut-about-3700-employees.html">Twitter</a>) and <a href="https://edition.cnn.com/2023/03/14/tech/meta-layoffs">Facebook owner Meta</a> announced thousands of job cuts globally in 2022 and 2023, as did other firms like entertainment company <a href="https://edition.cnn.com/2023/03/27/media/disney-layoffs/index.html">Disney</a>, consultancy firm <a href="https://www.reuters.com/business/finance/kpmg-lay-off-about-6-deal-advisory-staff-uk-source-2023-10-17/">KPMG</a> and phone company <a href="https://news.sky.com/story/vodafone-plans-11-000-job-cuts-as-new-boss-rues-performance-12881966">Vodafone</a>. And let’s not forget those making redundancies as a result of company collapses such as UK retailer <a href="https://news.sky.com/story/further-9-100-wilko-employees-to-be-made-redundant-after-rescue-deal-collapses-administrators-say-12959338#:%7E:text=News%20%7C%20Sky%20News-,Further%209%2C100%20Wilko%20employees%20to%20be%20made%20redundant%20after%20rescue,find%20a%20buyer%20for%20them.">Wilko</a>. In the UK alone, the number of <a href="https://www.gqlittler.com/resources/news-and-views/spike-in-redundancies-for-uk-businesses.htm">planned redundancies</a> by companies increased by 54% over the last year, from 153,635 to 237,017.</p>
<p>This is likely to continue. Businesses are dealing with <a href="https://ifamagazine.com/number-of-planned-redundancies-in-the-uk-increases-54-in-the-past-year-amid-economic-instability/#:%7E:text=Sharp%20rises%20in%20borrowing%20costs,the%20specialist%20employment%20law%20firm.">sharp rises in borrowing costs</a> and <a href="https://www.ons.gov.uk/employmentandlabourmarket/peopleinwork/earningsandworkinghours#:%7E:text=Annual%20growth%20in%20regular%20earnings,in%20August%20to%20October%202023.">continued wage growth</a>, at the same time as consumer spending is falling, affecting industries like <a href="https://www.adnews.com.au/news/a-redundancy-rush-as-the-industry-prepares-for-a-slow-start-to-2024">advertising</a> and <a href="https://www.theguardian.com/business/2023/dec/29/britons-cut-back-on-dining-out-and-buying-clothes-barclays-reveals">retail</a>.</p>
<p>Of course, such news has a very direct impact on those who lose their jobs. But all employees are affected by reductions in a workforce. The employees made redundant are undoubtedly the victims, but those at risk are semi-victims, even if they are redeployed into another role. </p>
<p>Even the survivors – employees that don’t get laid off – are affected by stress and increased workloads in some cases. And let’s not forget the “bringers of bad news”: the management and HR teams that have to execute the layoff process may also feel stress or guilt.</p>
<p>Each group experiences job cuts in a very different way, of course. But there are some consistencies in how all are affected – and in how to help.</p>
<h2>1. Decreased trust leading to a toxic work environment</h2>
<p>As soon as redundancies are announced, the <a href="https://www.cipd.org/uk/knowledge/factsheets/psychological-factsheet/">“psychological contract”</a> that outlines the relationship between employees and employer is damaged. Essentially, <a href="https://www.taylorfrancis.com/books/mono/10.4324/9781003030416/strategic-redundancy-implementation-madeleine-stevens">trust is breached</a> as the worker’s expectations and beliefs about their employer are challenged. The idea that everyone is working towards a common goal can be shattered by a redundancy announcement.</p>
<p>When trust is broken in this way, employees might start making decisions about their loyalty and commitment. Experiencing <a href="https://www.bps.org.uk/psychologist/double-jeopardy-surreptitious-consequences-redundancy">a psychologically “unsafe” environment</a>, or even low levels of job insecurity, can encourage people to look for new work opportunities – sometimes before redundancies are even announced. Consequently, organisations might <a href="https://www.taylorfrancis.com/books/mono/10.4324/9781003030416/strategic-redundancy-implementation-madeleine-stevens">lose talented and skilled staff</a> whom they would otherwise have saved from redundancy. </p>
<h2>2. Psychological stress leading to increased absenteeism</h2>
<p>All employees can experience significant levels of stress <a href="https://www.taylorfrancis.com/books/mono/10.4324/9781003030416/strategic-redundancy-implementation-madeleine-stevens">during a layoff process</a>. For those made redundant, this stress is exacerbated by financial concerns about how they will pay their bills. The feelings of helplessness and anxiety over a job loss can lead to poor mental and physical health.</p>
<p>Semi-victims (those whose jobs were at risk but who ultimately weren’t made redundant) and survivors often also experience stress due to an increased workload. They may have to pick up additional duties previously carried out by employees who have been made redundant. Leaders can also <a href="https://www.tandfonline.com/doi/full/10.1080/09585192.2021.1976246">experience stress</a>. They deal with disgruntled employees, but they also usually have to deliver the unpleasant news of job losses in the first place. </p>
<h2>3. Job insecurity leading to loss of talent</h2>
<p>Once people experience the kind of breach of trust that can come with mass lay-offs, it can lead to feelings of <a href="https://www.tandfonline.com/doi/full/10.1080/09585192.2021.1976246">job insecurity and low morale</a>. You might think to yourself: “I may as well get ahead of the game and find another job now.” Or: “Why would I stay with this organisation? Do my bosses even know what they are doing?”</p>
<p>So, whether other workers are due to lose their jobs or not, they may start to apply for alternative roles. In most situations, it’s unsurprising that the most talented and highly skilled employees can often easily find new employment. A voluntary exodus of workers during or after a redundancy programme can cause significant damage to the business if employees who are imperative to operational success leave at a time of organisational vulnerability. </p>
<p>Social media platform X (then known as Twitter) found this to be the case after making redundancies in November 2022. Around 1,000 employees <a href="https://www.theverge.com/2022/11/21/23472025/elon-musk-twitter-hiring-again-ending-layoffs">resigned voluntarily</a> after redundancy announcements were made following the sale of the business to Elon Musk. The employer/employee power shifted and the company had to start rehiring or replacing valuable, highly skilled employees, having only just made redundancies.</p>
<figure class="align-center ">
<img alt="Young professionals company employees diverse staff members gather together sit on chairs brainstorming solving working moments having dispute express opinion point of view." src="https://images.theconversation.com/files/519775/original/file-20230406-26-mbj9mi.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/519775/original/file-20230406-26-mbj9mi.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=338&fit=crop&dpr=1 600w, https://images.theconversation.com/files/519775/original/file-20230406-26-mbj9mi.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=338&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/519775/original/file-20230406-26-mbj9mi.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=338&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/519775/original/file-20230406-26-mbj9mi.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=424&fit=crop&dpr=1 754w, https://images.theconversation.com/files/519775/original/file-20230406-26-mbj9mi.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=424&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/519775/original/file-20230406-26-mbj9mi.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=424&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">Communication should be an important part of a layoffs plan.</span>
<span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/young-professionals-company-employees-diverse-staff-1770506714">fizkes/Shutterstock</a></span>
</figcaption>
</figure>
<h2>Tackling the hidden consequences of redundancies</h2>
<p>What can employers do to limit the unforeseen consequences of workforce stress during redundancies?</p>
<p>First of all, communicating redundancies with compassion can help keep employees on board. Communication should come from the top, with the leadership team owning the message. But it also needs to be a two-way process, allowing all employees to have their questions answered. My research with Claire Hannibal into so-called <a href="https://www.tandfonline.com/doi/full/10.1080/09585192.2021.1976246">“redundancy envoys”</a> (those who deliver the redundancy message) shows that the rationale for job cuts needs to be transparent, clearly communicated and fully understood by employees.</p>
<p>Second, each impacted group needs support that is tailored to their needs. Providing more generous compensation packages for victims of redundancy can help to alleviate immediate financial concerns. Employers can also help them network with companies that are hiring, or connect them with organisations providing training and skills for new roles, or that educate people about self-employment or retirement.</p>
<p>Employers should also offer career and trauma counselling support to all employees. This will help them understand and manage the range of emotions they may feel – from guilt, anger and resentment, to stress and sadness.</p>
<p>Finally, employers need to think carefully about job design for the remaining roles. Due to increased workloads and survivors often having to pick up new skills, every role must be realigned with the organisation’s revised vision. Employees should be supported to understand any new tasks they need to prioritise – and which tasks they no longer have to fulfil. Providing training and development will also help to rebuild employees’ confidence in the organisation.</p>
<p>Although making redundancies is very unpleasant for the whole workforce, there are ways for employers to undertake the process with compassion, treating all employees – whether they’re leaving or staying – with dignity.</p><img src="https://counter.theconversation.com/content/203364/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Madeleine Stevens does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>Company layoffs can have unintended consequences, even for those spared from redundancy.Madeleine Stevens, Reader in Organisational Transformation and Teaching Innovation, Liverpool John Moores UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/2188932023-12-18T17:17:40Z2023-12-18T17:17:40ZMeta charging European users to remove ads is a privacy red herring<figure><img src="https://images.theconversation.com/files/564596/original/file-20231209-29-trsne2.jpg?ixlib=rb-1.1.0&rect=0%2C0%2C5184%2C3888&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Would you pay to browse Facebook or Instagram without ads?</span> <span class="attribution"><span class="source">(Timothy Hales Bennett/Unsplash)</span></span></figcaption></figure><iframe style="width: 100%; height: 100px; border: none; position: relative; z-index: 1;" allowtransparency="" allow="clipboard-read; clipboard-write" src="https://narrations.ad-auris.com/widget/the-conversation-canada/meta-charging-european-users-to-remove-ads-is-a-privacy-red-herring" width="100%" height="400"></iframe>
<p>This November, Meta rolled out a new subscription model for Facebook and Instagram users in the European Union, allowing them to pay a fee in exchange for <a href="https://about.fb.com/news/2023/10/facebook-and-instagram-to-offer-subscription-for-no-ads-in-europe/">an ad-free browsing experience</a>. Referred to by critics as a “Pay or Okay” model, and <a href="https://www.wired.com/story/meta-facebook-pay-for-privacy-europe/">charging 9.99 to 12.99 euros monthly</a>, the option is already an object of controversy.</p>
<p>Meta, among <a href="https://www.wired.com/story/meta-facebook-pay-for-privacy-europe/">many</a> <a href="https://techcrunch.com/2023/10/30/meta-ad-free-sub-eu/">others</a>, presented the new policy as a <a href="https://www.theregister.com/2023/10/31/meta_ad_free_europe/">privacy-preserving measure</a>, explaining that it responds to EU privacy regulations.</p>
<p>And opponents of this approach only partially disagree. The <a href="https://www.data-protection-authority.gv.at/">Austrian Data Protection Authority</a>, together with activist and lawyer Max Schrems and the <a href="https://noyb.eu/en">advocacy group NOYB</a>, filed a complaint against Meta’s new model. They argue that paying for privacy <a href="https://noyb.eu/en/noyb-files-gdpr-complaint-against-meta-over-pay-or-okay">breaches privacy as a fundamental right</a>.</p>
<p>The European Consumer Organisation also filed <a href="https://www.beuc.eu/choose-to-Lose-with-Meta">an equivalent complaint</a>. Asking to pay for privacy is wrong, the complaint argues, and asking to pay so much for it is worse.</p>
<p>But both those who are excited about privacy-preserving versions of Facebook and Instagram and the organizations that filed these complaints miss the point: Meta doesn’t provide any option to pay for privacy — in the EU or anywhere else.</p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/W_yeCSb2tlk?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
<figcaption><span class="caption">CBS covers Meta’s decision to charge users to avoid ads on Facebook and Instagram.</span></figcaption>
</figure>
<h2>Meta’s business model relies on personal data</h2>
<p>The issue is that users who choose the new option still have their information collected by Meta. The company declared that people who pay for the service will not have their information processed for <a href="https://about.fb.com/news/2023/10/facebook-and-instagram-to-offer-subscription-for-no-ads-in-europe/">renting advertising spots on their accounts</a>. It never promised that they wouldn’t have their information collected or processed for other purposes. </p>
<p>Meta’s objective isn’t just to display ads to many people but also to make its ads more effective through precise targeting. To do this, Meta collects information about its users, from their location and likes to their <a href="https://www.facebook.com/business/tools/meta-pixel">browsing outside of the platform</a>. </p>
<p>Meta’s most valuable resource isn’t your attention while you use Facebook for an hour or two, but what the accumulation of thousands of those hours provides. Your attention is valuable for a second, but <a href="https://www.penguinrandomhouse.com/books/670634/i-have-nothing-to-hide-by-heidi-boghosian/">your information is useful forever</a>; Meta can predict how likely a user is to buy a product when others with similar demographics clicked on it before.</p>
<p>It is unlikely that Meta will offer pay-for-privacy because it would lose enormous profits from advertisers if it stopped collecting data from paying users. If it did that and EU users opted for a payment-supported Facebook as opposed to an ad-supported one, it would miss out on ad accuracy for its non-EU users too. The monthly payments received would be unlikely to compensate for that large loss.</p>
<h2>An additional revenue stream</h2>
<p>Avoiding ads can make scrolling through Facebook and Instagram more enjoyable, but paying to avoid them doesn’t change Meta’s privacy problem. The privacy problem with Meta’s business model isn’t that it shows ads but rather what it does to make them valuable: collecting personal information and subjecting it to Meta’s data practices.</p>
<p>Although ads are the public face of Meta’s business model, the real profit and potential harms lie in its data practices. Renting ad spots generates revenue, but the heart of both Meta’s business model and its privacy problem is what the company does to make those ads more valuable. In that context, Facebook’s paid version is an additional revenue stream rather than a commitment to more ethical data practices or a breach of fundamental rights.</p>
<p>A more pleasant, ad-free browsing experience is a convenience that, in fact, may increase the amount of time people spend on the platform, allowing for more personal data to be collected.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/566064/original/file-20231215-26-ixi2gb.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="a woman sits on a couch and scrolls through Facebook on a tablet" src="https://images.theconversation.com/files/566064/original/file-20231215-26-ixi2gb.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/566064/original/file-20231215-26-ixi2gb.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/566064/original/file-20231215-26-ixi2gb.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/566064/original/file-20231215-26-ixi2gb.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/566064/original/file-20231215-26-ixi2gb.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/566064/original/file-20231215-26-ixi2gb.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/566064/original/file-20231215-26-ixi2gb.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Avoiding ads may make scrolling Facebook more enjoyable, but it doesn’t mean that users’ information won’t be collected.</span>
<span class="attribution"><span class="source">(Shutterstock)</span></span>
</figcaption>
</figure>
<h2>Beyond Facebook</h2>
<p>Facebook is one example of a ubiquitous business model in the information economy. </p>
<p>The real cost of services in the information economy isn’t their monthly fee. Whether we send money to paid services that also collect our data, like Amazon Prime, Netflix, Spotify and Uber, or instead see ads doesn’t change our exposure to harm. There’s nothing wrong, from a privacy perspective, with paying to remove ads. What entities do with our data poses long-term risks <a href="https://global.oup.com/academic/product/why-privacy-matters-9780190939045">beyond our immediate browsing experience</a>.</p>
<p>So users who opt for paid alternatives aren’t exempt from most of the cost. The Cambridge Analytica scandal, where personal information from millions was exploited for political purposes, illustrates one of the long-term risks of data accumulation. Facebook’s data practices had consequences far beyond ad targeting. The risk of data misuse and data breaches grows as <a href="https://global.oup.com/academic/product/breached-9780190940553">more information is gathered</a>.</p>
<h2>More robust rules</h2>
<p>Ultimately, it’s not the ads, but the way companies <a href="https://wwnorton.com/books/9780393882315">obtain, use, and share personal data</a> that needs attention and reform. Relying on individual users to opt out, whether they pay or not, is insufficient. It places the burden on each of them to navigate <a href="https://doi.org/10.1017/9781108591386">complicated and technical yet ambiguous privacy settings on countless platforms</a>. </p>
<p>Rather than supporting or opposing new options to pay to avoid targeted ads, the way to reduce harm in the data economy is to develop <a href="https://doi.org/10.1017/9781108995825">more robust rules</a> governing the collection and processing of people’s data. Useful protections are those that apply <a href="https://doi.org/10.1017/9781108995825">regardless of what options users click</a>.</p>
<p>The takeaway from the controversy shouldn’t be to stop the new — and currently Meta-specific — business model in which people can pay for ad removal. Rather, it should be to stop the old, internet-wide business model in which companies collect and use people’s data with insufficient commitments to keep them safe.</p><img src="https://counter.theconversation.com/content/218893/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Ignacio Cofone does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>Meta’s decision to charge users for an ad-free experience still requires that people have their information collected.Ignacio Cofone, Associate professor, Law, McGill UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/2185222023-11-30T19:03:44Z2023-11-30T19:03:44ZThe news is fading from sight on big social media platforms – where does that leave journalism?<p>According to a <a href="https://newsmediauk.org/blog/2023/11/02/editors-warn-of-existential-threat-to-journalism-from-big-tech/">recent survey</a> by the News Media Association, 90% of editors in the United Kingdom “believe that Google and Meta pose an existential threat to journalism”. </p>
<p>Why the pessimism? Because being in the news business but relying on social media platforms and search engines has become very risky. The big tech companies are de-prioritising news content, making it harder for citizens to find verified information produced by journalists.</p>
<p>Arguably, though, the threat isn’t necessarily existential. News companies are also <a href="https://www.inma.org/blogs/research/post.cfm/the-un-conscious-uncoupling-of-platforms-and-news-publishers-is-happening-quickly">leaving social media platforms</a>, potentially claiming back some control and building resilience into their revenue models. </p>
<p>Leading New Zealand digital publisher Stuff, for example, recently decided to stop <a href="https://www.stuff.co.nz/national/300988705/stuff-group-withdraws-from-x-formerly-twitter">posting its content</a> on X (formerly Twitter), “except stories that are of urgent public interest – such as health and safety emergencies”.</p>
<p>But as I describe in my new book, <a href="https://www.bwb.co.nz/books/from-paper-to-platform/">From Paper to Platform</a>, news organisations that continue to conduct their news business via these platforms will have limited control. As social media companies and search engines change the terms of their services at will, news companies are left to deal with the consequences. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/breaking-news-making-google-and-facebook-pay-nz-media-for-content-could-deliver-less-than-bargained-for-196030">Breaking news: making Google and Facebook pay NZ media for content could deliver less than bargained for</a>
</strong>
</em>
</p>
<hr>
<h2>Risks of ‘platformed publishing’</h2>
<p>Platforms such as Google and Facebook play various roles in the modern media ecosystem. Consequently, their actions create multiple risk points for news media. The impacts differ, of course, depending on each news company’s own goals and strategies.</p>
<p>As one <a href="https://journals.sagepub.com/doi/full/10.1177/14648849211031363">Scandinavian study</a> of media risk management noted, “platforms pose a competitive threat to news organisations”. But that threat varies, depending on how news organisations respond, and how reliant they are on those platforms for audience reach or funding.</p>
<p>News companies distribute their content on platforms such as Facebook or X because that’s where their audience is – at least a large proportion of it, anyway. But news is poorly promoted by those platforms, and Google and Facebook admit news makes up only a tiny fraction of their overall content.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/even-experts-struggle-to-tell-which-social-media-posts-are-evidence-based-so-what-do-we-do-217448">Even experts struggle to tell which social media posts are evidence-based. So, what do we do?</a>
</strong>
</em>
</p>
<hr>
<p>Furthermore, the visibility of news within these platforms is rapidly declining. The result is described by the authors of <a href="https://global.oup.com/academic/product/the-power-of-platforms-9780190908867?lang=en&cc=us">The Power of Platforms</a> as “<a href="https://reutersinstitute.politics.ox.ac.uk/news/power-platforms">platformed publishing</a>”: </p>
<blockquote>
<p>a situation where some news organisations have almost no control over the distribution of their journalism because they publish primarily to platforms defined by coding technologies, business models, and cultural conventions over which they have little influence.</p>
</blockquote>
<p>As a recent <a href="https://www.wired.com/story/facebook-is-giving-up-on-news-again/">Wired article observed</a>, “Facebook is done with news”: its parent company Meta is “killing off the News tab in France, Germany and the UK”, having already temporarily blocked access to news content in Australia in 2021 and more recently in Canada where the blackout continues.</p>
<p>Instagram’s new Threads app (also owned by Meta) has no appetite for hard news, Google’s search results offer <a href="https://pressgazette.co.uk/media-audience-and-business-data/google-core-update-news-search-october-2023/">less news</a>, and X has stopped showing news headlines and links on tweets.</p>
<h2>Weakening democracy</h2>
<p>The New Zealand news publishers I spoke to generally believe platform algorithms don’t prioritise factual news content. As <a href="https://www.bwb.co.nz/books/from-paper-to-platform/">one observed</a>, the “platforms have the control over algorithms”. Another noted how platforms “can bury or promote you as they like, their tweaks in algorithms determine your fate”.</p>
<p>This has real consequences beyond the impact on media metrics and advertising revenue. Platforms have an influence on democratic processes – including elections.</p>
<p>The same News Media Association survey quoted at the start of this article also reveals 77% of UK editors believe platform antics such as news blackouts will weaken democratic societies. </p>
<p>When people cannot access (or have limited access to) verified and trusted news, other things fill the void. The Israel-Gaza conflict, to take just the most recent example, has seen an <a href="https://www.euronews.com/next/2023/10/11/eus-thierry-breton-gives-elon-musk-24-hour-ultimatum-to-deal-with-israel-hamas-misinformat">increase in disinformation</a> on X – to the extent the European Union’s digital rights chief warned owner Elon Musk he was potentially breaching EU law.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/41-us-states-are-suing-meta-for-getting-teens-hooked-on-social-media-heres-what-to-expect-next-216914">41 US states are suing Meta for getting teens hooked on social media. Here’s what to expect next</a>
</strong>
</em>
</p>
<hr>
<h2>Terms of payment</h2>
<p>There has been some cause for optimism recently due to Google and Facebook becoming funders of journalism and news, having been either mandated or coerced to pay publishers for their content. </p>
<p>Australia was first to introduce a law requiring platforms to compensate news companies, followed by Canada. The previous New Zealand government introduced a <a href="https://www.parliament.nz/en/pb/sc/make-a-submission/document/54SCEDSI_SCF_FC7FAAC0-2EC0-4E47-7AB5-08DB9EBB2302/fair-digital-news-bargaining-bill">similar bill</a> to parliament, but there is no certainty it will become law under the new administration. </p>
<p>In Australia and Canada, the platforms implemented news “blackouts” in their services as a response to these laws, effectively making news invisible to their users.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/why-google-and-meta-owe-news-publishers-much-more-than-you-think-and-billions-more-than-theyd-like-to-admit-216818">Why Google and Meta owe news publishers much more than you think – and billions more than they’d like to admit</a>
</strong>
</em>
</p>
<hr>
<p>And while these platform payments have brought additional revenue to many news publishers, the terms of the payments are not public. It’s hard to estimate how much Google and Facebook have actually paid for news content, but it has been <a href="https://cepr.org/voxeu/columns/logic-behind-australias-news-media-bargaining-code">estimated in Australia</a> to be A$200 million annually. </p>
<p>If that sounds substantial, consider this: <a href="https://policydialogue.org/publications/working-papers/paying-for-news-what-google-and-meta-owe-us-publishers-draft-working-paper/">a recent US study</a> suggested Google and Meta should be paying far more than they do, estimating Facebook owes news publishers US$1.9 billion and Google US$10-12 billion annually.</p>
<p>It’s hard to see those platforms agreeing to such figures, or increasing any payments for news. More likely, the payments will gradually dwindle as Google and Meta continue prioritising other services and products over news. </p>
<p>Newsrooms will likely have to say goodbye to platformed publishing and social media news distribution. It’s clear it isn’t working as well as many hoped, and it will almost certainly not work in the long term.</p><img src="https://counter.theconversation.com/content/218522/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Merja Myllylahti does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>Social media platforms are abandoning news – which is bad news for traditional media organisations that have come to rely on them for consumers.Merja Myllylahti, Senior Lecturer, Co-Director Research Centre for Journalism, Media & Democracy, Auckland University of TechnologyLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/2189042023-11-30T17:21:23Z2023-11-30T17:21:23ZGoogle’s $100 million to Canada’s news industry is a small price to pay to avoid regulation<figure><img src="https://images.theconversation.com/files/562607/original/file-20231130-21-h7o8mw.jpg?ixlib=rb-1.1.0&rect=0%2C0%2C6000%2C3997&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">An agreement reached between Google and the federal government means the search engine will pay $100 million annually to Canadian media outlets.</span> <span class="attribution"><span class="source">(Shutterstock)</span></span></figcaption></figure><iframe style="width: 100%; height: 100px; border: none; position: relative; z-index: 1;" allowtransparency="" allow="clipboard-read; clipboard-write" src="https://narrations.ad-auris.com/widget/the-conversation-canada/googles-100-million-to-canadas-news-industry-is-a-small-price-to-pay-to-avoid-regulation" width="100%" height="400"></iframe>
<p>The deal between Google and the federal government to <a href="https://www.canada.ca/en/canadian-heritage/news/2023/11/statement-by-minister-st-onge-on-next-steps-for-the-online-news-act.html">resolve their dispute</a> over paying for news online will come as a relief for the media industry in Canada. </p>
<p>News publishers were facing <a href="https://blog.google/intl/en-ca/company-news/outreach-initiatives/an-update-on-canadas-bill-c-18-and-our-search-and-news-products/">the prospect of disappearing from Google Search and other services</a> — the equivalent of vanishing from the internet — after Google had threatened to block news links in response to the Online News Act.</p>
<p>The deal is good news for Canadians, who had already seen news disappear from Facebook and Instagram in the summer after Meta carried out its threat <a href="https://globalnews.ca/news/9934703/facebook-meta-news-blocking-canada-regulations/">to block news links rather than pay for them</a>.</p>
<p>At the heart of the dispute is the <a href="https://laws-lois.justice.gc.ca/eng/acts/O-9.3/">Online News Act</a>, also known as Bill C-18, which is due to come into force on Dec. 19. The legislation attempts to deal with the power technology giants have over how Canadians access news and information.</p>
<p>As a former journalist, researcher and <a href="https://theconversation.com/its-time-to-start-the-conversation-in-canada-79877">co-founder of <em>The Conversation Canada</em></a>, this is a story that I have followed closely.</p>
<h2>The deal in numbers</h2>
<p>Under the agreement, <a href="https://www.cp24.com/news/google-to-pay-100m-a-year-to-canadian-news-publishers-in-deal-with-ottawa-1.6665893">Google will contribute $100 million annually</a>, indexed to inflation, in financial support to newspapers, broadcasters and digital news outlets.</p>
<p>The money will be welcomed by journalism organizations, which have been facing <a href="https://angusreid.org/canada-media-consolidation-torstar-postmedia-government-funding-cbc/">declining revenues and audiences</a>. </p>
<p>But the amount is far lower than the 2022 Parliamentary Budget Officer’s estimate of <a href="https://www.pbo-dpb.ca/en/publications/RP-2223-017-M--cost-estimate-bill-c-18-online-news-act--estimation-couts-lies-projet-loi-c-18-loi-nouvelles-ligne">$329.2 million annually</a> from Google and Meta. It’s also lower than later <a href="https://www.cbc.ca/news/politics/online-news-act-google-meta-1.6954656">federal government estimates</a> of $172 million from Google alone.</p>
<p>The funding is partly intended to compensate print and broadcast media for falls in advertising revenues as companies moved their ads online.</p>
<p>The $14.4 billion digital ad market is <a href="https://gmicp.org/growth-and-upheaval-in-the-network-media-economy-in-canada-1984-2022/">dominated by Google and Meta</a>, which account for 77 per cent. Google’s share is $6.7 billion; the money Google will contribute to Canadian news media accounts for just under 1.5 per cent of its digital ad business in Canada.</p>
<p>The money from Google is small considering that the newspaper industry alone brought in <a href="https://gmicp.org/growth-and-upheaval-in-the-network-media-economy-in-canada-1984-2022/">$2 billion in revenues</a> in 2022. </p>
<h2>The battle over regulation</h2>
<p>The numbers only tell part of the story. Bill C-18 is a test case of <a href="https://theconversation.com/bill-c-18-google-and-meta-spark-crucial-test-for-canadian-journalism-208827">the power of platforms</a> like Google and Meta to run and control Canada’s communications infrastructures.</p>
<p>While the agreement allows all sides to claim victory, it is clear that Google successfully extracted key concessions over how it is regulated in Canada.</p>
<p>The search giant got its core demand for funding to be capped at a set amount of $100 million — comparable to what Google agreed to <a href="https://www.cjr.org/business_of_news/australia-pressured-google-and-facebook-to-pay-for-journalism-is-america-next.php">in Australia</a>, which adopted similar legislation in 2021.</p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/q5vPuRtvQ7g?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
<figcaption><span class="caption">Bloomberg News Corp. report on Google’s private arrangements with News Corp.</span></figcaption>
</figure>
<p>As in Australia, the deal with Ottawa means Google should end up being exempt from the Online News Act. The legislation in both countries contains provisions that enable platforms to be exempt if they make appropriate deals with the news media. </p>
<p>Unlike in Australia, however, Google will be able to work with a single body representing the news industry in Canada, although which body that will be remains to be determined.</p>
<p>Google made individual private arrangements with news outlets in Australia, deciding who to fund and by how much. </p>
<p>What is not clear, though, is how the negotiations will work in Canada, who will be involved, and how transparent the process will be. </p>
<h2>Picking winners and losers</h2>
<p>Critics have warned that the lack of transparency in Australia allowed platforms to pick which outlets received the money and how much. The funding heavily benefitted mainstream media in Australia, <a href="https://www.bbc.com/news/business-56101859">notably Rupert Murdoch’s media empire</a>.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/canadas-online-news-act-may-let-meta-and-google-decide-the-winners-and-losers-in-the-media-industry-208088">Canada's Online News Act may let Meta and Google decide the winners and losers in the media industry</a>
</strong>
</em>
</p>
<hr>
<p>Canadian Heritage has said that Google’s $100 million will be spread across the news industry, including independent news outlets, and those from Indigenous and official-language minority communities.</p>
<p>The provision to distribute the funding “based on the number of full-time equivalent <a href="https://www.canada.ca/en/canadian-heritage/news/2023/11/statement-by-minister-st-onge-on-next-steps-for-the-online-news-act.html">journalists engaged by those businesses</a>” risks repeating the missteps of Australia by failing to encourage newer, emergent journalism organizations often seeking to fill the gaps left by commercial media.</p>
<p>How all of this will play out, and what it means for Canadian news consumers, should become clearer in the coming weeks as Canadian Heritage sheds more light on how C-18 will work out in practice.</p><img src="https://counter.theconversation.com/content/218904/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Alfred Hermida receives funding from the Social Sciences and Humanities Research Council. He is a co-founder and board member of The Conversation Canada.</span></em></p>Google has secured significant concessions in its deal with Ottawa over Bill C-18, the Online News Act, which comes into effect on Dec. 19.Alfred Hermida, Professor, School of Journalism, Writing, and Media, University of British ColumbiaLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/2177602023-11-27T13:41:34Z2023-11-27T13:41:34ZSupreme Court to consider giving First Amendment protections to social media posts<figure><img src="https://images.theconversation.com/files/560784/original/file-20231121-4426-i5zrwh.jpg?ixlib=rb-1.1.0&rect=0%2C22%2C3706%2C3084&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Citizens have sometimes been surprised to find public officials blocking people from viewing their social media feeds.</span> <span class="attribution"><a class="source" href="https://www.gettyimages.com/detail/illustration/businessman-standing-in-front-of-a-big-smart-royalty-free-illustration/1025098142">alashi/DigitalVision Vectors via Getty Images</a></span></figcaption></figure><p>The First Amendment does not protect messages posted on social media platforms. </p>
<p>The companies that own the platforms can – and do – remove, promote or limit the distribution of any posts <a href="https://www.freedomforum.org/free-speech-on-social-media/">according to corporate policies</a>. But all that might soon change.</p>
<p>The Supreme Court has agreed to <a href="https://www.nytimes.com/2023/10/31/opinion/social-media-supreme-court-democracy.html">hear five cases</a> during this current term, which ends in June 2024, that collectively give the court the opportunity to reexamine the nature of content moderation – the rules governing discussions on social media platforms such as Facebook and X, formerly known as Twitter – and the constitutional limitations on the government to affect speech on the platforms.</p>
<p>Content moderation, whether done manually by company employees or automatically by a platform’s software and algorithms, affects what viewers can see on a digital media page. Messages that are promoted garner greater viewership and greater interaction; those that are deprioritized or removed will obviously receive less attention. Content moderation policies reflect decisions by digital platforms about the relative value of posted messages.</p>
<p>As an attorney, <a href="https://lynngreenky.com/">professor</a> and author of a book about the <a href="https://press.uchicago.edu/ucp/books/book/distributed/W/bo156864042.html">boundaries of the First Amendment</a>, I believe that the constitutional challenges presented by these cases will give the court the occasion to advise government, corporations and users of interactive technologies what their rights and responsibilities are as communications technologies continue to evolve.</p>
<h2>Public forums</h2>
<p>In late October 2023, the Supreme Court heard oral arguments on two related cases in which both sets of plaintiffs argued that elected officials who use their social media accounts either exclusively or partially to promote their politics and policies <a href="https://www.nytimes.com/2023/04/24/us/elected-officials-social-media-supreme-court.html">cannot constitutionally block constituents</a> from posting comments on the officials’ pages.</p>
<p>In one of those cases, <a href="https://www.oyez.org/cases/2023/22-324">O’Connor-Radcliff v. Garnier</a>, two school board members from the Poway Unified School District in California blocked a set of parents – who frequently posted repetitive and critical comments on the board members’ Facebook and Twitter accounts – from viewing the board members’ accounts. </p>
<p>In the other case heard in October, <a href="https://www.oyez.org/cases/2023/22-611">Lindke v. Freed</a>, the city manager of Port Huron, Michigan, apparently angered by critical comments about a posted picture, blocked a constituent from viewing or posting on the manager’s Facebook page. </p>
<p>Courts have long held that public spaces, like parks and sidewalks, are public forums, which must <a href="https://www.oyez.org/cases/1900-1940/307us496">remain open to free and robust conversation and debate</a>, subject only to neutral rules <a href="https://firstamendment.mtsu.edu/article/time-place-and-manner-restrictions/">unrelated to the content of the speech expressed</a>. The silenced constituents in the current cases insisted that in a world where a lot of public discussion is conducted in interactive social media, digital spaces used by government representatives for <a href="https://www.nytimes.com/2023/04/24/us/elected-officials-social-media-supreme-court.html">communicating with their constituents</a> are also public forums and should be subject to the same First Amendment rules as their physical counterparts.</p>
<p>If the Supreme Court rules that public forums can be both physical and virtual, government officials will not be able to arbitrarily block users from viewing and responding to their content or remove constituent comments with which they disagree. On the other hand, if the Supreme Court rejects the plaintiffs’ argument, the only recourse for frustrated constituents will be to create competing social media spaces where they can criticize and argue at will.</p>
<h2>Content moderation as editorial choices</h2>
<p>Two other cases – <a href="https://www.oyez.org/cases/2023/22-555">NetChoice LLC v. Paxton</a> and <a href="https://www.oyez.org/cases/2023/22-277">Moody v. NetChoice LLC</a> – also relate to the question of how the government should regulate online discussions. <a href="https://perma.cc/YHK2-WVWS">Florida</a> and <a href="https://perma.cc/B2WU-M3CK">Texas</a> have both passed laws that modify the internal policies and algorithms of large social media platforms by regulating how the platforms can promote, demote or remove posts.</p>
<p>NetChoice, a tech industry trade group representing a <a href="https://netchoice.org/about/#association-members">wide range of social media platforms</a> and online businesses, including Meta, Amazon, Airbnb and TikTok, contends that the platforms are not public forums. The group says that the Florida and Texas legislation unconstitutionally restricts the social media companies’ First Amendment right to make their own <a href="https://www.oyez.org/cases/1973/73-797">editorial choices</a> about what appears on their sites.</p>
<p>In addition, NetChoice alleges that by limiting Facebook’s or X’s ability to rank, repress or even remove speech – whether manually or with algorithms – the Texas and Florida laws amount to government requirements that the <a href="https://www.oyez.org/cases/1994/94-749">platforms host speech they didn’t want to</a>, which is also unconstitutional. </p>
<p>NetChoice is asking the Supreme Court to rule the laws unconstitutional so that the platforms remain free to make their own independent choices regarding when, how and whether posts will remain available for view and comment.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/560786/original/file-20231121-15-1e40j1.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="A man in a military uniform stands at a lectern looking out at a group of people sitting in chairs." src="https://images.theconversation.com/files/560786/original/file-20231121-15-1e40j1.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/560786/original/file-20231121-15-1e40j1.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/560786/original/file-20231121-15-1e40j1.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/560786/original/file-20231121-15-1e40j1.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/560786/original/file-20231121-15-1e40j1.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/560786/original/file-20231121-15-1e40j1.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/560786/original/file-20231121-15-1e40j1.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">In 2021, U.S. Surgeon General Vivek Murthy declared misinformation on social media, especially about COVID-19 and vaccines, to be a public health threat.</span>
<span class="attribution"><a class="source" href="https://www.gettyimages.com/detail/news-photo/surgeon-general-vivek-murthy-and-white-house-press-news-photo/1328901388">Chip Somodevilla/Getty Images</a></span>
</figcaption>
</figure>
<h2>Censorship</h2>
<p>In an effort to reduce harmful speech that proliferates across the internet – speech that supports criminal and terrorist activity as well as misinformation and disinformation – the federal government has engaged in wide-ranging discussions with internet companies about their <a href="https://www.nytimes.com/2023/07/04/business/federal-judge-biden-social-media.html">content moderation policies</a>.</p>
<p>To that end, the Biden administration has regularly advised – <a href="https://www.nytimes.com/2023/07/04/business/federal-judge-biden-social-media.html">some say strong-armed</a> – social media platforms to deprioritize or remove posts the government had flagged as misleading, false or harmful. Some of the posts <a href="https://www.nytimes.com/2023/07/04/business/federal-judge-biden-social-media.html">related to misinformation</a> about COVID-19 vaccines or promoted human trafficking. On several occasions, the officials would suggest that platform companies ban a user who posted the material from making further posts. Sometimes, the corporate representatives themselves would ask the government what to do with a particular post.</p>
<p>While the public might be generally aware that content moderation policies exist, people are not always aware of how those policies affect the information to which they are exposed. Specifically, audiences have no way to measure how content moderation policies affect the marketplace of ideas or influence debate and discussion about public issues.</p>
<p>In <a href="https://www.scotusblog.com/case-files/cases/missouri-v-biden/">Missouri v. Biden</a>, the plaintiffs argue that government efforts to persuade social media platforms to publish or remove posts were so relentless and invasive that the moderation policies no longer reflected the companies’ own editorial choices. Rather, they argue, the policies were in reality government directives that effectively silenced – <a href="https://www.oyez.org/cases/1970/1873">and unconstitutionally censored</a> – speakers with whom the government disagreed. </p>
<p>The court’s decision in this case could have wide-ranging effects on the manner and methods of government efforts to influence the information that guides the public’s debates and decisions.</p>
<p class="fine-print"><em><span>Lynn Greenky does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>
<p><em>The Supreme Court will hear five cases this term that will examine the nature of online discussion spaces run by social media platforms.</em></p>
<p class="fine-print">Lynn Greenky, Professor Emeritus of Communication and Rhetorical Studies, Syracuse University. Licensed as Creative Commons – attribution, no derivatives.</p>
<h1>Fake news didn’t play a big role in NZ’s 2023 election – but there was a rise in ‘small lies’</h1>
<p>The threat of disinformation on social media in the lead-up to New Zealand’s 2023 election <a href="https://thespinoff.co.nz/politics/09-08-2023/inside-the-plan-to-stop-a-misinformation-election">loomed large</a> for the <a href="https://www.stuff.co.nz/national/politics/132664984/analogue-politics-in-a-digital-age-how-officials-are-preparing-for-the-misinformation-wave-this-election">Electoral Commission</a> and <a href="https://thespinoff.co.nz/the-bulletin/12-11-2021/nzs-disinformation-surge">academics studying fake news</a>. </p>
<p>So how bad did it really get?</p>
<p>As part of the <a href="https://www.wgtn.ac.nz/hppi/centres/isprl/new-zealand-social-media-study">New Zealand Social Media Study</a>, we analysed more than 4,000 posts on Facebook from political parties and their leaders. Our study focused on the five weeks ahead of election day. </p>
<p>What we found should give New Zealanders some comfort about the political discourse on social media. While not perfect, there was not as much <a href="https://theconversation.com/misinformation-disinformation-and-hoaxes-whats-the-difference-158491">misinformation</a> (misleading information created without the intent to manipulate people) and <a href="https://theconversation.com/misinformation-disinformation-and-hoaxes-whats-the-difference-158491">disinformation</a> (deliberate attempts to manipulate with false information) as everyone feared. </p>
<h2>Looking for fake news and half truths</h2>
<p>To identify examples of both misinformation and disinformation, a team of research assistants analysed and fact-checked posts, classifying them as either “not including fake news” or “including fake news”. </p>
<p>Fake news posts were defined as completely or mostly made up, and intentionally and verifiably false. </p>
<p>An example of this type of disinformation would be the “<a href="https://thespinoff.co.nz/society/08-06-2023/no-whangarei-girls-high-school-students-are-not-identifying-as-cats">litter box hoax</a>”, alleging schools provided litter boxes for students who identified as cats or furries. </p>
<p>Originating from overseas sources, this story has been debunked multiple times. In New Zealand, this <a href="https://www.stuff.co.nz/national/133004212/party-leader-sue-grey-raises-litterboxes-in-schools-myth-at-candidate-meeting">hoax was spread by Sue Grey</a>, leader of the NZ Outdoors & Freedoms Party. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/social-media-can-be-information-poison-when-we-need-facts-most-100495">Social media can be information poison when we need facts most</a>
</strong>
</em>
</p>
<hr>
<p>In cases of doubt, or when the research assistants couldn’t prove the information was false, they coded the posts as “not including fake news”. The term “fake news” was therefore reserved for very clear cases of false information. </p>
<p>If a post did not include fake news, the team checked for potential half-truths. Half-truths were defined as posts that were not entirely made up, but contained some incorrect information. </p>
<p>The National Party, for example, put up a post suggesting the Ministry of Pacific Peoples had <a href="https://www.rnz.co.nz/news/election-2023/498030/national-targets-ministry-for-pacific-peoples-50k-post-budget-breakfasts-spend">hosted breakfasts to promote Labour MPs</a>, at the cost of more than $50,000. While the ministry did host breakfasts to explain the most recent budget, and the cost was accurate, there was no indication the purpose of this event was to promote Labour MPs.</p>
<h2>How 2023’s election compared to 2020</h2>
<p>At the beginning of the campaign, the proportion of what we identified as fake news being published on Facebook by political parties and their leaders was 2.5% – similar to what we saw in 2020. </p>
<p>The proportion of fake news posts then dropped below 2% for a long period and even fell as low as 0.7% at one point in the campaign, before rising again in the final stretch. The share of fake news peaked at 3.8% at the start of the last week of the campaign. </p>
<p>Over the five weeks of the campaign, we found that an average of 2.6% of Facebook posts by political parties and their leaders in any given week qualified as fake news. In 2020, the weekly average was 2.5%, meaning the increase in fake news was minimal.</p>
<p>The sources of much of the outright fake news were parties on the fringes. According to our research, none of the major political parties were posting outright lies. </p>
<p>But there were posts from all political parties assessed as half-truths.</p>
<p>Half-truths stayed well below 10% during the five weeks we looked at, peaking at 6.5% in the final week. On average, the weekly share of half-truths was 4.8% in 2023, while in 2020 it was 2.5%. </p>
<p>So while the number of “big lies” – also known as “fake news” – did not increase in 2023 compared to 2020, the number of “small lies” in political campaigns is growing. </p>
<p>All of the political parties took more liberties with the truth in 2023 than they did in 2020.</p>
<h2>Playing on emotions and oversimplifying</h2>
<p>More than a third of all misleading posts in 2023 were emotional (37%), targeting voters’ emotions through words or pictures. Some 26% of the social media posts jumped to conclusions, while 23% oversimplified the topics being discussed. And 21% of the posts cherry-picked information, meaning the information presented was incomplete.</p>
<p>Some of the social media posts we identified as fake news or half-truths used pseudo-experts: people with some academic background, but who are not qualified to be expert witnesses on the topic under discussion (18%). </p>
<p>We also saw anecdotes of unclear origin, instead of scientific facts (15%), while 7% had unrealistic expectations of science, such as expecting science to offer 100% certainty.</p>
<p>Some of the posts included the claim that the posts’ authors had a silent majority behind them (5%). Another 5% of the social media posts identified as disinformation included personal attacks, rather than debating someone’s arguments.</p>
<h2>Staying vigilant</h2>
<p>The levels of misinformation and disinformation on social media during the past two elections in New Zealand have been fairly low – and certainly no cause for panic. But that doesn’t mean it will always stay that way. </p>
<p>On the one hand, we need to keep an eye on the social media campaigns in future elections and, in particular, monitor the development and use of misinformation and disinformation by political parties on the fringe. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/beyond-fake-news-social-media-and-market-driven-political-campaigns-78346">Beyond fake news: social media and market-driven political campaigns</a>
</strong>
</em>
</p>
<hr>
<p>We also need to keep an eye on the major parties, as small lies might pave the way for more fake news or conspiracy theories in the future.</p>
<p>On the other hand, we need to resist overstating the use of misinformation and disinformation in New Zealand. Currently, there doesn’t appear to be the appetite to spread disinformation on social media by our major political parties or leaders. </p>
<p>This is a good thing for the health of our democracy, and we need to ensure it stays that way.</p>
<p class="fine-print"><em><span>Mona Krewel is affiliated with the advisory panel to the Department of the Prime Minister and Cabinet to strengthen the country’s capacity to identify and address misinformation and disinformation. This article reflects her personal opinions as a researcher.</span></em></p>
<p><em>We found the number of “big lies” – also known as fake news – didn’t increase in 2023 compared to 2020. But we did spot more “small lies” this time. Here’s what to look out for in coming elections.</em></p>
<p class="fine-print">Mona Krewel, Senior lecturer in Comparative Politics, Te Herenga Waka — Victoria University of Wellington. Licensed as Creative Commons – attribution, no derivatives.</p>
<h1>Facebook’s new ad-free tier could end annoying consent pop-ups, but it could also put a price on your privacy</h1>
<figure><img src="https://images.theconversation.com/files/559091/original/file-20231113-29-2ak1l.jpg?ixlib=rb-1.1.0&rect=0%2C0%2C3840%2C2160&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">
</span> <span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/communication-network-concept-young-asian-woman-1788704870">metamorworks/Shutterstock</a></span></figcaption></figure><p>We have reached a key juncture in the debate about online privacy, following Meta’s recent decision <a href="https://about.fb.com/news/2023/10/facebook-and-instagram-to-offer-subscription-for-no-ads-in-europe/">to offer some users</a> paid-for ad-free access to Facebook and Instagram. The time has come to decide how much we value keeping our data, tastes and whereabouts to ourselves.</p>
<p>The main and often only way that free-to-use online services such as Google, Facebook, Instagram or TikTok make money is <a href="https://theconversation.com/why-metas-share-price-collapse-is-good-news-for-the-future-of-social-media-193482">by selling data</a> about user preferences to advertisers. And since the General Data Protection Regulation (<a href="https://edps.europa.eu/data-protection/data-protection/legislation/history-general-data-protection-regulation_en#:%7E:text=In%202016%2C%20the%20EU%20adopted,as%20law%20across%20the%20EU.">GDPR</a>) came into effect in the EU and the UK in 2018, firms have been allowed to track and sell this data to third parties, so long as they seek your explicit consent beforehand.</p>
<p>Most companies complied with GDPR by introducing those annoying pop-up windows that we’ve all come to know and hate when we open up a website. Pop-ups are not explicitly required by <a href="https://gdpr-info.eu/">the regulations</a>, but they make clicking “OK” for tracking the easy choice, so they have been widely adopted by companies.</p>
<p>Not by all companies, however. Meta, Facebook’s owner, took a different approach. It chose to ask for consent just once amid the lengthy terms and conditions <a href="https://www.biggestlieonline.com/">you are asked to read</a> when signing up to its platform. </p>
<p>EU courts <a href="https://edpb.europa.eu/news/news/2023/edpb-urgent-binding-decision-processing-personal-data-behavioural-advertising-meta_en">have now ruled</a> this tactic is illegal, and the UK plans to do the same in its <a href="https://www.euractiv.com/section/data-privacy/news/uk-data-reform-bill-revived-after-lengthy-legislative-delay/">long-delayed</a> reform of its <a href="https://publications.parliament.uk/pa/bills/cbill/58-04/0001/230001.pdf">data protection laws</a>.</p>
<h2>Paying for privacy</h2>
<p>If you have a Facebook account and live in the EU, Switzerland or an EEA country, <a href="https://mashable.com/article/facebook-ad-free-review#:%7E:text=I%20was%20able%20to%20purchase%20the%20subscription">you will now be offered the choice</a> between the standard targeted-ad social media experience and paying €10 (£8.72) per month via browser, or €12 per month in-app, for ad-free access. </p>
<p><a href="https://about.fb.com/news/2023/10/facebook-and-instagram-to-offer-subscription-for-no-ads-in-europe/">Meta said</a> the decision, first announced in August, was made “to address a number of evolving and emerging regulatory requirements in the region”. As well as the EU’s <a href="https://digital-markets-act.ec.europa.eu/index_en">Digital Markets Act</a>, this includes <a href="https://curia.europa.eu/juris/document/document.jsf?docid=275125&mode=req&pageIndex=1&dir=&occ=first&part=1&text=&doclang=EN&cid=735692">a recent ruling</a> by the Court of Justice of the European Union (CJEU) that allows firms to track you if they offer a non-tracking alternative “for a reasonable fee”. </p>
<p><a href="https://about.fb.com/news/2023/10/facebook-and-instagram-to-offer-subscription-for-no-ads-in-europe/">Meta says</a> the CJEU “expressly recognised” subscription models as “a valid form of consent for an ads funded service”.</p>
<p>But is €10 per month a reasonable price to pay to be able to keep your personal data to yourself? </p>
<p>At first sight, the number does not seem to be too far off. Meta’s <a href="https://s21.q4cdn.com/399680738/files/doc_earnings/2023/q3/presentation/Earnings-Presentation-Q3-2023.pdf">average revenue per user</a> in Europe was US$6.34 (£5.17) per month in the third quarter of 2023. </p>
<p>In the US, it was US$18.70. Companies such <a href="https://www.washingtonpost.com/technology/2023/02/06/consumers-paid-money-data/">as Tapestri</a> – which pay you for your private information and sell it to advertisers – claim to pay people between US$8 and US$25 a month for information about their location and tastes.</p>
<p>But most people have no idea what the market value of their privacy is because <a href="https://www.aeaweb.org/articles?id=10.1257/mic.20200200">it depends on so many different things</a>. If you spend a lot of money online, you are worth more to advertisers. </p>
<p>If you share a lot of characteristics – such as gender, ethnicity, musical tastes, religion, or location – with someone who is already tracked, however, your data is worth less. <a href="https://www.aeaweb.org/articles?id=10.1257/jel.54.2.442">The asymmetry</a> between what you know about the commercial value of your data and what the companies selling it know makes it difficult to judge whether you are getting a good deal.</p>
<p>There are other reasons why we should not expect many users to pay for an ad-free social media experience. <a href="https://www.nber.org/papers/w23488">Research shows</a> that, whenever asked, people state that they value their private data a lot. In practice, when offered the choice, many of us give it away for a very low price in exchange for vague promises about privacy protection.</p>
<p>Whether this implies that we don’t value privacy, or that we don’t understand that we are giving it away for very cheap, is not clear.</p>
<figure class="align-center ">
<img alt="A thumbs up sign with the words facebook and 4-5 Grand Canal Quay, in front of a glass office building." src="https://images.theconversation.com/files/559094/original/file-20231113-23-3i07wn.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/559094/original/file-20231113-23-3i07wn.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/559094/original/file-20231113-23-3i07wn.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/559094/original/file-20231113-23-3i07wn.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/559094/original/file-20231113-23-3i07wn.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/559094/original/file-20231113-23-3i07wn.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/559094/original/file-20231113-23-3i07wn.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">Meta’s head office for Europe and the Middle East, in Dublin, Ireland.</span>
<span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/grand-canal-quay-dublin-2-ireland-2306641771">KarlM Photography/Shutterstock</a></span>
</figcaption>
</figure>
<h2>The end of pop-ups?</h2>
<p>So what’s the solution? An economist <a href="https://www.aeaweb.org/articles?id=10.1257/aer.20191330">might suggest</a> giving users ownership of their own data, letting them see how much the market offers for it, and then observing how they behave. We could then actively choose to pay in part or in full with our data to access websites showing targeted ads. This process would also cut down on the resources used by so many different websites to collect the same information about consumers.</p>
<p>A startup called Caden already lets <a href="https://techcrunch.com/2023/08/16/caden-lands-15m-to-let-users-monetize-their-personal-data/?guccounter=1&guce_referrer=aHR0cHM6Ly93d3cuZ29vZ2xlLmNvbS8&guce_referrer_sig=AQAAACP_RNDcZslhuLRNKovzY_24t17_-saS-_bp-zbzskHWo1kWWtYVM5xXgNdhDeseNcJWKIMQE-5XBGrlDUUE61k55b473UVT5kmiMdGpNMCyBf7QbZoYC1ciEzUawVmBwoRzXgzJ6XRNpknCuK1dODxUYT_xNmxz-wz_NbbmfdNt">you pick the parts</a> of your private data you are happy to sell, and to whom. Letting users control their own data – and the monetisation of it – may create new issues, however. AI-powered users could be created to maximise <a href="https://www.mercurynews.com/2021/03/23/facebook-accused-of-taking-money-for-ads-to-fake-people-after-disabling-1-3-billion-fake-accounts/">fake ad revenue</a>, for example. </p>
<p>It could also lead to more inequality. The commercial value of someone’s data rises in line with the amount of money they spend online. Paying for access with our data could put the less wealthy at a disadvantage.</p>
<p>One thing seems certain, however: this could mark the end of pop-up consent forms. As well as these recent EU changes, the UK is also planning <a href="https://researchbriefings.files.parliament.uk/documents/CBP-9746/CBP-9746.pdf">to allow consumers to opt-out</a> from pop-ups. This means consenting to give away your data would become the default to an even greater degree than it is right now.</p>
<p>As these new regulations are rolled out, we will learn whether it will lead to a normalisation of tracking without consent, indicating that we do not value privacy as much as we thought. Alternatively, we could move towards a system where we manage and trade our privacy like any other commodity. </p>
<p>And if some countries ban the practice of individual tracking altogether, “free” online services like search engines and social media platforms might eventually only be available to paying customers.</p>
<p class="fine-print"><em><span>Renaud Foucart does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>
<p><em>Facebook and Instagram users in some parts of the world can now pay for an ad-free experience – but at what price?</em></p>
<p class="fine-print">Renaud Foucart, Senior Lecturer in Economics, Lancaster University Management School, Lancaster University. Licensed as Creative Commons – attribution, no derivatives.</p>
<h1>41 US states are suing Meta for getting teens hooked on social media. Here’s what to expect next</h1>
<figure><img src="https://images.theconversation.com/files/558537/original/file-20231109-28-kyr3up.jpg?ixlib=rb-1.1.0&rect=0%2C0%2C5114%2C3360&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">
</span> <span class="attribution"><span class="source">Shutterstock</span></span></figcaption></figure><p>In the United States, 41 states <a href="https://www.reuters.com/legal/dozens-us-states-sue-meta-platforms-harming-mental-health-young-people-2023-10-24/">have filed lawsuits against Meta</a> for allegedly driving social media addiction in its young users (under the age of 18), amid growing concerns about the negative effects <a href="https://www.oecd-ilibrary.org/docserver/4d013cc5-en.pdf?expires=1699164373&id=id&accname=guest&checksum=4B45AC766D83C418F74D8DA27AA6F400">of platforms</a>.</p>
<p>The lawsuits allege Meta has been harvesting young users’ data, deploying features to promote compulsive use of both Facebook and Instagram, and misleading the public about the negative effects of these features. </p>
<p>What might we expect to happen next? And are there potential consequences for Australia?</p>
<h2>Leveraging whistleblower revelations</h2>
<p>The most significant suit, filed in a federal court in California, involves 33 states. The claim is <a href="https://ag.ny.gov/sites/default/files/court-filings/meta-multistate-complaint.pdf">based on</a> breaches of state consumer protection statutes and common law principles regarding deceptive, unfair or unconscionable conduct, and federal privacy <a href="https://uscode.house.gov/view.xhtml?req=granuleid%3AUSC-prelim-title15-section6502&edition=prelim">statutory provisions</a> and <a href="https://www.ftc.gov/legal-library/browse/rules/childrens-online-privacy-protection-rule-coppa">regulations</a> (collectively “COPPA”) which specifically protect children.</p>
<p>This co-ordinated action is reminiscent of other class actions in the US and United Kingdom by <a href="https://www.rohingyafacebookclaim.com/">Rohingya refugees against Facebook</a> for its role in enabling hate speech against their community in Myanmar. </p>
<p>These cases rely in part on revelations made by former Meta employee Frances Haugen in 2021 about the role Facebook’s algorithms play in <a href="https://www.wsj.com/podcasts/the-journal/the-facebook-files-part-4-the-outrage-algorithm/e619fbb7-43b0-485b-877f-18a98ffa773f">facilitating harms on the platform</a>. Haugen’s <a href="https://www.npr.org/2021/10/05/1043377310/facebook-whistleblower-frances-haugen-congress">testimony</a> suggests algorithms deployed across Facebook and Instagram were designed to increase content sharing, and therefore profits, using data harvested from users over many years. </p>
<p>These algorithms play a crucial role in determining what kind of content viewers are exposed to, how long they engage with it, and the likelihood of them sharing it. </p>
<p><a href="https://www.wsj.com/articles/facebook-algorithm-change-zuckerberg-11631654215">According to Haugen</a>, Meta made changes to its algorithms in 2018 to prioritise meaningful social interactions. These changes, she said, impacted how content was viewed on the news feed, leading to increased sharing of negative content such as hate speech. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/meta-just-copped-a-a-1-9bn-fine-for-keeping-eu-data-in-the-us-but-why-should-users-care-where-data-are-stored-206186">Meta just copped a A$1.9bn fine for keeping EU data in the US. But why should users care where data are stored?</a>
</strong>
</em>
</p>
<hr>
<h2>Concerns over algorithms and content</h2>
<p>The California case is notable for the specific allegations around strategies used to keep young people interacting with Facebook and Instagram. For instance, the plaintiffs have elaborated on the impact of the “infinite scroll” feature introduced in 2016. </p>
<p>This feature prevents users from viewing a single post in isolation. Instead it provides a continuous stream of content without a natural endpoint. Haugen described this as being similar to giving users small dopamine hits. It leaves them wanting more and less likely to exercise self-control.</p>
<p>The plaintiffs in the California case claim this feature encourages users, and especially young users, to compulsively use the platforms – negatively affecting their wellbeing and mental health. </p>
<p>They say the recommendation algorithms used by Meta periodically present users with harmful materials. These include “content related to eating disorders, violent content, content encouraging negative self-perception and body image issues, [and] bullying content”.</p>
<p>They also allege features such as “variable reward schedules” are implemented to encourage compulsive use by young people. This causes further physical and mental harm (such as from a lack of sleep).</p>
<h2>Consequences for Australia</h2>
<p>In the US, <a href="https://www.law.cornell.edu/uscode/text/47/230">federal laws</a> substantially restrict liability of online intermediaries such as Meta for content shared by users.</p>
<p>In contrast, Australia’s <a href="https://www.legislation.gov.au/Details/C2022C00052">Online Safety Act</a> empowers the <a href="https://www.esafety.gov.au/">eSafety Commissioner</a> to compel social media platforms and other online intermediaries to remove problematic material from circulation. This includes material relating to <a href="https://www.esafety.gov.au/key-topics/cyberbullying">cyberbullying</a> of children, <a href="https://www.esafety.gov.au/key-topics/adult-cyber-abuse">cyberabuse</a> of adults, <a href="https://www.esafety.gov.au/key-topics/image-based-abuse">image-based abuse</a> and <a href="https://www.esafety.gov.au/sites/default/files/2020-03/eSafety-AVM-factsheet.pdf">abhorrent violent material</a>.</p>
<p>The Federal Court can impose significant penalties for violations of the <a href="https://www.legislation.gov.au/Details/C2021A00076">Online Safety Act</a>. But this doesn’t cover all the harmful content on social media, such as some linked to eating disorders and negative self-image.</p>
<p>Addressing young users’ compulsive social media use is a different challenge altogether. Some measures against this are possible. For example, if the US deception allegations are proven, any evidence that this extends to Australian users may ground an action against Meta for misleading or deceptive conduct (or false or misleading representations) under the <a href="https://austlii.edu.au/cgi-bin/viewdoc/au/legis/cth/consol_act/caca2010265/sch2.html">Australian Consumer Law</a>. </p>
<p>Last year, <a href="https://www.accc.gov.au/media-release/google-llc-to-pay-60-million-for-misleading-representations">A$60 million in civil penalties</a> was awarded against Google LLC for <a href="https://theconversation.com/accc-world-first-australias-federal-court-found-google-misled-users-about-personal-location-data-159138">false or misleading representations</a> in 2017-2018. A smaller A$20 million penalty was awarded <a href="https://www.accc.gov.au/media-release/20m-penalty-for-meta-companies-for-conduct-liable-to-mislead-consumers-about-use-of-their-data">against two of Meta’s subsidiaries</a> in 2023.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/accc-world-first-australias-federal-court-found-google-misled-users-about-personal-location-data-159138">ACCC 'world first': Australia's Federal Court found Google misled users about personal location data</a>
</strong>
</em>
</p>
<hr>
<p>Maximum penalties under the Australian Consumer Law have increased since the Google case, likely in response to the deep pockets of the platforms. Options for courts awarding <a href="https://www.legislation.gov.au/Details/C2022A00054">penalties include</a> 30% of a platform’s turnover, or three times the value of the benefit to the offending entity.</p>
<p>However, platforms are in a stronger position where conduct isn’t misleading, false or deceptive, but is merely <a href="https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3284143">“manipulative” or “unfair”</a>. For instance, the infinite scroll feature is unlikely to be considered misleading or deceptive under Australian law. </p>
<p>Australia also has no legislative equivalent to COPPA, the US Children’s Online Privacy Protection Act. Australia’s law of unconscionable conduct requires such a high level of harsh or oppressive conduct that it’s extremely difficult to prove. </p>
<p>One recent <a href="http://classic.austlii.edu.au/au/journals/PrecedentAULA/2018/36.html#:%7E:text=Ms%20Guy%20brought%20actions%20under,to%20unconscionable%20conduct%2C%20in%20that">unconscionable conduct</a> case brought by a problem gambler based on the addictive design of electronic poker machines failed in the Federal Court. </p>
<p>Shortcomings in the current law have, in part, led to calls for a new prohibition on <a href="https://treasury.gov.au/consultation/c2023-430458">unfair trading practices</a>. Pressure is also mounting to <a href="https://www.ag.gov.au/rights-and-protections/publications/privacy-act-review-report">reform</a> the ineffective and under-enforced <a href="https://www.austlii.edu.au/cgi-bin/viewdb/au/legis/cth/num_act/pa1988108/">Privacy Act</a>. </p>
<h2>We need collaboration and innovation</h2>
<p>Australian law still has many gaps when it comes to protecting consumers, especially children, against the harms posed by social media platforms. But domestic law can only go so far in protecting people using a medium that operates (mostly) seamlessly across borders. </p>
<p>As such, international law scholars have suggested more creative approaches in the context of online hate speech. <a href="https://www.tandfonline.com/doi/abs/10.1080/14754835.2021.1947208">One suggestion</a> has been to make platforms accountable for their actions under the laws of the country where they are headquartered, for enabling crimes that have taken place in other jurisdictions.</p>
<p>In 2021, the world welcomed a US district court’s <a href="https://www.washingtonpost.com/context/order-in-re-the-republic-of-the-gambia-v-facebook-inc/6fd698bc-034f-43e2-a544-5592e174bc8a/?itid=lk_inline_manual_1">order</a> for Facebook to disclose various materials to The Gambia relating to hate speech against the Rohingya community in Myanmar. </p>
<p>In doing so, the court strengthened The Gambia’s claims in <a href="https://www.icj-cij.org/case/178">a pending action</a> before the International Court of Justice. This action claims the Myanmar government had, through its genocidal actions against the Rohingya people, breached its obligations under <a href="https://www.un.org/en/genocideprevention/documents/atrocity-crimes/Doc.1_Convention%20on%20the%20Prevention%20and%20Punishment%20of%20the%20Crime%20of%20Genocide.pdf">the Genocide Convention</a> – and that hate speech amplified on Facebook enabled the violence.</p>
<p>As society grapples with the implications of mass data collection and profit-maximising algorithms, protecting individuals will require international co-operation and a re-evaluation of legal frameworks.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/want-to-delete-your-social-media-but-cant-bring-yourself-to-do-it-here-are-some-ways-to-take-that-step-176149">Want to delete your social media, but can't bring yourself to do it? Here are some ways to take that step</a>
</strong>
</em>
</p>
<hr>
<img src="https://counter.theconversation.com/content/216914/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Kayleen Manwaring receives funding from the UNSW Allens Hub for Technology, Law and Innovation and the Cyber Security Cooperative Research Centre. She is a member of the Advisory Board for the Consumer Policy Research Centre (Vic) and is Deputy Chair and NSW Coordinator for an Australian chapter of the IEEE Society on Social Implications of Technology.</span></em></p><p class="fine-print"><em><span>Siddharth Narrain does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>The plaintiffs claim features such as ‘infinite scroll’ leave users less likely to be able to exercise self-control.Kayleen Manwaring, Senior Research Fellow, Allens Hub for Technology, Law & Innovation, and Senior Lecturer, School of Private & Commercial Law, UNSW SydneySiddharth Narrain, Lecturer in Law, University of AdelaideLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1931812023-11-08T13:58:08Z2023-11-08T13:58:08ZInternet of Things: tech firms have become our digital landlords – but people are starting to fight back<figure><img src="https://images.theconversation.com/files/556337/original/file-20231027-29-uknyr4.jpg?ixlib=rb-1.1.0&rect=17%2C11%2C3858%2C2481&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">
</span> <span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/smart-home-concept-remote-control-management-2239959019vvvvv">Hodoimg</a></span></figcaption></figure><p>From smart toasters to fitness collars for dogs, we live in a world where everything around us is gradually being connected to the internet and fitted with sensors so that we can interact with them online. </p>
<p>Many people worry about the privacy risks of using these devices because they may allow hackers to listen to our conversations at home. But the contracts for using them are so long we don’t understand which other rights we might be signing away. </p>
<p>During research for <a href="https://www.taylorfrancis.com/books/oa-mono/10.4324/9780429468377/internet-things-law-guido-noto-la-diega?_gl=1*1ybqum4*_ga*MTUyNDIzODc3OC4xNjk1MTM2Nzkx*_ga_0HYE8YG0M6*MTY5ODY4NzM0MS44LjAuMTY5ODY4NzM0MS4wLjAuMA..">my book</a>, I found that using Alexa’s voice commands brings into play 246 contracts that we have to accept in order to use the service. These contracts transfer our rights and data to countless, often unidentified, parties. For example, they frequently refer to “affiliates”. </p>
<p>Despite months of research I wasn’t able to clarify who these affiliates are or even whether these affiliates are subsidiaries or advertisers. Of the 246 contracts, I focused on those that are most likely to be relevant to smart speaker Echo’s users. I found they are on average as long as Harry Potter and the Prisoner of Azkaban (317 pages). Not exactly a light read. </p>
<p>Data analysis company <a href="https://9to5mac.com/2021/12/17/read-apples-terms-and-conditions/">Statista found</a> it would take an hour and a half to read Apple’s terms and conditions for creating an Apple ID. And that’s assuming you don’t need to pause to check the text’s meaning. </p>
<p>Using the Literatin plugin, a Google Chrome extension that assesses the readability of text, I found these contracts are as readable as Machiavelli’s 16th-century political treatise, The Prince. </p>
<p><div data-react-class="Tweet" data-react-props="{&quot;tweetId&quot;:&quot;1471834765959602181&quot;}"></div></p>
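<p>Literatin’s exact method isn’t documented here. As a rough, hypothetical illustration of how readability scoring works in general, the following sketch uses the classic Flesch reading-ease formula with a deliberately naive syllable counter (real tools use pronunciation dictionaries and richer models):</p>

```python
import re

def count_syllables(word):
    # Crude heuristic: count groups of consecutive vowels.
    # Real readability tools use pronunciation dictionaries instead.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_reading_ease(text):
    # Flesch reading ease: higher scores mean easier text
    # (90+ is very easy to read; below 30 is very difficult).
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (206.835
            - 1.015 * (len(words) / len(sentences))
            - 84.6 * (syllables / len(words)))

# Invented example texts, for illustration only.
plain = "We sell your data. We share it with partners."
legalese = ("Notwithstanding any provision hereinbefore contained, the "
            "aforementioned proprietor reserves unilateral discretionary "
            "authority regarding dissemination of subscriber information.")
```

<p>On this invented pair, the short plain sentences score far higher (easier) than the legalese, which is exactly the kind of gap such plugins surface in terms and conditions.</p>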
<h2>Does this matter?</h2>
<p>Until recently, we might have been forgiven for thinking that the terms and conditions (T&Cs) we accept when browsing the internet were just a box-ticking exercise and nothing to worry about. </p>
<p>But between January and July 2023, Europe’s lead data protection enforcement authorities – the <a href="https://noyb.eu/en/breaking-meta-prohibited-use-personal-data-advertising">European Data Protection Board</a> and the <a href="https://eur-lex.europa.eu/legal-content/en/TXT/?uri=CELEX:62021CJ0252">EU Court of Justice</a> – shed light on Meta’s (formerly known as Facebook, Inc) practice of relying on these contracts to target us with ads. And, in an unprecedented move, they banned this practice.</p>
<p>T&Cs are not just about our privacy – and our privacy is not just about our data. By surrounding ourselves with devices with sensors (also known as the “<a href="https://www.techtarget.com/iotagenda/definition/Internet-of-Things-IoT">Internet of Things</a>”), we’ve effectively invited <a href="https://www.researchgate.net/publication/332821471_Review_of_Joshua_AT_Fairfield_Owned_Property_Privacy_and_the_New_Digital_Serfdom">digital landlords</a> into our homes.</p>
<p><a href="https://www.primevideo.com/help?nodeId=202095490&view-type=content-only">One example</a> I refer to in my book can be found in an Amazon contract that legally binds anyone watching videos on their Echo devices: “Purchased digital content … may become unavailable … and Amazon will not be liable to you”. </p>
<p>In other words, if you think that you own your digital content just because you purchased it, think again: can we call it property if it can be taken away arbitrarily? </p>
<p>Companies do act on these types of hidden clauses. In 2009 <a href="https://www.nytimes.com/2009/07/18/technology/companies/18amazon.html">Amazon (rather fittingly) took back the ebooks</a> of George Orwell’s Animal Farm and 1984 from its Kindle users due to alleged copyright issues.</p>
<p>Another example is how tractor manufacturer John Deere relied on its end-user licence agreement (Eula) to <a href="https://www.eff.org/deeplinks/2016/12/john-deere-really-doesnt-want-you-own-tractor">stop farmers repairing</a> their smart tractors. John Deere’s Eula forbade customers even looking at the software it uses to run its tractors. </p>
<p>Betting giant Spreadex took a customer, Colin Cochrane, to court in 2012 to force him to pay almost £50,000 of gambling losses racked up by his stepson. Cochrane’s girlfriend’s son had been “playing” with his computer without his permission while he was away from the house. </p>
<p>Spreadex pointed the UK account owner to a clause in its customer agreement that equated the use of account passwords with a confirmation of who was behind the screen using the device. </p>
<p>Fortunately for Cochrane, the judge held that the clause was not enforceable because it would have been “<a href="https://www.bailii.org/cgi-bin/format.cgi?doc=/ew/cases/EWHC/Comm/2012/1290.html">quite irrational</a>” for Spreadex to assume the customer read the agreement and understood its implications.</p>
<h2>Regulation won’t work</h2>
<p>Examples of law reform include the <a href="https://www.gov.uk/government/news/uk-children-and-adults-to-be-safer-online-as-world-leading-bill-becomes-law#:%7E:text=The%20Act%20places%20legal%20responsibility,and%20eating%20disorders%2C%20and%20pornography.">online safety bill in the UK</a> and the <a href="https://ec.europa.eu/commission/presscorner/detail/en/ip_23_3491">Data Act in the EU</a>. They are both in progress, so we don’t know yet when they will be adopted. </p>
<p>Law reform is a <a href="https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2682489">painfully slow process</a>. Big tech and other large stakeholders have a huge influence because they have <a href="https://www.cnbc.com/2020/07/31/big-tech-spends-20-million-on-lobbying-including-on-coronavirus-bills.html">money and influence</a> to fight laws they don’t like. </p>
<p>Sometimes bills end up so diluted they are of little use. This was the case with the General Data Protection Regulation (GDPR) which came into effect at the end of a <a href="https://iapp.org/resources/article/a-brief-history-of-the-general-data-protection-regulation/">nine-year process</a>. It was born out of date. Several studies have underlined GDPR’s <a href="https://www.elgaronline.com/view/edcoll/9781800371675/9781800371675.00031.xml">inadequacy to deal with new technologies</a> such as ChatGPT. </p>
<h2>What does work</h2>
<p>The solution is to collectively organise. Let’s circle back to John Deere and the way the company tried to deprive tractor owners of their right to fix their machines. There is much to learn from those farmers who joined together with hackers to resist “smart power abuses”. </p>
<p>After <a href="https://www.vice.com/en/article/kbgzgz/farmers-right-to-repair">opposing the right to repair campaign</a> for years, at the beginning of 2023 John Deere gave in and authorised farmers and ranchers to <a href="https://www.theverge.com/2023/1/9/23546323/john-deere-right-to-repair-tractors-agreement">fix their own tractors</a>. But only after attendees at a hackers’ convention figured out how to “jailbreak” the code that was locking farmers and engineers out. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/liberate-the-tractors-the-right-to-repair-movement-thats-regaining-control-of-our-devices-188954">'Liberate the tractors': the right to repair movement that's regaining control of our devices</a>
</strong>
</em>
</p>
<hr>
<p>All around the world, groups of computer scientists, digital rights activists and citizens are <a href="https://www.ft.com/content/50e87334-597c-4ef5-adc9-2ea4ee823161">creating cooperatives</a> and <a href="https://www.meetup.com/topics/internet-of-things/">citizen-led movements</a>. They are motivated by partly different <a href="https://www.bbc.co.uk/news/technology-57744091">yet overlapping goals</a>, such as making the IoT more open and diverse. </p>
<p>Big tech workers are acting collectively to prevent unethical uses of their employers’ technology. For example, <a href="https://www.cnbc.com/2020/06/22/google-employees-petition-company-to-cancel-police-contracts.html#:%7E:text=The%20announcement%20came%20after%20thousands,are%20using%20it%20for%20harm.">in 2020 Google employees</a> fought to stop the company’s decision to provide its AI to law enforcement agencies despite the <a href="https://sitn.hms.harvard.edu/flash/2020/racial-discrimination-in-face-recognition-technology/">failures of facial recognition</a>, which has often <a href="https://www.scientificamerican.com/article/police-facial-recognition-technology-cant-tell-black-people-apart/">perpetuated racism</a> and other forms of discrimination. </p>
<p>We can win the fight against smart power through alliances between these collectives.</p><img src="https://counter.theconversation.com/content/193181/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Guido Noto La Diega receives funding from the Arts and Humanities Research Council and the German Research Foundation - project ref no AH/W010518/1 "From Smart Technologies to Smart Consumer Laws: Comparative Perspectives from Germany and the United Kingdom". They serve on the Advisory Council of the Open Rights Group Scotland.</span></em></p>No one has time to read the terms and conditions we are often asked to consent to. But we’re sometimes agreeing to things we would rather not.Guido Noto La Diega, Chair in Intellectual Property and Technology Law, University of StirlingLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/2166472023-11-02T14:23:30Z2023-11-02T14:23:30ZSocial media content in times of war: an expert guide on how to keep violence off your feeds<figure><img src="https://images.theconversation.com/files/556623/original/file-20231030-25-2np8f3.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">There are some practical ways to filter the amount of violent and graphic content you see on social media.</span> <span class="attribution"><span class="source">bubaone</span></span></figcaption></figure><p>Social media platforms are a great source of information and entertainment. They also help us to maintain contact with friends and family. But social media can also – <a href="https://theconversation.com/mounting-research-documents-the-harmful-effects-of-social-media-use-on-mental-health-including-body-image-and-development-of-eating-disorders-206170">and has</a>, <a href="https://doi.org/10.1093/joc/jqab034">often</a> – become a toxic environment for spreading disinformation, hatred and conflict. </p>
<p>Most people can’t or don’t want to opt out of social media. Efforts by courts and <a href="https://foreignpolicy.com/2022/04/25/the-real-threat-to-social-media-is-europe/">state bodies</a> to regulate or control it are slowly catching up, but so far have been unsuccessful. And social media companies have a record of <a href="https://www.technologyreview.com/2021/03/11/1020600/facebook-responsible-ai-misinformation/">prioritising engagement</a> over social benefit.</p>
<p>Users are left with a dilemma: how to benefit from social media without exposing themselves to distressing, damaging or illegal content. This becomes even more of an issue in times of heightened global tension and conflict. Both the conflict in Ukraine and now the Gaza War have increased the risk of seeing <a href="https://www.npr.org/2023/10/24/1208165068/graphic-videos-and-images-of-the-israel-hamas-war-are-flooding-social-media">horrifying and damaging images</a> on one’s feed. </p>
<p>This article, based on <a href="https://orcid.org/0000-0001-5171-663X">my research</a> on news on social media, is a guide to curating and editing your social media feeds to ensure that the content you see is suited to your needs and is not offensive or disturbing. </p>
<p>It is organised into the broadest social media categories. I’m not covering newer services such as <a href="https://www.threads.net/login">Threads</a>, <a href="https://mastodon.social/explore">Mastodon</a>, <a href="https://post.news/feed">Post</a> and <a href="https://bsky.app/">Bluesky</a>, although the principles are generally applicable. I have focused on using these apps on a mobile phone, because that’s what <a href="https://www.pewresearch.org/global/2022/12/06/internet-smartphone-and-social-media-use-in-advanced-economies-2022/">the majority of users</a> do, rather than using them on a web browser. I am concentrating mostly on video content.</p>
<p>Social media can be a powerful tool for information and learning, but it is a flawed one. Whatever approach you take to managing your feeds, remain cautious and sceptical. Pay attention to updates to policies and user agreements and consider carefully who you trust and follow. </p>
<h2>Your choice or theirs?</h2>
<p>Many social networks offer an algorithmically selected feed as your first point of contact. The specifics of the algorithms are not publicly known and the companies refine them constantly. The feed is largely based on your location and the topics and people you have expressed an interest in previously (whether following, or simply having watched or interacted with the content). It may also include other information such as your age and gender, which you may have previously given the service. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/algorithms-are-moulding-and-shaping-our-politics-heres-how-to-avoid-being-gamed-201402">Algorithms are moulding and shaping our politics. Here's how to avoid being gamed</a>
</strong>
</em>
</p>
<hr>
<p>Organisations and individuals invest money and time in ensuring that their content will be seen. Advertisers will also pay to have their content shown to customers who meet their criteria. It is also important to remember that paid content is not just goods and services for sale: it may push a political or social agenda – often a hidden one. This is the basis of <a href="https://link.springer.com/article/10.1007/s13278-023-01028-5">fake news and deliberate misinformation</a>.</p>
<p>Here are a few ways to manage your social media feeds.</p>
<h2>Be careful who you follow</h2>
<p>On all networks except TikTok, the key is carefully selecting the people you follow.</p>
<p>On Twitter (X) the best option is to move away from the “for you” page (which is the default view) and focus on the “following” page. You can’t remove the “for you” page entirely. The “following” feed includes everyone you follow, their tweets and their retweets. </p>
<p>If you are seeing content you don’t want to, you can unfollow, block or mute them.</p>
<p>The simplest way to clean up your Facebook news feed is to “unfriend” accounts. Another option is to “unfollow” someone: you remain friends, they can see your content and engage with it, but their posts won’t appear in your feed unless they mention you or you seek it out. Or you can “take a break” from someone, which is a kind of temporary block. Blocking is the most extreme option. It will remove them and all of their content and hide all of yours from them.</p>
<p>Instagram offers similar options to unfollow and mute (similar to Facebook’s “take a break” option).</p>
<p>TikTok has only limited options for users to filter or curate their feeds. The “following” page only shows creators you are following (and ads). It isn’t and can’t be set as the default view.</p>
<p>The “for you” page is entirely algorithm driven. Clicking on a creator only allows you to follow them, not to hide or block them. You can, however, block specific users. Click on their profile, then the share icon. “Report” and “block” are below the various share options. Blocking removes their content, but not other users’ content that features them.</p>
<h2>Explore your settings</h2>
<p>Many platforms have options for limiting violent or graphic content. On Facebook this is buried in the Settings menu. From there, click on News Feed, then Reduce. You can’t remove this content, but you can move it down in your feed. </p>
<p>On TikTok, long pressing on the screen brings up the options panel. From there you can report a video; there’s also a “not interested” option to remove that video and others with similar hashtags from your feed. If you click on “details” to see which hashtags will be filtered, you can select specific ones to block. It’s not clear how reliable this is, however – hashtags change over time. A number of hashtags apparently can’t be filtered, but it’s not clear what these are or why they can’t be filtered.</p>
<p>The “content preferences” option under “settings” allows you to filter video keywords. That removes them from your “for you” page, your “following” page, or both.</p>
<p>You can also set TikTok to “restricted mode”. This limits access to “unsuitable content” – an opaque description.</p>
<h2>User beware</h2>
<p>This is not a perfect guide, since social media is not designed to be controlled by the user. These companies are based on user engagement: the more time you spend on their app, the more money they make. They’re not particularly interested in ensuring the content is helpful or accurate.</p><img src="https://counter.theconversation.com/content/216647/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Megan Knight does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>Whatever approach you take to managing your feeds, remain cautious and sceptical.Megan Knight, Associate Dean, University of HertfordshireLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/2163212023-10-27T14:27:50Z2023-10-27T14:27:50ZHow to redesign social media algorithms to bridge divides<p>Social media platforms have been implicated in conflicts of all scales, from <a href="https://www.theatlantic.com/magazine/archive/2023/09/jarell-jackson-shahjahan-mccaskill-killed-philadelphia-social-media/674760/">urban gun violence</a> to the <a href="https://www.washingtonpost.com/technology/2023/01/17/jan6-committee-report-social-media/">storming of the US Capitol building</a> on January 6 and <a href="https://documents-dds-ny.un.org/doc/UNDOC/GEN/N16/350/68/PDF/N1635068.pdf?OpenElement">civil war in South Sudan</a>. Scientifically, it is <a href="https://theconversation.com/misinformation-why-it-may-not-necessarily-lead-to-bad-behaviour-199123">difficult to tell</a> how much social media can be blamed for one-off incidents. </p>
<p>But in much the way that climate change increases the risk of extreme weather, evidence suggests that current algorithms (which mostly <a href="https://medium.com/understanding-recommenders/how-platform-recommenders-work-15e260d9a15a">optimise for engagement</a>) raise the political “temperature” by disproportionately surfacing inflammatory content. This <a href="https://arxiv.org/abs/2305.16941">may make people angrier</a>, increasing the risk that social differences <a href="https://knightcolumbia.org/content/the-algorithmic-management-of-polarization-and-violence-on-social-media">escalate to violence</a>.</p>
<p>But what if we redesigned social media to bridge divides? “<a href="https://www.belfercenter.org/publication/bridging-based-ranking">Bridging-based ranking</a>” is an alternative kind of algorithm for ranking content in social media feeds that explicitly aims to build mutual understanding and trust across differing perspectives.</p>
<p>The core logic of bridging-based ranking has already been used on <a href="https://bridging.systems/facebook-papers/">Facebook</a> and <a href="https://communitynotes.twitter.com/guide/en/about/introduction">X</a> (formerly known as Twitter), albeit not in the main feed. It is also used in <a href="https://pol.is/home">Polis</a>, an online platform for collecting public input, used by several governments to inform policymaking on polarised topics. </p>
<p>There are many open questions, but evidence from existing uses of bridging-based ranking suggests that changes to algorithms may <a href="https://arxiv.org/abs/2307.13912">reduce partisan animosity</a> and <a href="https://bridging.systems/facebook-papers/">improve the quality and inclusiveness</a> of online interactions.</p>
<p>People are increasingly looking for alternative algorithms. Regulators <a href="https://techcrunch.com/2023/08/25/quiet-qutting-ai/">in the EU</a> and new platforms <a href="https://blueskyweb.xyz/blog/3-30-2023-algorithmic-choice">such as Bluesky</a> are giving users choice regarding which algorithm determines what they see, and recent <a href="https://www.science.org/content/article/does-social-media-polarize-voters-unprecedented-experiments-facebook-users-reveal">large-scale experiments on Facebook</a> have tested different options.</p>
<p>If we care about social cohesion, then during this period of “shopping around” we need to seriously consider alternatives such as bridging.</p>
<h2>How it works</h2>
<p>Current <a href="https://medium.com/understanding-recommenders/how-platform-recommenders-work-15e260d9a15a">engagement-based algorithms</a> make predictions about which posts are most likely to generate clicks, likes, shares or views – and use these predictions to rank the most engaging content at the top of your feed. This tends to amplify the most polarising voices, because divisive perspectives are very engaging.</p>
<p><a href="https://bridging.systems/">Bridging-based ranking</a> uses a different set of signals to determine which content gets ranked highly. One approach is to increase the rank of content that receives positive feedback from people who normally disagree. This creates an incentive for content producers to be mindful of how their content will land with “the other side”.</p>
<p>Among the <a href="https://bridging.systems/facebook-papers/">internal Facebook documents</a> leaked by whistleblower Frances Haugen in 2021, there is evidence that Facebook tested this approach for ranking comments. </p>
<p>Comments with positive engagement from diverse audiences were found to be of higher quality, and “much less likely” to be reported for bullying, hate or inciting violence. A similar strategy is used in <a href="https://communitynotes.twitter.com/guide/en/about/introduction">Community Notes</a>, a crowd-sourced fact checking feature on X, to identify notes that are helpful to people on both sides of politics.</p>
<p>This pattern of “diverse positive feedback” is the most widely implemented approach to bridging. Others include <a href="https://arxiv.org/abs/2307.13912">lowering the ranking</a> of content that promotes partisan violence, or using surveys to shape algorithms so that they increase the ranking of content according to <a href="https://www.wired.com/story/platforms-engagement-research-meta/">how it makes users feel in the long term</a>, rather than the short term.</p>
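<p>The contrast between engagement ranking and “diverse positive feedback” can be sketched in a few lines. This is a toy illustration with invented posts, group labels and scoring rules, not how any platform actually implements it (systems such as Community Notes use far richer statistical models):</p>

```python
# Toy comparison of engagement-based vs bridging-based ranking.
# All data and group labels below are invented for illustration.

def engagement_score(post):
    # Engagement-based ranking: total likes, regardless of who liked.
    return sum(len(likes) for likes in post["likes_by_group"].values())

def bridging_score(post):
    # Bridging-based ranking: score by the *least* approving group,
    # so a post ranks highly only if every perspective cluster
    # gives it positive feedback.
    rates = []
    for group, likes in post["likes_by_group"].items():
        views = post["views_by_group"][group]
        rates.append(len(likes) / views if views else 0.0)
    return min(rates)

posts = [
    {"id": "divisive",   # loved by one side, ignored by the other
     "likes_by_group": {"left": ["u1", "u2", "u3", "u4", "u5"], "right": []},
     "views_by_group": {"left": 6, "right": 6}},
    {"id": "bridging",   # moderate approval from both sides
     "likes_by_group": {"left": ["u1", "u2"], "right": ["u7", "u8"]},
     "views_by_group": {"left": 6, "right": 6}},
]

top_by_engagement = max(posts, key=engagement_score)  # the divisive post
top_by_bridging = max(posts, key=bridging_score)      # the bridging post
```

<p>The divisive post collects more raw likes and so wins under engagement ranking, but its bridging score is zero because one group gave it no positive feedback at all – which is the incentive shift the approach is designed to create.</p>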
<p>Conflict is an important part of society, and in many cases, a key driver of <a href="https://www.jstor.org/stable/586859">political and social change</a>. The goal of bridging is not to eliminate conflict or disagreement, but to promote constructive forms of conflict.</p>
<p>This is known as <a href="https://www.beyondintractability.org/essay/transformation">conflict transformation</a>. Professional mediators, facilitators and “peacebuilders”, who work with opposing groups, have a detailed understanding of <a href="https://knightcolumbia.org/content/the-algorithmic-management-of-polarization-and-violence-on-social-media">how conflicts escalate</a>. They also know how to structure communication between opposing groups in ways that build mutual understanding and trust.</p>
<p>Research on bridging-based ranking can draw on this, taking insights from conflict management in the physical world and <a href="https://scripties.uba.uva.nl/search?id=record_24357">translating</a> them <a href="https://howtobuildup.medium.com/archetypes-of-polarization-on-social-media-d56d4374fb25">into digital systems</a>. </p>
<p>For example, facilitating contact between people from rival groups in “opt in”, non-threatening settings <a href="https://doi.org/10.1016/j.ijintrel.2011.03.001">can reduce prejudice</a>, and we <a href="https://doi.org/10.1073/pnas.2311627120">can</a> <a href="https://www.nature.com/articles/s41562-023-01655-0">design</a> social platforms to create these conditions online.</p>
<h2>Why should big tech adopt this?</h2>
<p>Firms such as Meta have built their fortune on the “attention economy” and content which promotes short-term engagement, and hence revenue.</p>
<p>We simply don’t yet know the extent to which the goals of bridging and engagement are in tension. People who work at social media platforms will tell you that when well-intended changes to the algorithm are tested, engagement sometimes drops at first, then slowly rebounds – ultimately ending up higher than before.</p>
<p>The problem is, platforms normally get cold feet and cancel experiments before they can observe such long-term benefits. Evidence we <em>do</em> have from <a href="https://bridging.systems/facebook-papers/">leaked Facebook papers</a> suggests that incorporating bridging <a href="https://youtu.be/ePh_DVi3dMM">improves the user experience</a>.</p>
<p>Bridging-based ranking might also have benefits beyond engagement. By reducing <a href="https://lukethorburn.com/files/BridgingBasedRanking-PluralitySpringSymposium.pdf#page=13">toxicity</a> and content that <a href="https://bridging.systems/facebook-papers/">violates community guidelines</a>, it would likely reduce the need for costly content moderation.</p>
<p>Demonstrating a willingness to make their algorithms less divisive would also build goodwill among regulators, reducing the risk of reputational and legal damage. For example, Facebook has been heavily criticised for allegedly facilitating incitements to violence in <a href="https://www.bbc.co.uk/news/world-asia-46105934">Myanmar</a>, <a href="https://www.theguardian.com/world/2018/mar/07/sri-lanka-blocks-social-media-as-deadly-violence-continues-buddhist-temple-anti-muslim-riots-kandy">Sri Lanka</a>, and <a href="https://www.theguardian.com/technology/2022/dec/14/meta-faces-lawsuit-over-facebook-posts-inciting-violence-in-tigray-war">Ethiopia</a>. </p>
<p>It has subsequently faced lawsuits from victims and communities, who have sought <a href="https://www.theguardian.com/technology/2021/dec/06/rohingya-sue-facebook-myanmar-genocide-us-uk-legal-action-social-media-violence">up to £150 billion</a> in damages.</p>
<h2>Questions and challenges</h2>
<p>Important questions around bridging-based ranking remain, and we set out many of these in a <a href="https://knightcolumbia.org/content/bridging-systems">recent paper</a> published with the Knight First Amendment Institute, which publishes original scholarship and policy papers relating to the defence of freedoms of speech and the press in the digital age. </p>
<p>Which divides should be bridged? Are there unintended consequences – for example, amplifying mainstream views at the expense of minority viewpoints? How can decisions about the design of mass communication technologies be made democratically?</p>
<p>Bridging is not a panacea. There is only so much algorithmic changes can do to address societal conflict, which is a result of complex factors such as inequality. But by recognising that digital platforms are reshaping society, we have an obligation to guide that process in an ethical, humanistic direction that brings out the best in us.</p>
<p>It falls to both the tech companies that built these systems and an engaged public to create technologies designed for social cohesion. With care, wisdom and democratic oversight, we can foster online communities that reflect our better sides. But we have to make that choice.</p><img src="https://counter.theconversation.com/content/216321/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Aviv Ovadya is affiliated with the Berkman Klein Center at Harvard, the AI & Democracy Foundation, the newDemocracy Foundation, and the Centre for Governance of AI. </span></em></p><p class="fine-print"><em><span>Luke Thorburn does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>Algorithms have been blamed for dividing society. What if they could support social cohesion instead?Luke Thorburn, PhD Candidate in Safe and Trusted AI, King's College LondonAviv Ovadya, Affiliate at the Berkman Klein Center for Internet & Society, Harvard UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/2109562023-10-27T12:17:55Z2023-10-27T12:17:55ZWhy Elon Musk is obsessed with casting X as the most ‘authentic’ social media platform<figure><img src="https://images.theconversation.com/files/555929/original/file-20231025-19-mfd5h2.jpg?ixlib=rb-1.1.0&rect=8%2C8%2C5521%2C3772&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">X CEO Elon Musk has argued that his social media platform allows users to 'be their true selves.'</span> <span class="attribution"><a class="source" href="https://www.gettyimages.com/detail/news-photo/elon-musk-ceo-of-tesla-and-x-arrives-for-the-ai-insight-news-photo/1678314548?adppopup=true">Nathan Howard/Getty Images</a></span></figcaption></figure><p>With X, formerly known as Twitter, hitting the <a href="https://www.nytimes.com/2022/10/27/technology/elon-musk-twitter-deal-complete.html">one-year anniversary</a> of Elon Musk’s US$44 billion takeover of the social media platform, it can feel disorienting to try to make sense of all that’s gone down. </p>
<p>Blue check-mark verifications <a href="https://www.nytimes.com/2023/03/31/technology/personaltech/twitter-blue-check-musk.html">got hawked</a>. Internal company documents about content moderation policies <a href="https://www.npr.org/2022/12/14/1142666067/elon-musk-is-using-the-twitter-files-to-discredit-foes-and-push-conspiracy-theor">got laundered</a>. A puzzling rebrand to “X” <a href="https://www.washingtonpost.com/technology/2023/07/24/elon-musk-x-twitter-rebrand-logo/">got hatched</a>. And a literal cage match with Meta head Mark Zuckerberg was on again and, ultimately, <a href="https://www.nytimes.com/2023/08/13/business/zuckerberg-musk-cage-fight.html">off again</a>.</p>
<p>It appears unclear what, precisely, Musk’s ambitions are for the platform. But when a threatening competitor, Threads, emerged in summer 2023, he may have offered a brief window of insight.</p>
<p>A clone of X, Threads <a href="https://www.washingtonpost.com/technology/2023/07/10/threads-meta-twitter-zuckerberg/">rolled up 100 million users</a> in less than a week after its July launch, becoming the fastest-growing app of all time. Musk promptly erupted with two attacks on Zuckerberg’s creation.</p>
<p>The first was catty and, as such, invited notice within digital spaces programmed to promote outrage. <a href="https://twitter.com/elonmusk/status/1676770522200252417?lang=en">Musk declared</a>, “It is infinitely preferable to be attacked by strangers on Twitter, than indulge in the false happiness of hide-the-pain Instagram.” </p>
<p><div data-react-class="Tweet" data-react-props="{&quot;tweetId&quot;:&quot;1676770522200252417&quot;}"></div></p>
<p><a href="https://twitter.com/elonmusk/status/1678686570122199040">The second</a> – “You are free to be your true self here” – was more overlooked, yet revealed an essential premise that social media companies must sell to all their users.</p>
<p>As I argue in my new book, “<a href="https://www.sup.org/books/title/?id=36333">The Authenticity Industries</a>,” authenticity represents the central battle for social media companies. They design their platforms to demonstrate and facilitate genuine self-performance from users. That’s what makes for dependable data, and dependable data – sold to advertisers – is <a href="https://slate.com/technology/2019/10/mark-zuckerberg-facebook-georgetown-speech-authentic.html">what makes the internet economy hum</a>.</p>
<p>Silicon Valley’s commitment to the ideal of authenticity remains ironclad, even as more and more people are starting to recognize that <a href="https://theconversation.com/taylor-swifts-eras-tour-is-a-potent-reminder-that-the-internet-is-not-real-life-209325">the internet isn’t real life</a>.</p>
<h2>A life performed</h2>
<p>Over the past decade, Instagram – with its glossy, obsessively manicured tableaux – became the aesthetic antithesis against which all other social media platforms measure that authenticity. </p>
<p>Instagram tinted life by allowing users to apply sun-kissed, nostalgic filters to their photographs. To scrub clean any blemishes on selfies posted there, add-ons like Facetune enabled magazine-quality Photoshopping <a href="https://digitalnative.substack.com/p/the-rejection-of-internet-perfection?s=r.">and topped paid-app charts</a>. Instagram became your highlight reel: galleries of far-flung travels and mouth-watering food porn exquisitely curated – a life performed as much as lived.</p>
<p>“[Instagram’s] basically almost designed to make your friends jealous,” one executive at TikTok <a href="https://www.sup.org/books/title/?id=36333">confided to me</a>. “It kind of makes me depressed a little bit sometimes when I go on Instagram and I feel, like, ‘Oh, I’m not fit enough. I’m not successful enough.’”</p>
<p>Over time, #NoFilter caveats, blurry photo dumps and shameless “finsta” accounts – a portmanteau of “fake” and “Instagram” – <a href="https://www.refinery29.com/en-gb/bereal-authenticity-performance-online-instagram">arose as forms of authenticity backlash</a> to the “false happiness” of the posed lifestyles appearing on users’ feeds.</p>
<p>Heck, even Instagram knew it had a problem, copy-and-pasting Snapchat’s signature ephemerality and <a href="https://about.instagram.com/blog/announcements/introducing-instagram-stories">launching its disappearing Stories feature</a> to lower the pressure on users to post perfection.</p>
<p>If ever a platform, then, has been deserving of <a href="https://www.nytimes.com/2019/06/17/business/media/miquela-virtual-influencer.html">Reddit co-founder Alexis Ohanian’s 2019 quip</a> that “social media, to date, has largely been the domain of real humans being fake,” it’s probably Instagram.</p>
<h2>Different flavors of the same thing</h2>
<p>Recall Musk’s second, <a href="https://twitter.com/elonmusk/status/1678686570122199040">more revelatory rejoinder</a> on behalf of X: “You are free to be your true self here.”</p>
<p><div data-react-class="Tweet" data-react-props="{&quot;tweetId&quot;:&quot;1678686570122199040&quot;}"></div></p>
<p>For two decades, this has been the first commandment of social media promotion – both by platforms and on them.</p>
<p>More broadly, all online communication bears the burden of proof in this vein: It must compensate for the absence of face-to-face verifiability, which a 1993 Peter Steiner <a href="https://en.wikipedia.org/wiki/On_the_Internet,_nobody_knows_you%27re_a_dog">cartoon for The New Yorker</a> satirized with the caption, “On the internet, nobody knows you’re a dog.”</p>
<p>Research confirms this. One <a href="https://www.mdpi.com/2076-0760/6/1/10">clever study</a> by media scholars Meredith Salisbury and Jefferson Pooley scoured the publicity pablum, CEO platitudes and app store copy from Friendster onward, finding that nearly every site leans on the same rhetorical clichés – like “real life” and “genuine” – as a means of defining itself against the purported phoniness of other sites.</p>
<p>But this might well be the narcissism of tiny differences at work, with Threads only the latest instance of social media copycatting. </p>
<p>In 2020, Wired <a href="https://www.wired.com/story/social-media-giants-look-the-same-tiktok-twitter-instagram/">incisively tallied</a> how <a href="https://blog.twitter.com/en_us/topics/product/2020/introducing-fleets-new-way-to-join-the-conversation">X’s Fleets</a>, a 24-hour posting-expiration feature, was a copy of Instagram’s Stories, which was itself originally ripped off from Snapchat. <a href="https://influencermarketinghub.com/what-is-snap-spotlight/">Snapchat developed Spotlight</a> for short-form video content, comparable to Instagram’s Reels and YouTube’s Shorts, all of which were an attempt to fend off TikTok, itself a reincarnation of Vine.</p>
<p>And all of these, including last year’s 56 million-times-downloaded viral sensation, <a href="https://www.washingtonpost.com/technology/2022/09/17/bereal-copy-tiktok-instagram-snapchat/">BeReal</a> – where users snap unfiltered, unposed selfies for friends at random times daily – have promised users the opportunity to be their true selves. </p>
<p>In as much as Musk has pursued anything in his first year as Chief Twit, that seems to be his ambition: engineering a space with no social guardrails, where any inhibitions of decorum are ignored in favor of speaking, authentically, from the heart.</p>
<h2>Ambitions don’t match reality</h2>
<p>To a certain kind of personality, that’s probably an alluring offer. Indeed, Zuckerberg’s original – and still most enduring – platform triumph, Facebook, depended on designing a website that induced an online performance of a “true” offline self.</p>
<p>Those norms were embedded in design choices, as Zuckerberg made plain his disregard for our <a href="https://www.penguinrandomhouse.com/books/708488/the-presentation-of-self-in-everyday-life-by-erving-goffman/">multistage, two-faced selves</a> in an <a href="https://www.simonandschuster.com/books/The-Facebook-Effect/David-Kirkpatrick/9781439102121">oft-quoted line</a>, “You have one identity. The days of you having a different image for your work friends or co-workers and for the other people you know are probably coming to an end pretty quickly.”</p>
<p>“Single-identity authenticity” was Facebook’s early market strategy, and the nascent website initially required users to register with a college email address. The design choice may well have been critical to Facebook vanquishing its closest early competitors, <a href="https://www.mentalfloss.com/article/556413/friendster-rise-and-fall-jonathan-abrams">Friendster</a> and <a href="https://www.theatlantic.com/technology/archive/2011/01/the-rise-and-fall-of-myspace/69444/">Myspace</a>.</p>
<p>“The .edu email system served as this authenticating clearinghouse,” one early Facebook executive <a href="https://www.sup.org/books/title/?id=36333">explained to me</a>, a phrasing that could as easily be applied to the utility of Instagram accounts today for Threads. “Really, users 0 through 10 million were all verified and authenticated by the .edu email system, [while] Myspace had 57 Jennifer Anistons.”</p>
<p>That authenticating clearinghouse would soon vanish as Facebook opened itself up to users not enrolled in college – like, say, <a href="https://www.theguardian.com/technology/2017/oct/30/facebook-russia-fake-accounts-126-million">the Russian disinformation agents</a> who have meddled in U.S. elections.</p>
<h2>A regression to the meanest</h2>
<p>All this competition makes for authenticity jockeying: Musk attempted to parry Zuckerberg’s Threads threat with his invitation to convene strangers who will stop being polite and <a href="https://en.wikipedia.org/wiki/The_Real_World_(TV_series)">start getting real</a>. </p>
<p>But in an ominous echo of Rupert Murdoch’s $500 million <a href="https://www.theatlantic.com/technology/archive/2011/06/as-myspace-sells-for-35-million-a-history-of-the-networks-valuation/241224/">write-off</a> of Myspace, Musk’s $44 billion purchase has struggled with those bot-and-blue check mark difficulties of user verification.</p>
<p>None of this is to say Threads will eventually triumph over X, even as the crisis in the Middle East – and the misinformation circulating because of it – <a href="https://slate.com/technology/2023/10/x-twitter-elon-musk-israel-hamas-gaza-misinformation-meta-threads.html">seems to have initiated</a> another exodus of defectors from X. After all, a month after its launch, Threads had already lost <a href="https://gizmodo.com/threads-has-lost-more-than-80-of-daily-active-users-1850707329">an estimated</a> 80% of its daily active users.</p>
<p>Threads’ vibes may have been cheerful and friendly at the outset – disingenuously so, according to Musk – but it may well prove that, eventually, all social media sites regress toward the meanest. </p>
<p>Musk would probably call that “authenticity.” On X, you might not be able to trust the veracity of the user or the information they’re spreading. But you can be sure that they don’t feel like they have to bite their tongue and act nice.</p>
<p>Social media company names may change. But when identity is the most lucrative commodity they trade in, their fetishization of authenticity won’t.</p><img src="https://counter.theconversation.com/content/210956/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Michael Serazio does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>With identity the most lucrative commodity social media platforms trade in, their fetishization of authenticity remains ironclad.Michael Serazio, Associate Professor of Communication, Boston CollegeLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/2132092023-10-24T12:25:09Z2023-10-24T12:25:09ZLet the community work it out: Throwback to early internet days could fix social media’s crisis of legitimacy<figure><img src="https://images.theconversation.com/files/555410/original/file-20231023-15-otewua.jpg?ixlib=rb-1.1.0&rect=0%2C0%2C3489%2C2331&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Content moderators like these workers make decisions about online communities based on company dictates.</span> <span class="attribution"><a class="source" href="https://www.gettyimages.com/detail/news-photo/content-moderators-work-at-a-facebook-office-in-austin-news-photo/1142321813">Ilana Panich-Linsman for The Washington Post via Getty Images</a></span></figcaption></figure><p>In the 2018 documentary “<a href="https://gebrueder-beetz.de/en/productions/the-cleaners/">The Cleaners</a>,” a young man in Manila, Philippines, explains his work as a content moderator: “We see the pictures on the screen. You then go through the pictures and delete those that don’t meet the guidelines. The daily quota of pictures is 25,000.” As he speaks, his mouse clicks, deleting offending images while allowing others to remain online.</p>
<p>The man in Manila is one of thousands of content moderators hired as contractors by social media platforms – <a href="https://www.npr.org/2023/03/31/1167246714/googles-ghost-workers-are-demanding-to-be-seen-by-the-tech-giant">10,000 at Google alone</a>. Content moderation on an industrial scale like this is part of the everyday experience for users of social media. Occasionally a post someone makes is removed, or a post someone thinks is offensive is allowed to go viral. </p>
<p>Similarly, platforms add and remove features without input from the people who are most affected by those decisions. Whether you are outraged or unperturbed, most people don’t think much about the history of a system in which people in conference rooms in Silicon Valley and Manila determine your experiences online.</p>
<p>But why should a few companies – or a few billionaire owners – have the power to decide everything about online spaces that billions of people use? This unaccountable model of governance has led stakeholders of all stripes to criticize platforms’ decisions as <a href="https://www.brennancenter.org/sites/default/files/2021-08/Double_Standards_Content_Moderation.pdf">arbitrary</a>, <a href="https://nymag.com/intelligencer/2022/12/twitter-files-explained-elon-musk-taibbi-weiss-hunter-biden-laptop.html">corrupt</a> or <a href="https://www.oxfordstrategyreview.com/content/social-irresponsibility-how-social-media-works-for-the-west-but-fails-the-rest">irresponsible</a>. In the early, pre-web days of the social internet, decisions about the spaces people gathered in online were often made by members of the community. Our <a href="https://doi.org/10.1177/20563051231196864">examination of the early history of online governance</a> suggests that social media platforms could return – at least in part – to models of community governance in order to address their crisis of legitimacy.</p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/iGCGhD8i-o4?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
<figcaption><span class="caption">The documentary ‘The Cleaners’ shows some of the hidden costs of Big Tech’s customer service approach to content moderation.</span></figcaption>
</figure>
<h2>Online governance – a history</h2>
<p>In many early online spaces, governance was handled by community members, not by professionals. One early online space, <a href="https://thenewstack.io/a-look-back-in-time-the-forgotten-fame-of-lambdamoo/">LambdaMOO</a>, invited users to build their own governance system, which devolved power from the hands of those who technically controlled the space – administrators known as “wizards” – to members of the community. This was accomplished via a <a href="https://doi.org/10.1111/j.1083-6101.1996.tb00185.x">formal petitioning process and a set of appointed mediators</a> who resolved conflicts between users.</p>
<p>Other spaces had more informal processes for incorporating community input. For example, on bulletin board systems, users <a href="https://yalebooks.yale.edu/book/9780300248142/the-modem-world/">voted with their wallets</a>, removing critical financial support if they disagreed with the decisions made by the system’s administrators. Other spaces, like text-based Usenet newsgroups, gave users substantial power to shape their experiences. The newsgroups left obvious spam in place, but gave users tools to block it if they chose to. Usenet’s administrators argued that it was fairer to allow each user <a href="https://fishbowl.pastiche.org/2021/01/12/usenet_spam">to make decisions that reflected their individual preferences</a> rather than taking a one-size-fits-all approach.</p>
<p>The graphical web expanded use of the internet from <a href="https://www.internetworldstats.com/emarketing.htm">a few million users to hundreds of millions within a decade</a> from 1995 to 2005. During this rapid expansion, community governance was replaced with governance models inspired by customer service, which focused on scale and cost. </p>
<p>This switch from community governance to customer service made sense to the fast-growing companies that made up the late 1990s internet boom. Promising their investors that they could grow rapidly and make changes quickly, companies looked for approaches to the complex work of governing online spaces <a href="https://doi.org/10.1177/20563051231196864">that centralized power and increased efficiency</a>. </p>
<p>While this customer service model of governance allowed early user-generated content sites like Craigslist and GeoCities <a href="https://datasociety.net/library/origins-of-trust-and-safety/">to grow rapidly</a>, it set the stage for the crisis of legitimacy facing social media platforms today. Contemporary battles over social media are rooted in the sense that the people and processes governing online spaces are unaccountable to the communities that gather in them. </p>
<h2>Paths to community control</h2>
<p>Implementing community governance in today’s platforms could take a number of different forms, some of which are already being experimented with.</p>
<p>Advisory boards like Meta’s <a href="https://about.meta.com/actions/oversight-board-facts/">Oversight Board</a> are one way to involve outside stakeholders in platform governance, providing independent — albeit limited — review of platform decisions. X (formerly Twitter) is taking a more democratic approach with its <a href="https://help.twitter.com/en/using-x/community-notes">Community Notes</a> initiative, which allows users to contextualize information on the platform by crowdsourcing notes and ratings.</p>
<p>Some may question whether community governance can be implemented successfully in platforms that serve billions of users. In response, we point to Wikipedia. It is entirely community-governed and has created an open encyclopedia that’s become the foremost information resource in many languages. Wikipedia is surprisingly resilient to vandalism and abuse, with robust procedures that ensure a resource used by billions remains accessible, accurate and reasonably civil.</p>
<p>On a smaller scale, total self-governance – echoing early online spaces – could be key for communities that serve specific subsets of users. For example, <a href="https://archiveofourown.org/">Archive of Our Own</a> was created after fan-fiction authors – people who write original stories using characters and worlds from published books, television shows and movies – found existing platforms unwelcoming. For example, many fan-fiction authors were <a href="https://www.theverge.com/2022/8/15/23200176/history-of-ao3-archive-of-our-own-fanfiction">kicked off social media platforms</a> due to overzealous copyright enforcement or concerns about sexual content.</p>
<p>Fed up with platforms that didn’t understand their work or their culture, a group of authors designed and built their own platform specifically to meet the needs of their community. AO3, as it is colloquially known, serves millions of people a month, includes tools specific to the needs of fan-fiction authors, and is governed by the same people it serves.</p>
<figure class="align-right zoomable">
<a href="https://images.theconversation.com/files/552396/original/file-20231005-25-mahqjw.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="text above and below a photo of two people in lab coats standing in a hallway" src="https://images.theconversation.com/files/552396/original/file-20231005-25-mahqjw.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=237&fit=clip" srcset="https://images.theconversation.com/files/552396/original/file-20231005-25-mahqjw.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=817&fit=crop&dpr=1 600w, https://images.theconversation.com/files/552396/original/file-20231005-25-mahqjw.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=817&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/552396/original/file-20231005-25-mahqjw.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=817&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/552396/original/file-20231005-25-mahqjw.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=1027&fit=crop&dpr=1 754w, https://images.theconversation.com/files/552396/original/file-20231005-25-mahqjw.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=1027&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/552396/original/file-20231005-25-mahqjw.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=1027&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">X, formerly Twitter, allows people to use Community Notes to append relevant information to posts that contain inaccuracies.</span>
<span class="attribution"><a class="source" href="https://twitter.com/kareem_carr/status/1709198073174311207/photo/1">Screen capture by The Conversation U.S.</a>, <a class="license" href="http://creativecommons.org/licenses/by-nd/4.0/">CC BY-ND</a></span>
</figcaption>
</figure>
<p>Hybrid models, like on Reddit, <a href="https://www.redditinc.com/policies/content-policy">mix centralized and self-governance</a>. Reddit hosts a collection of interest-based communities called subreddits that have their own rules, norms and teams of moderators. Underlying a subreddit’s governance structure is a set of rules, processes and features that apply to everyone. Not every subreddit is a sterling example of a healthy online community, but more are than are not.</p>
<p>There are also technical approaches to community governance. One approach would enable users to choose the algorithms that curate their social media feeds. Imagine that instead of only being able to use Facebook’s algorithm, you could choose from a suite of algorithms provided by third parties – for example, from The New York Times or Fox News.</p>
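<p>In code, that “suite of algorithms” idea amounts to a registry of interchangeable ranking functions from which each user selects. The sketch below is purely illustrative – the registry names and post fields are hypothetical, not any platform’s real interface:</p>

```python
from typing import Callable

# A feed item is a plain dict here; Ranker is any function that reorders a feed.
Post = dict
Ranker = Callable[[list[Post]], list[Post]]

# A registry of curation algorithms; in the proposal, third parties
# (a newspaper, a research group) could contribute entries like these.
RANKERS: dict[str, Ranker] = {
    "chronological": lambda posts: sorted(
        posts, key=lambda p: p["time"], reverse=True),
    "most_liked": lambda posts: sorted(
        posts, key=lambda p: p["likes"], reverse=True),
}

def build_feed(posts: list[Post], choice: str) -> list[Post]:
    """Curate a feed with whichever registered algorithm the user chose."""
    return RANKERS[choice](posts)
```

<p>The platform keeps hosting the content; only the ordering logic becomes pluggable, which is what gives users – rather than a single company – control over what surfaces first.</p>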
<p>More radically decentralized platforms like Mastodon devolve control to a network of servers that are similar in structure to email. This makes it easier to choose an experience that matches your preferences. You can choose which Mastodon server to use, and can switch easily – just like you can choose whether to use Gmail or Outlook for email – and can change your mind, all while maintaining access to the wider email network. </p>
<p>Additionally, advancements in generative AI – which shows <a href="https://doi.org/10.1109/MS.2023.3265877">early promise in producing computer code</a> – could make it easier for people, even those without a technical background, to build custom online spaces when they find existing spaces unsuitable. This would relieve pressure on online spaces to be everything for everyone and support a sense of agency in the digital public sphere.</p>
<p>There are also more indirect ways to support community governance. Increasing transparency – for example, by providing access to data about the impact of platforms’ decisions – can help researchers, policymakers and the public hold online platforms accountable. Further, encouraging ethical professional norms among engineers and product designers can make online spaces more respectful of the communities they serve.</p>
<h2>Going forward by going back</h2>
<p>Between now and the end of 2024, national elections are scheduled in many countries, including Argentina, Australia, India, Indonesia, Mexico, South Africa, Taiwan, the U.K. and the U.S. This is all but certain to lead to conflicts over online spaces. </p>
<p>We believe it is time to consider not just how online spaces can be governed efficiently and in service to corporate bottom lines, but how they can be governed fairly and legitimately. Giving communities more control over the spaces they participate in is a proven way to do just that.</p><img src="https://counter.theconversation.com/content/213209/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Ethan Zuckerman receives funding from the MacArthur Foundation, the Ford Foundation, the Knight Foundation and the (US) National Science Foundation.</span></em></p><p class="fine-print"><em><span>Chand Rajendra-Nicolucci receives funding from the MacArthur Foundation and the Ford Foundation. </span></em></p>In the days of online bulletin board systems, community members decided what was acceptable. Reviving that approach to content moderation offers Big Tech a path to legitimacy as public spaces.Ethan Zuckerman, Associate Professor of Public Policy, Communication, and Information, UMass AmherstChand Rajendra-Nicolucci, Research Fellow, Initiative for Digital Public Infrastructure, UMass AmherstLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/2136822023-10-16T14:10:38Z2023-10-16T14:10:38ZTraditional farming knowledge should be stored for future use: the technology to do this is available<p>Indigenous knowledge and traditional practices have played a critical <a href="https://documents1.worldbank.org/curated/en/574381468765625385/pdf/multi0page.pdf">role</a> in development all over the world. For centuries, various disciplines ranging from medicine to biodiversity conservation have drawn on these resources. </p>
<p>On the African continent, societies have been guided by a wide range of beliefs, norms, customs and procedures in managing their ecological and social systems.</p>
<p>For example, <a href="http://repository.embuni.ac.ke/handle/123456789/4152">cultural values</a> and social practices have helped communities achieve sustainable agriculture. These include traditional practices in <a href="https://www.intechopen.com/chapters/83308">food preservation</a>, <a href="https://onlinelibrary.wiley.com/doi/full/10.1002/ldr.3395">weather monitoring and forecasting</a> and <a href="https://www.sciencedirect.com/science/article/pii/S2666049021000566">crop production</a>.</p>
<p>Unfortunately, indigenous knowledge of agricultural practices is rapidly disappearing, because it is not being <a href="https://www.researchgate.net/profile/Emmanuel-Attoh-3/publication/352197647_Indigenous_knowledge_and_climate_change_adaptation_in_Africa_a_systematic_review/links/60be743792851cb13d88b9b9/Indigenous-knowledge-and-climate-change-adaptation-in-Africa-a-systematic-review.pdf">preserved</a>. One possible solution is digitalisation. This involves using modern information and communication technologies to capture, store and share farmers’ traditional wisdom and practices.</p>
<p>I conducted a <a href="https://journals.co.za/doi/abs/10.10520/ejc-jpad_v57_n4_a5">literature review</a> to explore the benefits and challenges of preserving indigenous agricultural knowledge in a digital form in Africa.</p>
<p>I found that mobile phones, computers, cameras, scanners and voice recorders were useful tools for this purpose. But the process must involve the local communities that use these practices. They are the creators, guardians and sharers of indigenous knowledge through their lived experiences and practices.</p>
<p>Their participation is critical for a number of reasons. One is that community involvement would improve the quality and accuracy of the knowledge stored in digital form. Another is that it would help avoid errors or misunderstandings arising from <a href="https://rb.gy/vsahl">language or cultural barriers</a>.</p>
<p>Digital technologies can enable wider use of <a href="https://rb.gy/qd1q1">indigenous knowledge</a>. They can promote better management of agricultural resources and preserve traditional practices. </p>
<p>I also identified several challenges that hinder the process. Policy gaps, <a href="https://core.ac.uk/reader/188123510">network connectivity issues</a> and the <a href="https://doi.org/10.4018/978-1-5225-0833-5.ch010">high cost</a> of digital tools were among them.</p>
<p>The findings of this study could inform policies and interventions to record and share indigenous knowledge in Africa.</p>
<h2>Digitalisation: what’s missing?</h2>
<p>Digital technologies are already widely used in Africa, particularly among smallholder farmers. They are used in <a href="https://pdfs.semanticscholar.org/b180/025358c0b38123ea1b34bad11cc0761123ca.pdf">irrigation farming</a>, <a href="https://doi.org/10.3390/su13031158">precision farming</a>, drought predictions, micro-climate monitoring, and crop disease risk assessments. Efficiency, productivity and functionality are among the claimed benefits.</p>
<p>But my study found little evidence of indigenous agricultural knowledge being preserved. Some countries are making progress, however. South Africa has developed a system to document indigenous knowledge. Kenya, Tanzania and Uganda are also developing and using <a href="https://documents1.worldbank.org/curated/en/574381468765625385/pdf/multi0page.pdf">knowledge management initiatives</a>. <a href="https://journals.sagepub.com/doi/abs/10.1177/0340035216681326">In Ghana</a>, people are recording traditional knowledge of forest food and medicine. </p>
<p>More needs to be done. </p>
<h2>How it could be done</h2>
<p>Indigenous agricultural knowledge can be collected, processed, stored and shared in various formats. <a href="https://doi.org/10.4018/978-1-5225-0833-5.ch010">Technologies</a> such as smartphones, voice recorders and video cameras can <a href="https://www.researchgate.net/profile/Dennis-Ocholla/publication/329359896_Information_and_Communication_Technology_Tools_for_Managing_Indigenous_Knowledge_in_KwaZulu-Natal_Province_South_Africa/links/5c0421e092851c63cab5cb99/Information-and-Communication-Technology-Tools-for-Managing-Indigenous-Knowledge-in-KwaZulu-Natal-Province-South-Africa.pdf">capture texts, videos</a>, images and voice narrations about indigenous plants and traditional agricultural practices. </p>
<p>These could cover crop production systems, food preservation and livestock management, as well as weather and seasonal forecasting and the management of resources such as soil and water.</p>
<p>The study found that databases of these practices and information could be a great resource for farmers. They could share their experiences of applying indigenous practices on various digital platforms. Other users could provide feedback. </p>
<p>My research also showed that the internet would be a valuable tool. Information could be shared on <a href="https://ijoc.org/index.php/ijoc/article/view/1667">platforms</a> such as Facebook, YouTube and TikTok.</p>
<h2>Hurdles to overcome</h2>
<p>The study identified several challenges facing the digitalisation of indigenous agricultural knowledge. <a href="https://www.researchgate.net/profile/Dennis-Ocholla/publication/329359896_Information_and_Communication_Technology_Tools_for_Managing_Indigenous_Knowledge_in_KwaZulu-Natal_Province_South_Africa/links/5c0421e092851c63cab5cb99/Information-and-Communication-Technology-Tools-for-Managing-Indigenous-Knowledge-in-KwaZulu-Natal-Province-South-Africa.pdf">Affordability</a> of smartphones is sometimes an issue for smallholder farmers. And connectivity is sometimes poor in rural or semi-urban areas. </p>
<p>Governments could make strategic investments to overcome these challenges. </p>
<p>I argue in my paper that the application of indigenous agricultural knowledge practices could help address declining agricultural productivity on the continent. </p>
<p>In addition, I argue in favour of promoting indigenous knowledge of agricultural practices to address social challenges. Indigenous knowledge has a contribution to make to sustainable agricultural productivity and food systems. It also offers insights that may be useful for conserving natural resources such as water, forests and land.</p>
<p class="fine-print"><em><span>Mourine Sarah Achieng does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>
<p><em>Digitalisation offers a way to preserve indigenous knowledge of agricultural practices and connect new generations of farmers to knowledge and wisdom from the past.</em></p>
<p class="fine-print">Mourine Sarah Achieng, Post Doctoral Fellow, University of South Africa. Licensed as Creative Commons – attribution, no derivatives.</p>