tag:theconversation.com,2011:/es/topics/cambridge-analytica-51337/articlesCambridge Analytica – The Conversation2023-06-26T16:14:14Ztag:theconversation.com,2011:article/2081322023-06-26T16:14:14Z2023-06-26T16:14:14ZFake news: EU targets political social media ads with tough new regulation proposal<figure><img src="https://images.theconversation.com/files/533551/original/file-20230622-21-ni00zz.jpg?ixlib=rb-1.1.0&rect=0%2C169%2C3062%2C2258&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">
</span> <span class="attribution"><a class="source" href="https://www.shutterstock.com/image-vector/collection-icons-related-politic-including-like-2300374271">Shutterstock/icongeek26</a></span></figcaption></figure><p>Throughout Europe, strict rules govern how traditional media operates during elections. Often that means imposing a period of silence so that voters can <a href="https://www.ohchr.org/en/instruments-mechanisms/instruments/international-covenant-civil-and-political-rights">reflect on their choices without undue influence</a>. In France, for example, no polls are allowed to be published on the day of an election.</p>
<p>There are, however, very few laws governing what social media companies do in relation to elections. This is a problem now that political parties campaign on these platforms as a matter of course. </p>
<p>So this year, the European Commission intends to <a href="https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:52021PC0731">introduce regulations</a> for political adverts that will apply across the countries of the EU. </p>
<p>To understand why such action is being considered, we can look to recent concerning practices during election cycles in the UK and US.</p>
<p>As more people consume their news <a href="https://ec.europa.eu/eurostat/en/web/products-eurostat-news/-/ddn-20220824-1">online</a>, and as advertising revenues move online, social media poses a greater threat to fair and transparent elections. </p>
<p>The largest social media networks are for-profit companies. They offer marketing services to other businesses wanting to direct advertising towards network users who are a good match for their products. </p>
<p>To facilitate this, social media companies gather and store behavioural data on our activities – what we click on, what makes us hit the like button, the comments we leave. </p>
<p>Knowing these things for each person gives these companies a detailed understanding of their users. That’s ideal for identifying which user segments will be most receptive to a certain message or ad. </p>
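<p>To make concrete how such matching might work, here is a minimal, hypothetical sketch in Python. The interest categories, users and weights are invented for illustration; real platforms use far richer behavioural signals and far more sophisticated models:</p>

```python
# Toy segment scoring: match users' behavioural interest profiles
# against an ad's topic vector. All names and numbers are invented.

def score(user_interests, ad_topics):
    """Sum of shared interest weights: higher means a better match."""
    return sum(user_interests.get(t, 0.0) * w for t, w in ad_topics.items())

users = {
    "alice": {"politics": 0.9, "sports": 0.1},
    "bob":   {"politics": 0.2, "cooking": 0.8},
    "carol": {"politics": 0.7, "travel": 0.4},
}
ad = {"politics": 1.0}  # a hypothetical political ad targeting "politics"

# Rank users by how receptive they are predicted to be to this ad
ranked = sorted(users, key=lambda u: score(users[u], ad), reverse=True)
print(ranked)  # ['alice', 'carol', 'bob']
```

Even this crude ranking shows why behavioural data is valuable: the platform can deliver each ad only to the users most likely to respond to it.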
<h2>The user marketplace</h2>
<p>Social media companies generally use an in-house artificial intelligence bidding system, operating in real-time, for each page that is presented to a user. Businesses compete for customer access by signalling how much they are willing to pay to place an ad and the algorithm chooses what will appear on the page, and where.</p>
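<p>One widely described version of such a system is a generalized second-price auction: advertisers bid, the platform ranks them by bid weighted by predicted engagement, and the winner pays just enough to outrank the runner-up. The sketch below is a simplified illustration of that idea, not any platform’s actual mechanism; the bids and click rates are invented:</p>

```python
# Simplified real-time ad auction (generalized second-price style).
# Advertisers, bids and predicted click rates are invented.

def run_auction(bids):
    """bids: list of (advertiser, max_bid, predicted_click_rate).
    Rank by expected value (bid * click rate); the winner pays the
    minimum price needed to keep its rank above the runner-up."""
    ranked = sorted(bids, key=lambda b: b[1] * b[2], reverse=True)
    winner, win_bid, win_ctr = ranked[0]
    if len(ranked) > 1:
        _, next_bid, next_ctr = ranked[1]
        price = (next_bid * next_ctr) / win_ctr  # second-price rule
    else:
        price = 0.0
    return winner, round(price, 2)

bids = [("shoes_inc", 2.00, 0.01),
        ("cars_ltd", 1.50, 0.03),
        ("app_co",   3.00, 0.005)]
print(run_auction(bids))  # ('cars_ltd', 0.67)
```

Note that the highest raw bid does not win: the auction favours the ad the algorithm predicts users will actually engage with, which is exactly why behavioural data is so central to the business.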
<p>This inventive model was originally conceived by Google and has radically changed the world of marketing. Because the basis of the model lies in gathering each person’s behavioural activities on the platforms for marketing purposes, it has been described as <a href="https://hbr.org/podcast/2019/06/surveillance-capitalism">surveillance capitalism</a>.</p>
<p>All this is significant enough when we are being marketed products, but using such information in the context of <a href="https://www.nature.com/articles/s41599-021-00787-w">election campaigning</a> is even more questionable.</p>
<p>A new level of AI, surveillance and business cooperation was achieved when Facebook began <a href="https://web.archive.org/web/20170701141827/https:/politics.fb.com/ad-campaigns/">providing services</a> to companies involved in political campaigning. Of particular concern was the use of custom audience targeting in the 2016 Brexit referendum and the <a href="https://www.theguardian.com/news/2018/mar/17/data-war-whistleblower-christopher-wylie-faceook-nix-bannon-trump">US presidential election of the same year</a>. </p>
<p>To this day, it is unclear how these activities affected those votes. But we know that companies worked together to gather voter information and run their own behavioural analytics on the segments of interest, using, among other things, computer-generated personality judgments based on inappropriately harvested Facebook profiles. Persuasive materials were then delivered to those users at specific times via Facebook.</p>
<p><a href="https://www.parliament.uk/globalassets/documents/commons-committees/culture-media-and-sport/Fake_news_evidence/Ads-supplied-by-Facebook-to-the-DCMS-Committee.pdf">Enlightening information</a> provided to a British parliamentary inquiry by Facebook shows that many of the large number of ads about Brexit sent to users were misleading and employed <a href="https://www.channel4.com/news/factcheck/factcheck-send-350m-week-brussels">debatable half-truths</a>.</p>
<p>In the US, the Federal Trade Commission imposed an extraordinary US$5 billion (€4.6 billion) fine on Facebook <a href="https://www.ftc.gov/system/files/documents/cases/182_3109_facebook_complaint_filed_7-24-19.pdf">for misleading users</a> and allowing profiles to be shared with business app developers.</p>
<p>In 2018, Facebook CEO Mark Zuckerberg <a href="https://www.facebook.com/zuck/posts/10104712037900071">said</a>: “I’ve been working to understand exactly what happened and how to make sure this doesn’t happen again. The good news is that the most important actions to prevent this from happening again today we have already taken years ago. But we also made mistakes, there’s more to do, and we need to step up and do it.”</p>
<p>However, the EU is clearly not content with a pledge from Facebook not to let this happen again and plans to take a more heavy-handed approach than it has in the past. </p>
<p>My own work in this area argues that such business projects as election influencing using advanced AI with behavioural analytics can be considered as <a href="https://link.springer.com/chapter/10.1007/978-3-319-96448-5_28">artificial people at work</a> and should be <a href="https://www.researchgate.net/publication/331656417_Artificial_Intelligence_in_Politics_Establishing_Ethics">regulated in the same way</a> as any human seeking to influence elections would be.</p>
<h2>The European approach</h2>
<p>There is currently no usable, shared definition of a political advertisement. The EU, therefore, needs to provide a definition that does not infringe on freedom of expression but enables the market to be properly regulated. </p>
<p>With this in mind, we can expect the law to make reference to there being a link between payment and the use or creation of a post. That will help separate ads from personal opinions shared on social media.</p>
<p>Once a political ad has been identified, legislation will require it to be clearly labelled as relating to a specific election or referendum. The name of the sponsor will have to be clear as well as the amount spent on the ad.</p>
<p>A key issue with the US and UK scandals was that amplification techniques had been used to position political ads on Facebook where they could be most effective. </p>
<p>This meant using potentially sensitive information about a person, such as ethnic origin, psychological profiling, religious beliefs or sexual orientation to sort them into groups to be targeted. This will not be allowed in EU countries, unless people give their explicit permission. </p>
<p>In the past, political ads have been delivered to individuals in their own private spaces, and so have not been open to public examination. The new European legislation will aim to put all political ads in a public repository, where they will be open to scrutiny and regulation.</p>
<p>The European Commission wants to see these regulations come into force before the European elections of 2024. Getting the regulations exactly right will be challenging, and the Commission is in the final stages of discussion on the matter. Regulation of political ads will come in some form or another, making it easier to hold social media companies to account.</p>
<p class="fine-print"><em><span>Tom Kane does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>New laws aim to give the public access to a repository containing every political ad sent out through social media.Tom Kane, Senior Lecturer in Business Analytics, University of StirlingLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/2014022023-03-19T12:18:56Z2023-03-19T12:18:56ZAlgorithms are moulding and shaping our politics. Here’s how to avoid being gamed<figure><img src="https://images.theconversation.com/files/515106/original/file-20230314-26-owl3zr.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">
</span> <span class="attribution"><span class="source">Bakhtiar Zein/Shutterstock</span></span></figcaption></figure><p>In 2016, evidence <a href="https://www.news24.com/news24/southafrica/news/download-the-full-state-of-capture-pdf-20161102">began to mount</a> that then-South African president Jacob Zuma and a family of Indian-born businessmen, the Guptas, were responsible for widespread “state capture”. It was alleged that the Gupta family influenced Zuma’s political appointments and benefited unfairly from lucrative tenders. </p>
<p>The Guptas began to look for a way to divert attention away from them. They enlisted the help of British public relations firm Bell Pottinger, which drew on the country’s <a href="https://www.nytimes.com/2018/02/04/business/bell-pottinger-guptas-zuma-south-africa.html">existing racial and economic tensions</a> to develop a social media campaign centred on the role of “white monopoly capital” in continuing “economic apartheid”.</p>
<p>The campaign was driven by the power of algorithms. The company created more than 100 fake Twitter accounts run on bot software – computer programs designed to perform tasks ranging from the simple to the complex; in this case, simulating human users by liking and retweeting tweets. </p>
<p>This weaponisation of communications is not limited to South Africa. Examples from elsewhere in Africa abound, including Russia <a href="https://medium.com/dfrlab/local-support-for-russia-increased-on-facebook-before-burkina-faso-military-coup-a51df6722e59">currying favour</a> in Burkina Faso via Facebook and <a href="https://investigate.africa/wp-content/themes/ancir/dist/assets/reports/Kenya_Keyboard_Warriors_24_04_2021.pdf">coordinated Twitter campaigns</a> by factions representing opposing Kenyan politicians. It’s seen beyond the continent, too – in March 2023, researchers identified <a href="https://www.pbs.org/newshour/politics/on-twitter-thousands-of-pro-trump-bots-are-attacking-desantis-haley">a network of thousands of fake Twitter accounts</a> created to support former US president Donald Trump.</p>
<p>Legal scholar Antoinette Rouvroy calls this <a href="https://www.greeneuropeanjournal.eu/algorithmic-governmentality-and-the-death-of-politics/">“algorithmic governmentality”</a>. It’s the reduction of government to algorithmic processes as if society is a problem of big data sets rather than one of how collective life is (or should be) arranged and managed by the individuals in that society. </p>
<p>In <a href="https://journals.ufs.ac.za/index.php/aa/article/view/6111">a recent paper</a>, I coined the term “algopopulism”: algorithmically aided politics. The political content in our personal feeds not only represents the world and politics to us. It creates new, sometimes “alternative”, realities. It changes how we encounter and understand politics and even how we understand reality itself.</p>
<p>One reason algopopulism spreads so effectively is that it’s very difficult to know exactly how our perceptions are being shaped. This is deliberate. Algorithms are designed in a sophisticated way to <a href="https://mediarep.org/bitstream/handle/doc/14481/Democratization-of-Artificial-Intelligence_163-173_McQuillan_Political-Affinities_.pdf">override human reasoning</a>. </p>
<p>So, what can you do to protect yourself from being “gamed” by algorithmic processes? The answers, I suggest, lie in understanding a bit more about the digital shift that’s brought us to this point and the ideas of a British statistician, <a href="https://www.britannica.com/biography/Thomas-Bayes">Thomas Bayes</a>, who lived more than 300 years ago. </p>
<h2>How the shift happened</h2>
<p>Five recent developments in the technology space have led to algorithmic governmentality: considerable improvements in hardware; generous, flexible storage via the cloud; the explosion of data and data accumulation; the development of deep convolutional networks and sophisticated algorithms to sort through the extracted data; and the development of fast, cheap networks to transfer data. </p>
<p>Together, these developments have transformed data science into something more than a mere technological tool. It has become a method for using data not only to predict how you engage with digital media, but to <a href="http://www.ladeleuziana.org/wp-content/uploads/2022/09/Gray.pdf">preempt your actions and thoughts</a>.</p>
<p>This is not to say that all digital technology is harmful. Rather, I want to point out one of its greatest risks: we are all susceptible to having our thoughts shaped by algorithms, sometimes in ways that can have real-world effects, such as when they <a href="https://mg.co.za/article/2020-01-14-how-the-nigerian-and-kenyan-media-handled-cambridge-analytica/">affect democratic elections</a>.</p>
<h2>Bayesian statistics</h2>
<p>That’s where Thomas Bayes comes in. Bayes was an English statistician; <a href="https://www.nature.com/articles/s43586-020-00001-2">Bayesian statistics</a>, the dominant paradigm in machine learning, is named after him.</p>
<p>For much of modern computing, statistical processes have relied on frequentist statistics. Most people have encountered this method in one way or another, as in the question of how probable it is that a coin will land heads rather than tails. This approach starts from the assumption that the coin is fair and hasn’t been tampered with. This is called a null hypothesis. </p>
<p>Bayesian statistics does not require a null hypothesis; it changes the kinds of questions asked about probability entirely. Instead of starting from a fixed assumption about the coin and measuring the probability of heads or tails, Bayesian inference starts with a measure of subjective belief – a prior – which it updates as more <a href="https://reallifemag.com/chances-are/">evidence – or data – is gathered in real time</a>.</p>
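<p>The coin example can be made concrete. A standard Bayesian treatment (assumed here purely for illustration) places a Beta prior on the coin’s bias towards heads and updates it as flips are observed – no null hypothesis is needed, only a prior and data:</p>

```python
# Bayesian updating of belief about a coin's bias towards heads.
# With a Beta(a, b) prior, observing h heads and t tails gives the
# posterior Beta(a + h, b + t): the prior is simply updated by the
# evidence, with no null hypothesis assumed.

def posterior_mean(a, b, heads, tails):
    """Mean of the Beta posterior: expected probability of heads."""
    return (a + heads) / (a + b + heads + tails)

a, b = 1, 1  # uniform prior: no initial opinion about the coin
print(posterior_mean(a, b, 0, 0))    # 0.5   -- before any data
print(posterior_mean(a, b, 8, 2))    # 0.75  -- evidence shifts belief
print(posterior_mean(a, b, 80, 20))  # ~0.79 -- more data, firmer belief
```

The key point for what follows is that the starting prior matters: the same evidence moves a sceptical prior and a credulous prior to different conclusions.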
<p>How does this play out via algorithms? Let’s say you heard a rumour that the world is flat and you do a Google search for articles that affirm this view. Based on this search, the measure of subjective belief the algorithms have to work with is “the world is flat”. Gradually, the algorithms will curate your feed to show you articles that confirm this belief unless you have purposefully searched for opposing views too. </p>
<p>That’s because Bayesian approaches use prior distributions, knowledge or beliefs as a starting point of probability. Unless you change your prior distributions, the algorithm will continue providing evidence to confirm your initial measure of subjective belief. </p>
<p>But how can you know to change your priors if your priors are being confirmed by your search results all the time? This is the dilemma of algopopulism: Bayesian probability allows algorithms to create sophisticated filter bubbles that are difficult to break out of, because all your search results are based on your previous searches.</p>
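<p>This feedback loop can be simulated in a few lines. In the hypothetical sketch below (topics and weights invented), each click on a recommended topic strengthens the prior for that topic, so the feed converges on whatever the user started with:</p>

```python
# Toy filter-bubble loop: the feed recommends the topic with the
# strongest prior weight, and every click reinforces that prior.

def recommend(prior):
    """Recommend the topic with the highest prior weight."""
    return max(prior, key=prior.get)

def click(prior, topic, boost=1.0):
    """Clicking a recommendation strengthens its weight (the 'prior')."""
    prior[topic] += boost
    return prior

# A single initial search tipped the scales ever so slightly
prior = {"flat_earth": 1.1, "round_earth": 1.0}
for _ in range(5):
    topic = recommend(prior)
    prior = click(prior, topic)

print(recommend(prior), prior)
# flat_earth dominates: confirming evidence keeps updating the same prior
```

A tiny initial imbalance, amplified round after round, is enough to lock the feed onto one view – which is why deliberately searching for opposing views is one of the few ways to change the priors the system holds about you.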
<p>So, there is no longer a uniform version of reality presented to a specific population, like there was when TV news was broadcast to everyone in a nation at the same time. Instead, we each have a version of reality. Some of this overlaps with what others see and hear and some doesn’t. </p>
<h2>Engaging differently online</h2>
<p>Understanding this can change how you search online and engage with knowledge. </p>
<p>To avoid filter bubbles, always search for opposing views. If you haven’t done this from the start, do a search on a private browser and compare the results you get. More importantly, check your personal investment. What do you get out of taking a specific stance on a subject? For example, does it make you feel part of something meaningful because you lack real-life social bonds? Finally, endeavour to choose reliable sources. Be aware of a source’s bias from the start and avoid anonymously published content. </p>
<p>In these ways we can all be custodians of our individual and collective behaviour.</p>
<p class="fine-print"><em><span>Chantelle Gray does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>The political content in our personal feeds not only represents the world and politics to us. It creates new, sometimes “alternative”, realities.Chantelle Gray, Professor in the School of Philosophy, North-West UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1989792023-03-02T19:38:20Z2023-03-02T19:38:20ZProtecting privacy online begins with tackling ‘digital resignation’<figure><img src="https://images.theconversation.com/files/512989/original/file-20230301-26-syl2am.jpg?ixlib=rb-1.1.0&rect=25%2C8%2C5725%2C3819&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Going online often involves surrendering some privacy, and many people are becoming resigned to the fact that their data will be collected and used without their explicit consent.</span> <span class="attribution"><span class="source">(Shutterstock)</span></span></figcaption></figure><p>From <a href="https://www.cnbc.com/2022/11/26/the-biggest-risks-of-using-fitness-trackers-to-monitor-health.html">smart watches</a> and meditation apps to digital assistants and social media platforms, we interact with technology daily. And some of these technologies have <a href="https://childdatacitizen.com/coerced-digital-participation/">become an essential part of our social and professional lives</a>. </p>
<p>In exchange for access to their digital products and services, many tech companies collect and use our personal information. They use that information to predict and influence our future behaviour. This kind of <a href="https://news.harvard.edu/gazette/story/2019/03/harvard-professor-says-surveillance-capitalism-is-undermining-democracy/">surveillance capitalism</a> can take the form of <a href="https://theconversation.com/the-dark-side-of-alexa-siri-and-other-personal-digital-assistants-126277">recommendation algorithms</a>, targeted advertising and <a href="https://www.mckinsey.com/capabilities/growth-marketing-and-sales/our-insights/the-future-of-personalization-and-how-to-get-ready-for-it">customized experiences</a>. </p>
<p>Tech companies claim these personalized experiences and benefits enhance the user’s experience; however, <a href="https://repository.upenn.edu/cgi/viewcontent.cgi?article=1554&context=asc_papers">the vast majority of consumers are unhappy with these practices</a>, especially after learning how their data is collected.</p>
<h2>‘Digital resignation’</h2>
<p><a href="https://dx.doi.org/10.2139/ssrn.1478214">Public knowledge is lacking</a> when it comes to how data is collected. Research shows that corporations both cultivate feelings of resignation and <a href="https://repository.upenn.edu/cgi/viewcontent.cgi?article=1554&context=asc_papers">exploit this lack of literacy</a> to normalize the practice of maximizing the amount of data collected. </p>
<p>Events like the <a href="https://www.wired.com/story/cambridge-analytica-facebook-privacy-awakening/">Cambridge Analytica</a> scandal and revelations of mass government surveillance by <a href="https://www.reuters.com/article/us-usa-nsa-spying-idUSKBN25T3CK">Edward Snowden</a> shine a light on data collection practices, but they leave people powerless and resigned that their data will be collected and used without their explicit consent. This is called <a href="http://dx.doi.org/10.1177/1461444819833331">“digital resignation”</a>.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/512979/original/file-20230301-22-br1873.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="A smartphone displaying the facebook logo." src="https://images.theconversation.com/files/512979/original/file-20230301-22-br1873.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/512979/original/file-20230301-22-br1873.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/512979/original/file-20230301-22-br1873.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/512979/original/file-20230301-22-br1873.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/512979/original/file-20230301-22-br1873.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/512979/original/file-20230301-22-br1873.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/512979/original/file-20230301-22-br1873.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">In 2022, Facebook’s parent company, Meta, agreed to pay US$725 million to settle a lawsuit alleging that users’ personal information was fed to Cambridge Analytica.</span>
<span class="attribution"><span class="source">(AP Photo/Michael Dwyer, File)</span></span>
</figcaption>
</figure>
<p>But while there is much discussion surrounding the collection and use of personal data, there is far less discussion about the modus operandi of tech companies. </p>
<p><a href="https://spectrum.library.concordia.ca/id/eprint/990750/">Our research</a> shows that tech companies use a variety of strategies to deflect responsibility for privacy issues, neutralize critics and prevent legislation. These strategies are designed to limit citizens’ abilities to make informed choices. </p>
<p>Policymakers and corporations themselves must acknowledge and correct these strategies. Corporate accountability for privacy issues cannot be achieved by addressing data collection and use alone. </p>
<h2>The pervasiveness of privacy violations</h2>
<p>In their study of harmful industries such as the tobacco and mining sectors, <a href="http://dx.doi.org/10.1086/653091">Peter Benson and Stuart Kirsch</a> identified strategies of denial, deflection and symbolic action used by corporations to deflect criticism and prevent legislation.</p>
<p>Our research shows that these strategies hold true in the tech industry. Facebook has a long history of <a href="https://www.theguardian.com/technology/2019/aug/23/cambridge-analytica-facebook-response-internal-document">denying and deflecting responsibility</a> for privacy issues despite its numerous scandals and criticisms.</p>
<p>Amazon has also been harshly criticized for providing <a href="https://www.theguardian.com/technology/2022/jul/13/amazon-ring-doorbell-videos-police-11-times-without-permission">Ring security camera footage to law enforcement officials without a warrant or customer consent</a>, sparking <a href="https://www.eff.org/deeplinks/2021/02/lapd-requested-ring-footage-black-lives-matter-protests">civil rights concerns</a>. The company has also created <a href="https://www.theverge.com/2022/9/20/23362010/ring-nation-mgm-amazon-mark-burnett-barry-poznick-civil-rights-cancel">a reality show using Ring security camera footage</a>. </p>
<p>Canadian and U.S. federal government employees have <a href="https://www.wsj.com/articles/canada-follows-u-s-europe-with-tiktok-ban-on-government-devices-2273b07f">recently been banned from downloading TikTok</a> onto their devices due to an “unacceptable” risk to privacy. TikTok has launched <a href="https://www.theverge.com/2023/2/2/23583491/tiktok-transparency-center-tour-photos-bytedance">an elaborate spectacle of symbolic action</a> with the opening of its <a href="https://www.youtube.com/watch?v=PxfIGVQTfWQ">Transparency and Accountability Center</a>. This cycle of denial, deflection and symbolic action normalizes privacy violations and fosters cynicism, resignation and disengagement.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/512973/original/file-20230301-424-zveqs2.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="A black and silver ring doorbell on a door frame." src="https://images.theconversation.com/files/512973/original/file-20230301-424-zveqs2.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/512973/original/file-20230301-424-zveqs2.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/512973/original/file-20230301-424-zveqs2.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/512973/original/file-20230301-424-zveqs2.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/512973/original/file-20230301-424-zveqs2.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/512973/original/file-20230301-424-zveqs2.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/512973/original/file-20230301-424-zveqs2.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Amazon has faced criticism for creating a new reality show based on footage captured by Ring doorbells.</span>
<span class="attribution"><span class="source">(Shutterstock)</span></span>
</figcaption>
</figure>
<h2>How to stop digital resignation</h2>
<p>Technology permeates every aspect of our daily lives. But informed consent is impossible when the average person is neither motivated nor <a href="https://ndg.asc.upenn.edu/wp-content/uploads/2018/09/Persistent-Misperceptions.pdf">knowledgeable enough</a> to read terms and conditions policies designed to confuse.</p>
<p>The <a href="https://commission.europa.eu/strategy-and-policy/priorities-2019-2024/europe-fit-digital-age_en">European Union</a> has recently enacted laws that recognize these harmful market dynamics and have started holding platforms and tech companies <a href="https://www.cnn.com/2022/11/30/tech/twitter-eu-compliance-warning/index.html">accountable</a>. </p>
<p>Québec has recently revised its privacy laws with <a href="https://www.quebec.ca/gouvernement/ministeres-et-organismes/institutions-democratique-acces-information-laicite/acces-documents-protection-renseignements-personnels/pl64-modernisation-de-la-protection-des-renseignements-personnels">Law 25</a>. The law is designed to provide citizens with increased protection and control over their personal information. It gives people the ability to request their personal information and move it to another system, to rectify or delete it (<a href="https://gdpr.eu/right-to-be-forgotten/">the right to be forgotten</a>) as well as the right to be informed when being subjected to automated decision making. </p>
<p>It also requires organizations to appoint a privacy officer and committee, and conduct privacy impact assessments for every project where personal information is involved. Terms and policies must also be communicated clearly and transparently and consent must be explicitly obtained.</p>
<p>At the federal level, the government has tabled <a href="https://ised-isde.canada.ca/site/innovation-better-canada/en/canadas-digital-charter/bill-summary-digital-charter-implementation-act-2020">Bill C-27, the <em>Digital Charter Implementation Act</em></a>, which is currently under review by the House of Commons. It bears many resemblances to Québec’s Law 25 and also includes additional measures to regulate technologies such as artificial intelligence systems.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/512971/original/file-20230301-20-41o1s8.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="A laptop showing a terms and conditions document." src="https://images.theconversation.com/files/512971/original/file-20230301-20-41o1s8.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/512971/original/file-20230301-20-41o1s8.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/512971/original/file-20230301-20-41o1s8.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/512971/original/file-20230301-20-41o1s8.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/512971/original/file-20230301-20-41o1s8.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/512971/original/file-20230301-20-41o1s8.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/512971/original/file-20230301-20-41o1s8.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Online terms and conditions are often too long and difficult for consumers to understand.</span>
<span class="attribution"><span class="source">(Shutterstock)</span></span>
</figcaption>
</figure>
<p>Our findings highlight the urgent need for more privacy literacy and stronger regulations that not only define what is permitted, but also monitor and hold accountable the firms that breach consumer privacy. This would ensure informed consent to data collection and disincentivize violations. We recommend that: </p>
<p>1) Tech companies must explicitly specify what personal data will be collected and used. Only essential data should be collected and customers should be able to opt out of non-essential data collection. This is similar to the <a href="https://gdpr.eu/cookies/">EU’s General Data Protection Regulation</a> requirement to obtain user consent before using non-essential cookies, or <a href="https://support.apple.com/en-ca/HT212025">Apple’s App Tracking Transparency</a> feature, which allows users to block apps from tracking them.</p>
<p>2) Privacy regulations must also recognize and address the rampant use of <a href="https://www.vox.com/recode/22351108/dark-patterns-ui-web-design-privacy">dark patterns</a> to influence people’s behaviour, such as coercing them into providing consent. This can include the use of design elements, language or features such as making it difficult to decline non-essential cookies or making the button to provide more personal data more prominent than the opt-out button.</p>
<p>3) Privacy oversight bodies such as the <a href="https://www.priv.gc.ca/en">Office of the Privacy Commissioner of Canada</a> <a href="https://www.cbc.ca/news/canada/nova-scotia/houston-privacy-commissioner-promise-may-be-softening-1.6624079">must be fully independent</a> and authorized to investigate and <a href="https://financialpost.com/news/privacy-watchdogs-lament-lack-powers-tim-hortons-probe">enforce privacy regulations</a>.</p>
<p>4) While privacy laws like Québec’s require organizations to appoint a privacy officer, that role must also be fully independent and empowered to enforce compliance if it is to be effective in improving accountability.</p>
<p>5) Policymakers must be more proactive in updating legislation to account for the rapid advances of digital technology. </p>
<p>6) Finally, penalties for non-compliance often pale in comparison to the profits gained from, and the social harms caused by, the misuse of data. For example, the U.S. Federal Trade Commission (FTC) imposed <a href="https://www.ftc.gov/news-events/news/press-releases/2019/07/ftc-imposes-5-billion-penalty-sweeping-new-privacy-restrictions-facebook">a $5 billion penalty on Facebook</a> (5.8 per cent of its <a href="https://investor.fb.com/investor-news/press-release-details/2021/Facebook-Reports-Fourth-Quarter-and-Full-Year-2020-Results/default.aspx">2020 annual revenue</a>) for its role in the <a href="https://www.vox.com/policy-and-politics/2018/3/23/17151916/facebook-cambridge-analytica-trump-diagram">Cambridge Analytica scandal</a>.</p>
<p>While this fine is the highest ever given by the FTC, it is not representative of the social and political impacts of the scandal and its influence in <a href="https://www.npr.org/2018/03/20/595338116/what-did-cambridge-analytica-do-during-the-2016-election">key political events</a>. In some cases, it may be more profitable for a company to strategically pay a fine for non-compliance. </p>
<p>To make tech giants more responsible with their users’ data, the cost of breaching data privacy must outweigh the potential profits of exploiting consumer data.</p><img src="https://counter.theconversation.com/content/198979/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>The authors do not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.</span></em></p>Many people have become resigned to the fact that tech companies collect our private data. But policymakers must do more to limit the amount of personal information corporations can collect.Meiling Fong, PhD Student, Individualized Program, Concordia UniversityZeynep Arsel, Concordia University Chair in Consumption, Markets, and Society, Concordia UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1992632023-03-01T14:45:50Z2023-03-01T14:45:50ZDebate: The multiple paradoxes of Meta and Mark Zuckerberg<p>From Facebook’s psychological experiments on unwitting users in <a href="https://www.theguardian.com/technology/2014/jul/02/facebook-apologises-psychological-experiments-on-users">2014</a> to the Cambridge Analytica scandal in <a href="https://www.businessinsider.com/cambridge-analytica-a-guide-to-the-trump-linked-data-firm-that-harvested-50-million-facebook-profiles-2018-3">2018</a> or the Facebook files in <a href="https://www.wsj.com/articles/the-facebook-files-11631713039">2021</a>, controversies involving the company have been numerous. Despite an increased demand for transparency, Mark Zuckerberg, the CEO of Meta (formerly known as Facebook), has never been too inclined to commit to any specific actions.</p>
<p>This can be explained by the fact that social media platforms operate in the <a href="https://econreview.berkeley.edu/paying-attention-the-attention-economy/">attention economy</a>. Their algorithms – the ranking and recommendation systems they use to filter and propose content – aim at maximizing the time users spend on the platform. The goal is to expose them to ads for longer periods, and also to collect more personal data that can subsequently be monetized. To do so, social media companies design their algorithms to trigger behavioural changes – they stir up our desires and encourage us to satisfy them immediately, depriving us of the ability to truly choose.</p>
<h2>Losing track of misinformation’s spread</h2>
<p>My <a href="https://www.egos.org/jart/prj3/egos/main.jart?rel=de&reserve-mode=active&content-id=1646195112672&subtheme_id=1604725537942&show_prog=yes">ongoing research</a> centres on how conspiratorial social movements – including QAnon, the infamous group that was central to the January 6, 2021, assault on the US Capitol – are constructed and grow. It was fortuitous that in 2019 Meta started a <a href="https://help.crowdtangle.com/en/articles/4302208-crowdtangle-for-academics-and-researchers">pilot program</a> to “partner with researchers and academics to help them study the spread of public content on Facebook and Instagram”. The program prioritised research on topics such as misinformation, elections and Covid-19, with analysis possible through Facebook’s <a href="https://firstdraftnews.org/articles/%E2%80%A8what-facebook-gutting-crowdtangle-means-for-misinformation/">CrowdTangle database</a>.</p>
<p>However, a quick look at the documentation describing the available data made it clear that CrowdTangle had been designed in a way that made it close to impossible to conduct research on large-scale misinformation spread by groups such as QAnon. In particular, content deleted from Facebook or Instagram was also deleted from CrowdTangle. While Meta had been reluctant to <a href="https://www.nytimes.com/2020/10/06/technology/facebook-qanon-crackdown.html">stop the spread of QAnon misinformation</a>, when the company finally took action, it also removed the content from the database that researchers were supposed to use to… research QAnon.</p>
<p>Not only is the CrowdTangle database itself opaque, the application process is as well. To apply, researchers have to provide their personal information and briefly describe their research and their intended use of the data. Once the application is completed, an automated email states only that Meta will be in touch if it decides the researcher can be admitted, with no further information. While academics are used to being evaluated, it is far less usual to be given no clue about the expectations or evaluation criteria that normally serve as an informal contractual basis. What did Meta expect to get from this program?</p>
<p>When data cannot be accessed directly, a last (and very effective) recourse is to draw on the Internet Archive’s <a href="https://archive.org/web/">Wayback Machine</a>, which provides access to web pages as they were on a given date. While it was possible to access deleted Twitter accounts or YouTube pages, for example, Facebook was inaccessible. Even finding out why Facebook wasn’t archived was challenging. An explanation finally appeared on the website <a href="https://archive-it.org/">archive-it.org</a>: Facebook blocks the archival of its pages and groups, making it the most restrictive platform.</p>
<p>As a result, it is extremely difficult to study how conspiracy theories and disinformation spread and grow on Meta’s platforms. Of course, it is possible to follow such content in real time, before Meta takes action and deletes it both from its public platforms and from its private database. Yet researching such events requires being able to access and study their dynamics after the fact. Indeed, this is the only way the full puzzle can be reconstructed.</p>
<h2>Erasing the past</h2>
<p>Meta’s hindering of research into the spread of disinformation on its social media is all the more worrying given that its CEO is conducting an ideological campaign. He hopes <a href="https://www.theguardian.com/technology/2012/feb/01/facebook-letter-mark-zuckerberg-text">“to change how people relate to their governments and social institutions”</a> and has acknowledged that he is best described as a libertarian, a populist ideology that in some forms advocates reducing government to its absolute minimum.</p>
<p>Yet libertarianism in Meta’s view seems to mean replacing state governments with private companies, in effect replacing democracies with a sort of <a href="https://medium.com/predict/the-rise-of-techno-feudalism-6bdfe499130a">techno-feudalism</a>. These digital giants have reshaped all of our institutions, from the social sphere to the workplace, and in exchange have imposed a regime of generalized surveillance. This new regime has resulted in an increased concentration of wealth and greater job precarity, and it resists – thanks to <a href="https://www.investopedia.com/terms/a/asymmetricinformation.asp">information asymmetry</a> and <a href="https://www.opensecrets.org/news/2022/01/amid-rebrand-as-meta-facebook-set-a-new-lobbying-spending-record-in-2021/">intense lobbying</a> – the imposition of any rules and regulations on what it does.</p>
<p>As companies such as Meta continue to gain power, their practice of erasing any realities that do not suit their interests is reminiscent of the old <a href="https://www.smithsonianmag.com/history/vladimir-putins-rewriting-of-history-draws-on-a-long-tradition-of-soviet-myth-making-180979724/">Soviet model</a> rather than making the world more <a href="https://journals.sagepub.com/doi/abs/10.1177/1461444816660784">“open and connected”</a>, as Mark Zuckerberg has paradoxically claimed.</p><img src="https://counter.theconversation.com/content/199263/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Elise Berlinski does not work for, consult, own shares in or receive funding from any organisation that would benefit from this article, and has disclosed no affiliations other than her research institution.</span></em></p>Mark Zuckerberg says he wants the world to be more “open and connected”, but his decision to block archiving the company’s social media content argues otherwise.Elise Berlinski, Assistant Professor, Copenhagen Business SchoolLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1963582022-12-12T19:02:59Z2022-12-12T19:02:59ZIs it ever okay for journalists to lie to get a story?<figure><img src="https://images.theconversation.com/files/500321/original/file-20221212-97751-s12rbl.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">
</span> <span class="attribution"><span class="source">Shutterstock</span></span></figcaption></figure><p>In a time of <a href="https://www.edelman.com.au/trust-barometer-2022-australia#:%7E:text=Trust%20in%20all%20media%20sources%20has%20fallen%2C%20with,media%20by%20only%2024%25%20of%20Australians%20%28-8%20points%29.">falling trust</a> in the news media, it is vital journalists do not engage in news-gathering methods that further harm their credibility. Thanks to the rise of social media, misinformation and disinformation are rampant. Trust in news matters, so we can tell fact from fiction. Without it, democracy suffers.</p>
<p>In our new book, <em>Undercover Reporting, Deception and Betrayal in Journalism</em>, we ask whether deception is ever an acceptable method for journalists to use. In other words, is it ever okay to lie to a target to get a story?</p>
<p>We find it can be ethically justifiable under very specific conditions. We offer a six-point checklist for journalists (and the audience) to test if deception and betrayal are warranted. </p>
<p>Deception is one of the most common ethical problems in journalism. It ranges in seriousness from misrepresentation to the use of undercover reporting. </p>
<p>In fact, it is so common that some argue it is inherent in what journalists do. The late American writer and journalist Janet Malcolm, for instance, in her renowned book <a href="https://www.penguinrandomhouse.com/books/106480/the-journalist-and-the-murderer-by-janet-malcolm/">The Journalist and the Murderer</a>, said in her opening paragraph:</p>
<blockquote>
<p>Every journalist who is not too stupid or too full of himself [sic] to notice what is going on knows that what he does is morally indefensible. He is a kind of confidence man, preying on people’s vanity, ignorance, or loneliness, gaining their trust, and betraying them without remorse.</p>
</blockquote>
<p>While we argue Malcolm pushes her argument too far, we present a range of case studies that show not only the range of deceptive practices in contemporary journalism, but also their seriousness. </p>
<p>Three of the case studies are drawn from high-profile undercover operations or acts of deception. </p>
<p>One concerns the use by <a href="https://www.theguardian.com/technology/2019/mar/17/the-cambridge-analytica-scandal-changed-the-world-but-it-didnt-change-facebook">Cambridge Analytica</a> of data gathered by Facebook on 87 million of its users worldwide. These data were used to influence elections in several countries, including the United States in 2016.</p>
<p>Another involved the infiltration by <a href="https://www.9news.com.au/national/al-jazeera-journo-defends-one-nation-sting/ff384357-35be-4247-bd94-078c2ade0572#:%7E:text=Al%20Jazeera%20reporter%20Rodger%20Muller%20posed%20as%20the,talking%20about%20getting%20millions%20in%20donations%20from%20them.">Al Jazeera</a> of the National Rifle Association in the US. It then repeated this with the One Nation party in Australia in 2019.</p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/QYyX7O02yOg?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
</figure>
<p>The third case is the deception and betrayal inflicted on thousands of innocent people in Britain by Rupert Murdoch’s News of the World newspaper in hacking their mobile phones. This is perhaps the most egregious example of journalists failing their ethical duty in Britain in the past century. </p>
<p>From our examination of these cases, including interviews with key journalists, and building on the work of two distinguished American journalists and scholars, <a href="https://www.penguinrandomhouse.com/books/95291/the-elements-of-journalism-by-bill-kovach-and-tom-rosenstiel/">Bill Kovach and Tom Rosenstiel</a>, we developed our six-point framework for assessing the ethical justification for the use of undercover techniques, including those of masquerade and entrapment.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/hacking-trial-verdict-coulson-guilty-and-brooks-cleared-but-end-of-an-era-for-the-red-tops-27753">Hacking trial verdict: Coulson guilty and Brooks cleared, but end of an era for the red tops</a>
</strong>
</em>
</p>
<hr>
<p>Using this test, we concluded that the operation against Cambridge Analytica was ethically justified. It told the public important truths that we would not otherwise have known. The most notable of these was that Cambridge Analytica was in the business of interfering in sovereign elections – a direct threat to democratic wellbeing. </p>
<figure class="align-center ">
<img alt="" src="https://images.theconversation.com/files/500317/original/file-20221212-90872-3510zz.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/500317/original/file-20221212-90872-3510zz.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=337&fit=crop&dpr=1 600w, https://images.theconversation.com/files/500317/original/file-20221212-90872-3510zz.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=337&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/500317/original/file-20221212-90872-3510zz.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=337&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/500317/original/file-20221212-90872-3510zz.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=423&fit=crop&dpr=1 754w, https://images.theconversation.com/files/500317/original/file-20221212-90872-3510zz.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=423&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/500317/original/file-20221212-90872-3510zz.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=423&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">News of the World hacking the phone of murdered schoolgirl Milly Dowler is an example of when deception in journalism is completely unjustifiable.</span>
<span class="attribution"><span class="source">Facundo Arrizabalaga/EPA/AAP</span></span>
</figcaption>
</figure>
<p>But we also find that the operations against the NRA and One Nation were not justifiable; nor in any way could the phone hacking of celebrities and ordinary citizens such as the murdered schoolgirl <a href="https://edition.cnn.com/2012/11/28/world/europe/milly-dowler-profile/index.html">Milly Dowler</a> ever be justified to produce stories for News of the World. </p>
<p>Our framework consists of these six questions:</p>
<ol>
<li><p>Is the information sufficiently vital to the public interest to justify deception?</p></li>
<li><p>Were other methods considered and was deception the only way to get the story?</p></li>
<li><p>Was the use of deception revealed to the audience and the reasons explained?</p></li>
<li><p>Were there reasonable grounds for suspecting the target of the deception was engaged in activity contrary to the public interest?</p></li>
<li><p>Was the operation carried out with a risk strategy so it would not imperil a formal investigation by competent authorities?</p></li>
<li><p>Did the test of what is “sufficiently vital” to the public interest include an objective assessment of harm or wrongdoing?</p></li>
</ol>
<p>We consider a further case study to look at other aspects of deception and betrayal.</p>
<p>It concerns the deceptive conduct that goes under the general name of “hybrid journalism”. This is where advertising is presented in a way that is difficult to distinguish from news.</p>
<p>It goes under a variety of names such as “branded content”, “sponsored content” or “native advertising”. More recently, another label has come into fashion: “From our partners”. Reputable platforms use typography that distinguishes this from news content, but less reputable ones make it difficult to discern one from the other.</p>
<p>Journalists also engage in a range of more everyday deceptive practices. These include failing to declare oneself as a journalist; attempting to ingratiate oneself with a person by feigning a romantic interest in them; agreeing to publish information known to be untrue in order to serve the interests of a valued source; and ambushing a subject by having a microphone open or a camera rolling when the subject has no reason to think they are being recorded.</p>
<p>As these case studies show, deception and betrayal in journalism take many forms, and the ethical decisions surrounding them are far from straightforward. However, they are not inherent to the practice of journalism. Whether they are justifiable must be closely scrutinised, because the public’s trust in the media is at stake.</p><img src="https://counter.theconversation.com/content/196358/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Andrea Carson receives funding from the Australian Research Council.</span></em></p><p class="fine-print"><em><span>Denis Muller does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>A new book argues that very rarely it is ethically justifiable to deceive to get a story. But mostly it’s a dangerous and harmful practice that adds to the public’s mistrust of the media.Andrea Carson, Associate Professor, Department of Politics, Media and Philosophy, La Trobe UniversityDenis Muller, Senior Research Fellow, Centre for Advancing Journalism, The University of MelbourneLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1762702022-03-10T11:14:01Z2022-03-10T11:14:01ZWhy big firms are rarely toppled by corporate scandals – new research<figure><img src="https://images.theconversation.com/files/450729/original/file-20220308-23-ibsoam.jpg?ixlib=rb-1.1.0&rect=53%2C29%2C3940%2C2634&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">
</span> <span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/modern-corporate-buildings-skyscrapers-city-london-1599442681">Shutterstock/Donatas Dabravolskas</a></span></figcaption></figure><p>Everyone makes mistakes. And that includes the world’s biggest companies, which are reliably prone to gaffes, errors of judgment and wrongdoing. </p>
<p>Some of these moments could even be labelled as corporate scandals – the kind of incident which shoves firms into the spotlight and places their activities under detailed public scrutiny. </p>
<p>But do these events do lasting damage? Does an oil spill, fraudulent activity or other unethical behaviour really affect highly valued reputations, sales and market value?</p>
<p>Our research suggests not. In fact, <a href="https://onlinelibrary.wiley.com/doi/10.1111/1467-8551.12365">our analysis</a> of the effects of a wide variety of business scandals shows that only rarely is the effect as severe as we might imagine. </p>
<p>Instead, it seems the public has a strong tendency to forget and move on. And even initial unplanned (and at the time unwanted) attention can lead to greater brand awareness, proving the old adage that any publicity is good publicity.</p>
<p>Take the recent <a href="https://news.sky.com/story/joe-rogan-and-spotify-who-is-the-us-podcaster-and-what-is-the-covid-misinformation-row-all-about-12529388">furore over Spotify</a>. In early 2022, the world’s largest music streaming service was accused by science and health professionals of offering a platform for misinformation about COVID.</p>
<p>So what happened next? At first, there was a dip in the stock market price <a href="https://www.businessinsider.com/spotifys-shares-dropped-by-12-after-neil-young-pulls-music-2022-1?r=US&IR=T">of about 12%</a> when artists including Neil Young, Joni Mitchell and Graham Nash announced they were withdrawing their music from the service. This financial hiccup was followed by an immediate <a href="https://theweek.com/coronavirus/1009616/spotify-stock-rebounds-after-joe-rogan-apology">stock price rebound</a>, with the price likely to climb beyond pre-scandal levels. Spotify went on to <a href="https://www.independent.co.uk/news/world/americas/spotify-joe-rogan-podcast-covid-label-misinformation-b2003821.html">add disclaimers</a> to its COVID-related content and removed some content. </p>
<p>So in the long term, this will probably turn out to be nothing more than a slight bump in the road for Spotify. As a business, it provides a hugely popular service and boasts 172 million premium subscribers around the world, 28 million of whom joined in 2020. How many of them will cancel their subscriptions and forgo access to their carefully curated playlists because Young and Mitchell have decided to walk?</p>
<p>And while it is true that the company’s business model relies on musicians and other content providers, the reality is that most artists cannot afford to not be on the platform. Giving Spotify the benefit of the doubt, it’s entirely possible it made an honest mistake and underestimated how sensitive some people have become to discussions about the pandemic. Customers will probably make peace with this. </p>
<p>Likewise, Netflix will doubtlessly survive recent controversies over some of its content, such as the British comedian <a href="https://www.theguardian.com/culture/2022/feb/04/jimmy-carr-condemned-for-joke-about-gypsies-in-netflix-special">Jimmy Carr’s comments</a> about the Holocaust. With so many subscribers around the world attracted by the service’s wide range of content, Netflix is another example of an industry giant that can shrug things off. </p>
<p>And remember <a href="https://www.cbsnews.com/news/facebook-stock-price-recovers-all-134-billion-lost-in-after-cambridge-analytica-datascandal/">Facebook’s market collapse</a> after it was linked to the personal data of millions of users being collected by the political consulting firm Cambridge Analytica? Don’t feel bad if you don’t, it lasted about seven seconds (OK, maybe seven days). The company then recovered <a href="https://www.cbsnews.com/news/facebook-stock-price-recovers-all-134-billion-lost-in-after-cambridge-analytica-datascandal/">all of the US$134 billion</a> (£102 billion) it had previously lost in market value.</p>
<h2>Law and disorder</h2>
<p>So what makes some scandals stick? In our research, we found that only certain scandals tend to have significant negative effects on corporate reputations and performance. One apparently vital element is a company being found liable in a court of law. The legal process gives weight and depth to a scandal that might otherwise have quickly disappeared.</p>
<p>The <a href="https://www.bbc.co.uk/news/business-34324772">Volkswagen emissions scandal</a> for example, started in 2015. Seven years later, the company is still negotiating settlements in class action lawsuits brought against it for cheating on emissions tests. </p>
<figure class="align-center ">
<img alt="The Royal Courts of Justice in London." src="https://images.theconversation.com/files/450756/original/file-20220308-27-19gyrvm.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/450756/original/file-20220308-27-19gyrvm.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=407&fit=crop&dpr=1 600w, https://images.theconversation.com/files/450756/original/file-20220308-27-19gyrvm.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=407&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/450756/original/file-20220308-27-19gyrvm.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=407&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/450756/original/file-20220308-27-19gyrvm.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=512&fit=crop&dpr=1 754w, https://images.theconversation.com/files/450756/original/file-20220308-27-19gyrvm.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=512&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/450756/original/file-20220308-27-19gyrvm.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=512&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">It’s the court that counts.</span>
<span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/royal-courts-justice-london-136279025">Shutterstock/chrisdorney</a></span>
</figcaption>
</figure>
<p>The company’s share price dropped 30% immediately <a href="https://money.cnn.com/2015/09/24/investing/volkswagen-vw-emissions-scandal-stock/">after the scandal</a> (it has improved since the move towards electric vehicles) and Volkswagen’s reputation is still tarnished by the event, as it continues to attract significant regulatory scrutiny, affecting its <a href="https://www.morningstar.co.uk/uk/news/216954/stock-of-the-week-volkswagen.aspx">status among investors</a>. </p>
<p>Similarly, years after being found responsible for the <a href="https://www.epa.gov/enforcement/deepwater-horizon-bp-gulf-mexico-oil-spill">Deepwater Horizon disaster</a> in the Gulf of Mexico in 2010, BP is still paying the price of its negligence, as it continues to be embroiled in <a href="https://www.reuters.com/legal/transactional/5th-circuit-revives-bps-fight-over-deepwater-cleanup-workers-claims-2022-01-20/">many lawsuits</a>. And following regulatory intervention, German financial services provider Wirecard is not even around anymore to tell the story of how €1.9 billion (£1.6 billion) <a href="https://www.cnbc.com/2020/06/29/enron-of-germany-wirecard-scandal-casts-a-shadow-on-governance.html">disappeared from its balance sheet</a>. </p>
<p>Yet without corporate culpability determined in a court of law, very few accusations stick, even in the face of media scrutiny. Without clear evidence of harm caused to a group of people, there is very little in the way of measurable negative impact, or demand for compensation for the damage caused. </p>
<p>As consumers, we often like to signal moral superiority and enjoy some of the drama provided by the corporate discomfort of a juicy scandal. But our research found that people’s response to a company is driven by more mundane considerations. These are price, convenience, loyalty, ease of use and habit – and there aren’t many scandals considered quite scandalous enough to make us change any of those.</p><img src="https://counter.theconversation.com/content/176270/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Irina Surdu does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>The mud rarely sticks.Irina Surdu, Associate Professor of International Business Strategy, Warwick Business School, University of WarwickLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1775862022-03-09T14:30:07Z2022-03-09T14:30:07ZSocial media is being misused in Kenya’s political arena. Why it’s hard to stop it<figure><img src="https://images.theconversation.com/files/448083/original/file-20220223-13-rn4ett.jpeg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">In Kenya, social media has become a new battleground in electoral campaigns. </span> <span class="attribution"><span class="source">Jakub Porzycki/NurPhoto via Getty Images</span></span></figcaption></figure><p>The information landscape in Africa – as elsewhere in the world – has expanded exponentially over the last decade. The proliferation of platform media, including Facebook, Twitter and YouTube, has been instrumental in this expansion. This has created important new debating spaces. </p>
<p>These platforms have <a href="https://issafrica.org/iss-today/social-media-and-the-state-challenging-the-rules-of-engagement">now become essential</a> for political campaigns across the continent. In Kenya, for example, social media has turned into a powerful new battleground in electoral politics.</p>
<p>Traditionally, political debates have been shaped by mainstream media. However, over the years, <a href="https://internews.org/wp-content/uploads/legacy/2021-03/KMAReport_Final_20210325.pdf">public trust in these media</a> has waned. The country’s mainstream media remains strongly wedded to factional ethnic and class interests. This has increasingly undermined its capacity to facilitate fair and open debate. This is particularly true during elections.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/journalism-has-changed-education-must-reflect-the-reality-176944">Journalism has changed. Education must reflect the reality</a>
</strong>
</em>
</p>
<hr>
<p>Social media platforms have exploited this trust deficit, acting as important alternative sites for political deliberation. However, they have also become powerful tools for disinformation and misinformation. </p>
<p>According to a <a href="https://foundation.mozilla.org/en/blog/new-research-in-kenya-disinformation-campaigns-seek-to-discredit-pandora-papers/">recent report</a> by the Mozilla Foundation, which campaigns for an open and accessible internet, there is now a relatively well-established disinformation industry in Kenya. It is largely driven by social media influencers. </p>
<p>Over the last 10 years, I have carried out research on the interface between digital technologies and politics in Kenya. The Mozilla report demonstrates what I’ve witnessed – the evolution of the political role of some of the country’s digital spaces.</p>
<p>There is no evidence that disinformation and misinformation practices can, on their own, determine the outcome of elections. Nevertheless, they pose a danger to democratic processes. </p>
<p>They also poison an important space in which deliberative politics should take place. In politically charged environments, such as Kenya’s, they have the capacity to exploit long-held divisions with the potential to trigger violence.</p>
<h2>Manipulation tools</h2>
<p>The Mozilla Foundation report notes that social media influencers are capable of manipulating Twitter’s algorithms to determine trending topics. This is significant because such topics tend to shape editorial agendas outside the online platform. </p>
<p>The report identifies the use of a combination of methods to facilitate manipulation. One is the use of sock puppet accounts – multiple accounts controlled by the same user. Another is astroturfing. This is the practice of masking the sponsors of online messages so that they appear organic.</p>
<p>Using these kinds of tools, social media influencers can counter any negative stories about the people who are paying them – or malign opponents. </p>
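<p>One simple signal researchers look for when studying this kind of coordination is many distinct accounts posting the same text within a short window. The Python sketch below is purely illustrative – the function, thresholds and post data are invented for this example, and real disinformation research relies on far richer signals than verbatim repetition:</p>

```python
from collections import defaultdict

def flag_coordinated_posts(posts, min_accounts=3, window_seconds=600):
    """Flag texts posted verbatim by several distinct accounts within a short
    time window - one crude signal (not proof) of sock-puppet or astroturf
    activity. `posts` is a list of (account, text, unix_timestamp) tuples."""
    by_text = defaultdict(list)
    for account, text, ts in posts:
        # Normalise lightly so trivial case differences don't hide repetition
        by_text[text.strip().lower()].append((account, ts))

    flagged = []
    for text, entries in by_text.items():
        entries.sort(key=lambda e: e[1])
        distinct_accounts = {account for account, _ in entries}
        time_span = entries[-1][1] - entries[0][1]
        if len(distinct_accounts) >= min_accounts and time_span <= window_seconds:
            flagged.append(text)
    return flagged

# Hypothetical example data: (account, text, unix timestamp)
posts = [
    ("user_a", "Nothing to see in these leaks!", 1000),
    ("user_b", "Nothing to see in these leaks!", 1060),
    ("user_c", "nothing to see in these leaks!", 1100),
    ("user_d", "I read the report myself.", 1200),
]
print(flag_coordinated_posts(posts))  # → ['nothing to see in these leaks!']
```

<p>A heuristic like this produces false positives (ordinary users share quotes and memes too), which is one reason platform moderation of coordinated behaviour remains contentious.</p>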
<p>The Mozilla report cites the online reaction to the Pandora Papers leaks. This was an investigative report by the International Consortium of Investigative Journalists that exposed a <a href="https://www.icij.org/investigations/pandora-papers/">“shadow financial system that benefits the world’s most rich and powerful”</a>. </p>
<p>The papers revealed how powerful individuals, including the family of Kenya’s President Uhuru Kenyatta, were using tax havens and secrecy jurisdictions to avoid public scrutiny of their assets. The authors of the Mozilla report uncovered a sophisticated strategy to counter the largely incriminating evidence against the president’s family. It involved astroturfing and the use of hashtags such as #phoneyleaks and #offshoreaccountsfacts. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/social-media-users-in-kenya-and-south-africa-trust-science-but-still-share-covid-19-hoaxes-157894">Social media users in Kenya and South Africa trust science, but still share COVID-19 hoaxes</a>
</strong>
</em>
</p>
<hr>
<p>Disinformation and misinformation practices, especially at election time in Kenya, aren’t new. But platform media provide easier and faster ways of fabricating information and distributing it at scale. Those involved act with little fear of consequences, because the platforms enable anonymity and pseudonymity. </p>
<p>The rise of these practices was evident in Kenya’s 2017 elections, which attracted both local and international actors. One example was the <a href="https://foreignpolicy.com/2017/08/01/texts-lies-and-videotape-kenya-election-fake-news/">infamous “Cambridge Analytica”</a> case, which involved large-scale data manipulation through the <a href="https://www.businessinsider.co.za/cambridge-analytica-a-guide-to-the-trump-linked-data-firm-that-harvested-50-million-facebook-profiles-2018-3?r=US&IR=T">deliberate posting of fake news</a>.</p>
<p>There is evidence that these practices are on the rise ahead of the <a href="https://www.theelephant.info/editions/kenya-election-2022/">poll scheduled for August 2022</a>.</p>
<h2>Why solutions are hard to come by</h2>
<p>Disinformation and misinformation practices involve big tech companies, governments and the public. Their interests and priorities don’t always converge. This makes policy responses particularly challenging. </p>
<p>In addition, many governments are failing to act because of conflicting demands. On the one hand, they need to protect the public from perceived harmful information. On the other, they need to protect citizens’ rights to information and freedom of expression.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/punitive-laws-are-failing-to-curb-misinformation-in-africa-time-for-a-rethink-162961">Punitive laws are failing to curb misinformation in Africa. Time for a rethink</a>
</strong>
</em>
</p>
<hr>
<p>It gets even more complicated in countries such as Kenya where the state, as well as extensions of the state, are actively involved in misinformation and disinformation campaigns. </p>
<p>In Kenya, media owners are typically the beneficiaries of a licensing regime that rewards supporters of the government. In most cases, these politicians are keen to use their media for political mobilisation, sometimes through misinformation and disinformation campaigns. This can involve politicians actively undermining potentially effective policy responses that don’t suit their interests. </p>
<p>Another major problem is that social media influencers have a financial incentive to participate in disinformation practices. Political players are spending large amounts of money online to <a href="https://foreignpolicy.com/2017/08/01/texts-lies-and-videotape-kenya-election-fake-news/">popularise their candidature and undermine opponents</a>. These online platforms offer immediacy and scale. </p>
<p>Still, some policy responses from Canada and Sweden could form the basis for the development of local solutions.</p>
<p>Canada has taken the problem out of the state’s hands. It has done this by creating a nonpartisan panel tasked with decisions on disinformation practices. In Sweden, intelligence agencies work with journalists to address disinformation. As Chris Tenove, a research fellow at the University of British Columbia, <a href="https://crtc.gc.ca/eng/acrtc/prx/2020tenove.htm">puts it</a>:</p>
<blockquote>
<p>This uses the insights of intelligence agencies but leaves public communication up to independent journalists. </p>
</blockquote>
<p>These approaches are not necessarily a panacea. However, they offer a good starting point from which relevant, context-specific responses can be developed.</p>
<p class="fine-print"><em><span>George Ogola does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>
<p><em>The vacuum created by a drop in public trust in mainstream platforms has given rise to new media players who don’t always play by the rules.</em> George Ogola, Reader in Journalism, University of Central Lancashire. Licensed as Creative Commons – attribution, no derivatives.</p>
<h1>Facebook will drop its facial recognition system – but here’s why we should be sceptical</h1>
<p><em>Published 2021-11-10.</em></p>
<figure><img src="https://images.theconversation.com/files/430501/original/file-20211105-10492-raehl0.jpg?ixlib=rb-1.1.0&rect=0%2C0%2C5002%2C3332&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">
</span> <span class="attribution"><span class="source">Artem Oleshko/Shutterstock</span></span></figcaption></figure><p>Facebook has <a href="https://about.fb.com/news/2021/11/update-on-use-of-face-recognition/">announced</a> that it will stop using its facial recognition system – the artificial intelligence software which recognises people in photos and videos and generates suggestions about who to “tag” in them.</p>
<p>Facial recognition systems, like Facebook’s, identify people by matching faces to digital representations of faces stored on a database. Facebook has more than a billion of these representations on file but now says it will delete them. </p>
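<p>At its core, the matching step works by comparing a numerical representation (an “embedding”) of a new face against the stored templates and returning the closest match above a similarity threshold. Here is a minimal, illustrative sketch of that idea; the toy three-dimensional vectors and the `identify` function are invented for the example (production systems like Facebook’s DeepFace use high-dimensional embeddings produced by deep neural networks):</p>

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def identify(probe, database, threshold=0.8):
    """Return the name whose stored template is most similar to the probe
    embedding, or None if nothing clears the threshold."""
    best_name, best_score = None, threshold
    for name, template in database.items():
        score = cosine_similarity(probe, template)
        if score > best_score:
            best_name, best_score = name, score
    return best_name

# Hypothetical stored face templates (real embeddings have hundreds of dimensions)
db = {"alice": [0.9, 0.1, 0.0], "bob": [0.0, 0.8, 0.6]}
print(identify([0.88, 0.12, 0.02], db))  # → alice
```

<p>Deleting the stored templates, as Facebook says it will, removes the database side of this comparison – but not the underlying algorithm that can regenerate templates from photos.</p>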
<p>This announcement came barely a week after Facebook’s parent company rebranded itself from <a href="https://www.bbc.co.uk/news/technology-59083601">Facebook to Meta</a>. The name change reflects the company’s focus on the “metaverse”, a vision for the internet which uses technology like virtual reality to integrate real and digital worlds.</p>
<p>The name change probably also had something to do with a desire to detoxify Facebook’s image. In recent years, the social media giant has been embroiled in a number of controversies – perhaps most notably the <a href="https://www.theguardian.com/news/2018/mar/17/cambridge-analytica-facebook-influence-us-election">Cambridge Analytica scandal</a>. </p>
<p>This saw an app use <a href="https://www.spectator.co.uk/article/were-there-any-links-between-cambridge-analytica-russia-and-brexit-">Facebook’s platform</a> to harvest personal data belonging to millions of Facebook users, which was then passed to Cambridge Analytica, a now defunct British consulting firm. In 2018, the UK’s data protection watchdog, the Information Commissioner’s Office, <a href="https://www.bbc.co.uk/news/technology-54722362">fined Facebook £500,000</a> for its role in the scandal.</p>
<p>More recently we’ve heard former Facebook product manager Frances Haugen claim that the platform harms children, stokes division and undermines democracy in pursuit of fast growth and “<a href="https://www.npr.org/2021/10/05/1043377310/facebook-whistleblower-frances-haugen-congress?t=1635978244222">astronomical profits</a>”.</p>
<p>We might wonder whether the facial recognition move, too, is an attempt to present a new, responsible image focused on respecting and protecting users’ privacy.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/class-action-against-facebook-over-facial-recognition-could-pave-the-way-for-further-lawsuits-95215">Class action against Facebook over facial recognition could pave the way for further lawsuits</a>
</strong>
</em>
</p>
<hr>
<h2>Our data is like gold</h2>
<p>Facebook is free to join and use so it relies on another valuable product to cover its expenses – people’s data.</p>
<p>As part of my team’s <a href="https://researchportal.port.ac.uk/en/publications/localising-social-network-users-and-profiling-their-movement">research</a>, we got permission from a group of Facebook users, and had crawlers (bots that systematically browse the internet) collect their posts and pictures – or posts and pictures which featured them. Using machine learning algorithms on this data, we were able to profile their habits and predict with high accuracy things like where they would be the next day.</p>
<p>In a <a href="https://researchportal.port.ac.uk/en/publications/facewallgraph-using-machine-learning-for-profiling-user-behaviour">related study</a>, we looked at Facebook wall posts and, again using machine learning, we were able to build a psychological profile of users based on their posts. That is, we could ascertain when they were sad, happy, and so on.</p>
<p>If I can gather data from Facebook using a relatively simple program and come up with accurate conclusions, imagine what Facebook can do with its vast amount of data – including from our face templates – and artificial intelligence.</p>
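<p>Even a crude baseline makes the point. The sketch below is a simplified illustration of the kind of habit profiling described above, not the method used in our published studies: it predicts tomorrow’s location as the place a user most often visited on the same weekday in their history. The check-in data and function name are invented for the example.</p>

```python
from collections import Counter, defaultdict
from datetime import date, timedelta

def predict_next_day_location(checkins, today):
    """Naive habit profiler: predict tomorrow's location as the place the
    user has most often visited on that same weekday in the past.
    `checkins` is a list of (date, place) pairs mined from posts."""
    target_weekday = (today + timedelta(days=1)).weekday()
    by_weekday = defaultdict(Counter)
    for day, place in checkins:
        by_weekday[day.weekday()][place] += 1
    history = by_weekday[target_weekday]
    return history.most_common(1)[0][0] if history else None

# Hypothetical check-in data extracted from a user's posts
checkins = [
    (date(2021, 11, 1), "gym"),     # Monday
    (date(2021, 11, 8), "gym"),     # Monday
    (date(2021, 11, 2), "office"),  # Tuesday
]
# Asked on a Sunday, the profiler predicts Monday's likely location
print(predict_next_day_location(checkins, date(2021, 11, 14)))  # → gym
```

<p>Machine learning models trained on thousands of such signals do far better than this weekday counter, which is exactly why the scale of Facebook’s data matters.</p>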
<figure class="align-center ">
<img alt="A woman on public transport using her phone." src="https://images.theconversation.com/files/430503/original/file-20211105-19-1rfn6nf.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/430503/original/file-20211105-19-1rfn6nf.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=348&fit=crop&dpr=1 600w, https://images.theconversation.com/files/430503/original/file-20211105-19-1rfn6nf.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=348&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/430503/original/file-20211105-19-1rfn6nf.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=348&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/430503/original/file-20211105-19-1rfn6nf.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=438&fit=crop&dpr=1 754w, https://images.theconversation.com/files/430503/original/file-20211105-19-1rfn6nf.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=438&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/430503/original/file-20211105-19-1rfn6nf.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=438&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">In a sense, Facebook is powered by our data.</span>
<span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/young-woman-using-smartphone-subway-1038574906">Rawpixel.com/Shutterstock</a></span>
</figcaption>
</figure>
<p>Amid privacy concerns about the technology, in 2019, Facebook made the facial recognition feature <a href="https://www.vox.com/recode/22761598/facebook-facial-recognition-meta?scrolla=5eb6d68b7fedc32c19ef33b4">opt-in</a>. Last year, Facebook agreed to pay a US$650 million settlement (roughly £480 million) after <a href="https://theconversation.com/class-action-against-facebook-over-facial-recognition-could-pave-the-way-for-further-lawsuits-95215">a lawsuit</a> claimed its facial recognition system violated Illinois’ Biometric Information Privacy Act.</p>
<p>While many might construe Meta’s announcement as a positive development, I see it as a convenient distraction from, or perhaps a countermeasure to, the <a href="https://www.theguardian.com/technology/2021/apr/12/facebook-fake-engagement-whistleblower-sophie-zhang">whistleblower</a> <a href="https://www.npr.org/2021/10/05/1043377310/facebook-whistleblower-frances-haugen-congress?t=1635978244222">testimonies</a> presenting a company that puts profits before user safety. </p>
<p>It’s also worth pointing out that Facebook is struggling to retain <a href="https://www.socialmediatoday.com/news/internal-documents-show-facebook-usage-among-young-users-is-in-steep-declin/607708/">young users</a>, so they’re probably looking for ways to attract this important group to the platform.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/facebook-the-metaverse-and-the-monetisation-of-higher-education-171036">Facebook, the metaverse and the monetisation of higher education</a>
</strong>
</em>
</p>
<hr>
<h2>It’s not gone completely</h2>
<p>Meta’s <a href="https://about.fb.com/news/2021/11/update-on-use-of-face-recognition/">announcement</a> specified facial recognition technology would be limited to “a narrow set of use cases” moving forward. This could include verifying a user’s identity so they can gain access to a locked account, for example.</p>
<p>As such, Meta is <a href="https://www.nytimes.com/2021/11/02/technology/facebook-facial-recognition.html">reportedly</a> <a href="https://www.ft.com/content/dd906710-f0b0-42ef-9d89-309018e72aa7">keeping DeepFace</a>, the algorithm behind its facial recognition technology. Meta spokesperson Jason Grosse said the company <a href="https://www.nytimes.com/2021/11/02/technology/facebook-facial-recognition.html">hasn’t ruled out</a> using facial recognition technology in future products. Notably, Grosse has also reportedly said the commitment to stop facial recognition <a href="https://www.vox.com/recode/22761598/facebook-facial-recognition-meta?scrolla=5eb6d68b7fedc32c19ef33b4">doesn’t apply</a> to its metaverse products. </p>
<p>Grosse told the publication Recode:</p>
<blockquote>
<p>We believe this technology has the potential to enable positive use cases in the future that maintain privacy, control, and transparency, and it’s an approach we’ll continue to explore as we consider how our future computing platforms and devices can best serve people’s needs […] For any potential future applications of technologies like this, we’ll continue to be public about intended use, how people can have control over these systems and their personal data, and how we’re living up to our responsible innovation framework.</p>
</blockquote>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/metaverse-five-things-to-know-and-what-it-could-mean-for-you-171061">Metaverse: five things to know – and what it could mean for you</a>
</strong>
</em>
</p>
<hr>
<p>It’s important to understand that when a person engages in a virtual reality environment in <a href="https://theconversation.com/metaverse-five-things-to-know-and-what-it-could-mean-for-you-171061">the metaverse</a>, they will generate a range of biometric data, well beyond facial scans. For example, depending on the system, it may be possible to detect and collect eye movements, body movements, blood pressure, heart rate, and details about the users’ environment. </p>
<p>Ultimately, the artificial intelligence accompanying the metaverse will be much more sophisticated and likely bring with it a new set of data privacy issues.</p>
<p class="fine-print"><em><span>Stavros Shiaeles does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>
<p><em>The commitment applies to the social network, but not necessarily to the metaverse.</em> Stavros Shiaeles, Senior Lecturer in Cyber Security, University of Portsmouth. Licensed as Creative Commons – attribution, no derivatives.</p>
<h1>Privacy may be under threat, but its protection alone isn’t enough to preserve civil liberties</h1>
<p><em>Published 2021-03-23.</em></p>
<figure><img src="https://images.theconversation.com/files/390448/original/file-20210318-23-ybwzfq.jpg?ixlib=rb-1.1.0&rect=0%2C4%2C2995%2C1989&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Demonstrators shine their cellphones during a protest in St. Louis in 2020.</span> <span class="attribution"><a class="source" href="https://www.gettyimages.com/detail/news-photo/protesters-light-up-their-cell-phones-during-a-protest-news-photo/1228695076?adppopup=true">Michael B. Thomas/Getty Images</a></span></figcaption></figure><p><em>The <a href="https://theconversation.com/us/topics/research-brief-83231">Research Brief</a> is a short take about interesting academic work.</em></p>
<h2>The big idea</h2>
<p>While the battle over privacy is everywhere in American life, it’s actually a relatively new concept that didn’t become grounded in law until over a century after the Declaration of Independence. </p>
<p>Privacy is supposedly a core American value, forged in the country’s founding. For example, <a href="https://www.google.com/books/edition/American_Privacy/b7CE5PqvVw8C?hl=en">historians claim</a> that privacy concerns drove the American Revolution. Colonists were reacting to British troops invading their warehouses and shops in search of taxable goods, and to British demands that the Colonists shelter soldiers in their homes. </p>
<p>And today, <a href="https://www.aclu.org/blog/national-security/qa-daniel-solove-how-bad-security-arguments-are-undermining-our-privacy">civil liberties advocates argue</a> that democracy requires privacy. They believe privacy is necessary to create independent-minded, free-thinking citizens who vote as they wish.</p>
<p>Yet the term “privacy” is not mentioned in the Constitution. A <a href="https://doi.org/10.2307/1321160">legal right to privacy</a> wasn’t articulated until 1890. And it came to be robustly <a href="https://supreme.justia.com/cases/federal/us/381/479/">defended by the Supreme Court</a> only in the 1960s. </p>
<p>These are among the many things I discovered while researching “<a href="https://www.cambridge.org/us/academic/subjects/law/e-commerce-law/life-after-privacy-reclaiming-democracy-surveillance-society?format=PB&isbn=9781108811910">Life after Privacy: Reclaiming Democracy in a Surveillance Society</a>,” which explores the nature of privacy, its history and its uncertain future. I also learned that privacy remains an ill-formed and embattled concept. </p>
<h2>Why it matters</h2>
<p><a href="https://www.pewresearch.org/internet/2019/11/15/americans-and-privacy-concerned-confused-and-feeling-lack-of-control-over-their-personal-information/">Americans feel</a> their privacy is gravely endangered in the digital age. Corporations use increasingly sophisticated methods of data collection to <a href="https://www.forbes.com/sites/ianmorris/2016/12/31/facebook-knows-when-you-fall-in-love-and-thats-pretty-creepy/?sh=21021cf6f525">analyze and influence people’s behavior</a>.</p>
<p>This ability can be used both to bolster and hamper democracy. For example, Facebook used its deep knowledge of user data to <a href="https://www.nature.com/news/facebook-experiment-boosts-us-voter-turnout-1.11401">boost voter turnout</a> in 2010. Four years later, data firm Cambridge Analytica used the same technique to <a href="https://www.theguardian.com/uk-news/2018/mar/23/leaked-cambridge-analyticas-blueprint-for-trump-victory">target voters</a> with Donald Trump campaign ads.</p>
<p>In my research, I learned that political liberty relies much less on privacy than on people’s ability and willingness to demonstrate and deliberate in the public realm. By that I mean that protecting privacy alone will not secure the freedoms of consumers and citizens. I believe people need to use the power of public protest to gain and maintain their civil liberties.</p>
<p>The <a href="https://daily.jstor.org/the-stonewall-riots-didnt-start-the-gay-rights-movement/">gay rights movement</a> demonstrated this power in the past century. Throughout the 20th century in much of America, people were <a href="https://journalofethics.ama-assn.org/article/decriminalization-sodomy-united-states/2014-11">prosecuted for homosexual behavior</a> in their private lives. The aggressive work of <a href="https://www.nytimes.com/interactive/2020/04/13/t-magazine/act-up-aids.html">ACT UP</a> and other gay rights activist groups led to legal protections for people to live and love as they wished. And in 2003, the Supreme Court <a href="https://supreme.justia.com/cases/federal/us/539/558/">overruled all state laws</a> that had prohibited homosexuality. </p>
<p>Civil and <a href="https://www.history.com/topics/19th-century/labor">labor rights campaigns</a> in the 20th century had similar outcomes. Despite being <a href="https://taylorbranch.com/king-era-trilogy/parting-the-waters/">spied on and hounded</a> from the start, civil rights leaders used their power of coordination and public organizing to overcome their lack of privacy. Their organizational roots, built over many decades, enabled them to withstand repeated assault and launch <a href="https://www.history.com/topics/black-history/the-greensboro-sit-in">disciplined</a>, <a href="https://www.biography.com/news/black-history-birmingham-childrens-crusade-1963">creative</a> protests. </p>
<p>In other words, privacy is not so much a prerequisite for democracy as it is a product of democratic action. </p>
<h2>What still isn’t known</h2>
<p>It is still unclear how digital technology has changed the nature of political protest, and whether it has made it more or less effective. </p>
<p>As scholar <a href="https://yalebooks.yale.edu/book/9780300259292/twitter-and-tear-gas">Zeynep Tufekci notes</a>, modern, internet-fueled “networked protests” like <a href="https://www.washingtonpost.com/national/on-leadership/what-is-occupy-wall-street-the-history-of-leaderless-movements/2011/10/10/gIQAwkFjaL_story.html">Occupy Wall Street</a> and the <a href="https://www.aljazeera.com/news/2020/12/17/what-is-the-arab-spring-and-how-did-it-start">Arab Spring</a> used social media to quickly organize massive protests, but with <a href="https://www.vice.com/en/article/nze9em/twitter-makes-it-easy-to-start-a-revolution-without-finishing-it">limited long-term success</a>. </p>
<h2>What’s next</h2>
<p>Digital technology has changed Americans’ behavior in surprising ways, including when it comes to privacy. People share intimate details about their lives on social media. Meanwhile, digital media <a href="https://www.pnas.org/content/118/9/e2023301118">has also given rise</a> to hardened partisanship and political radicalization.</p>
<p>I believe philosophers need to look ahead and consider what other new behaviors digital technology is inspiring. Perhaps consumers and citizens will become more predictable, as <a href="https://www.theguardian.com/books/2019/feb/02/age-of-surveillance-capitalism-shoshana-zuboff-review">data analysts believe</a>. Alternatively, people may rise up and rebel against constant surveillance and the efforts of spying governments and marketers to control them.</p>
<p class="fine-print"><em><span>Firmin DeBrabander does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>
<p><em>A privacy expert says citizens will need to exercise their right to public protest if they want to preserve their privacy.</em> Firmin DeBrabander, Professor of Philosophy, Maryland Institute College of Art. Licensed as Creative Commons – attribution, no derivatives.</p>
<h1>AI and you: how confusion about the technology that runs our world threatens democracy</h1>
<p><em>Published 2021-03-12.</em></p>
<p>Thomas Jefferson, the American statesman and third US president, was many things (including, notoriously, a slave-owner). But whatever else he was (or wasn’t), he was a firm believer in what he called the “<a href="https://www.amazon.co.uk/Rise-American-Democracy-Jefferson-Lincoln/dp/0393329216">suffrage of the people</a>” — what today we’d call democracy. </p>
<p>The democracy he had in mind, of course, wasn’t a truly “general suffrage” of all citizens: in its most ambitious form it enfranchised only male taxpayers and soldiers. It was also far removed from the classical ideal set by Ancient Athens, in which all eligible citizens gathered regularly to debate and settle policy. Still, even Jefferson’s limited and strictly “representative” version of democracy required something vital if it was to function properly: not just an able and knowledgeable public service, but a well-informed voting public. </p>
<p><a href="http://www.let.rug.nl/usa/presidents/thomas-jefferson/letters-of-thomas-jefferson/jefl73.php">As Jefferson himself put it</a>: “Whenever the people are well-informed, they can be trusted with their own government.” Most Western democracies subscribe to this example today. But in the face of scientific and technological progress over the course of the 20th century, many political scientists, <a href="https://www.amazon.co.uk/Future-Shock-Alvin-Toffler/dp/0808501526">futurists</a> and <a href="http://content.time.com/time/subscriber/article/0,33009,916784,00.html">journalists</a> have been left wondering about the future of democracy. </p>
<p>In the quest to figure out where we’re headed, an obvious question looms. Just how well-informed can we expect the average citizen to be in a world that grows ever more complex and befuddling by the day? It would be naïve to think that the rise of science and technology hasn’t made it more difficult to fully comprehend the problems we face as citizens. </p>
<p>Global warming is the standout issue. Unless you happen to belong to a handful of experts who are well-informed on geology, meteorology and oceanography, you have to make a serious effort to understand the intricacies of climate science. </p>
<p>Add a steady stream of scepticism in the news coverage, and it’s no wonder climate scepticism is so high in some countries. In the US, up to <a href="https://www.pewresearch.org/science/2019/11/25/u-s-public-views-on-climate-and-energy/">20% of citizens</a> don’t think human activity contributes much, or anything at all, to climate change. In Australia, <a href="https://www.pewresearch.org/fact-tank/2019/04/18/a-look-at-how-people-around-the-world-view-climate-change/">38% of people surveyed</a> don’t consider climate change to be a major threat. The same survey found that in Canada that figure is 34%, and in the UK 30%.</p>
<h2>There’s a new game in town</h2>
<p>Unfortunately, the past five to 10 years have also seen the rise of artificial intelligence (AI), and more particularly a branch of AI called “machine learning”.</p>
<p>Machine learning occupies an interesting position in the story of scientific progress. On one hand it’s a natural outcome of developments in computer science that began in the 1980s. On the other hand, its total dependence on information — and its ability to make do with all sorts of information, including things like your keystroke and heart rate — marks what could turn out to be a more radical break with previous technologies. </p>
<figure class="align-center ">
<img alt="Robot side by side with a man" src="https://images.theconversation.com/files/389119/original/file-20210311-22-s8uyei.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/389119/original/file-20210311-22-s8uyei.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=310&fit=crop&dpr=1 600w, https://images.theconversation.com/files/389119/original/file-20210311-22-s8uyei.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=310&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/389119/original/file-20210311-22-s8uyei.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=310&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/389119/original/file-20210311-22-s8uyei.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=390&fit=crop&dpr=1 754w, https://images.theconversation.com/files/389119/original/file-20210311-22-s8uyei.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=390&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/389119/original/file-20210311-22-s8uyei.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=390&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">Machine learning allows new information to be put to a variety of questionable uses, including surveillance.</span>
<span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/ai-artificial-intelligence-concept-communication-network-1564233937">metamorworks/Shutterstock</a></span>
</figcaption>
</figure>
<p>Machine learning uses existing information to generate new information. But it also allows that new information to be put to a variety of questionable uses, including surveillance and manipulation. </p>
<p>If you’ve ever been recommended products while shopping online, you’ve probably been profiled. Ever been denied an application for a credit card in short order? Again, you’ve probably been profiled. Algorithmic profiling presents a host of ethical and legal challenges, particularly around discrimination and privacy. But profiling is just the tip of an ever-expanding iceberg.</p>
<h2>Democracy under attack?</h2>
<p>Many uses of big tech pose a threat to individuals as individuals, which is bad enough. Other uses, though, pose a threat to individuals as democratic citizens. Depressingly, there’s already a standout example here.</p>
<p>In 2017, it <a href="https://www.amazon.co.uk/Mindf-Inside-Cambridge-Analyticas-Break/dp/1788164997">transpired</a> that the UK company Cambridge Analytica had assisted the 2016 Brexit Leave campaign by providing it with targeted political advertising services. These services were facilitated by <a href="https://www.amazon.co.uk/Targeted-Cambridge-Analytica-Facebook-Democracy/dp/0008363900/ref=pd_sbs_14_3/260-3563907-5589956?_encoding=UTF8&pd_rd_i=0008363900&pd_rd_r=d51cebbf-62e9-4b1b-8010-4096301aadc9&pd_rd_w=Y40pl&pd_rd_wg=0Ns5p&pf_rd_p=2304238d-df78-4b25-a9a0-b27dc7bd722e&pf_rd_r=NHGR5WBF8DE4896SXXRB&psc=1&refRID=NHGR5WBF8DE4896SXXRB">access to Facebook data</a>, in a major breach of Facebook’s own policies. </p>
<p>Such so-called “dark” ads are usually sent to the very people most likely to be susceptible to them. Unlike old-school pamphleteering and letterboxing, the ads aren’t distributed helter-skelter. They’re targeted, based on in-depth mining of people’s browsing histories, Facebook likes, tweets, and online purchases. What’s more, a dark ad is typically sent without the receiver having the benefit of hearing the opposing view. </p>
<p>This isn’t how the democratic “marketplace of ideas” is supposed to work. Indeed, how we’re to understand and regulate the influence of algorithms on our perceptions is among the most important questions AI poses today. Another question worth pondering is why <a href="https://www.nytimes.com/2020/08/20/world/europe/uk-england-grading-algorithm.html">so many governments</a> around the world seem bent on <a href="https://theconversation.com/from-robodebt-to-racism-what-can-go-wrong-when-governments-let-algorithms-make-the-decisions-132594">automating public administration</a> when there’s plenty of evidence to suggest it’s often neither efficient <a href="https://www.theguardian.com/technology/2020/feb/05/welfare-surveillance-system-violates-human-rights-dutch-court-rules?CMP=Share_iOSApp_Other">nor fair</a>. </p>
<p>Basic lack of understanding obstructs more fruitful civic engagement with AI, data and big tech. But as citizens, we should know what’s going on — and who benefits. </p>
<p>That’s why my colleagues and I put our heads together and <a href="https://mitpress.mit.edu/books/citizens-guide-artificial-intelligence">wrote a book</a> that we think will help people sort their way through the AI jungle. Citizens deserve more than a superficial acquaintance with tech – not so much detail as to cause confusion, but enough to inform a principled understanding of the world around them. </p>
<p>As Time journalist <a href="http://content.time.com/time/subscriber/article/0,33009,916784,00.html">Frank Trippett put it</a> way back in 1979: “The expert will have to play a more conscious role as citizen, just as the ordinary (citizen) will have to become ever more a student of technical lore.” </p>
<p>Our hope is that more journalists, industry leaders and academics will fulfill Trippett’s vision by becoming expert citizens themselves. This means giving people as much clear information as they need to make informed, responsible democratic choices. Democracy demands no less.</p><img src="https://counter.theconversation.com/content/156820/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>John Zerilli does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>It would be naïve to think that the rise of science and technology hasn’t made it more difficult to understand the problems we face as citizensJohn Zerilli, Assistant Professor in AI, Data, and the Rule of Law, The University of EdinburghLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1554112021-02-25T17:48:29Z2021-02-25T17:48:29ZThree ways to encourage companies to keep our data safe<figure><img src="https://images.theconversation.com/files/384482/original/file-20210216-19-oxfwy5.jpg?ixlib=rb-1.1.0&rect=8%2C58%2C1500%2C1041&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">How can we keep our personal data safe? </span> <span class="attribution"><a class="source" href="https://www.flickr.com/photos/jimkaskade/15538115700/in/photolist-pF3Usy-RptCsT-T2D3w3-CEmgyU-oKUMHg-2f6axYh-2f6ayus-24yiUyM-5Xma5d-2f6aB71-2d53akR-RZmHWQ-oVpQbV-24yiU12-mZDZ8r-bq9U2H-bq9TZF-Mvahk4-oCpukD-Dyed6e-9eAVm8-2goBaQN-22hocSg-Msr4ew-Msr3d3-MvahQx-MsqXw9-2b9Kh8L-24yiUPg-LNdz9B-2khdskm-2hUwUZj-2i7KKXE-MEzZpm-MhztwS-25Q4aBN-MsqWHf-pZWAB7-L6s3DG-Wdz53p-LF5S9s-MvahLz-LF5RKG-MsqTUj-Msr2cf-UWePAj-22emP4d-25TTEVH-MvahGM-GeH5Mc">Jim Kaskade/Flickr</a>, <a class="license" href="http://creativecommons.org/licenses/by/4.0/">CC BY</a></span></figcaption></figure><p>With online shopping, loyalty programs, smart devices and many other aspects of our daily lives, the companies that make it all possible can collect vast amounts of our personal data. Sometimes it’s just common sense, like when we hail a taxi with a mobile app – we want the platform to know our location to match us to the closest driver. 
With this and other data, companies can personalize their products and services to fit our preferences and needs.</p>
<p>At the same time, the ubiquitous availability of data that is so deeply personal presents risks. If the company that gathered it is less than virtuous, we may find ourselves signed up for unwanted ads, or worse. A notorious example is the consulting firm Cambridge Analytica, which exploited the Facebook data of <a href="https://www.businessinsider.com/cambridge-analytica-a-guide-to-the-trump-linked-data-firm-that-harvested-50-million-facebook-profiles-2018-3?IR=T">50 million Americans in an attempt to sway the 2016 elections</a>. While this is an extreme example, smaller scale but similar data leakage and misuse incidents occur on a daily basis. </p>
<p>What measures can governments and regulators take to prevent such abuses? How should companies and digital businesses, whose business models rely in large part on our data, change their practices and policies so that our data are safe? </p>
<h2>Why current regulation is inefficient</h2>
<p>To shed light on digital privacy and design measures that regulators and companies can undertake to preserve consumer privacy, a team of researchers from the US, UK and Canada studied the interaction between three parties who are concerned with our data: us as individuals, the companies we interact with, and third parties. Our research question was: how does a company’s data strategy – essentially, its decisions of how much data to collect and how to protect data – influence the interaction between these three parties?</p>
<p>We found that, in general, when companies choose data policies based only on self-interest, more data are collected than would be optimal for consumers. <a href="https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3704446">Our findings</a> indicate that when industry leaders – for example, <a href="https://www.wsj.com/articles/the-facts-about-facebook-11548374613">Mark Zuckerberg</a> – claim to collect exactly as much data as their consumers wish to share (or even less), they’re not always being honest. </p>
<p>Our work highlights the need for regulation of such markets. In the United States the key data regulator is the Federal Trade Commission (FTC). After the Cambridge Analytica scandal erupted, the <a href="https://www.ftc.gov/news-events/press-releases/2019/07/ftc-imposes-5-billion-penalty-sweeping-new-privacy-restrictions">FTC fined Facebook $5 billion</a>, even as it left the company’s business model untouched. The FTC’s major efforts are now essentially directed at asking companies to enforce their data-protection policies and deliver at least a minimal level of data protection. Our research shows that this is simply not enough.</p>
<h2>Two solutions to reduce data collection</h2>
<p>We propose two key types of instruments for discouraging companies from collecting more data than is strictly necessary:</p>
<ul>
<li><p>A tax proportional to the amount of data that a company collects. The more data a company collects about its customers, the higher the financial costs of these data to the company.</p></li>
<li><p>Liability fines. The concept is that the fines levied by regulators on companies after a data breach should be proportional to the damage that consumers suffer. In the case of Cambridge Analytica, the breach was massive, so the fine should have been correspondingly substantial.</p></li>
</ul>
<p>Both these instruments can help restore efficiency in these kinds of markets and give a regulator like the FTC the means to push companies to collect only as much data as customers are willing to share.</p>
<h2>Rethinking revenue management</h2>
<p>Recent years have seen the emergence of data-driven revenue management. Companies increasingly harness our personal data to sell us products and services. Insurance companies offer personalized quotes based on intimate details of our lives, including our medical histories. The financial industry designs loans that fit our spending patterns. Facebook and Google decide how to build our news feeds with an eye on their advertisers. Amazon chooses an assortment of products to offer each customer based on their past purchases.</p>
<figure class="align-center ">
<img alt="Facebook offices." src="https://images.theconversation.com/files/384483/original/file-20210216-21-1akztco.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/384483/original/file-20210216-21-1akztco.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=450&fit=crop&dpr=1 600w, https://images.theconversation.com/files/384483/original/file-20210216-21-1akztco.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=450&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/384483/original/file-20210216-21-1akztco.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=450&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/384483/original/file-20210216-21-1akztco.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=566&fit=crop&dpr=1 754w, https://images.theconversation.com/files/384483/original/file-20210216-21-1akztco.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=566&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/384483/original/file-20210216-21-1akztco.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=566&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">Facebook’s advertising-driven business model gives it incentive to gather as much information as possible about its users.</span>
<span class="attribution"><a class="source" href="https://www.flickr.com/photos/eston/691552110">Eston Bond/Flickr</a>, <a class="license" href="http://creativecommons.org/licenses/by/4.0/">CC BY</a></span>
</figcaption>
</figure>
<p>What is common to all these seemingly different companies is the way they decide which price to set or which assortment to show each individual customer. The key ingredient is customers’ data: companies engaged in personalized revenue management apply sophisticated machine-learning techniques to the historical data of previous customers in order to build models of human behavior. In essence, the company can come up with the best possible price (or assortment) for a new customer because that customer will resemble previous customers with similar characteristics.</p>
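<p>As a toy illustration of this idea – not the authors’ actual method – a company might set a new customer’s price by averaging the prices paid by the most similar historical customers. The feature set (age, annual spend), the records and the choice of two neighbours below are all invented for illustration:</p>

```python
import math

# Hypothetical historical records: (age, annual_spend) -> price the customer paid
history = [
    ((25, 1200.0), 9.0),
    ((27, 1400.0), 10.0),
    ((52, 3100.0), 16.0),
    ((49, 2900.0), 15.0),
]

def personalized_price(customer, k=2):
    """Average the prices paid by the k historical customers
    closest to the new customer in feature space."""
    nearest = sorted(history, key=lambda rec: math.dist(rec[0], customer))[:k]
    return sum(price for _, price in nearest) / k

# A new customer resembling the younger, lower-spend group
# gets a price averaged from that group's records.
print(personalized_price((26, 1300.0)))
```

<p>The point of the sketch is only the shape of the pipeline: sensitive historical records go in, and an individually tailored decision comes out – which is exactly why those records become a privacy liability.</p>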
<p>Because this decision-making framework, common in data-driven revenue management applications, relies heavily on (potentially sensitive) historical data, it carries pressing privacy risks. A hacker might simply steal the historical data, but they don’t necessarily have to break into a database. <a href="https://arxiv.org/abs/1912.01667">Recent research in computer science</a> shows that adversaries can reconstruct sensitive individual-level information simply by observing companies’ decisions, for example personalized prices or assortments.</p>
<h2>Privacy-preserving revenue management</h2>
<p>In our work we design “privacy-preserving” algorithms to be used by companies engaged in data-driven decision-making. These algorithms are aimed at helping such companies to limit harm imposed on their customers due to data leakage or misuse, while still allowing profit. While data cannot be made 100% safe, the goal is to reduce potential harm as much as possible, striking the right balance between benefits and risks.</p>
<p>One possible way to design privacy-preserving algorithms for the companies engaged in data-driven revenue management is to impose an additional constraint on the companies’ decision-making framework. </p>
<p>In particular, we can require that the decisions of the company (i.e., an insurance quote or an assortment of products) should not be too dependent on (or too informative of) the data of any particular customer in the historical dataset used to derive that decision. An adversary should therefore be unable to reverse-engineer the company’s decisions and infer sensitive information about the customers in the historical dataset. Formally, this requirement corresponds to designing <a href="https://en.wikipedia.org/wiki/Differential_privacy">“differentially private”</a> revenue-management algorithms. The concept has become a de facto privacy standard in industry, used by companies such as Apple, Microsoft and Google as well as public agencies such as the US Census Bureau.</p>
<p>We find that such privacy-preserving (or differentially private) algorithms can be designed by adding carefully adjusted “noise” – random perturbations, akin to the flip of a coin – to companies’ decisions or to the sensitive data a company uses. For instance, an insurance company designing a quote for a particular customer can first calculate the true-optimal price (say, the price that would maximize the company’s revenue from that customer), then flip a coin and add $1 on heads or subtract $1 on tails. By adding such “noise” to the true-optimal price, the company makes its carefully designed price “less optimal”, which potentially reduces profits. In exchange, adversaries are left with less information (and less inference power) with which to deduce anything meaningful about the company’s customers.</p>
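<p>The coin-flip perturbation described above can be sketched in a few lines; the function name and example price are illustrative, not the authors’ implementation:</p>

```python
import random

def noisy_price(true_optimal_price, rng=random):
    """Perturb the true-optimal price by +/- $1 on a fair coin flip,
    so the published price reveals less about the data behind it."""
    return true_optimal_price + (1.0 if rng.random() < 0.5 else -1.0)

# The published price is always within $1 of the optimum, but an
# observer cannot recover the exact underlying calculation.
price = noisy_price(100.0)
assert price in (99.0, 101.0)
```

<p>A fair coin is only the intuition: formal differential privacy typically draws continuous noise from a Laplace or Gaussian distribution, calibrated to a chosen privacy budget, rather than picking between two fixed offsets.</p>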
<p>In our study we show that the company does not have to add much noise to provide sufficiently strong consumer privacy guarantees. The more historical data the company has, the less expensive such privacy preservation becomes – in some cases, privacy can be achieved almost for free.</p>
<hr>
<p><em>This article is based on the working paper <a href="https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3704446">“Privacy-Preserving Personalized Revenue Management”</a>, co-written with Yanzhe Lei of Queen’s University and Sentao Miao of McGill University, and the working paper <a href="https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3459274">“Digital Privacy”</a>, co-written with Itay P. Fainmesser of Johns Hopkins University and Andrea Galeotti of London Business School.</em></p>
<p><em>An earlier version of this article was published on <a href="https://www.hec.edu/en/knowledge/articles/how-can-we-force-companies-keep-our-data-safe">Knowledge@HEC</a>.</em></p><img src="https://counter.theconversation.com/content/155411/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Ruslan Momot receives funding from HEC Paris Foundation and a grant of the French National Research Agency (ANR), "Investissements d’Avenir" (LabEx Ecodec/ANR-11-LABX-0047).</span></em></p>Companies today collect vast amounts of our personal data. What measures can governments and regulators take to reduce the inherent risks and keep our data?Ruslan Momot, Assistant professor of operations management, HEC Paris Business SchoolLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1475092020-10-21T15:45:32Z2020-10-21T15:45:32ZWe must make moral choices about how we relate to social media apps<figure><img src="https://images.theconversation.com/files/364456/original/file-20201020-14-nybccz.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">
</span> <span class="attribution"><span class="source">The Social Dilemma/Netflix</span></span></figcaption></figure><p>Recently a South African <a href="https://www.kfm.co.za/Show/kfm-breakfast">radio show</a> asked, “If you had to choose between your mobile phone and your pet, which would you choose?” Think about that for a moment. Many callers responded that they would choose their phone. I was shocked… But to be honest, I give more attention to my phone than to my beloved dogs!</p>
<p>Throughout history there have been discoveries that have changed society in unimaginable ways. Written language made it possible to communicate over space and time. The printing press, say historians, helped shape societies <a href="https://www.jstor.org/stable/24357082">through</a> the mass dissemination of ideas. New modes of transport <a href="https://hrcak.srce.hr/index.php?id_clanak_jezik=237992&show=clanak">radically transformed</a> social norms by bringing people into contact with new cultures.</p>
<p>Yet these pale in comparison to how the internet is shaping, and misshaping, our individual and social <a href="https://www.counterpointknowledge.org/social-media-as-religion-unexamined-desire-and-mis-information/">identities</a>. I remember the first time I heard a teenager speaking with an American accent and discovered she’d never been out of South Africa but picked up her accent from watching YouTube. We shape our technologies, but they also shape us. </p>
<p>The potentially negative impacts of social media have again been highlighted by <a href="https://www.imdb.com/title/tt11464826/"><em>The Social Dilemma</em></a> on Netflix. The documentary, which Facebook has <a href="https://www.indiewire.com/2020/10/facebook-response-the-social-dilemma-1234590361/">slammed</a> as sensational and unfair, shows how dominant and largely unregulated social media companies manipulate users by harvesting personal data, while using <a href="https://theconversation.com/do-social-media-algorithms-erode-our-ability-to-make-decisions-freely-the-jury-is-out-140729">algorithms</a> to push information and ads that can lead to social media addiction – and dangerous anti-social behaviour. Among others, the show makes an example of the conspiracy theory <a href="https://theconversation.com/how-qanon-uses-satanic-rhetoric-to-set-up-a-narrative-of-good-vs-evil-146281">QAnon</a>, which is <a href="https://www.dailymaverick.co.za/article/2020-09-26-qanon-originated-in-south-africa-now-that-the-global-cult-is-back-here-we-should-all-be-afraid/">increasingly</a> <a href="https://www.thedailybeast.com/qanon-targets-africa-with-new-conspiracy-that-democrats-are-stealing-local-children">targeting</a> Africans.</p>
<p>Despite its flaws, the doccie got me wondering what our relationship to social media should be. As an ethics professor, I’ve come to realise that we must make moral choices about how we relate to our technologies. This requires an honest evaluation of our needs and weaknesses, and a clear understanding of the intentions of these platforms. </p>
<h2>Tug-of-war with technology</h2>
<p><a href="https://www.ynharari.com">Yuval Noah Harari</a>, author of <a href="https://www.theguardian.com/books/2014/sep/11/sapiens-brief-history-humankind-yuval-noah-harari-review"><em>Sapiens</em></a>, contends it’s our ability to inhabit “fiction” that differentiates humans. <a href="https://www.harpercollins.com/products/sapiens-yuval-noah-harari?variant=32207215656994">He claims</a> you “could never convince a monkey to give you a banana by promising him limitless bananas after death in monkey heaven”. Humans have a capacity to believe in things we cannot see – which changes things that do exist. Ideas like prejudice and hatred, for example, are powerful enough to cause wars that displace thousands. </p>
<p>The wall between Israel and Palestine was conceived in people’s minds before being transformed into bricks and barbed wire. Philosopher Olivier Razac’s book <a href="https://thenewpress.com/books/barbed-wire"><em>Barbed Wire: A political history</em></a> traces how this razor-sharp technology has been deployed from farms that displaced indigenous peoples to the trenches of World War I and the prisons of contemporary democracies. </p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/364455/original/file-20201020-22-1v3ttyf.jpg?ixlib=rb-1.1.0&rect=344%2C2%2C1572%2C778&q=45&auto=format&w=1000&fit=clip"><img alt="A young woman in a bathroom is engaged with her mobile phone, reflected in a mirror." src="https://images.theconversation.com/files/364455/original/file-20201020-22-1v3ttyf.jpg?ixlib=rb-1.1.0&rect=344%2C2%2C1572%2C778&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/364455/original/file-20201020-22-1v3ttyf.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=245&fit=crop&dpr=1 600w, https://images.theconversation.com/files/364455/original/file-20201020-22-1v3ttyf.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=245&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/364455/original/file-20201020-22-1v3ttyf.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=245&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/364455/original/file-20201020-22-1v3ttyf.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=308&fit=crop&dpr=1 754w, https://images.theconversation.com/files/364455/original/file-20201020-22-1v3ttyf.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=308&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/364455/original/file-20201020-22-1v3ttyf.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=308&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Sophia Hammons as Isla in <em>The Social Dilemma</em>.</span>
<span class="attribution"><span class="source">The Social Dilemma/Netflix</span></span>
</figcaption>
</figure>
<p>Technology is in a constant psychological, political and economic tug-of-war with humanity. Yet, some of today’s technologies are much more subtle than barbed wire. They are deeply <a href="https://books.google.co.za/books?hl=en&lr=&id=9wq9DwAAQBAJ&oi=fnd&pg=PA85&dq=info:gxEdWsbuE_0J:scholar.google.com&ots=5b6P23i9n9&sig=oonwZAiBsas7XNjTpP7e8pXq2XM&redir_esc=y#v=onepage&q&f=false">integrated into</a> our lives – they know us better than we know ourselves.</p>
<p>I have thousands of ‘friends’ on social media – far too many to relate to meaningfully. Yet, at times I can be more present to people that I have never met than I am to my family. This is not by chance – social media platforms are <a href="https://www.counterpointknowledge.org/social-media-as-religion-unexamined-desire-and-mis-information/">designed</a> to seek and hold our attention. They are businesses, intent on making money. Harvard University professor <a href="https://www.theguardian.com/books/2019/oct/04/shoshana-zuboff-surveillance-capitalism-assault-human-automomy-digital-privacy">Shoshana Zuboff</a>, who features in the documentary, explains in <a href="https://profilebooks.com/the-age-of-surveillance-capitalism.html"><em>The Age of Surveillance Capitalism</em></a> that social media “trades exclusively in human futures”.</p>
<h2>We are the product</h2>
<p>Zuboff says that social media platforms exploit our emotions and pre-cognitive needs, like belonging, recognition, acceptance and pleasure, that are hard-wired into us to secure our survival. </p>
<p>Recognition relates to two of the primary <a href="https://books.google.co.za/books/about/The_Primal_Feast.html?id=TJF_xQAuLOYC&redir_esc=y">functions of the brain</a>, avoiding danger and finding ways to meet our basic survival needs (such as food or a mate to perpetuate our gene pool). These corporations, she says, are hiring the smartest engineers, social psychologists, behavioural economists and artists to hold our attention, while interspersing adverts between our videos, photos and status updates. They make money by offering a future that their advertisers will sell you. </p>
<p>Or, as former Google and Facebook employee Justin Rosenstein, says in <em>The Social Dilemma</em>:</p>
<blockquote>
<p>Our attention is the product being sold to advertisers. </p>
</blockquote>
<p>If our adult brains are so susceptible to this kind of manipulation, what effects are they having on the developing minds of children?</p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/uaaC57tcci0?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
<figcaption><span class="caption">Trailer for <em>The Social Dilemma</em>.</span></figcaption>
</figure>
<p>The documentary also reminds the viewer that social media has a more subtle and powerful influence on our lives – shaping our social and political realities. </p>
<h2>Fake news and hate speech</h2>
<p>The documentary uses an example from 2017 in which Facebook use is linked to <a href="https://www.reuters.com/article/us-facebook-india-content-idUSKBN1X929F">violence</a> that led to the displacement of close to 700,000 Rohingya persons in Myanmar. Something that doesn’t really exist (a social media platform) violently changed something that does exist (the safety of people). Facebook was a primary means of communication in Myanmar. New phones came with Facebook pre-installed. What users were unaware of was a ‘third person’ – Facebook’s algorithms – feeding information that included hate speech and fake news into their conversations. In Africa, similar reports have emerged from <a href="https://www.buzzfeednews.com/article/jasonpatinkin/how-to-get-people-to-murder-each-other-through-fake-news-and#.cfxZRym4z">South Sudan</a> and <a href="https://theconversation.com/a-vicious-online-propaganda-war-that-includes-fake-news-is-being-waged-in-zimbabwe-99402">Zimbabwe</a>.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/netflixs-the-social-dilemma-highlights-the-problem-with-social-media-but-whats-the-solution-147351">Netflix's The Social Dilemma highlights the problem with social media, but what's the solution?</a>
</strong>
</em>
</p>
<hr>
<p>Another example used is the <a href="https://www.theguardian.com/technology/2019/mar/17/the-cambridge-analytica-scandal-changed-the-world-but-it-didnt-change-facebook">Cambridge Analytica</a> <a href="https://theconversation.com/why-facebook-is-the-reason-fake-news-is-here-to-stay-94308">scandal</a>, which also played out in <a href="https://qz.com/africa/1089911/bell-pottinger-and-cambridge-analyticas-work-in-south-africa-kenya-is-raising-questions/">Africa</a>, most notably in <a href="https://theconversation.com/how-the-nigerian-and-kenyan-media-handled-cambridge-analytica-128473">Nigeria and Kenya</a>. Facebook user information was mined and sold to nefarious political actors. This information (like what people feared and what upset them) was used to spread misinformation and manipulate their voting decisions on important elections.</p>
<h2>What to do about it?</h2>
<p>So, what do we do? We can’t very well give up on social media completely, and I don’t think it is necessary. These technologies are already deeply intertwined with our daily lives. We cannot deny they have some value. </p>
<p>However, just like humans had to adapt to the responsible use of the printing press or long distance travel, we will need to be more intentional about how we relate to these new technologies. We can begin by cultivating healthier social media <a href="https://books.google.co.za/books?hl=en&lr=&id=9wq9DwAAQBAJ&oi=fnd&pg=PA85&dq=info:gxEdWsbuE_0J:scholar.google.com&ots=5b6P23i9n9&sig=oonwZAiBsas7XNjTpP7e8pXq2XM&redir_esc=y#v=onepage&q&f=false">habits</a>.</p>
<p>We should also develop a greater awareness of the aims of these companies and how they achieve them, while understanding how our information is being used. This will allow us to make some simple commitments that align our social media usage to our better values.</p><img src="https://counter.theconversation.com/content/147509/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Dion Forster receives funding from the South African National Research Foundation. </span></em></p>As more comes to light about the money-making tactics of social media platforms we need to reevaluate our relationship with them.Dion Forster, Head of Department, Systematic Theology and Ecclesiology, Stellenbosch UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1479492020-10-15T11:17:16Z2020-10-15T11:17:16ZHow we discovered that VR can profile your personality<figure><img src="https://images.theconversation.com/files/363382/original/file-20201014-21-11r9pgc.jpg?ixlib=rb-1.1.0&rect=47%2C53%2C4446%2C3078&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">
</span> <span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/cheerful-girl-hands-wearing-virtual-reality-603345986">Mark Nazh/Shutterstock</a></span></figcaption></figure><p><a href="https://theconversation.com/facebooks-virtual-reality-push-is-about-data-not-gaming-145730">Virtual reality</a> (VR) has the power to take us out of our surroundings and transport us to far-off lands. From a quick round of golf, to fighting monsters or going for a skydive, all of this can be achieved from the comfort of your home. </p>
<p>But it’s not just gamers who love VR and see its potential. VR is used a lot in psychology research to investigate areas such as <a href="https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0226805">social anxiety</a>, <a href="https://www.nature.com/articles/s41598-017-13909-9">moral decision-making</a> and <a href="https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0223881">emotional responses</a>. And in our <a href="https://www.nature.com/articles/s41598-020-74421-1">new research</a> we used VR to explore how people respond emotionally to a potential threat.</p>
<p>We knew from <a href="https://bmcbiol.biomedcentral.com/articles/10.1186/s12915-017-0463-6">earlier work</a> that being high up in VR provokes strong feelings of fear and anxiety. So we asked participants to walk across a grid of ice blocks suspended 200 metres above a snowy alpine valley. </p>
<p>We found that as we increased the precariousness of the ice block path, participants’ behaviour became more cautious and considered – as you would expect. But we also found that how people behave in virtual reality can provide clear evidence of their personality, in that we were able to pinpoint participants with a certain personality trait based on the way they behaved in the VR scenario.</p>
<p>While this may be an interesting finding, it raises obvious concerns about <a href="https://theconversation.com/think-facebook-can-manipulate-you-look-out-for-virtual-reality-93118">people’s data</a>, since technology companies could profile people’s personalities via their VR interactions and then use this information to target advertising, for example. This raises clear questions about how data collected through VR platforms should be used.</p>
<h2>Virtual fall</h2>
<p>As part of our study, we used head-mounted VR displays and handheld controllers, but we also attached sensors to people’s feet. These sensors allowed participants to test out a block before stepping onto it with both feet. </p>
<p>As participants made their way across the ice, some blocks would crack and change colour when participants stepped onto them with one foot or both feet. As the experiment progressed, the number of crack blocks increased. </p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/LmFyxmGqkLM?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
</figure>
<p>We also included a few fall blocks. These treacherous blocks were identical to crack blocks until activated with both feet, when they shattered and participants experienced a short but uncomfortable virtual fall.</p>
<p>We found that as we increased the number of crack and fall blocks, participants’ behaviour became more cautious and considered. We saw a lot more testing with one foot to identify and avoid the cracks and more time spent considering the next move.</p>
<p>But this tendency towards risk-averse behaviour was more pronounced for participants with a higher level of a personality trait called neuroticism. People with high neuroticism are more sensitive to negative stimuli and potential threat. </p>
<h2>Personality and privacy</h2>
<p>We had participants complete a personality scale before taking part in the experiment. We specifically looked at neuroticism, as this measures the extent to which each person is likely to experience negative emotions such as anxiety and fear. And we found that participants with higher levels of neuroticism could be identified in our sample based on their behaviour. These people did more testing with one foot and spent longer standing on “safe” solid blocks when the threat was high.</p>
<p>Neuroticism is one of the five major personality traits most commonly used to profile people. These traits are normally assessed by a self-report questionnaire, but can also be assessed based on behaviour – as demonstrated in our experiment. </p>
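The behaviour-based assessment described above can be illustrated with a toy sketch. All of the numbers and the simple threshold rule below are invented for illustration (the study’s real measures and analysis are not reproduced here); the point is only that an observable behaviour, such as one-foot probing, can be correlated with a questionnaire trait score and then used to flag individuals.

```python
from statistics import mean, pstdev

# All numbers below are invented for illustration; this is a toy sketch of the
# general approach, not the study's actual data or analysis.
one_foot_tests = [4, 7, 9, 12, 15, 18]           # cautious probes per participant
neuroticism = [2.1, 2.8, 3.0, 3.6, 4.2, 4.5]     # self-report questionnaire scores

# Pearson correlation: does cautious probing track the self-reported trait?
mx, my = mean(one_foot_tests), mean(neuroticism)
cov = mean((x - mx) * (y - my) for x, y in zip(one_foot_tests, neuroticism))
r = cov / (pstdev(one_foot_tests) * pstdev(neuroticism))
print(f"correlation between probing and neuroticism: {r:.2f}")

# A crude behavioural "classifier": flag participants who probe more than average.
flagged = [i for i, n in enumerate(one_foot_tests) if n > mx]
print("participants flagged as likely high in neuroticism:", flagged)
```

With behavioural data like this in hand, a platform would not need a questionnaire at all, which is precisely the privacy concern the article raises.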
<figure class="align-center ">
<img alt="Excited teen hipster girl playing virtual reality video game wear vr goggles headset." src="https://images.theconversation.com/files/363384/original/file-20201014-13-1xopeqm.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/363384/original/file-20201014-13-1xopeqm.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/363384/original/file-20201014-13-1xopeqm.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/363384/original/file-20201014-13-1xopeqm.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/363384/original/file-20201014-13-1xopeqm.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/363384/original/file-20201014-13-1xopeqm.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/363384/original/file-20201014-13-1xopeqm.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">As technology continues to advance, so too does the power of surveillance.</span>
<span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/excited-teen-hipster-girl-playing-virtual-1739807582">insta_photos/Shutterstock</a></span>
</figcaption>
</figure>
<p>Our findings show how users of VR could have their personality profiled in a virtual world. This approach, where private traits are predicted based on implicit monitoring of digital behaviour, was demonstrated with a <a href="https://www.pnas.org/content/110/15/5802">dataset</a> derived from Facebook likes back in 2013. This paved the way for controversial commercial applications and the <a href="https://www.nytimes.com/2018/04/04/us/politics/cambridge-analytica-scandal-fallout.html">Cambridge Analytica scandal</a> – when psychological profiles of users were allegedly harvested and sold to political campaigns. And our work demonstrates how the same approach could be applied to users of commercial VR headsets, which raises major concerns for people’s privacy. </p>
<p>Users should know if their data is being tracked, whether historical records are kept, whether data can be traced to individual accounts, along with what the data is used for and who it can be shared with. After all, we wouldn’t settle for anything less if such a comprehensive level of surveillance could be achieved in the real world.</p><img src="https://counter.theconversation.com/content/147949/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Stephen Fairclough received funding from Emteq Labs and Liverpool John Moores University for this work.</span></em></p>How much does your virtual reality headset know about your life?Stephen Fairclough, Professor of Psychophysiology in the School of Psychology, Liverpool John Moores UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1303362020-01-23T12:58:07Z2020-01-23T12:58:07ZHow political party data collection may turn off voters<figure><img src="https://images.theconversation.com/files/311171/original/file-20200121-117938-gu5u7z.jpg?ixlib=rb-1.1.0&rect=89%2C462%2C2757%2C1482&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Voters head to cast their ballots in Canada's federal election in Dartmouth, N.S., on Oct. 21, 2019. </span> <span class="attribution"><span class="source">THE CANADIAN PRESS/Andrew Vaughan</span></span></figcaption></figure><p>The data collection practices used by political parties could pose a threat to voters’ engagement in the democratic process and could have negative implications for Canadian democracy. </p>
<p><a href="https://www.oipc.bc.ca/investigation-reports/2278">British Columbia’s Privacy Commissioner recently found</a> that political parties in British Columbia collect sensitive personal information about voters such as income, ethnicity, religious affiliation and political beliefs without their knowledge or consent.</p>
<p>In the United Kingdom, an investigation launched following the Cambridge Analytica scandal “<a href="https://ico.org.uk/media/action-weve-taken/2260271/investigation-into-the-use-of-data-analytics-in-political-campaigns-final-20181105.pdf">uncovered a disturbing disregard for voters’ personal privacy</a>.” It concluded that Cambridge Analytica had aggregated Facebook users’ personal information without their consent for use in political campaigns, such as in the U.K., where it was used to send <a href="https://publications.parliament.uk/pa/cm201719/cmselect/cmcumeds/1791/1791.pdf">micro-targeted information</a> to voters.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/psychographics-the-behavioural-analysis-that-helped-cambridge-analytica-know-voters-minds-93675">Psychographics: the behavioural analysis that helped Cambridge Analytica know voters' minds</a>
</strong>
</em>
</p>
<hr>
<p>Similar practices have been uncovered <a href="https://comprop.oii.ox.ac.uk/research/cybertroops2019/">around the world</a>.</p>
<p>British Columbia’s privacy commissioner noted that similar — and largely unchecked — digital practices in Canada pose significant risks for “<a href="https://www.oipc.bc.ca/investigation-reports/2278">our democratic system of governance</a>.”</p>
<p>Elections all over the world are becoming increasingly electronic and <a href="https://firstmonday.org/article/view/2975/2627">data-driven</a>. At the same time, data is quickly becoming such a highly valued commodity that recent publications have not only described it as the <a href="https://www.economist.com/leaders/2017/05/06/the-worlds-most-valuable-resource-is-no-longer-oil-but-data">“oil” of the 21st century</a>, but as <a href="https://business.financialpost.com/technology/jim-balsillie-data-is-not-the-new-oil-its-the-new-plutonium">its “plutonium.”</a></p>
<p>Canadian voters <a href="https://www.theglobeandmail.com/politics/article-majority-of-poll-respondents-express-support-for-extending-privacy/">remain largely unaware</a> of the types of personal information that political parties collect. Many are also not informed that federal privacy legislation does not apply to political parties in Canada, leaving parties largely beyond the reach of federal law in their use and collection of data.</p>
<figure class="align-center ">
<img alt="" src="https://images.theconversation.com/files/311167/original/file-20200121-117927-9g5ln4.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/311167/original/file-20200121-117927-9g5ln4.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=436&fit=crop&dpr=1 600w, https://images.theconversation.com/files/311167/original/file-20200121-117927-9g5ln4.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=436&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/311167/original/file-20200121-117927-9g5ln4.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=436&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/311167/original/file-20200121-117927-9g5ln4.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=547&fit=crop&dpr=1 754w, https://images.theconversation.com/files/311167/original/file-20200121-117927-9g5ln4.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=547&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/311167/original/file-20200121-117927-9g5ln4.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=547&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">A voter, perhaps unaware of the data political parties had amassed about her, heads to the polls in October 2019 in Nova Scotia.</span>
<span class="attribution"><span class="source">THE CANADIAN PRESS/Andrew Vaughan</span></span>
</figcaption>
</figure>
<p>That may change, however. Canada’s <a href="https://www.competitionbureau.gc.ca/eic/site/cb-bc.nsf/eng/home">Competition Bureau</a> recently announced it’s investigating the three largest federal political parties amid allegations they “<a href="https://www.ctvnews.ca/politics/major-political-parties-under-competition-probe-over-harvesting-of-canadians-personal-info-1.4768501">made misleading statements about the manner in which they collect and use Canadian voters’ personal information</a>.”</p>
<h2>How are voters affected?</h2>
<p>As researchers, we sought to find out: How does the collection and use of personal information by federal political parties affect voters’ willingness to interact with campaign personnel during elections? </p>
<p>As a part of the <a href="https://www.digitalecosystem.ca/">Digital Ecosystem Research Challenge</a>, <a href="https://ncgl.humanities.mcmaster.ca/political-parties-and-data-privacy/">we conducted a survey</a> of Canadians during the 2019 federal election campaign. We informed respondents that political parties may collect a wide variety of personal information about voters, including their religion, gender, ethnicity and political views. </p>
<p>After being informed about these data collection practices, more than half (51 per cent) of respondents said that they would be “somewhat” or “very” unlikely to speak with campaign personnel. This suggests that awareness of federal parties’ collection of personal information may discourage some voters from engaging with party officials during elections. This is concerning given that engagement between voters and parties during campaigns is an important way to mobilize the vote. </p>
<p>At the same time, our study suggests something can be done. More than 40 per cent of respondents said that a party’s stance on privacy made a “moderate” or “major difference” in their willingness to engage with campaign officials. </p>
<h2>Strong privacy practices appeal to voters</h2>
<p>If data is the new means by which parties target and reach out to voters, good privacy laws and policies could promote Canadians’ willingness to engage with them and enhance voter confidence in party institutions.</p>
<p>Current federal laws merely require <a href="https://www.priv.gc.ca/en/privacy-topics/collecting-personal-information/gd_pp_201904">political parties to publish their privacy policies publicly</a>, including online. Federal privacy law does not set minimum standards for the content of policies and establishes no mechanism for ensuring parties’ compliance. Nor does it provide a mechanism to ensure parties abide by their privacy promises. </p>
<p>Canadian federal law provides no way for people to find out what information parties hold about them. British Columbia is the only province in Canada where privacy law binds political parties. This allows the B.C. privacy commissioner to examine political parties’ data collection practices.</p>
<h2>Stronger powers required</h2>
<p>Extending federal privacy laws to political parties will not fix all of the troubling data collection practices of the digital 21st century, but it’s a start.</p>
<figure class="align-right zoomable">
<a href="https://images.theconversation.com/files/311165/original/file-20200121-117954-1w4yjbs.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/311165/original/file-20200121-117954-1w4yjbs.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=237&fit=clip" srcset="https://images.theconversation.com/files/311165/original/file-20200121-117954-1w4yjbs.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=399&fit=crop&dpr=1 600w, https://images.theconversation.com/files/311165/original/file-20200121-117954-1w4yjbs.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=399&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/311165/original/file-20200121-117954-1w4yjbs.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=399&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/311165/original/file-20200121-117954-1w4yjbs.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=502&fit=crop&dpr=1 754w, https://images.theconversation.com/files/311165/original/file-20200121-117954-1w4yjbs.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=502&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/311165/original/file-20200121-117954-1w4yjbs.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=502&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Privacy Commissioner Daniel Therrien is seen during a news conference in September 2018.</span>
<span class="attribution"><span class="source">THE CANADIAN PRESS/Sean Kilpatrick</span></span>
</figcaption>
</figure>
<p>Daniel Therrien, Canada’s privacy commissioner, <a href="https://www.ourcommons.ca/Content/Committee/421/ETHI/Reports/RP9932875/ethirp16/ethirp16-e.pdf">supports extending privacy legislation to political parties</a>, as does the <a href="https://www.cbc.ca/news/politics/privacy-committee-cambridge-analytica-1.4712471">House of Commons Ethics Committee</a>. </p>
<p>The Cambridge Analytica scandal erupted in a country where political parties are bound by privacy legislation. U.K. privacy laws did not stop political parties from amassing large quantities of data and using it to micro-target British voters.</p>
<p>However, British privacy laws did enable the U.K. information commissioner to investigate and to require parties to provide information about their practices. These laws also enabled the <a href="https://ico.org.uk/media/action-weve-taken/2260271/investigation-into-the-use-of-data-analytics-in-political-campaigns-final-20181105.pdf">U.K. privacy commissioner</a> <a href="https://www.theguardian.com/uk-news/2019/feb/01/leave-eu-arron-banks-insurance-company-fined-data-breaches-information-commissioner-audit">to impose fines</a>, <a href="https://www.ndtv.com/world-news/cambridge-analytica-may-be-closing-but-investigations-will-continue-in-uk-1846321">to instigate criminal investigations</a> and to issue warnings to all major British political parties.</p>
<p>In Canada, voters have little information about what exactly is being done with their data. That’s because federal privacy law does not impose specific legal requirements on how political parties collect and use their data, and there are insufficient enforcement mechanisms or investigatory powers.</p>
<p>Left unchecked, this power could have consequences for voter trust in federal political parties and democratic engagement in elections.</p>
<p>[ <em>Like what you’ve read? Want more?</em> <a href="https://theconversation.com/ca/newsletters?utm_source=TCCA&utm_medium=inline-link&utm_campaign=newsletter-text&utm_content=likethis">Sign up for The Conversation’s daily newsletter</a>. ]</p><img src="https://counter.theconversation.com/content/130336/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Sara Bannerman receives funding from the Canada Research Chairs program, the Social Sciences and Humanities Research Council, the Digital Ecosystem Research Challenge, and McMaster University. She has previously received funding from the Office of the Privacy Commissioner's Contributions Program. The survey discussed in this piece was funded under the Digital Ecosystem Research Challenge. </span></em></p><p class="fine-print"><em><span>Nicole Goodman receives funding from the Social Science and Humanities Research Council of Canada, Brock University and the Digital Ecosystem Research Challenge. She is also Director at the Centre for e-Democracy and a Senior Associate at the Innovation Policy Lab in the Munk School of Global Affairs at the University of Toronto.</span></em></p><p class="fine-print"><em><span>Julia Kalinina does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>Political parties protect themselves rather than voters in refusing to be bound by privacy laws.Sara Bannerman, Associate Professor and Canada Research Chair in Communication Policy and Governance, McMaster UniversityJulia Kalinina, Law student, York University, CanadaNicole Goodman, Assistant Professor, Political Science, Brock UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1284732020-01-09T17:09:54Z2020-01-09T17:09:54ZHow the Nigerian and Kenyan media handled Cambridge Analytica<figure><img src="https://images.theconversation.com/files/307645/original/file-20191218-11909-qdrzct.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">There's a growing awareness that Cambridge Analytica harnessed social media and personal data to influence elections. 
</span> <span class="attribution"><span class="source">Shutterstock</span></span></figcaption></figure><p>Kenyan president Uhuru Kenyatta recently <a href="https://allafrica.com/stories/201911180439.html">signed</a> into law the Data Protection Bill. Passed after several years of debate and delay, the new law places restrictions on the collection and use of digital data by governments and private corporations. The restrictions are similar to those included in a new <a href="https://www.huntonprivacyblog.com/2019/04/05/nigeria-issues-new-data-protection-regulation/">data protection regulation</a> passed by Nigeria this year.</p>
<p>These protection laws are welcome advancements in the light of investigations that revealed that British political consulting firm <a href="https://twitter.com/camanalytica?lang=en">Cambridge Analytica</a>
had worked on presidential <a href="https://edition.cnn.com/2018/04/03/africa/nigeria-kenya-cambridge-analytica-elections-intl/index.html">campaigns</a> in both countries. </p>
<p>It’s been widely known for some time that the firm <a href="https://www.theguardian.com/uk-news/2018/mar/23/leaked-cambridge-analyticas-blueprint-for-trump-victory">helped elect</a> Donald Trump in the US and <a href="https://www.theguardian.com/uk-news/2019/jul/30/cambridge-analytica-did-work-for-leave-eu-emails-confirm">worked on</a> the Brexit referendum in the UK. But in March 2018 a number of startling exposés were published by The Guardian, The New York Times and Channel 4 showing the firm’s dubious campaign practices in <a href="https://www.theguardian.com/uk-news/2018/mar/21/cambridge-analyticas-ruthless-bid-to-sway-the-vote-in-nigeria">Nigeria</a> and <a href="https://www.channel4.com/news/cambridge-analytica-revealed-trumps-election-consultants-filmed-saying-they-use-bribes-and-sex-workers-to-entrap-politicians-investigation">Kenya</a>. An <a href="https://www.theguardian.com/uk-news/2020/jan/04/cambridge-analytica-data-leak-global-election-manipulation">ongoing leak</a> of tens of thousands of internal documents is set to show in great detail Cambridge Analytica’s work in 68 countries around the world.</p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/Q91nvbJSmS4?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
<figcaption><span class="caption">What is the Cambridge Analytica scandal?</span></figcaption>
</figure>
<p>The legal responses to the revelations suggest a growing awareness that social media and personal data are being harnessed by outside actors to influence elections around the world. </p>
<p>In a <a href="https://www.tandfonline.com/doi/full/10.1080/23743670.2019.1679208">recent article</a> we analysed press coverage of Cambridge Analytica in Nigeria and Kenya. We wanted to see if local coverage reflected international media coverage of the scandal. To do this we focused on three key themes: data privacy and protection, unethical political campaigning on social media, and foreign involvement in African elections.</p>
<p>We found that most newspaper articles in both countries focused on data privacy and social media campaigning, with particular attention to Facebook and data. But very few stories wrestled with the role of foreign actors in national elections, and important questions about campaigning and election interference received less attention.</p>
<p>This could mean that the door has been left open to ongoing foreign involvement in future elections, given that Cambridge Analytica used African elections as a testing ground for campaign tactics it later exported to more lucrative markets. It did this with little regard for the negative consequences for these emerging democracies. </p>
<h2>Cambridge Analytica in Africa</h2>
<p>It is easy to <a href="http://theconversation.com/claims-about-cambridge-analyticas-role-in-africa-should-be-taken-with-a-pinch-of-salt-93864">overstate the impact</a> of Cambridge Analytica in Nigeria and Kenya. So let’s review what the March 2018 exposés revealed.</p>
<p>According to a <a href="https://www.theguardian.com/uk-news/2018/mar/21/cambridge-analyticas-ruthless-bid-to-sway-the-vote-in-nigeria">detailed report</a>, Cambridge Analytica was hired by a wealthy Nigerian to support the 2015 reelection campaign of then-president Goodluck Jonathan. During the campaign, the firm worked with the Israeli intelligence firm Black Cube to acquire hacked medical and financial information about Jonathan’s opponent Muhammadu Buhari.</p>
<p>Cambridge Analytica also promoted a graphic anti-Buhari <a href="https://www.theguardian.com/uk-news/2018/apr/04/cambridge-analytica-used-violent-video-to-try-to-influence-nigerian-election">video</a>. It suggested Buhari would support the terrorist group Boko Haram and end women’s rights. </p>
<p>Jonathan eventually lost the 2015 election to Buhari. Earlier this year Buhari was <a href="https://www.bbc.com/news/world-africa-47380663">reelected</a> to a second term.</p>
<p>In Kenya, the firm worked on both Uhuru Kenyatta’s 2013 presidential campaign and his 2017 reelection campaign. To date, it is unclear exactly what it did during either campaign. One bit of evidence emerged in an undercover <a href="https://www.channel4.com/news/cambridge-analytica-revealed-trumps-election-consultants-filmed-saying-they-use-bribes-and-sex-workers-to-entrap-politicians-investigation">video</a> of executive Mark Turnbull in which he made a number of claims. These included claims that the firm had rebranded Kenyatta’s party twice, had written their manifesto and had done two rounds of 50,000 surveys.</p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/mpbeOCKZFfQ?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
<figcaption><span class="caption">Cambridge Analytica Uncovered: Secret filming reveals election tricks.</span></figcaption>
</figure>
<h2>Covering Cambridge Analytica</h2>
<p>To gather articles for our study, we searched the archives of two Nigerian newspapers — <a href="https://punchng.com/">Punch</a> and <a href="https://www.vanguardngr.com/">Vanguard</a> — and two Kenyan newspapers — <a href="https://www.nation.co.ke/">Daily Nation</a> and <a href="https://www.standardmedia.co.ke/">East African Standard</a>. We looked for mentions of Cambridge Analytica in relation to Nigeria or Kenya. </p>
<p>We found 31 articles in the Nigerian newspapers and 74 articles in the Kenyan newspapers published prior to December 2018.</p>
<p>All 31 articles in Nigerian newspapers were published after March 2018. In the case of Kenya, 17 of the 74 articles were published prior to this. Cambridge Analytica was little known at the time of the 2015 elections in Nigeria. But the firm had garnered significant public attention in 2016 because of its connection to Trump and Brexit. As a result the Kenyan media was paying attention when the firm joined the Kenyatta campaign in 2017.</p>
<p>After March 2018, national newspapers in <a href="https://punchng.com/alleged-hacking-of-buharis-records-fg-investigates-cambridge-analytica-pdp/">Nigeria</a> and <a href="https://www.nation.co.ke/news/Cambridge-Analytica-and-Kenya-elections/1056-4349392-204vmx/index.html">Kenya</a> published several articles that summarised what Cambridge Analytica did in their respective countries.</p>
<p>But none of the articles we examined provided any further details on specific activities by Cambridge Analytica. They simply repeated what had already been reported in the international media.</p>
<p>Nigerian newspapers quickly framed the Cambridge Analytica scandal as a <a href="https://www.vanguardngr.com/2018/03/961728/">partisan issue between two competing political parties</a>.</p>
<p>Kenyan newspaper coverage, on the other hand, was more comprehensive in quantity and quality. For one, the Kenyan press was covering Cambridge Analytica prior to March 2018; the first story appeared in <a href="https://www.nation.co.ke/news/politics/Row-as-state-moves-to-cut-foreign-funds-for-Nasa-campaign/1064-3926074-4qbx87/index.html">May 2017</a>.</p>
<p>Because Cambridge Analytica had become known for its work with Trump and Brexit, Kenyan journalists and writers were discussing the <a href="https://www.standardmedia.co.ke/article/2001244727/cyber-warfare-as-politicians-turn-to-internet-propaganda-to-woo-voters">implications</a> of the firm working in their country early on. They <a href="https://www.nation.co.ke/oped/blogs/dot9/walubengo/2274560-4349730-ldnsrp/index.html">cautioned readers</a> that the firm might be involved in targeting sensational messages and misinformation on social media. They also considered the ramifications of foreign actors interfering in local political campaigns.</p>
<p>After the March 2018 revelations, Kenyan newspapers responded with more news and opinion pieces. These wrestled with the implications for data privacy, political campaigning on social media, and Kenya’s democratic institutions. For example, a <a href="https://www.nation.co.ke/oped/opinion/Cambridge-Analytica-manipulate-elections/440808-4357302-qrmahuz/index.html">column</a> asked plainly whether the firm undermined democracy and made a mockery of elections by manipulating people’s emotions. The column also questioned whether the firm deepened ethnic division in society.</p>
<h2>Digital colonialism?</h2>
<p>Recently, the Kenyan writer, political analyst and activist <a href="https://www.cigionline.org/articles/nanjala-nyabola-digital-colonialism-transforming-kenyas-political-discourse">Nanjala Nyabola</a> asked whether Africa was entering a new era of digital colonialism. By this she means a form of exploitation in which foreign actors use African nations for their own benefit without regard for the safety of citizens and the stability of institutions.</p>
<p>In the context of Cambridge Analytica’s work in Nigeria and Kenya, the answer may be yes. It’s important that African countries update their data privacy and protection laws. But as the ongoing document leak <a href="https://www.nation.co.ke/news/politics/Mystery-men-behind-Uhuru-poll-strategy/1064-5407174-u76bnq/">demonstrates</a>, the Cambridge Analytica scandal runs deeper than access to Facebook data.</p><img src="https://counter.theconversation.com/content/128473/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>The authors do not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.</span></em></p>What role do foreign actors play in African elections? Cambridge Analytica’s case sheds some light.Brian Ekdale, Associate Professor of Journalism & Mass Communication, University of IowaMelissa Tully, Associate Professor Director of Undergraduate Studies School of Journalism and Mass Communication, University of IowaLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1289522020-01-08T11:12:08Z2020-01-08T11:12:08ZWhy people leave Facebook – and what it tells us about the future of social media<figure><img src="https://images.theconversation.com/files/308095/original/file-20191220-11939-bukb5t.jpg?ixlib=rb-1.1.0&rect=0%2C543%2C2252%2C1675&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">
</span> <span class="attribution"><a class="source" href="https://unsplash.com/photos/I6wCDYW6ij8">NeONBRAND/Unsplash</a>, <a class="license" href="http://artlibre.org/licence/lal/en">FAL</a></span></figcaption></figure><p>The number of active users of Facebook (those people who have logged onto the site in the previous month) has reached a historic high of <a href="https://www.statista.com/statistics/264810/number-of-monthly-active-facebook-users-worldwide/">2.45 billion</a>. To put this in some context, approximately 32% of the global population now use the social media platform, and the trend line of participation is still going up.</p>
<p>With the exception of Google, there has never been a company that has had this many people using its services. In this context, it may seem strange to talk about those who are choosing to leave Facebook. But those who are leaving the platform represent a small, but by no means insignificant, counter current. And many people, perhaps looking to claw back some time from busy lives, are choosing to quit social media as a new year’s resolution.</p>
<p>In 2018, a <a href="https://www.businessinsider.com/delete-facebook-statistics-nearly-10-percent-americans-deleted-facebook-account-study-2018-4?r=US&IR=T">US survey</a> revealed that 9% of those surveyed had recently deleted their Facebook account, while a further 35% reported that they were using the social media platform less. Despite its economic success and popularity, there seems to be something going on in the original heartlands of Facebook. </p>
<p>Building on my <a href="https://pure.aber.ac.uk/portal/en/publications/nudging-around-the-world-a-critical-geography-of-the-behaviour-change-agenda(b7724852-8d52-490c-9adb-b0fbb43705f4).html">previous work</a> on <a href="https://changingbehaviours.blog">behavioural influence</a>, I have been trying to find out more about these so called “Facebook deleters”, to better understand their motivations and the implications of choosing to leave the world’s most powerful social network.</p>
<h2>The motivation</h2>
<p>In conversations I’ve had with those who have deleted Facebook, it has become evident that people’s motivations for leaving the platform are varied and complex. </p>
<p>My assumption had been that major events, such as the Snowden leaks, the <a href="https://theconversation.com/uk/topics/cambridge-analytica-51337">Cambridge Analytica</a> scandal, and revelations about Mark Zuckerberg’s <a href="https://www.theguardian.com/commentisfree/2019/nov/22/surprised-about-mark-zuckerbergs-secret-meeting-with-trump-dont-be">secret meeting</a> with the US president, Donald Trump, were the key motivations for deleting Facebook accounts. But the Facebook deleters I speak to rarely raise political scandals or concerns over data privacy as their primary motivations for leaving the network. </p>
<p>Indeed, when our conversation turns to the Cambridge Analytica scandal, many suggest that this had only confirmed what they had always assumed about how their personal data was being exploited (at least one person had never even heard of Cambridge Analytica).</p>
<p>Many of those who delete Facebook speak of widely recognised reasons for leaving the platform: concerns with its echo chamber effects, avoiding time wasting and procrastination, and the negative psychological effects of perpetual social comparison. But other explanations seem to relate more to what Facebook is becoming and how this evolving technology intersects with personal experiences. </p>
<p>While many people find it difficult to articulate precisely why they joined Facebook (being intrigued or attracted by the site’s novelty, it seems), it is clear that for many the platform has started to play a very different role in their lives. The notion of “oversharing” is discussed as an aspect of what Facebook has turned into, as users find their feeds clogged with information they find gratuitously personal and irrelevant. </p>
<h2>Digital natives</h2>
<p>Those who joined Facebook at a young age tend to describe their social networks getting too large. The size of a social media network appears to be a significant factor in how useful and trustworthy people find it. We know that social groups of more than about 150 people tend to be too large to know and maintain effectively – this is the so-called <a href="https://books.google.co.uk/books?id=nN5DFNT-6ToC&pg=PA77&redir_esc=y#v=onepage&q&f=false">Dunbar number</a>, named after the anthropologist Robin Dunbar. It appears that in the context of Facebook, those with networks consisting of several thousand people find them increasingly difficult to trust (even when applying rigorous privacy settings).</p>
<p>A further problem for digital natives is the length of time they have been archiving their lives on Facebook. Their Facebook archive often goes back to a time when they were less selective in the curation of their online selves. Such careless sharing is now seen as a threat to the social image they are keen to establish in adulthood.</p>
<p>A recurring theme is the social commitment of being on Facebook. While Facebook enables people to stay connected with their friends, family and communities, it is also seen as generating a new form of digital domestic labour. </p>
<figure class="align-center ">
<img alt="" src="https://images.theconversation.com/files/308098/original/file-20191220-11924-1qywei6.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/308098/original/file-20191220-11924-1qywei6.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=361&fit=crop&dpr=1 600w, https://images.theconversation.com/files/308098/original/file-20191220-11924-1qywei6.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=361&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/308098/original/file-20191220-11924-1qywei6.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=361&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/308098/original/file-20191220-11924-1qywei6.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=453&fit=crop&dpr=1 754w, https://images.theconversation.com/files/308098/original/file-20191220-11924-1qywei6.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=453&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/308098/original/file-20191220-11924-1qywei6.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=453&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">How many people is too many for a social network?</span>
<span class="attribution"><a class="source" href="https://unsplash.com/photos/sUXXO3xPBYo">Rob Curran/Unsplash</a>, <a class="license" href="http://artlibre.org/licence/lal/en">FAL</a></span>
</figcaption>
</figure>
<p>One of the reasons for the success of social media, of course, is its ability to tap into our social instinct for knowledge sharing and exchange. But as social networks grow on Facebook, it appears that the costs of mutual obligation (they liked my post, so I had better like theirs) start to outweigh the benefits of being connected. </p>
<p>This is where digital forms of mutual obligation are different to real ones – in the real world we shake hands and say nice things to each other in the moment of encounter. But in the digital world social obligations can quickly accumulate to unsustainable levels.</p>
<h2>Implications</h2>
<p>Although Facebook may still continue to grow, those who leave the platform reveal interesting trends which hint at how future relationships with smart technology and social media will play out.</p>
<p>We are in an era of historically unprecedented opportunities for social connection and engagement. Those who leave Facebook are at one end of a spectrum we all inhabit as we try and work through questions of digital identity, responsibility and collective customs.</p>
<p>Leaving social networks is one of several options we can choose as we attempt to navigate this new world. But Facebook deletion is not just a process of people redefining their digital self. Deletion is also a response to a set of emerging tensions between an evolving technology and social life.</p>
<p>As the economic model of Facebook changes (in scale, intensity and profit-making) it appears likely that it will encounter clear barriers to its social usefulness and desirability. This is, of course, where we begin to see a clash in values within Facebook itself, as it <a href="http://www.isrf.org/about/fellows-and-projects/mark-whitehead">seeks to reconcile</a> its stated desire to connect the world, with its highly monetised mode of operation. </p>
<p>The small number of people who delete Facebook will not change Facebook’s economic model anytime soon. But the future may see the company testing the limits of engagement with social media platforms.</p>
<p class="fine-print"><em><span>Mark Whitehead receives funding from the Independent Social Research Foundation, Leverhulme Trust, and Economic and Social Research Council. </span></em></p>
Those who are leaving the platform represent a small, but by no means insignificant, counter current to the norm.
Mark Whitehead, Professor of Human Geography, Aberystwyth University
Licensed as Creative Commons – attribution, no derivatives.
tag:theconversation.com,2011:article/127444
2019-11-26T04:33:36Z
2019-11-26T04:33:36Z
The ugly truth: tech companies are tracking and misusing our data, and there’s little we can do
<figure><img src="https://images.theconversation.com/files/303641/original/file-20191126-84268-9nsdjk.jpg?ixlib=rb-1.1.0&rect=66%2C5%2C3627%2C3074&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">While leaks and whistleblowers continue to be valuable tools in the fight for data privacy, we can't rely on them solely to keep big tech companies in check.</span> <span class="attribution"><a class="source" href="https://www.shutterstock.com/image-illustration/computer-keyboard-multiple-social-media-images-114119137?src=4760a9b5-01c2-4efd-b8ff-7d5518288498-1-2">SHUTTERSTOCK</a></span></figcaption></figure>
<p>As survey results pile up, it’s becoming clear Australians are sceptical about how their online data is tracked and used. But one question worth asking is: are our fears founded?</p>
<p>The short answer is: yes.</p>
<p>In <a href="https://privacyaustralia.net/online-privacy-survey-results/">a survey</a> of 2,000 people completed last year, Privacy Australia found 57.9% of participants weren’t confident companies would take adequate measures to protect their data. </p>
<p>Similar scepticism was noted in results from the 2017 <a href="https://www.oaic.gov.au/assets/engage-with-us/research/acaps-2017/acaps-2017-report.pdf">Australian Community Attitudes to Privacy Survey</a> of 1,800 people, which found:</p>
<p>• 79% of participants felt uncomfortable with targeted advertising based on their online activities</p>
<p>• 83% were uncomfortable with social networking companies keeping their information</p>
<p>• 66% believed it was standard practice for mobile apps to collect user information and</p>
<p>• 74% believed it was standard practice for websites to collect user information.</p>
<p>Also in 2017, the <a href="https://ses.library.usyd.edu.au/bitstream/handle/2123/17587/USYDDigitalRightsAustraliareport.pdf">Digital Rights in Australia</a> report, prepared by the University of Sydney’s <a href="http://digitalrightsusyd.net/">Digital Rights and Governance Project</a>, revealed 62% of 1,600 participants felt they weren’t in control of their online privacy. About 47% were also concerned the government could violate their privacy. </p>
<h2>The ugly truth</h2>
<p>Lately, a common pattern has emerged every time malpractice is exposed. </p>
<p>The company involved will provide an “opt-out” mechanism for users, or a dashboard to see what personal data is being collected (for example, <a href="https://myaccount.google.com/intro/privacycheckup">Google Privacy Checkup</a>), along with an apology.</p>
<p>If we opt-out, does this mean they stop collecting our data? Would they reveal collected data to us? And if we requested to have our data deleted, would they do so? </p>
<p>To be blunt, we don’t know. And as end users there’s not much we can do about it, anyway. </p>
<p>When it comes to personal data, it’s extremely difficult to identify unlawful collections among legitimate collections, because multiple factors need to be considered, including the context in which the data is collected, the methodology used to obtain user consent, and country-specific laws.</p>
<p>Also, it’s almost impossible to know if user data is being misused within company bounds or in business-to-business interactions.</p>
<p>Despite ongoing public outcry to protect online privacy, last year we witnessed the <a href="https://www.wired.com/amp-stories/cambridge-analytica-explainer/">Cambridge Analytica scandal</a>, in which a third party company was able to gather the personal information of millions of Facebook users and use it in political campaigns.</p>
<p>Earlier this year, both <a href="https://www.bloomberg.com/news/articles/2019-04-10/is-anyone-listening-to-you-on-alexa-a-global-team-reviews-audio">Amazon</a> and <a href="https://www.theguardian.com/technology/2019/aug/29/apple-apologises-listen-siri-recordings">Apple</a> were reported to be using human annotators to listen to personal conversations, recorded via their respective digital assistants Alexa and Siri. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/what-if-the-companies-that-profit-from-your-data-had-to-pay-you-100380">What if the companies that profit from your data had to pay you?</a>
</strong>
</em>
</p>
<hr>
<p>More recently, <a href="https://www.nytimes.com/2019/11/04/business/secret-consumer-score-access.html">a New York Times article</a> exposed how much fine granular data is acquired and maintained by relatively unknown consumer scoring companies. In one case, a third-party company knew the writer <a href="https://www.nytimes.com/by/kashmir-hill">Kashmir Hill</a> used her iPhone to order chicken tikka masala, vegetable samosas, and garlic naan on a Saturday night in April, three years ago.</p>
<p>At this rate, without any action, scepticism towards online privacy will only increase.</p>
<h2>History is a teacher</h2>
<p>Early this year, we witnessed the <a href="https://www.gizmodo.com.au/2019/02/apple-is-removing-do-not-track-from-safari/">bitter end of the Do-Not-Track initiative</a>. This was proposed as a privacy feature where requests made by an internet browser contained a flag, asking remote web servers to not track users. However, there was no legal framework to force web server compliance, so many web servers ended up discarding this flag.</p>
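For readers curious what that flag actually looked like: Do-Not-Track was nothing more than an extra HTTP request header, <code>DNT: 1</code>, which servers were free to ignore. A minimal sketch in Python (the URL is a placeholder for illustration):

```python
import urllib.request

# Do-Not-Track was simply an extra HTTP request header, "DNT: 1".
# There was no legal framework forcing compliance, so servers could
# (and often did) ignore it entirely.
req = urllib.request.Request(
    "https://example.com/",   # placeholder URL for illustration
    headers={"DNT": "1"},     # 1 = "please do not track me"
)

# urllib normalizes header names to capitalized form internally.
print(req.get_header("Dnt"))  # prints: 1
```

The one-sided nature of the mechanism is visible here: the browser sets a preference, but nothing in the protocol obliges the server to honour it.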
<p>Many companies have made it too difficult to opt-out from data collections, or request the deletion of all data related to an individual. </p>
<p>For example, as a solution to the backlash on human voice command annotation, Apple <a href="https://www.theguardian.com/technology/2019/oct/30/apple-lets-users-opt-out-of-having-siri-conversations-recorded">provided an opt-out mechanism</a>. However, doing this for an Apple device is not straightforward, and the option isn’t prominent in the device settings. </p>
<p>Also, it’s clear tech companies don’t want to have <a href="https://www.securityweek.com/youre-opted-default-know-when-and-where-opt-out">opting-out of tracking</a> as users’ default setting. </p>
<p>It’s worth noting that since Australia doesn’t have social media or internet giants of its own, much of the country’s privacy-related debates are focused on <a href="https://www.smh.com.au/technology/australians-are-rightly-questioning-my-health-record-says-privacy-commissioner-20180730-p4zui3.html">government legislation</a>.</p>
<h2>Are regulatory safeguards useful?</h2>
<p>But there is some hope left. Some recent events have prompted tech companies to think twice about the undeclared collection of user data.</p>
<p>For example, <a href="https://www.smh.com.au/world/north-america/facebook-fined-us5-billion-in-cambridge-analytica-privacy-probe-20190713-p526xb.html">Facebook faces a US$5 billion fine</a> for its role in the Cambridge Analytica incident, and related practices of sharing user data with third parties. The exposure of this event has forced Facebook to <a href="https://www.facebook.com/notes/mark-zuckerberg/a-privacy-focused-vision-for-social-networking/10156700570096634/">take measures</a> to improve its privacy controls and be forthcoming with users. </p>
<p>Similarly, <a href="https://www.bbc.com/news/technology-46944696">Google was fined €50 million under the General Data Protection Regulation</a> by French data regulator CNIL, for lack of transparency and consent in user-targeted ads. </p>
<p>Like Facebook, Google responded by taking measures to improve the privacy of users, by <a href="https://blog.google/products/gmail/g-suite-gains-traction-in-the-enterprise-g-suites-gmail-and-consumer-gmail-to-more-closely-align/">stopping reading our e-mails to provide targeted ads</a>, <a href="https://www.theverge.com/2017/9/8/16276000/google-dashboard-my-account-privacy-security-redesign">enhancing its privacy control dashboard</a>, and <a href="https://www.washingtonpost.com/technology/2019/05/07/google-vows-greater-user-privacy-after-decades-data-collection/">revealing its vision to keep user data in devices rather than in the cloud</a>. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/imagine-what-we-could-learn-if-we-put-a-tracker-on-everyone-and-everything-50123">Imagine what we could learn if we put a tracker on everyone and everything</a>
</strong>
</em>
</p>
<hr>
<h2>No time to be complacent</h2>
<p>While it’s clear current regulatory safeguards are having a positive effect on online privacy, there is ongoing debate about whether they are sufficient.</p>
<p><a href="https://thenextweb.com/contributors/2018/08/05/gdpr-privacy-eroding-bad/">Some have</a> argued about possible loopholes in the European Union’s General Data Protection Regulation, and the fact that <a href="https://medium.com/mydata/five-loopholes-in-the-gdpr-367443c4248b">some definitions of legitimate use of personal data</a> leave room for interpretation. </p>
<p>Tech giants are multiple steps ahead of regulators, and are in a position to exploit any grey areas in legislation they can find. </p>
<p>We can’t rely on accidental leaks or whistleblowers to hold them accountable.</p>
<p>Respect for user privacy and ethical usage of personal data must come intrinsically from within these companies themselves. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/if-youve-given-your-dna-to-a-dna-database-us-police-may-now-have-access-to-it-126680">If you've given your DNA to a DNA database, US police may now have access to it</a>
</strong>
</em>
</p>
<hr>
<p class="fine-print"><em><span>Suranga Seneviratne does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>
Most of us are probably having our data tracked in some form. And while there are regulatory safeguards in place to protect user privacy, it’s hard to say whether these are enough.
Suranga Seneviratne, Lecturer - Security, University of Sydney
Licensed as Creative Commons – attribution, no derivatives.
tag:theconversation.com,2011:article/121918
2019-10-18T11:19:20Z
2019-10-18T11:19:20Z
21st-century lobbying: how big data lets big businesses get the upper hand
<p>In July Facebook was <a href="https://www.theguardian.com/technology/2019/jul/24/facebook-to-pay-5bn-fine-as-regulator-files-cambridge-analytica-complaint">fined US$5.1bn</a> by the US Federal Trade Commission (FTC) for not being honest about how it was handling user data. The personal data of 87 million of its users had been harvested by <a href="https://www.theguardian.com/news/2018/may/06/cambridge-analytica-how-turn-clicks-into-votes-christopher-wylie">Cambridge Analytica</a>, a firm which stands accused of building political profiles of individual people in order to influence elections. It is believed that the information was used to influence voters in the US presidential election and the EU referendum of 2016.</p>
<p>The fine is the biggest ever handed down by the FTC. It certainly sounds like a lot, but it isn’t. Facebook’s Q2 earnings, reported the same week, were a <a href="https://www.theguardian.com/technology/2019/jul/24/facebook-revenue-fines-second-quarter">better-than-expected $16.9bn</a>. The fine is the amount of money Facebook makes in 27 days. Facebook investors are relieved. Its share price has risen.</p>
<p>The same week, Brexit Party leader Nigel Farage launched World4Brexit, a US-based fundraising operation aimed at enabling “friends of the UK from around the world” to donate to the ongoing campaign to ensure that the UK leaves the EU. World4Brexit is based in Michigan and is a registered non-profit organisation, which means that it can accept individual donations of up to $5,000 without having to name donors. </p>
<p>The Labour Party has called for an investigation following concerns that the organisation will allow foreign “<a href="https://www.theguardian.com/politics/2019/jul/31/labour-acts-on-risk-of-dark-money-from-farage-led-lobbyists">dark money</a>” to subvert the democratic process. Similar concerns have been raised about the Brexit Party possibly receiving donations through <a href="https://www.theguardian.com/politics/2019/jun/18/brexit-party-check-donations-for-illegal-funding-nigel-farage">Paypal</a> in ways that don’t comply with UK electoral law.</p>
<p>Even if the Labour Party gets its investigation, nothing much will happen. These two events dramatise a tension between money and politics that is unique to the 21st century.</p>
<h2>Bigger business, deeper pockets</h2>
<p>Wealthy individuals and organisations have a disproportionate influence over elected representatives. This has long been the case, and certainly predates the rise of companies such as Facebook, Google and Cambridge Analytica. But the amount of money these companies have to spend is unprecedented. Apple was the first company to be valued at more than $1 trillion. Alphabet (Google’s parent company) is worth around $900bn; Facebook $560bn. The world’s ten richest corporations now own more wealth than the poorest 180 states combined.</p>
<p>As the wealth of these corporations has grown, so the amount they’ve spent on lobbying has grown too. In 2002, for example, Google spent less than $50,000 on lobbying Washington. In 2017, it spent $18m – more than any other company in the world. In 2018, it spent <a href="https://www.theguardian.com/technology/2017/jul/30/google-silicon-valley-corporate-lobbying-washington-dc-politics">$21m</a>. Google also spent more money than any other corporation in the US on political donations in the <a href="https://www.cnbc.com/2019/07/03/after-years-of-spending-techs-political-machine-turns-to-high-gear.html">2016 presidential election</a>.</p>
<p>The use of lobbyists by corporations is widespread. Non-corporate organisations haven’t been able to keep up. For every $1 spent on lobbying by labour unions and public interest groups combined in 2015, $34 was spent by large corporations and the <a href="https://www.theatlantic.com/business/archive/2015/04/how-corporate-lobbyists-conquered-american-democracy/390822/">associations which represent them</a>.</p>
<h2>The new technology of lobbying</h2>
<p>There is nothing especially new in corporations translating their wealth into political power or in the fact that power is concentrated among wealthy organisations and individuals. </p>
<p>What is new is that technologies have emerged which concentrate power in the hands of the wealthy and increase inequality of access – on a scale unimaginable only a few years ago. Wealthy companies can move money around the globe instantly and anonymously, in official currencies or in cryptocurrencies, issued by no state. They can use this invisible money to purchase the services of lobbyists, or to donate to political campaigns, in ways that are almost impossible to trace. And they can access information that can be used to bolster their lobbying efforts in ways not open to poorer individuals and organisations. </p>
<p>Human civilisation now has greater access to information than at any point in its history. In an ideal world, this fact should be liberating and empowering. It should enrich democracy, inform citizens and make for a more reflective and egalitarian politics. But we don’t live in an ideal world. </p>
<p>We live in a world in which information – and the expertise necessary to analyse it – comes at a price that only wealthy organisations can pay. From the outside, lobbying may look the same as it always has, but it isn’t – it’s driven increasingly by an information divide. A divide between those who can afford it and those who can’t.</p>
<p>If you want to glimpse this, take a look at the new niche industry which has emerged to help companies leverage complex data in the service of supplying political intelligence and altering the political agenda.</p>
<p>FiscalNote is an example. Based in the US, it uses “artificial intelligence, machine learning, and natural language processing” to scrape the internet for data on politicians, regulations and public policy developments in order to provide paying clients with up-to-the-minute data they can use to target politicians and mobilise grassroots support. </p>
<p>Its <a href="https://fiscalnote.com/resources?type=caseStudies">clients</a> are some of the wealthiest companies on Earth, including Nestlé, the world’s largest food company ($247bn), Nouryon, the Dutch chemical company ($5bn), and the US Corn Growers Association, which represents the interests of the US field corn and ethanol industry ($50bn). FiscalNote charges between US$10,000 and <a href="https://www.technologyreview.com/s/611817/tim-hwangs-fiscalnote-is-revolutionizing-washington-lobbying-with-big-data/">“several hundred thousand dollars”</a> a year to enable organisations that can afford it to achieve lobbying successes far beyond what smaller, less resourced organisations can achieve.</p>
<p>We all know now that new technologies have made it easier than ever to influence elections, and to move money around the world in ways which are harder to trace. What’s less visible is the extent to which they have made it easier for the wealthy to lobby more effectively for their interests. </p>
<p>Big data hasn’t levelled the playing field. It has simply allowed wealthy organisations and individuals to further entrench their dominance. And regulators can’t keep up. New technologies make it almost impossible to trace the global flow of money World4Brexit may receive to fund its political campaigning. The FTC’s stern rebuke of Facebook was nothing of the kind. Institutions are scrambling, and failing, to keep pace with the technological changes shaping our politics.</p>
<p class="fine-print"><em><span>Phil Parvin does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>
New technologies and user-level data mean that wealthy companies can influence our politics in ways not open to ordinary people.
Phil Parvin, Senior Lecturer in Politics, Loughborough University
Licensed as Creative Commons – attribution, no derivatives.
tag:theconversation.com,2011:article/123971
2019-09-23T11:34:59Z
2019-09-23T11:34:59Z
3 tips for Justin Trudeau on how to say ‘I’m sorry’
<p>“I’m sorry.”</p>
<p>These two words may seem simple, but the ability to express them when you’re in the wrong is anything but – particularly for those in the public eye. </p>
<p>Canadian Prime Minister Justin Trudeau, to name a recent example, <a href="https://www.usatoday.com/story/news/world/2019/09/20/justin-trudeau-says-hell-ban-military-style-weapons/2388136001/">had to apologize several times</a> since a photo and a video of him in brownface and blackface makeup surfaced recently. Trudeau’s troubles echo Virginia Gov. Ralph Northam’s <a href="https://www.cnn.com/2019/02/01/politics/democrats-call-on-northam-to-resign/index.html">difficulty</a> apologizing for a <a href="https://www.huffingtonpost.com/entry/ralph-northam-response-racist-yearbook-photo_us_5c54bca6e4b0871047536bed">photo on his medical school yearbook page</a> of a man in blackface and another wearing the dress of a Ku Klux Klan member.</p>
<p>As a <a href="https://www.middlebury.edu/institute/people/lisa-leopold">language scholar</a>, I’ve tried to get to the bottom of just what makes an apology effective by analyzing dozens of mea culpas. While some offered authentic apologies, <a href="https://www.nytimes.com/interactive/2017/10/05/us/statement-from-harvey-weinstein.html?mtrref=www.wmagazine.com">many more seemed defensive</a>, <a href="https://www.cnn.com/2018/05/31/us/southwest-airlines-lindsay-gottlieb-biracial-baby-trnd/index.html">insincere</a> or <a href="https://www.youtube.com/watch?v=G6DOhioBfyY">forced</a>.</p>
<p>With the help of insights from <a href="https://global.oup.com/academic/product/sorry-about-that-9780199300914?cc=us&lang=en&">linguists</a>, <a href="https://www.youtube.com/watch?v=R7vP01U8qr4">psychologists</a> and <a href="https://link.springer.com/article/10.1007/s10551-011-0915-9">business ethicists</a> who study apologies, I found that there are three main elements each needs to have to be effective.</p>
<h2>Not all apologies are equal</h2>
<p>Much is at stake with a public apology.</p>
<p>When done right, it can rebuild trust and <a href="https://ac.els-cdn.com/S0378216608003007/1-s2.0-S0378216608003007-main.pdf?_tid=65a4d09e-c6be-4eeb-a055-c41490e57dea&acdnat=1549406972_fcd23b7de5022a7f8239b687c7ee5a9d">restore a damaged reputation</a>. However, a poorly crafted apology can lead to widespread criticism and further damage credibility. <a href="https://hbr.org/2015/08/research-for-a-corporate-apology-to-work-the-ceo-should-look-sad">Research shows</a> that the <a href="https://on.ft.com/2DmdS1n">way a company crafts an apology</a> can even affect its future financial performance. Leaders who apologize <a href="https://www.researchgate.net/publication/225822507_Apologies_and_Transformational_Leadership">tend to be viewed more favorably</a> than those who don’t.</p>
<p>In “<a href="https://www.moodypublishers.com/books/marriage-and-family/when-sorry-isnt-enough/">When Sorry Isn’t Enough: Making Things Right with Those You Love</a>,” Gary Chapman and Jennifer Thomas cite a survey of what people preferred in an apology. It found that almost four-fifths wanted their would-be penitent to either express regret or accept responsibility, as opposed to make restitution, repent or seek forgiveness. </p>
<p>In 2011, David Boyd, now dean emeritus at Northeastern University’s D’Amore-McKim School of Business, <a href="https://link.springer.com/article/10.1007/s10551-011-0915-9">identified seven strategies</a> that make public apologies effective. I believe three of them – revelation, responsibility and recognition – are the most significant because they overlap with those identified by prominent scholars in other fields, including <a href="https://docs.google.com/a/umn.edu/viewer?a=v&pid=sites&srcid=dW1uLmVkdXxhbmRyZXdkY29oZW58Z3g6MTRlNmUzYWUxMGJmZjMxZg">linguists Andrew Cohen and Elite Olshtain</a> and <a href="https://www.youtube.com/watch?v=R7vP01U8qr4">psychologist Robert Gordon</a>. </p>
<p>That is, an admission of the lapse using the words “I am sorry” or “I apologize”, ownership of the offense and empathy for those who have been hurt all contribute to an effective apology. But it’s not enough for an apology just to contain these three ingredients. It’s also about the exact wording used.</p>
<p>In my analysis of infamous public apologies that celebrities, CEOs and political figures have delivered over the past two years, I was looking for how they fared according to Boyd’s standards of revelation, responsibility and recognition. I also closely examined the language of each apology, applying many insights from linguist Edwin Battistella’s book “<a href="https://global.oup.com/academic/product/sorry-about-that-9780199300914?cc=us&lang=en&">Sorry About That: The Language of Public Apology</a>.”</p>
<h2>1. ‘I am sorry’</h2>
<p>This may seem obvious but sadly isn’t: Any respectable apology must include an actual apology with a specific acknowledgment of what was done. Surprisingly, some people attempting to own up to something never get around to actually apologizing. </p>
<p>Comedian Louis C.K., for example, <a href="https://www.nytimes.com/2017/11/10/arts/television/louis-ck-statement.html">never actually used words</a> like “apologize” or “sorry” after <a href="https://www.nytimes.com/2017/11/09/arts/television/louis-ck-sexual-misconduct.html?module=inline">being accused of sexual misconduct</a> by several women. He called the stories “true” and said he was “remorseful” but dodged the actual apology. </p>
<p>Others try to apologize in a general way to avoid being pinned down to a specific transgression, weakening the impact. Or they may admit to a lesser offense. A case in point is <a href="https://techcrunch.com/2017/12/28/apple-apologizes-for-not-being-clearer-about-slowing-down-iphones-with-older-batteries/">Apple’s non-apology apology</a> in December 2017 over the performance of iPhone batteries.</p>
<p>“We’ve been hearing feedback from our customers about the way we handle performance for iPhones with older batteries and how we have communicated that process,” the company said. “We know that some of you feel Apple has let you down. We apologize.”</p>
<p>Was Apple apologizing for the poor-performing batteries, its communication process or the feelings of its customers? Distancing the actual apology from the transgressions is a common tactic in corporate apologies, used in recent years by <a href="https://community.withairbnb.com/t5/Hosting/Discrimination-and-Belonging/td-p/191832">Airbnb</a> and <a href="https://www.bizjournals.com/portland/news/2018/04/16/uber-tries-to-make-nice-with-the-city-of-portland.html">Uber</a> as well.</p>
<h2>2. ‘I did it’</h2>
<p>Any well-crafted apology must claim responsibility for the transgression – not attribute one’s actions to happenstance or external factors.</p>
<p>Amid the Cambridge Analytica scandal, <a href="https://www.youtube.com/watch?v=G6DOhioBfyY">Facebook</a> CEO Mark Zuckerberg used the passive voice to distance himself from any wrongdoing: “I’m really sorry that this happened,” he said in an interview with CNN.</p>
<p>That wasn’t the first time he used the passive voice this way. In an earlier <a href="https://www.reuters.com/article/us-facebook-zuckerberg/zuckerberg-seeks-forgiveness-for-division-caused-by-his-work-idUSKCN1C61XY">apology issued in 2017</a>, after Facebook was criticized over Russia’s meddling in the 2016 election, he said, “For the ways my work was used to divide people rather than bring us together, I ask forgiveness and I will work to do better.” </p>
<p>The choice of the passive suggests that he has little control over the ways his work was used by others.</p>
<p><a href="https://twitter.com/charlierose/status/932747035069034496">Another example</a> is Charlie Rose, a television journalist <a href="https://www.adweek.com/tvnewser/cbs-fired-charlie-rose-one-year-ago-today/385128">fired by CBS</a> following accusations of sexual misconduct. He issued an apology in the following manner: “I have learned a great deal as a result of these events, and I hope others will too. All of us, including me, are coming to a newer and deeper recognition of the pain caused by conduct in the past, and have come to a profound new respect for women and their lives.”</p>
<p>By including himself as one of several people and embedding his actions as part of a broader group’s actions, he minimized responsibility for his own transgressions.</p>
<p>Others simply try to deflect attention from the transgression as part of an apology, as actor <a href="https://twitter.com/KevinSpacey/status/924848412842971136">Kevin Spacey</a> did when he announced his sexual orientation instead of apologizing over accusations that he sexually assaulted a massage therapist, or as disgraced media mogul Harvey Weinstein did when he <a href="https://www.nytimes.com/interactive/2017/10/05/us/statement-from-harvey-weinstein.html?mtrref=www.wmagazine.com">vowed to direct his anger at the National Rifle Association</a> after being accused of sexual misconduct.</p>
<p>In contrast, Starbucks CEO Kevin Johnson in April 2018 gave an <a href="https://news.starbucks.com/views/a-follow-up-message-from-starbucks-ceo-in-philadelphia">example of an apology</a> that takes real ownership after two African American men were arrested while waiting for a friend at one of his stores: “These two gentlemen did not deserve what happened, and we are accountable. I am accountable.”</p>
<h2>3. ‘I feel your pain’</h2>
<p>Finally, apologies should meet the standard of recognition: expressing empathy to those who have been hurt.</p>
<p>Many so-called apologies fail to acknowledge victims’ feelings, focusing instead on justifications or excuses. For example, <a href="https://www.cnn.com/2018/07/13/entertainment/henry-cavill-me-too-apology/index.html">actor Henry Cavill apologized</a> for his controversial statements about the #MeToo movement by saying he’s sorry for “any confusion and misunderstanding that” his comments created. In doing so, he insinuated that there was no transgressor or victim, as more than one party is typically to blame for a misunderstanding.</p>
<p>Expressions of empathy are further weakened anytime a word such as “may” is used to cast doubt on whether the transgression had a negative impact on others. In <a href="https://www.hollywoodreporter.com/news/russell-simmons-pens-response-sexual-assault-allegations-1061061">an apology issued</a> by the record producer Russell Simmons for sexual misconduct, his use of “may” ultimately suggests that women may or may not have been offended by his actions: “For any women from my past who I may have offended, I sincerely apologize. I am still evolving.”</p>
<p>Furthermore, those last four words show that he’s focusing on his own growth, rather than the pain of his victims.</p>
<p>So if you’re finding it difficult to parse the multitude of public apologies, look closely for these three ingredients, along with the language each uses. </p>
<p><em>This is an updated version of an <a href="https://theconversation.com/how-to-say-im-sorry-whether-youve-appeared-in-a-racist-photo-harassed-women-or-just-plain-screwed-up-107678">article originally published</a> on Feb. 8, 2019.</em></p>
<p class="fine-print"><em><span>Lisa Leopold does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>The Canadian prime minister is the latest public figure struggling to apologize for past misbehavior. A language scholar explains how to do it right.Lisa Leopold, Associate Professor of English Language Studies, The Middlebury Institute of International Studies at Monterey, MiddleburyLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1211642019-08-05T15:54:17Z2019-08-05T15:54:17ZData-driven elections and the key questions about voter surveillance<figure><img src="https://images.theconversation.com/files/286125/original/file-20190729-43145-144jlzh.jpg?ixlib=rb-1.1.0&rect=0%2C0%2C6500%2C4699&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Using data during election campaigns is nothing new. But as the Canadian federal election approaches, authorities must be diligent that data tracking doesn't become surveillance.</span> <span class="attribution"><span class="source">(Shutterstock)</span></span></figcaption></figure><p>The upcoming Canadian federal election once again raises the spectre of interference and disruption through the misuse and abuse of personal data.</p>
<p>This is a surveillance issue, because as experts who study surveillance, we know political consultancy companies are collecting, analyzing and using data in order to powerfully influence populations who are <a href="https://ssrn.com/abstract=3146964">generally unaware of how their data is being processed</a>. Opacity and complexity are <a href="https://www.wiley.com/en-us/The+Culture+of+Surveillance%3A+Watching+as+a+Way+of+Life-p-9780745671734">common features of contemporary surveillance issues</a>.</p>
<p>These questions have come to global public attention as a result of the <a href="https://www.nytimes.com/2018/04/04/us/politics/cambridge-analytica-scandal-fallout.html">Cambridge Analytica and Facebook scandals</a>. </p>
<figure class="align-right ">
<img alt="" src="https://images.theconversation.com/files/286119/original/file-20190729-43153-1tr2nh5.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=237&fit=clip" srcset="https://images.theconversation.com/files/286119/original/file-20190729-43153-1tr2nh5.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/286119/original/file-20190729-43153-1tr2nh5.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/286119/original/file-20190729-43153-1tr2nh5.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/286119/original/file-20190729-43153-1tr2nh5.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/286119/original/file-20190729-43153-1tr2nh5.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/286119/original/file-20190729-43153-1tr2nh5.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">The logo of the now-defunct Cambridge Analytica.</span>
<span class="attribution"><span class="source">Shutterstock</span></span>
</figcaption>
</figure>
<p>The now-defunct Cambridge Analytica has become a symbol of all that is intrusive and manipulative about data-driven elections.</p>
<p>Nonetheless, data and data analytics have played a role in elections for years. All modern campaigns in all democracies use data — even if it’s simply polling data. </p>
<p>But today’s massive voter relationship management platforms use digital campaigning practices that leverage the power of social media, mobile apps, geo-targeting and artificial intelligence to take it to another level.</p>
<p>A recent workshop, organized through the <a href="https://www.sscqueens.org/projects/big-data-surveillance">Big Data Surveillance</a> project and hosted by the <a href="https://www.oipc.bc.ca/">Office of the Information and Privacy Commissioner of British Columbia</a>, brought together international scholars, civil society advocates and regulators to take stock in the wake of the Cambridge Analytica scandal.</p>
<p>How can we understand the nature and effects of data-driven elections in different countries? What issues will tax our regulators in the years ahead?</p>
<h2>Myths versus realities</h2>
<p>Digital campaigning and harnessing the power of Big Data <a href="https://scholar.harvard.edu/files/todd_rogers/files/political_campaigns_and_big_data_0.pdf">have long been considered keys to electoral success in the United States, and increasingly in other countries</a>.</p>
<p>Politicians the world over now believe they can win elections if they just have better, more refined and more accurate data on the electorate.</p>
<p>At one stage, Cambridge Analytica claimed to hold about 5,000 data points on every American voter. It was not alone. The voter analytics industry in the U.S. — including companies like <a href="https://www.catalist.us">Catalist</a>, <a href="https://www.i-360.com">i360</a> and <a href="https://haystaqdna.com">HaystaqDNA</a> — controls an extraordinary volume of personal data, both free and purchased, drawn from public and commercial sources.</p>
<p>A recent report by the <a href="https://tacticaltech.org/projects/data-politics/">Tactical Tech collective</a> in Germany documents the range of companies, consultancies, agencies and marketing firms — from local startups to global strategists — that aggressively target parties and campaigns across the political spectrum. Data is used as an asset, as intelligence and as influence.</p>
<p>At the same time, the power of data-driven elections is exaggerated. Whether Big Data actually does win elections is difficult to determine empirically. Research by U.S. communications expert Jessica Baldwin-Philippi suggests that <a href="https://doi.org/10.1080/10584609.2017.1372999">data-driven campaign strategies are far more effective at mobilizing adherents and donors than at persuading voters</a>. Emphasis on size and scale is often mistaken for evidence of effectiveness.</p>
<h2>The U.S. versus the rest</h2>
<p>Generally, voter analytics have been pioneered in the U.S. and exported to other democratic countries. A startling recent illustration is the pernicious use of WhatsApp in Brazil for the <a href="https://www.washingtonpost.com/news/theworldpost/wp/2018/11/01/whatsapp-2/">spread of racist, misogynistic and homophobic messages by Jair Bolsonaro’s campaign when he successfully ran for president</a>.</p>
<figure class="align-center ">
<img alt="" src="https://images.theconversation.com/files/286126/original/file-20190729-43109-18snuwo.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/286126/original/file-20190729-43109-18snuwo.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/286126/original/file-20190729-43109-18snuwo.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/286126/original/file-20190729-43109-18snuwo.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/286126/original/file-20190729-43109-18snuwo.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/286126/original/file-20190729-43109-18snuwo.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/286126/original/file-20190729-43109-18snuwo.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">In this May 2019 photo, supporters hold up their smart phones to take a photo of Brazilian President Jair Bolsonaro in Brasilia. Bolsonaro has referred to Globo, Brazil’s largest media company, as ‘the enemy’ in Whatsapp messages that were leaked to the media.</span>
<span class="attribution"><span class="source">(AP Photo/Eraldo Peres)</span></span>
</figcaption>
</figure>
<p>In other countries, the field of voter analytics faces constraints that temper and perhaps twist its impact.</p>
<p>These include campaign finance restrictions, varying party and electoral systems and many different electoral laws and data protection rules. </p>
<p>How are local political party workers and volunteers to navigate the terrain, especially when the actual methods and alleged impacts of voter analytics are so unclear? </p>
<p>No political party wants to appear dated in its methods or to fall behind its rivals by failing to recognize the supposed benefits of data analysis. </p>
<p>But as researchers, we know too little about how data-driven campaigning interacts with different institutional and cultural practices. Nor do we know how data is assessed by professionals and volunteers at local and central levels of campaigns around the world.</p>
<p>It’s also clear that the major platforms of Google and Facebook perform differently in different countries. University of North Carolina journalism and media professor Daniel Kreiss <a href="https://danielkreiss.files.wordpress.com/2018/01/kreissmcgregortechnology-firms-shape-political-communication-the-work-of-microsoft-facebook-twitter-and-google-with-campaigns-during-the-2016-u-s-presidential-cycle.pdf">compares Google and Facebook as “democratic infrastructures”</a> in terms of the services offered. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/google-and-facebook-wont-rule-the-world-if-we-dont-buy-their-fantasies-about-big-data-95734">Google and Facebook won't rule the world – if we don't buy their fantasies about big data</a>
</strong>
</em>
</p>
<hr>
<p>Even platforms claiming to be non-ideological, like the prominent voter-tracker <a href="https://nationbuilder.com/">NationBuilder</a>, are hardly apolitical, as Concordia University’s Fenwick McKelvey <a href="https://doi.org/10.1177%2F1461444816675439">has shown</a>. Google’s algorithms likewise demonstrate <a href="https://comprop.oii.ox.ac.uk/">the political biases built into its search functions</a>.</p>
<h2>New practices versus dated laws</h2>
<p>Outdated laws govern the voter analytics industry and digital campaigning. These include election laws that control the circulation of voter lists, and data protection laws that, until recently, <a href="https://ico.org.uk/media/action-weve-taken/2259369/democracy-disrupted-110718.pdf">have not been used to regulate the capture, use and dissemination of personal data by political campaigns</a>.</p>
<p>Data protection laws, such as the <a href="https://eugdpr.org">European Union’s General Data Protection Regulation (GDPR)</a>, constrain the
capture and processing of sensitive personal data on political opinions. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/social-media-doesnt-need-new-regulations-to-make-the-internet-safer-gdpr-can-do-the-job-111438">Social media doesn't need new regulations to make the internet safer – GDPR can do the job</a>
</strong>
</em>
</p>
<hr>
<p>But the problems don’t just involve privacy and intrusiveness — they also include data governance, freedom of speech, disinformation and democracy itself. Data-driven elections require new thinking about the balance between the democratic interest of an informed and mobilized public on one side and the dangers of excessive voter surveillance on the other.</p>
<h2>Transparency versus secrecy</h2>
<p>A related key issue, not limited to data-driven elections but illustrated acutely by them, is the question of transparency.</p>
<p>There is a divide between how little is publicly known about what actually goes on in platform businesses that create online networks, like Facebook or Twitter, and what supporters of proper democratic practices argue should be known. </p>
<p>After all, when it comes to elections, the open sharing of relevant information is critical. Yet voter management firms such as Cambridge Analytica are inherently secretive, both about their political paymasters and about their actual practices. Few know who pays for political ads, for instance. </p>
<p>Those running and participating in elections, on the other hand, have a vital interest in the transparency of all parties as the prerequisite of accountability. Because the use of data to influence election outcomes is fundamentally opaque, the tension is palpable.</p>
<p>It’s therefore difficult to know what actually transpires within data-driven electioneering. </p>
<p>University of Wisconsin professor <a href="https://journalism.wisc.edu/wp-content/blogs.dir/41/files/2018/04/Anonymous-Groups-Targeted-Key-Battlegrounds-on-Facebook.YMK_.Project-Brief.v.6.1.final_.pdf">Young Mie Kim</a> runs a stealth media project: a user-based, real-time digital ad tracking app that enables researchers to trace the sponsors of political campaigns in the U.S., identify suspicious sources and assess the patterns of voter-targeting. </p>
<p>The officials responsible for the conduct of elections should be paying close attention to this kind of information in Canada as the federal election approaches — and around the world.</p>
<p class="fine-print"><em><span>David Lyon directs the Surveillance Studies Centre at Queen's University. </span></em></p><p class="fine-print"><em><span>COLIN BENNETT receives funding from the Social Sciences and Humanities Research Council of Canada</span></em></p>Data analytics have played a role in elections for years. But today’s massive voter relationship management platforms use digital campaigning practices to take it to another level.David Lyon, Director, Surveillance Studies Centre, Professor of Sociology, Queen's University, OntarioColin Bennett, Professor, Political Science, University of VictoriaLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1191582019-06-24T20:11:02Z2019-06-24T20:11:02ZExplainer: what is surveillance capitalism and how does it shape our economy?<figure><img src="https://images.theconversation.com/files/280653/original/file-20190621-149818-1jo3s1f.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">The increasing use of sensors in smart homes adds to an ever expanding amount of user data that can be collected and commodified.</span> <span class="attribution"><a class="source" href="https://www.shutterstock.com/image-illustration/hand-holding-3d-rendering-mobile-connect-1038714985?src=L8Yv2Mg5xseF1BNfpphtpw-1-0&studio=1">Shutterstock</a></span></figcaption></figure><p>I recently purchased a bedroom bundle (mattress, bed base, pillows and sheets) from a well-known Australian startup for my son, who has flown the nest. Now I’m swamped with Google and Facebook ads for beds and bedding. The week before it was puffer jackets. </p>
<p>Ever wonder why and how this happens? The answer is surveillance capitalism.</p>
<p>Surveillance capitalism describes a market-driven process in which the commodity for sale is your personal data, and the capture and production of this data relies on mass surveillance of the internet. This activity is often carried out by companies that provide us with free online services, such as search engines (Google) and social media platforms (Facebook).</p>
<p>These companies collect and scrutinise our online behaviours (likes, dislikes, searches, social networks, purchases) to produce data that can be further used for commercial purposes. And it’s often done without us understanding the full extent of the surveillance.</p>
<p>The term surveillance capitalism was coined by academic Shoshana Zuboff in 2014. She <a href="https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2594754">suggests</a> that surveillance capitalism depends on:</p>
<blockquote>
<p>…the global architecture of computer mediation […] [which] produces a distributed and mostly uncontested new expression of power that I christen: “Big Other”.</p>
</blockquote>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/is-it-time-to-regulate-targeted-ads-and-the-web-giants-that-profit-from-them-95308">Is it time to regulate targeted ads and the web giants that profit from them?</a>
</strong>
</em>
</p>
<hr>
<h2>The big data economy</h2>
<p>The late 20th century saw our economy move away from mass production lines in factories and become progressively more reliant on knowledge. Surveillance capitalism, in turn, uses a business model based on the digital world, and is reliant on “big data” to make money.</p>
<p>The data used in this process is often collected from the same groups of people who will ultimately be its targets. For instance, Google collects personal online data to target us with ads, and Facebook is likely selling our data to organisations who want us to vote for them or to vaccinate our babies.</p>
<p>Third-party <a href="https://theconversation.com/its-time-for-third-party-data-brokers-to-emerge-from-the-shadows-94298">data brokers</a>, as opposed to companies that hold the data like Google or Facebook, are also on-selling our data. These companies buy data from a variety of sources, collate information about individuals or groups of individuals, then sell it.</p>
<p>Smaller companies are also cashing in on this. Last year, HealthEngine, a medical appointment booking app, was found to be <a href="https://www.abc.net.au/news/2018-06-25/healthengine-sharing-patients-information-with-lawyers/9894114">sharing clients’ personal information</a> with Perth lawyers particularly interested in workplace injuries or vehicle accidents.</p>
<h2>Cambridge Analytica was a wake-up call</h2>
<p>Last year’s <a href="https://theconversation.com/how-cambridge-analyticas-facebook-targeting-model-really-worked-according-to-the-person-who-built-it-94078">Cambridge Analytica revelations</a> highlighted the extent to which internet companies surveil online activity. Cambridge Analytica’s actions broke Facebook’s own rules by collecting and on-selling data under the pretence of academic research. Their dealings <a href="https://www.theguardian.com/uk-news/2018/mar/26/cambridge-analytica-trump-campaign-us-election-laws">may have violated election law</a> in the United States. </p>
<p>Despite the questionable nature of Cambridge Analytica’s actions, the biggest players in surveillance capitalism, Facebook and Google, are still legally amassing as much information as they can. That includes information about their users, their users’ online friends, and even their users’ offline friends (a practice known as <a href="https://theconversation.com/shadow-profiles-facebook-knows-about-you-even-if-youre-not-on-facebook-94804">shadow profiling</a>). A shadow profile is a profile created about someone who hasn’t signed up to a particular social platform, but about whom some data is stored because they have interacted with someone who has. Platforms make huge profits from this.</p>
<p>In this sense, Cambridge Analytica was a small player in the big data economy.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/big-brother-is-watching-how-new-technologies-are-changing-police-surveillance-115841">Big brother is watching: how new technologies are changing police surveillance</a>
</strong>
</em>
</p>
<hr>
<h2>Where surveillance capitalism came from</h2>
<p>Surveillance capitalism practices were <a href="https://www.ft.com/content/7fafec06-1ea2-11e9-b126-46fc3ad87c65">first consolidated at Google</a>, which used data extraction procedures and packaged users’ data to create new markets for this commodity.</p>
<p>Currently, the biggest “Big Other” actors are Google, Amazon, Facebook and Apple. Together, they collect and control unparalleled quantities of data about our behaviours, which they turn into products and services.</p>
<p>This has resulted in astonishing business growth for these companies. Indeed, Amazon, Microsoft, Alphabet (Google), Apple and Facebook are now ranked in the <a href="https://www.investopedia.com/articles/active-trading/111115/why-all-worlds-top-10-companies-are-american.asp">top six</a> of the world’s biggest companies by market capitalisation. </p>
<p>Google, for instance, <a href="https://www.internetlivestats.com/google-search-statistics/">processes an average</a> of 40,000 searches per second, 3.5 billion per day and 1.2 trillion per year. Its parent company, Alphabet, was recently valued at US$822 billion.</p>
<h2>Sources of data are increasing</h2>
<p>Newly available data sources have dramatically increased the quantity and variety of data available. Our expanding sensor-based society now includes wearables, smart home devices, drones, connected toys and automated travel. Sensors such as microphones, cameras, accelerometers, and temperature and motion sensors add to an ever-expanding list of our activities (data) that can be collected and commodified.</p>
<p>Commonly used wearables like smart watches and fitness trackers, for example, are becoming part of everyday health care practices. Our activities and biometric data can be stored and used to interpret our health and fitness status. </p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/280654/original/file-20190621-61767-1dk9v7o.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/280654/original/file-20190621-61767-1dk9v7o.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/280654/original/file-20190621-61767-1dk9v7o.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=401&fit=crop&dpr=1 600w, https://images.theconversation.com/files/280654/original/file-20190621-61767-1dk9v7o.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=401&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/280654/original/file-20190621-61767-1dk9v7o.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=401&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/280654/original/file-20190621-61767-1dk9v7o.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/280654/original/file-20190621-61767-1dk9v7o.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/280654/original/file-20190621-61767-1dk9v7o.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption"></span>
<span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/data-synchronization-health-book-between-smartwatch-188507768?src=w0_k1jIBWPA9dIjDdGeaTw-1-0&studio=1">Shutterstock</a></span>
</figcaption>
</figure>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/how-artificial-intelligence-systems-could-threaten-democracy-109698">How artificial intelligence systems could threaten democracy</a>
</strong>
</em>
</p>
<hr>
<p>This same data is of great value to health insurance providers. In the US, some insurance providers <a href="https://www.zdnet.com/article/smartwatch-data-collection-rush-raises-privacy-backlash-fears/">require a data feed</a> from the policyholder’s device in order to qualify for insurance cover. </p>
<p><a href="https://www.zionmarketresearch.com/report/smart-toys-market">Connected toys</a> are another rapidly growing market niche associated with surveillance capitalism. There are educational benefits from children playing with these toys, as well as the possibility of drawing children away from screens towards more physical, interactive and social play. But <a href="https://www.bbc.com/news/technology-42620717">major data breaches</a> around these toys have already occurred, marking children’s data as another valuable commodity.</p>
<p>In her latest book, <a href="https://www.theguardian.com/books/2019/feb/02/age-of-surveillance-capitalism-shoshana-zuboff-review">The Age of Surveillance Capitalism</a>, Zuboff suggests that our emerging sensor-based society will make surveillance capitalism more embedded and pervasive in our lives.</p>
<hr>
<p><em>Correction: This article has been updated to correct the number of searches conducted on Google each second. The correct number is 40,000.</em></p>
<p class="fine-print"><em><span>Donell Holloway receives funding from the Australian Research Council for a DISCOVERY project titled 'The Internet of Toys: Benefits and risks of connected toys for children'.</span></em></p>Companies scrutinise our online likes, dislikes, searches and purchases to produce data that can be used commercially. And it’s often done without us understanding the full extent of the surveillance.Donell Holloway, Senior research fellow, Edith Cowan UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1178892019-06-11T23:11:39Z2019-06-11T23:11:39ZWhy should we regulate social media giants? Because it’s 2019, Prime Minister Trudeau<figure><img src="https://images.theconversation.com/files/278614/original/file-20190610-52785-16nnnwf.jpg?ixlib=rb-1.1.0&rect=0%2C12%2C4288%2C2785&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Regulating the internet out of concern for citizens' privacy should be a key issue in the upcoming election.</span> <span class="attribution"><span class="source">Shutterstock</span></span></figcaption></figure><p>Political cartoonist Serge Chapleau summed it up neatly. <a href="http://plus.lapresse.ca/screens/c5740d9c-f19c-4bf6-9150-8d371183397c__7C___0.html">His April 27 cartoon</a> in Montréal’s <em>La Presse</em> showed Facebook founder Mark Zuckerberg wearing a T-shirt with a thumb representing Facebook’s “like” button transformed into a middle-finger salute.</p>
<p>Facebook’s cavalier attitude is palpable <a href="https://www.priv.gc.ca/en/opc-actions-and-decisions/investigations/investigations-into-businesses/2019/pipeda-2019-002/">in the report that Daniel Therrien, Canada’s privacy commissioner, released two days earlier</a>. “We asked Facebook to provide us with information,” wrote Therrien. “We are disappointed to note that many of our questions have remained unanswered or have not been satisfactorily answered.”</p>
<p>The commissioner was investigating the impact of the so-called <a href="https://www.cbc.ca/news/politics/cambridge-analytica-data-strategy-1.5054943">Cambridge Analytica scandal</a> on Canadians. His conclusion? Facebook “has waived its responsibility for the personal information under its control.”</p>
<figure class="align-center ">
<img alt="" src="https://images.theconversation.com/files/272868/original/file-20190506-103045-a57tef.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/272868/original/file-20190506-103045-a57tef.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=380&fit=crop&dpr=1 600w, https://images.theconversation.com/files/272868/original/file-20190506-103045-a57tef.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=380&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/272868/original/file-20190506-103045-a57tef.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=380&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/272868/original/file-20190506-103045-a57tef.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=478&fit=crop&dpr=1 754w, https://images.theconversation.com/files/272868/original/file-20190506-103045-a57tef.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=478&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/272868/original/file-20190506-103045-a57tef.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=478&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">Privacy Commissioner Daniel Therrien tells an April 25 news conference in Ottawa that Facebook has waived its responsibility for the personal information under its control.</span>
<span class="attribution"><span class="source">The Canadian Press/Adrian Wyld</span></span>
</figcaption>
</figure>
<h2>An impotent system</h2>
<p>What is also clear in this report is the impotence of the Office of the Privacy Commissioner. Of course, Canada has the <a href="https://laws-lois.justice.gc.ca/eng/acts/P-8.6/">Personal Information Protection and Electronic Documents Act</a>, but even if Therrien demonstrated that Facebook had violated the act, he doesn’t have the power to sanction the company. And Facebook knows that very well.</p>
<p>Personal data protection laws are no longer sufficient. Artificial intelligence, fuelled by considerable volumes of data (popularly known as Big Data), raises new, complex, constantly and rapidly evolving ethical, economic and political issues.</p>
<p>I’m not the only one who thinks so. So does the British government, in a report published in late April. The <a href="https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/793360/Online_Harms_White_Paper.pdf">Online Harms White Paper</a> is a particularly remarkable document. It highlights the Trudeau government’s prodigious indolence in addressing these issues. It also describes various types of harm caused by online activities in general, and digital social platforms in particular. These range from mismanagement of personal data to crimes such as child abuse, fraud and misinformation (which is my area of interest).</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/what-the-u-k-s-online-harms-white-paper-teaches-us-about-internet-regulation-115337">What the U.K.’s Online Harms white paper teaches us about internet regulation</a>
</strong>
</em>
</p>
<hr>
<h2>Taking the digital bull by the horns</h2>
<p>The report is remarkable because of the solutions it proposes — solutions that Canadian legislators must urgently consider. Digital technology is everywhere in today’s society, and regulating it is a crucial mission that requires the creation of a new independent body with legislative reach. I have identified four powers that are mentioned in the British report.</p>
<h3>1. Accountability through fines and prosecutions</h3>
<p>The <a href="https://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=CELEX:32016R0679">European Union’s General Data Protection Regulation</a>, adopted in 2016, already provides for fines of up to 20 million euros (CDN$30 million), or four per cent of a company’s worldwide annual turnover, whichever is higher. In the case of Facebook, this would represent more than CDN$3 billion.</p>
<p>The Online Harms report goes further by saying that business leaders should also be accountable in court if their company is liable for damage to British society. This is subject to future consultation, but mentioning it is revolutionary.</p>
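The two-part ceiling described above — a flat 20 million euros or four per cent of worldwide annual turnover, whichever is greater — can be sketched in a few lines of Python. This is a hypothetical illustration only; the turnover figures are invented placeholders, not any company’s actual accounts.

```python
# Sketch of the GDPR maximum-fine rule: the greater of a flat
# 20 million euro cap or 4% of worldwide annual turnover.
FLAT_CAP_EUR = 20_000_000    # flat ceiling in euros
TURNOVER_RATE = 0.04         # 4% of worldwide annual turnover

def max_gdpr_fine(annual_turnover_eur: float) -> float:
    """Return the maximum fine: whichever of the two ceilings is higher."""
    return max(FLAT_CAP_EUR, TURNOVER_RATE * annual_turnover_eur)

# For a small firm, the flat cap dominates:
print(max_gdpr_fine(100_000_000))      # 20 million euros
# For a platform with (hypothetically) 50 billion euros in turnover,
# the percentage ceiling dominates: 2 billion euros.
print(max_gdpr_fine(50_000_000_000))
```

For a firm with 100 million euros in turnover, four per cent is only 4 million euros, so the flat 20-million-euro cap applies; at 50 billion euros in turnover, the percentage ceiling reaches 2 billion euros — the order of magnitude the article cites for Facebook.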
<figure class="align-center ">
<img alt="" src="https://images.theconversation.com/files/272878/original/file-20190506-103057-dh8i1c.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/272878/original/file-20190506-103057-dh8i1c.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/272878/original/file-20190506-103057-dh8i1c.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/272878/original/file-20190506-103057-dh8i1c.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/272878/original/file-20190506-103057-dh8i1c.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/272878/original/file-20190506-103057-dh8i1c.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/272878/original/file-20190506-103057-dh8i1c.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">The Online Harms report recommends that the managers of a company responsible for harming British society should be held accountable in court.</span>
<span class="attribution"><span class="source">Shutterstock</span></span>
</figcaption>
</figure>
<h3>2. Disrupt the activities of delinquent companies</h3>
<p>I rubbed my eyes in amazement when I read the recommendation: “Disruption of business activities” in the British report. The United Kingdom government is considering reserving the right to interfere with the activities of online businesses that do not comply with the regulations. What form might these disruptions take? Block internet traffic to problematic services? This extreme measure would only be taken in the most serious cases where the safety of citizens is clearly at stake.</p>
<h3>3. Monitor algorithms</h3>
<p>As Therrien pointed out in his report, his British counterpart already has investigative powers under the <a href="http://www.legislation.gov.uk/ukpga/2018/12/pdfs/ukpga_20180012_en.pdf">Data Protection Act of 2018</a>. In the UK, investigators can obtain a warrant to examine what is actually done with collected data. The act gives the British Information Commissioner investigative powers that are very similar to those available to police officers.</p>
<p>Although these powers are already light years away from what is possible in Canada, the Online Harms report goes even further by proposing upstream monitoring of algorithms. It wants to require companies that use collected data to publish annual transparency reports. The regulator will need to be able to comprehend what is being done with citizens’ data and understand how the algorithms of the web giants work. Companies that refuse will be subject to penalties.</p>
<h3>4. Collaborate with researchers</h3>
<p>Finally, the British report welcomes the opening of some platforms to collaboration with the academic research community. However, in my view, it is not demanding enough in this crucial respect.</p>
<p>It is precisely one of the roles of research to study societal phenomena. As more and more of these phenomena occur in private online platforms, the law must require companies to open their windows for researchers to observe what’s happening inside.</p>
<p>Most platforms, like Google, Twitter or Facebook, have <a href="https://medium.com/@perrysetgo/what-exactly-is-an-api-69f36968a41f">application programming interfaces or APIs</a>. Researchers have been using them to study, for example, the <a href="https://www.aaai.org/ocs/index.php/ICWSM/ICWSM11/paper/viewFile/2847/3275">polarization of political discourse</a> or <a href="https://doi.org/10.1073/pnas.1517441113">the spread of misinformation</a>. </p>
<figure class="align-center ">
<img alt="" src="https://images.theconversation.com/files/272876/original/file-20190506-103045-dgr95c.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/272876/original/file-20190506-103045-dgr95c.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/272876/original/file-20190506-103045-dgr95c.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/272876/original/file-20190506-103045-dgr95c.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/272876/original/file-20190506-103045-dgr95c.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/272876/original/file-20190506-103045-dgr95c.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/272876/original/file-20190506-103045-dgr95c.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">Unlike Twitter, which has a fairly open API, Instagram and WhatsApp, both of which belong to Facebook, have nothing that can inform researchers.</span>
<span class="attribution"><span class="source">Shutterstock</span></span>
</figcaption>
</figure>
<p>But not all platforms have APIs. Twitter has a fairly open and data-rich API. But Instagram and WhatsApp, both of which belong to Facebook, have none that can be used by researchers. Not being able to understand how information flows through these platforms is a concern, and it’s in the public interest to understand the flow of information on digital social media.</p>
<p>In April 2018, the Association of Internet Researchers (AoIR) asked online platforms to <a href="https://aoir.org/facebook-shuts-the-gate-after-the-horse-has-bolted/">give researchers access to their data through APIs specifically dedicated to scientific research</a>. For the most part, the web giants have remained deaf to this petition. Here again, it is up to legislators to force their hands in the public interest.</p>
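As an illustration of the kind of basic analysis that such research access would enable, the sketch below aggregates share counts from a batch of posts. Everything here is invented for illustration — the field names and figures are hypothetical and no real platform API is queried — but it shows the sort of aggregation researchers use to gauge how widely content circulates.

```python
from collections import defaultdict

# Hypothetical sample of posts, standing in for records collected
# through a platform API; all fields and values are invented.
posts = [
    {"page": "PageA", "shares": 1200, "likes": 3400},
    {"page": "PageA", "shares": 150,  "likes": 600},
    {"page": "PageB", "shares": 2800, "likes": 900},
]

def shares_by_page(posts):
    """Tally total shares per publishing page - a basic measure of
    how widely each page's content circulates."""
    totals = defaultdict(int)
    for post in posts:
        totals[post["page"]] += post["shares"]
    return dict(totals)

print(shares_by_page(posts))  # {'PageA': 1350, 'PageB': 2800}
```

With a dedicated research API, the `posts` list would be replaced by records retrieved from the platform; without one, as the article notes, researchers cannot obtain this data at all.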
<p>I have a wish for 2020: that the Trudeau government enters the 21st century, that it stops cowering in front of the big tech companies like Google, Amazon, Facebook and Apple, and that it adopts laws that will finally allow Canadians to know and understand what these companies really do with their data.</p>
<p>This should be a top issue in the upcoming federal election campaign.</p>
<p class="fine-print"><em><span>Jean-Hugues Roy does not work for, consult, own shares in or receive funding from any organisation that would benefit from this article, and has disclosed no affiliations other than his research institution.</span></em></p>
<p><em>The UK Online Harms White Paper outlines possible internet regulation measures, and Canada would do well to study its approach.</em> – Jean-Hugues Roy, Professor, École des médias, Université du Québec à Montréal (UQAM)</p>
<h2>Friday essay: networked hatred - new technology and the rise of the right</h2>
<p><em>Published 2019-05-02</em></p>
<figure><img src="https://images.theconversation.com/files/271897/original/file-20190501-136797-my57y5.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Senators during the Senate Select Intelligence Committee's hearing on the social media influence in the 2016 U.S. elections in Washington November 2017. The graphic shows conflict at a rally that was created and promoted by fake Facebook accounts run by Russian trolls. </span> <span class="attribution"><span class="source">Shawn Thew/EPA</span></span></figcaption></figure><p><em>This is an edited extract of <a href="https://griffithreview.com/articles/networked-hatred-new-technology-rise-of-the-right/">an essay</a> in <a href="https://griffithreview.com/editions/the-new-disruptors/">The New Disruptors</a>, the 64th edition of Griffith Review.</em> </p>
<p>Every era is defined by its sustaining myths. Among ours is surely “disruption”. The book that seeded the mythology, Clayton Christensen’s <a href="https://en.wikipedia.org/wiki/The_Innovator%27s_Dilemma">The Innovator’s Dilemma</a>, is only a little more than 20 years old, yet its “technological disruption” thesis has become an article of faith for business and government, trafficked like narcotics from TEDx to trading floor to ministerial report.</p>
<p>Established companies often fail, so the thesis argues, not because they make bad decisions but because they make good ones. They stick with “sustaining technologies” because they are too successful to risk otherwise. Meanwhile, minnows with little to lose bring new technologies to market that initially may not be as good as seemingly entrenched “sustaining technologies”, but are cheaper and more accessible: think mainframe computers versus PCs, high-res CDs versus low-res MP3s, SLR cameras versus phone cameras, encyclopaedia versus Wikipedia. This is how Apple became a music company, Amazon became the world’s biggest bookstore and Facebook became your news feed.</p>
<p>But as with most mythologies there’s something a little too convenient about “disruption”. Christensen’s thesis emerged just as Silicon Valley venture capitalists – whose reverence for Christensen has helped make him a perennial keynote – were looking for new justifications for cutthroat business models and a rationale to gloss up their claims to “innovation”. </p>
<p>Behind their glitzy mission statements, many tech companies are little more than middlemen with a point-and-click front end: “intermediaries” in tech speak, “aggregators” of other people’s content that deliver products based on someone else’s property and/or labour to consumers, and then deliver those same consumers to advertisers.</p>
<p>The “technological disruption” mythology has provided cover for downsizing, lay-offs, the theft of intellectual property, the casualisation of labour and normalisation of precarity, the exploitation of free labour by users of digital platforms (as in when, say, Facebook users create content for free), the normalisation of totalitarian levels of surveillance, and what is no doubt the greatest misappropriation of personal information in human history.</p>
<p>As the historian Jill Lepore showed in a <a href="https://www.newyorker.com/magazine/2014/06/23/the-disruption-machine">devastating takedown</a> in The New Yorker in 2013, Christensen’s thesis is based on dodgy premises. Yet the “disruption” myth prospers. It sits all too comfortably alongside other myths of neoliberal times. As Christensen acknowledges, his theory reprises Joseph Schumpeter’s “<a href="https://en.wikipedia.org/wiki/Creative_destruction">creative destruction</a>” thesis, published in 1942 and since corralled by free-market ideologues to justify the wholesale destruction of jobs, industries and ways of life in the name of economic “efficiency”.</p>
<p>Now, to be clear, I’m not suggesting technological disruption doesn’t happen. What I am questioning is the magical transformative powers attributed to the process. The disruption myth ties our notions of progress to the concept of destruction. It does so under the technologically determinist assumption that such progress is inevitable and good, and skips past the possibility that as well as being liberating, technology can have devastating social impacts.</p>
<p>What happens, then, when the thing being “disrupted” is the fabric of democratic culture itself?</p>
<h2>Proto-fascism</h2>
<p>In mid 2018, I was walking through an Australian airport when I spotted several mini billboards for Facebook’s “Here Together” campaign. “Fake news is not our friend” said one. “Data misuse is not our friend” said another. The campaign was launched in the wake of the Cambridge Analytica scandal, in which more than 50 million people had their Facebook data improperly shared with the right-wing political consulting firm. This happened amid ongoing scandals about fake news and fake accounts on the platform. </p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/270335/original/file-20190423-15233-1mpfsit.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/270335/original/file-20190423-15233-1mpfsit.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/270335/original/file-20190423-15233-1mpfsit.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=406&fit=crop&dpr=1 600w, https://images.theconversation.com/files/270335/original/file-20190423-15233-1mpfsit.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=406&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/270335/original/file-20190423-15233-1mpfsit.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=406&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/270335/original/file-20190423-15233-1mpfsit.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=510&fit=crop&dpr=1 754w, https://images.theconversation.com/files/270335/original/file-20190423-15233-1mpfsit.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=510&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/270335/original/file-20190423-15233-1mpfsit.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=510&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">The founder and CEO of Facebook Mark Zuckerberg leaves the European Parliament in May 2018 after appearing before the European Parliament representatives to answer questions on data information breach by Cambridge Analytica and also how Facebook uses personal data in general.</span>
<span class="attribution"><span class="source">Stephanie Lecocq/EPA</span></span>
</figcaption>
</figure>
<p>At the time I had just spent four weeks trawling the Facebook pages of eight Australian far-right groups to gather data for a research project. I’ll spare you the full details, but suffice to say that Australia’s Islamic population isn’t popular with the far right. Nor are refugees, feminists or environmentalists.</p>
<p>As I read posts from the pages, two trends stood out. The first was their pointed incivility. Not only were they uncivil, they revelled in their incivility. Their attacks on perceived opponents were not merely intended to rebut or disagree, but to undermine their credibility and delegitimise their humanity using any possible means. Incivility was being used in lieu of debate as a weapon to shut people down. </p>
<p>Much of the material that I analysed met the formal criteria for hate speech. It was clearly intended to incite animosity and hatred against target groups, to intentionally inflict emotional distress, and to threaten or incite violence. It defamed entire groups and used slurs and insults in an attempt to marginalise and silence their perceived opponents. </p>
<p>Many posts advocated strong-arm tactics without due process to target minority groups. In these angry calls for the demolition of longstanding human rights conventions and the subversion of law to suit the interests of the dominant (white, male) group, a form of proto-fascism could be heard.</p>
<h2>Inspired by the US alt-right</h2>
<p>The second trend is that Australian far-right Facebook pages increasingly model themselves on the US alt-right. Whereas old-school, far-right groups stick to white nationalism, the sites I looked at mixed race, gender, sexuality and a smattering of science issues. This mirrors the way in which the US alt-right has built links between white supremacists and so-called “men’s rights” groups as well as a growing emphasis on climate science among the far right in the US and Europe. Figures associated with the alt-right such as Milo Yiannopoulos, Lauren Southern, Steve Bannon and Gavin McInnes are revered on the Facebook pages I surveyed. </p>
<p>News of their proposed Australian tours is greeted with glee and multiple repostings. Also common are alt-right memes such as “cultural Marxist”, “social justice warrior”, talk of “white decline” and the idea that the “white race” is being subject to a form of “genocide”, crowded out by minorities and multiculturalism, a myth long nurtured by US white supremacist <a href="https://www.splcenter.org/hatewatch/2017/06/07/bob-whitaker-author-racist-mantra-white-genocide-has-died">Bob Whitaker</a>. There was considerable support for absolute “freedom of speech”, an alt-right meme that functions as cover for open racism and the public spread of hatred. These memes were often intermingled with imagery featuring alt-right mascot Pepe the Frog and Donald Trump, on occasion imaginatively mashed up with images of Pauline Hanson or Sonia Kruger (celebrated for <a href="https://www.abc.net.au/news/2019-02-15/sonia-kruger-vilified-muslims-but-comments-not-racist/10817772">publicly questioning</a> Islam). </p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/271899/original/file-20190501-136803-1obeqsi.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/271899/original/file-20190501-136803-1obeqsi.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/271899/original/file-20190501-136803-1obeqsi.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=404&fit=crop&dpr=1 600w, https://images.theconversation.com/files/271899/original/file-20190501-136803-1obeqsi.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=404&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/271899/original/file-20190501-136803-1obeqsi.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=404&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/271899/original/file-20190501-136803-1obeqsi.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=508&fit=crop&dpr=1 754w, https://images.theconversation.com/files/271899/original/file-20190501-136803-1obeqsi.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=508&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/271899/original/file-20190501-136803-1obeqsi.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=508&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Sonia Kruger at the 2018 Logie Awards: she features in the occasional alt-right meme.</span>
<span class="attribution"><span class="source">Regi Varghese/AAP</span></span>
</figcaption>
</figure>
<p>The popularity of the sites was also notable. Between them they had over 400,000 followers. A much greater number of people are likely to have viewed the sites but not signed up. Most postings had been liked and/or shared hundreds of times, some thousands.</p>
<p>These sites are part of a global trend towards the networked industrialisation of hatred. Cambridge Analytica wanted those millions of Facebook profiles so they could target users with divisive messages to influence the 2016 Brexit vote and US election. According to Cambridge Analytica chief executive <a href="https://www.theguardian.com/uk-news/2018/mar/19/cambridge-analytica-execs-boast-dirty-tricks-honey-traps-elections">Alexander Nix</a>, “It sounds a dreadful thing to say, but these are things that don’t necessarily need to be true as long as they’re believed”. </p>
<p>A leading entrepreneur of industrialised hatred is Steve Bannon, a former Cambridge Analytica board member, former Trump campaign adviser and then White House strategist, and before that editor of the right-wing Breitbart News. Bannon has lately become a globetrotting activist for white nationalism, offering advice to far-right figures such as Marine Le Pen and Viktor Orbán, and parties such as Alternative für Deutschland (AfD) and Italy’s the League and Five Star Movement, as part of a project to build support for his idea of a far-right populist “<a href="https://www.bbc.com/news/world-europe-44926417">supergroup</a>” to win seats in the European Parliament. </p>
<p>Fox News has positioned itself at the forefront of developing the hate business model, via the work of commentators such as Glenn Beck and Ann Coulter. In Australia, Sky News “after dark” does similar work. The fostering of division has long been ingrained and normalised in tabloid news business models, as seen in <a href="https://www.theguardian.com/media/2018/aug/02/andrew-bolts-tidal-wave-of-immigrants-article-prompts-press-council-complaint">a column by Andrew Bolt</a> in August 2018 that singled out Jews, Chinese, Cambodians and Indians as part of a “tidal wave of immigrants that sweeps away what’s left of our national identity”. </p>
<p>Platforms such as Facebook and Twitter are also complicit. Conflict creates clicks. A few weeks after the Facebook “Here Together” campaign was over I checked the sites mentioned above again. All survived intact.</p>
<h2>Decline of the public sphere</h2>
<p>The hate business model uses incivility as a weapon to forestall public discussion about bigotry and to attack opponents. It is not enough to sow division; it must be done with unapologetic aggression and an open contempt for ideological enemies. The aim, ultimately, is to move norms of acceptable public discussion far to the right. </p>
<p>When Lauren Southern touched down in Australia in July 2018, she disembarked wearing a T-shirt emblazoned with the alt-right meme “it’s okay to be white”, a statement first promoted by white supremacists on 4chan to “trigger” progressives and to underpin the idea that whites are somehow under threat. The following month Australian parliamentarian Fraser Anning unapologetically gave his race-baiting first speech to parliament. Two months later Pauline Hanson tabled a motion that asked the Senate to acknowledge “anti-white racism and attacks on Western civilisation” and that “it’s okay to be white”, which was <a href="https://www.abc.net.au/news/2018-10-16/morrison-regrets-senators-backing-anti-white-racism-support/10381038">supported by</a> the Coalition government. </p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/270336/original/file-20190423-15218-1xj9m5.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/270336/original/file-20190423-15218-1xj9m5.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/270336/original/file-20190423-15218-1xj9m5.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/270336/original/file-20190423-15218-1xj9m5.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/270336/original/file-20190423-15218-1xj9m5.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/270336/original/file-20190423-15218-1xj9m5.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/270336/original/file-20190423-15218-1xj9m5.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/270336/original/file-20190423-15218-1xj9m5.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Canadian far-right activist Lauren Southern speaks during a ‘Rally for South Africa’ demonstration in Sydney, July 28, 2018 in support of South African farmers.</span>
<span class="attribution"><span class="source">Jeremy Ng/AAP</span></span>
</figcaption>
</figure>
<p>It took less than a year for a meme concocted to sow racialised division to find its way from an internet bulletin board to a parliamentary vote in a major Western democracy.</p>
<p>The electoral harvest of the new incivility can be seen in the Brexit vote and the rise of UKIP, the 2017 electoral success of AfD in Germany, the impact of Marine Le Pen’s Front National in the 2017 French presidential elections, and the 2018 formation of a populist government in Italy comprised of ministers from the League and Five Star Movement. </p>
<p>It can be seen, too, in Donald Trump’s 2016 electoral victory and presidency, and the persistent use of Indigenous peoples, Muslims and asylum seekers as political scapegoats in Australia. It can also be seen in the return of “strongman” politics: Duterte in the Philippines, Erdoğan in Turkey, Orbán in Hungary, Putin in Russia, Trump in the US. All fomented social division to gain power, and have since waged war on “elites” and attacked and diminished democratic institutions.</p>
<p>This industrialisation of hate exploits two revolutions. The first is the centrality of economic and human mobility to modern life and the anxieties it has created at a time of uncertainty, precarity and hardship for many. Growing public resentment at the failures of economic globalisation has created an audience ripe for harvest by the reactionary machinery of the culture wars. The second is the relative openness of the internet. Just as Bill Gates famously commented in the early optimistic days of the internet that it offered a “frictionless” environment for commerce, so it was no less available as a “frictionless” environment for the circulation of hate.</p>
<p>The white supremacist site Stormfront was one of the earliest online communities, even if, amid the hype for the economic and democratic possibilities of online communities, no one paid much attention at the time. As traditional intermediaries and the “managerial class” that presided over old-time democratic culture – editors, publishers, journalists, academics, civic leaders – were bypassed, and as the traditional journalism business model was weakened by the loss of print advertising revenues and the growing domination of click-based online models, so new space was created for figures such as Bannon, quick to understand the new media dynamics and leverage them to build audiences for publications such as Breitbart News with a business model based on division.</p>
<p>This hate-based business model enacts a sobering version of Christensen’s theories. Democracy is being “disrupted”. Low-quality, easily trafficked disinformation produced cheaply by “new entrants” is crowding out higher-quality information produced by established incumbents (journalists, civic leaders, academics), who accumulated power through older technologies and institutions (print media, broadcast television, the university). </p>
<p>The old expensive-to-maintain public sphere, supported by comprehensive education systems, robust journalism and informed critique, is giving way to a cheap-to-run, “near good” public sphere corralled in privatised platforms. But there is no magic attached to this process, only the malodorous waft of divisiveness, hatred and encroaching authoritarianism.</p>
<h2>What to defend?</h2>
<p>Right now, the story of the new incivility is only partway told. The reactionary right has not yet achieved its aims, and in Australia is currently on the defensive. But the struggle over the future of democracy is global and the forces of reaction are playing a long game. Power is gradually being ceded away from liberal democratic norms towards populism and proto-fascism. Should this trend continue, the prospects for freedom, social justice and the environmental viability of the planet are bleak. Those on the left who have long derided liberalism and been deeply suspicious of the Enlightenment culture that produced it are thus forced to make a difficult decision: what to defend?</p>
<p>In the 1930s, the Italian Marxist Antonio Gramsci famously wrote from his prison cell: “The crisis consists precisely in the fact that the old is dying and the new cannot be born; in this interregnum a great variety of morbid symptoms appear.” Morbid symptoms of our moment of interregnum are everywhere now. </p>
<p>Refugees of failed modernity roam the seas in leaky boats, hunt for gaps in fences on European borders, form caravans to walk the “<a href="https://www.sbs.com.au/news/migrant-caravan-enters-mexico-s-route-of-death">route of death</a>” through Mexico in the hope of getting to the US, rot in the asylum centres of Manus and Christmas Island, take indentured jobs sweeping floors in the penthouses of Singapore or walk girders on the construction sites of Abu Dhabi, searching for the wealth, freedom and security that Enlightenment did not give them. </p>
<p>They are met, when they get to those borders, with the forces of a new counter-Enlightenment: concrete walls and razor wire, white nationalism, militarised policing, curtailed human rights. Other refugees are within: casualties of deindustrialisation, casualisation and the winding back of welfare, who make up a new precariat.</p>
<p>Young people have felt the effects of this most brutally. Too often denied meaningful careers and kicked off the wealth-accumulation ladder, many hear the siren call of 4chan or are drawn to figures such as anti-“social justice warrior” celebrity psychologist Jordan Peterson, with his polemics against feminism and bestselling tips on how young men can recover their masculinity; or to reactionary activists like Southern, whose racist pitch to disaffected youth is summed up by the title of her book: <a href="https://www.goodreads.com/book/show/33587182-barbarians?from_search=true">Barbarians: How Baby Boomers, Immigrants, and Islam Screwed My Generation</a>.</p>
<p>Intellectual refugees are everywhere also. A displaced managerial elite searches for relevance: journalists with their Twitter accounts, academics with their online opinion pieces, public intellectuals declaiming at festivals and in the “serious” media. As the refugees of modernity spread out from its decaying outposts, so the logic of interregnum becomes almost irresistible.</p>
<p>The trouble with this logic is that people tend to fixate on what is ending rather than grapple with sparking new beginnings. Displaced progressive intellectuals in particular, like conservatives, routinely complain that the world is about to end; but unlike conservatives, they are frozen into inaction rather than galvanised into action. For those who believe in democracy there is pressing work to do. Enlightenment ideals of rationalism, civility, universalism, cosmopolitanism and the social contract must necessarily inform any rebirth.</p>
<p>But new conditions of possibility mean that to succeed they can’t be simply a redux of enlightenments past. There are as yet no emancipatory leaders for the age of disintermediated knowledge. For want of messiahs, it is incumbent on those who possess educational, cultural and intellectual resources to understand their new roles as toolmakers of ideas, talking across rather than down, embedded at every social and cultural level, who can bring a variety of skills to civic culture and start telling new, productive stories about how democracy works in an age of dystopic disruption.</p>
<p>Imagining such a democracy will no doubt require a rethink of the oppositions that structure our world. It is essential, now, to think commonality without universality, citizenship beyond public versus private, growth and profits without inequality and externalities, nationalism without chauvinist particularism, and to think cosmopolitanism and particularism, and collectivity and individuality, in tandem. That is, to refashion Enlightenment oppositions for new times.</p>
<p>First, though, a more practical reckoning is no doubt required.</p>
<p>The underlying issue with democracy is that national and global social contracts have failed to deliver. The democratic connections between work, freedom, citizenship, rights and fairness have been lost, jettisoned in the name of liberalising markets and making labour more flexible with a promised pay-off that for many never came. Privilege has replaced citizenship as the arbiter of human destiny. </p>
<p>There can be no revitalised democracy without skewing economics away from the rich, emptying their apologists from parliamentary hallways and offices, scrapping the divisive politics of scapegoating, acting on planetary health and remembering what mutuality and social generosity look like. Without the dynamics of inclusion and trust that derive from a healthy social contract, why would anyone bother with civility?</p>
<p class="fine-print"><em><span>Mark Davis does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>
In the face of digital disruption that threatens the very fabric of democratic culture we must refashion Enlightenment oppositions for new times.
Mark Davis, Associate Professor, Media and Communications, The University of Melbourne
Licensed as Creative Commons – attribution, no derivatives.
tag:theconversation.com,2011:article/114883 2019-04-23T15:59:42Z 2019-04-23T15:59:42Z
We’re all influenced by people in our networks – how to make this a force for good
<figure><img src="https://images.theconversation.com/files/269828/original/file-20190417-139110-v9j5io.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">
</span> <span class="attribution"><a class="source" href="https://www.shutterstock.com/download/confirm/735596113?size=huge_jpg">Alex Gontar/Shutterstock</a></span></figcaption></figure><p>As the social and economic divides between groups grow ever wider, and <a href="https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/662744/State_of_the_Nation_2017_-_Social_Mobility_in_Great_Britain.pdf">social mobility declines</a>, the bonds that tie people together, within families or communities, have weakened over time.</p>
<p>At the same time, democracy seems to be broken. Facebook has been taken to task over its role in the <a href="https://www.theguardian.com/news/series/cambridge-analytica-files">Cambridge Analytica scandal</a>, in which advanced statistical methods are believed to have been used to influence the results of both the US election and the Brexit referendum in 2016. Cambridge Analytica stands accused of harvesting people’s clicks, likes and preferences to steer Facebook users towards a particular view through targeted advertising, as a cacophony of fake news left them incapable of sorting true from false. </p>
<p>These companies, and others like them, exploit the fact that our behaviours are shaped by those around us – what they do, what they say, what they think, and what they share on social media – which, taken together, form the science of “social influence”. </p>
<p>Yes, things are bleak. But in our new book, <a href="https://books.google.co.uk/books?id=ZJyRDwAAQBAJ&pg=PT6&dq=social+butterflies&hl=en&sa=X&ved=0ahUKEwiwkpzxvNfhAhVbVBUIHaKhD9QQ6AEIMzAC#v=onepage&q&f=false">Social Butterflies</a>, we argue that there is cause for hope. </p>
<p>At the same time as the ills of the world were being placed at the door of Facebook and Cambridge Analytica in 2018, the BBC was filming a <a href="https://www.bbc.co.uk/programmes/b0bs43b7">documentary</a> in one of our old secondary schools in South Gloucestershire, charting the decline over time of the school’s budget and performance, and its effects on staff and students alike. </p>
<p>After the documentary aired, many former students took to social media, coming together not just to restore the morale of the school’s teachers, but to coordinate an effort to donate time and money to make a real difference to the school – something that could not have happened on this scale without coordination of people around the world over <a href="https://www.facebook.com/groups/1808297422632180/">Facebook</a>.</p>
<h2>Social nudges</h2>
<p>This shows that social influence – on social media or otherwise – can be a force for good as well as ill, but it takes work. This is the main conclusion of the work we have been carrying out with our former colleagues at the <a href="https://www.bi.team/">Behavioural Insights Team</a>, a social purpose company which spun out of the UK government in 2014 and is known as the world’s first “nudge” unit. We’ve been applying behavioural science to make policy more effective, coupled with rigorous, scientific testing. And now we’re finding that a particular class of nudges – social nudges – is showing promise. </p>
<p>Since the early work of the Behavioural Insights Team it’s been obvious that we are responsive to others. For example, tax repayment rates <a href="https://www.nber.org/papers/w20007.pdf">are increased</a> by telling people that nine out of ten people have already paid their taxes. Since then, we’ve learned more about social instincts, and how we can use them to build and boost social capital – the ties between us that help smooth our passage through life. </p>
<p>For example, one barrier to attending a selective university for young people from “non-traditional” backgrounds is that they don’t know anybody who went, and imagine the environment to be exclusive and exclusionary. Being unable to see ourselves, or anyone like us, in institutions like this is both a cause, and a consequence, of low social capital, and is one reason why young people with good grades from these backgrounds often don’t attend universities, or <a href="https://theconversation.com/bright-poor-students-less-likely-to-get-into-elite-universities-28560">attend less prestigious universities than they could</a>. </p>
<figure class="align-center ">
<img alt="" src="https://images.theconversation.com/files/269833/original/file-20190417-139091-lyihtj.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/269833/original/file-20190417-139091-lyihtj.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=401&fit=crop&dpr=1 600w, https://images.theconversation.com/files/269833/original/file-20190417-139091-lyihtj.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=401&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/269833/original/file-20190417-139091-lyihtj.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=401&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/269833/original/file-20190417-139091-lyihtj.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=504&fit=crop&dpr=1 754w, https://images.theconversation.com/files/269833/original/file-20190417-139091-lyihtj.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=504&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/269833/original/file-20190417-139091-lyihtj.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=504&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">Nudged into making an application.</span>
<span class="attribution"><a class="source" href="https://www.shutterstock.com/download/confirm/601347773?src=0PqO9vS1DYxdoIgjGNo0MA-1-12&size=medium_jpg">edella/Shutterstock</a></span>
</figcaption>
</figure>
<h2>Who you look up to</h2>
<p>To combat this, we worked with the Department for Education in the UK, and had two students from similar backgrounds write letters to 16-year-olds with good grades but who the data said were unlikely to attend university. We tested the impact of these letters using a randomised controlled trial – randomly choosing students at some schools to get the letters, and others not to. Just having a letter from a role model – someone like the recipient who had made the leap into that environment – <a href="https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/603737/Encouraging_people_into_university.pdf">increased the rate of applying</a> to, and accepting an offer from, a selective university by 34%. </p>
<p>Other studies have looked at the effect on pass rates when students nominate supporters from their network, such as a friend, family member or social worker. When these study supporters were <a href="https://www.bi.team/publications/retention-and-success-in-maths-and-english-a-practitioner-guide-to-applying-behavioural-insights/">sent messages prompting them</a> to encourage the learner who nominated them, it increased pass rates for people who had already failed their exams once by almost 50%, reduced college drop-outs by a quarter, and helped people make friends across social divides.</p>
<p>It’s not just in education that these “social” levers can have an effect. <a href="https://jamanetwork.com/journals/jama/article-abstract/2553448">Research</a> has shown that wearable fitness tracking devices don’t really do very much to increase activity. However, <a href="http://www.behaviouralinsights.co.uk/wp-content/uploads/2016/09/BIT-Update-Report-2015-16.pdf">we used</a> a combination of technology and social influence to get people moving, by setting them up in competition with other teams from the same company – and telling them how many steps they’d need to overtake their rivals. It worked – to the tune of an 8% increase in steps – but the effect was largest for the people who were the least active to begin with, and needed it most.</p>
<p>These are just a few examples of what we’re beginning to see as policy starts to embrace the opportunity posed by our social nature. The most prominent uses of social influence to date may have been negative, but the future is bright.</p>
<p class="fine-print"><em><span>The research described in this piece was conducted by the Behavioural Insights Team, at which Michael Sanders was Chief Scientist and Susannah Hume was Principal Research Advisor and Head of Skills. The research was funded by a combination of government and philanthropic sources.</span></em></p>
Social media manipulation is tearing societies apart – but it can help put us back together again.
Michael Sanders, Reader in Public Policy, King's College London
Susannah Hume, PhD Candidate, King's College London
Licensed as Creative Commons – attribution, no derivatives.
tag:theconversation.com,2011:article/115368 2019-04-15T01:00:45Z 2019-04-15T01:00:45Z
Digital campaigning on sites like Facebook is unlikely to swing the election
<figure><img src="https://images.theconversation.com/files/268917/original/file-20190412-44802-mem06u.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Voters are active on social media platforms, such as Facebook and Instagram, so that’s where the parties need to be.</span> <span class="attribution"><a class="source" href="https://www.shutterstock.com/download/confirm/624905726?src=MOL3K4RzaAWBwovpbSFBWQ-1-0&size=huge_jpg">Shutterstock</a></span></figcaption></figure><p>With the federal election now officially underway, commentators have begun to consider not only the techniques parties and candidates will use to persuade voters, but also any potential threats to the integrity of the election.</p>
<p>Invariably, this discussion leads straight to digital.</p>
<p>In the aftermath of the 2016 United States presidential election, the coverage of digital campaigning has been unparalleled. But this coverage has done very little to improve understanding of the key issues confronting our democracies as a result of the continued rise of digital modes of campaigning.</p>
<p>Some degree of confusion is understandable since digital campaigning is opaque – especially in Australia. We have very little information on what political parties or third-party campaigners are spending their money on, some of which comes from taxpayers. But the hysteria around digital is, for the most part, unfounded.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/chinese-social-media-platform-wechat-could-be-a-key-battleground-in-the-federal-election-113925">Chinese social media platform WeChat could be a key battleground in the federal election</a>
</strong>
</em>
</p>
<hr>
<h2>Why parties use digital media</h2>
<p>In any attempt to better understand digital, it’s useful to consider why political parties and other campaigners are using it as part of their election strategies. The reasons are relatively straightforward.</p>
<p>The media landscape is fragmented. Voters are active on social media platforms, such as Facebook and Instagram, so that’s where the parties need to be.</p>
<p>Compared to the cost of advertising on television, radio or in print, digital advertising is very affordable.</p>
<p>Platforms like Facebook offer <a href="https://partners.livechatinc.com/blog/facebook-custom-lookalike-audience/">services</a> that give campaigners a relatively straightforward way to segment voters. Campaigners can use these tools to micro-target them with <a href="https://www.wired.com/story/how-trump-conquered-facebookwithout-russian-ads/">tailored messaging</a>.</p>
<h2>Voting, persuasion and mobilisation</h2>
<p>While there is certainly more research required into digital campaigning, there is no scholarly study I know of that suggests advertising online – including micro-targeted messaging – has the effect that it is often <a href="https://www.wired.com/2016/11/facebook-won-trump-election-not-just-fake-news/">claimed</a> to <a href="https://www.vox.com/policy-and-politics/2017/10/16/15657512/cambridge-analytica-facebook-alexander-nix-christopher-wylie">have</a>. </p>
<p>What we know is that digital messaging can have a <a href="https://journals.sagepub.com/doi/abs/10.1177/1354068815605304">small but significant effect</a> on <a href="https://www.tandfonline.com/doi/full/10.1080/10584609.2018.1548530">mobilisation</a>, that there are concerns about how it could be used to <a href="https://www.bloomberg.com/news/articles/2016-10-27/inside-the-trump-bunker-with-12-days-to-go">demobilise</a> voters, and that it is an effective way to <a href="https://www.politico.com/story/2018/09/30/democrats-midterms-money-donors-small-853856">fundraise</a> and <a href="https://techcrunch.com/2018/10/29/a-digital-revolution-is-reshaping-democratic-campaigns/">organise</a>. But its ability to independently <a href="https://www.wired.com/story/viral-political-ads-not-as-persuasive-as-you-think/">persuade voters</a> to change their <a href="https://www.cambridge.org/core/journals/american-political-science-review/article/minimal-persuasive-effects-of-campaign-contact-in-general-elections-evidence-from-49-field-experiments/753665A313C4AB433DBF7110299B7433">votes</a> is estimated to be <a href="http://people.umass.edu/schaffne/hersh_schaffner_jop.pdf">close</a> to <a href="https://www.vox.com/policy-and-politics/2017/9/28/16367580/campaigning-doesnt-work-general-election-study-kalla-broockman">zero</a>.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/australian-political-journalists-might-be-part-of-a-canberra-bubble-but-they-engage-the-public-too-114084">Australian political journalists might be part of a ‘Canberra bubble’, but they engage the public too</a>
</strong>
</em>
</p>
<hr>
<p>The exaggeration and lack of clarity around digital is problematic because there is almost no evidence to support many of the claims made. This type of technology fetishism also implies that voters are easily manipulated, when there is little evidence of this.</p>
<p>While it might help some commentators to rationalise unexpected election results, a more fruitful endeavour than blaming technology would be to try to understand why voters are attracted to various parties or candidates, such as <a href="https://www.amazon.com/Identity-Crisis-Presidential-Campaign-Meaning/dp/0691174199">Trump</a> in the US.</p>
<p>Digital campaigning is not a magic bullet, so commentators need to stop treating it as if it is. Parties hope it helps them in their persuasion efforts, but this is through layering their messages across as many mediums as possible, and using the network effect that social media provides.</p>
<h2>Data privacy and foreign interference</h2>
<p>The two clear and obvious dangers related to digital are data privacy and foreign meddling. We should not accept that our data are shared widely as a result of some box we ticked online. And we should have greater control over how our data are used, and who they are sold to.</p>
<p>An obvious starting point in Australia is questioning whether parties should continue to be exempt from privacy legislation. Research suggests that a <a href="https://ses.library.usyd.edu.au/bitstream/2123/17587/7/USYDDigitalRightsAustraliareport.pdf">majority of voters</a> see a distinction between commercial entities advertising to us online compared to parties and other campaigners.</p>
<p>We also need to take some personal responsibility, since many of us do not always take our digital footprint as seriously as we should. It matters, and we need to educate ourselves on this.</p>
<p>The more vexing issue is that of <a href="https://www.aph.gov.au/Parliamentary_Business/Committees/Joint/Electoral_Matters/2016Election/2016_election_report/section?id=committees%2freportjnt%2f024085%2f26618">foreign interference</a>. One of the first things we need to recognise is that it is unlikely this type of meddling online would independently turn an election.</p>
<p>This does not mean we should accept this behaviour, but changing election results is just one of the goals these actors have. Increasing polarisation and contributing to long-term social divisions is part of the <a href="https://which-50.com/cover-story-in-the-lead-up-to-this-years-election-digital-discord-is-brewing-in-australia/">broader strategy.</a></p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/australia-should-strengthen-its-privacy-laws-and-remove-exemptions-for-politicians-93717">Australia should strengthen its privacy laws and remove exemptions for politicians</a>
</strong>
</em>
</p>
<hr>
<h2>The digital battleground</h2>
<p>As the 2019 campaign unfolds, we should remember that, while digital matters, there is no evidence it has an independent election-changing effect.</p>
<p>Australians should be most concerned with how our data are being used and sold, and about any attempts to meddle in our elections by state and non-state actors.</p>
<p>The current regulatory environment fails to meet community standards. More can and should be done to protect us and our democracy.</p>
<hr>
<p><em>This article has been co-published with <a href="https://lighthouse.mq.edu.au/">The Lighthouse</a>, Macquarie University’s multimedia news platform.</em></p>
<p class="fine-print"><em><span>Glenn Kefford receives funding from the Australian Research Council. Grant number DE190100210.</span></em></p>
After the 2016 US election and ensuing Cambridge Analytica scandal, there was a lot of scaremongering around digital election campaigning. But this hysteria is, for the most part, unfounded.
Glenn Kefford, Senior Lecturer, Department of Modern History, Politics and International Relations, Macquarie University
Licensed as Creative Commons – attribution, no derivatives.