tag:theconversation.com,2011:/us/topics/data-consent-54249/articlesData consent – The Conversation2022-09-20T20:19:45Ztag:theconversation.com,2011:article/1907582022-09-20T20:19:45Z2022-09-20T20:19:45ZThis law makes it illegal for companies to collect third-party data to profile you. But they do anyway<figure><img src="https://images.theconversation.com/files/485463/original/file-20220920-875-n1syu1.jpeg?ixlib=rb-1.1.0&rect=57%2C24%2C5406%2C3612&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">
</span> <span class="attribution"><span class="source">Unsplash</span>, <a class="license" href="http://creativecommons.org/licenses/by-sa/4.0/">CC BY-SA</a></span></figcaption></figure><p>A little-known provision of the Privacy Act makes it illegal for many companies in Australia to buy or exchange consumers’ personal data for profiling or targeting purposes. It’s almost never enforced. In a <a href="https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4224653">research paper</a> published today, I argue that needs to change. </p>
<p>“Data enrichment” is the intrusive practice of companies going behind our backs to “fill in the gaps” of the information we provide. </p>
<p>When you purchase a product or service from a company, fill out an online form, or sign up for a newsletter, you might provide only the necessary data such as your name, email, delivery address and/or payment information.</p>
<p>That company may then turn to other retailers or <a href="https://www.oracle.com/au/cx/advertising/data-enrichment-measurement/#data-enrichment">data brokers</a> to purchase or exchange extra data about you. This could include your age, family, health, habits and more. </p>
<p>This allows them to build a more detailed individual profile on you, which helps them predict your behaviour and more precisely target you with ads. </p>
<p>For almost ten years, there has been a law in Australia that makes this kind of data enrichment illegal if a company can “reasonably and practicably” request that information directly from the consumer. And at least <a href="https://consultations.ag.gov.au/rights-and-protections/privacy-act-review-discussion-paper/consultation/view_respondent?_b_index=60&uuId=926016195">one major data broker</a> has asked the government to “remove” this law. </p>
<p>The burning question is: why is there not a single published case of this law being enforced against companies “enriching” customer data for profiling and targeting purposes? </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/its-time-for-third-party-data-brokers-to-emerge-from-the-shadows-94298">It's time for third-party data brokers to emerge from the shadows</a>
</strong>
</em>
</p>
<hr>
<h2>Data collection ‘only from the individual’</h2>
<p>The relevant law is Australian Privacy Principle 3.6 and is part of the federal <a href="https://www.legislation.gov.au/Details/C2022C00199">Privacy Act</a>. It applies to most organisations with annual turnover above A$3 million, as well as smaller businesses that trade in personal data. </p>
<p>The law says such organisations:</p>
<blockquote>
<p>must collect personal information about an individual only from the individual […] unless it is unreasonable or impracticable to do so.</p>
</blockquote>
<p>This “direct collection rule” protects individuals’ privacy by allowing them some control over information collected about them, and avoiding a combination of data sources that could reveal sensitive information about their vulnerabilities. </p>
<p>But this rule has received almost no attention. There’s only one published determination of the federal privacy regulator on it, and that was against the <a href="https://www.austlii.edu.au/cgi-bin/viewdoc/au/cases/cth/AICmr/2020/69.html">Australian Defence Force</a> in a different context.</p>
<p>According to Australian Privacy Principle 3.6, it’s only legal for an organisation to collect personal information from a third party if it would be “unreasonable or impracticable” to collect that information directly from the individual. </p>
<p>This exception was intended to apply to <a href="https://www.oaic.gov.au/privacy/australian-privacy-principles-guidelines/chapter-3-app-3-collection-of-solicited-personal-information#collecting-directly-from-the-individual">limited situations</a>, such as when:</p>
<ul>
<li>the individual is being investigated for some wrongdoing</li>
<li>the individual’s address needs to be updated for delivery of legal or official documents. </li>
</ul>
<p>The exception shouldn’t apply simply because a company wants to collect extra information for profiling and targeting, but realises the customer would probably refuse to provide it.</p>
<h2>Who’s bypassing customers for third-party data?</h2>
<p>Aside from data brokers, companies also exchange information with each other about their respective customers to get extra information on customers’ lives. This is often referred to as “data matching” or “data partnerships”.</p>
<p>Companies tend to be very vague about who they share information with, and who they get information from. So we don’t know for certain who’s buying data-enrichment services from data brokers, or “matching” customer data. </p>
<p>Major companies such as <a href="https://www.amazon.com.au/gp/help/customer/display.html?nodeId=202075050&ref_=footer_iba">Amazon Australia</a>, <a href="https://www.ebay.com.au/help/policies/member-behaviour-policies/user-privacy-notice-privacy-policy?id=4260&mkevt=1&mkcid=1&mkrid=705-53470-19255-0&campid=5337590774&customid=&toolid=10001#section4">eBay Australia</a>, <a href="https://www.facebook.com/privacy/policy/?subpage=1.subpage.4-InformationFromPartnersVendors">Meta</a> (Facebook), <a href="https://www.viacomcbsprivacy.com/en/policy">10Play Viacom</a> and <a href="https://twitter.com/en/privacy#twitter-privacy-1">Twitter</a> include terms in the fine print of their privacy policies that state they collect personal information from third parties, including demographic details and/or interests.</p>
<p><a href="https://policies.google.com/privacy?hl=en-US#infocollect">Google</a>, <a href="https://preferences.news.com.au/privacy">News Corp</a>, <a href="https://www.sevenwestmedia.com.au/privacy-policies/privacy">Seven</a>, <a href="https://login.nine.com.au/privacy?client_id=smh">Nine</a> and others also say they collect personal information from third parties, but are more vague about the nature of that information.</p>
<p>These privacy policies don’t explain why it would be unreasonable or impracticable to collect that information directly from customers. </p>
<h2>Consumer ‘consent’ is not an exception</h2>
<p>Some companies may try to justify going behind customers’ backs to collect data by pointing to an obscure term in their privacy policy that mentions they collect personal information from third parties. Or by noting that the company <em>disclosing</em> the data has a privacy policy term about sharing data with “trusted data partners”.</p>
<p>But even if this amounts to consumer “consent” under the relatively weak standards for consent in our current privacy law, this is not an exception to the direct collection rule. </p>
<p>The law allows a “consent” exception for government agencies under a separate part of the direct collection rule, but <em>not</em> for private organisations. </p>
<h2>Data enrichment involves personal information</h2>
<p>Many companies with third-party data collection terms in their privacy policies acknowledge this is personal information. But some may argue the collected data isn’t “personal information” under the Privacy Act, so the direct collection rule doesn’t apply.</p>
<p>Companies often exchange information about an individual without using the individual’s legal name or email. Instead they may use a unique advertising identifier for that individual, or <a href="https://help.abc.net.au/hc/en-us/articles/4402890310671">“hash” the email address</a> to turn it into a unique string of numbers and letters. </p>
<p>They essentially allocate a “code name” to the consumer. So the companies can exchange information that can be linked to the individual, yet say this information wasn’t connected to their actual name or email. </p>
<p>However, this information should still be treated as personal information because it can be linked back to the individual when combined with other <a href="https://www.austlii.edu.au/cgi-bin/viewdoc/au/cases/cth/FCAFC/2017/4.html">information about them</a>. </p>
<h2>At least one major data broker is against it</h2>
<p>Data broker <a href="https://www.experian.com.au/business/solutions/audience-targeting/digital-solutions-sell-side/digital-audiences-ss">Experian Australia</a> has asked the government to “remove” Australian Privacy Principle 3.6 “altogether”. In its <a href="https://consultations.ag.gov.au/rights-and-protections/privacy-act-review-discussion-paper/consultation/view_respondent?_b_index=60&uuId=926016195">submission</a> to the Privacy Act Review in January, Experian argued:</p>
<blockquote>
<p>It is outdated and does not fit well with modern data uses.</p>
</blockquote>
<p>Others who profit from data enrichment or data matching would probably agree, but prefer to let sleeping dogs lie.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/485485/original/file-20220920-14-p8l88p.png?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="A screenshot shows six different categories of consumer data offered by Experian." src="https://images.theconversation.com/files/485485/original/file-20220920-14-p8l88p.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/485485/original/file-20220920-14-p8l88p.png?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=369&fit=crop&dpr=1 600w, https://images.theconversation.com/files/485485/original/file-20220920-14-p8l88p.png?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=369&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/485485/original/file-20220920-14-p8l88p.png?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=369&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/485485/original/file-20220920-14-p8l88p.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=463&fit=crop&dpr=1 754w, https://images.theconversation.com/files/485485/original/file-20220920-14-p8l88p.png?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=463&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/485485/original/file-20220920-14-p8l88p.png?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=463&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">On its website, Experian claims to offer a ‘combination of demographic, geographic, financial and market research data - both online and offline’.</span>
<span class="attribution"><span class="source">Screenshot/Experian</span></span>
</figcaption>
</figure>
<p>Experian argued the law favours large companies with direct access to lots of customers and opportunities to pool data collected from across their own corporate group. It said companies with access to fewer consumers and less data would be disadvantaged if they can’t purchase data from brokers. </p>
<p>But the fact that some digital platforms impose extensive personal data collection on customers supports the case for stronger privacy laws. It doesn’t mean there should be a data free-for-all. </p>
<h2>Our privacy regulator should take action</h2>
<p>It has been three years since the consumer watchdog recommended <a href="https://www.accc.gov.au/system/files/Digital%20platforms%20inquiry%20-%20final%20report.pdf">major reforms</a> to our privacy laws to reduce the disadvantages consumers suffer from invasive data practices. These reforms are probably still years away, if they eventuate at all.</p>
<p>The direct collection rule is that rare thing: an existing Australian privacy law that favours consumers. The privacy regulator should prioritise its enforcement for the benefit of consumers.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/amazon-just-took-over-a-primary-healthcare-company-for-a-lot-of-money-should-we-be-worried-187627">Amazon just took over a primary healthcare company for a lot of money. Should we be worried?</a>
</strong>
</em>
</p>
<hr>
<img src="https://counter.theconversation.com/content/190758/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Katharine Kemp receives funding from The Allens Hub for Technology, Law and Innovation. She is a Member of the Advisory Board of the Future of Finance Initiative in India, and the Australian Privacy Foundation.</span></em></p>The terms of the Australian Privacy Principle 3.6 are quite clear. So why is there not a single published case of this law being enforced?Katharine Kemp, Senior Lecturer, Faculty of Law & Justice, UNSW, UNSW SydneyLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1886452022-08-15T02:48:48Z2022-08-15T02:48:48ZInstagram and Facebook are stalking you on websites accessed through their apps. What can you do about it?<figure><img src="https://images.theconversation.com/files/478886/original/file-20220812-23-5q8caq.jpeg?ixlib=rb-1.1.0&rect=0%2C0%2C5472%2C3645&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">
</span> <span class="attribution"><span class="source">Glen Carrie/Unsplash</span></span></figcaption></figure><p>Social media platforms have had some bad <a href="https://theconversation.com/concerns-over-tiktok-feeding-user-data-to-beijing-are-back-and-theres-good-evidence-to-support-them-186211">press</a> in recent times, largely prompted by the vast extent of their data collection. Now Meta, the parent company of Facebook and Instagram, has upped the ante. </p>
<p>Not content with following every move you make on its apps, Meta has reportedly devised a way to also know everything you do in external websites accessed <em>through</em> its apps. Why is it going to such lengths? And is there a way to avoid this surveillance?</p>
<h2>‘Injecting’ code to follow you</h2>
<p>Meta has a custom in-app browser that opens inside Facebook and Instagram, handling any website you click through to from either app.</p>
<p>Now ex-Google engineer and privacy researcher Felix Krause has discovered this proprietary browser has additional program code inserted into it. Krause developed a tool that <a href="https://krausefx.com/blog/ios-privacy-instagram-and-facebook-can-track-anything-you-do-on-any-website-in-their-in-app-browser?utm_source=tldrnewsletter">found</a> Instagram and Facebook added up to 18 lines of code to websites visited through Meta’s in-app browsers. </p>
<p>This “code injection” enables user tracking and overrides tracking restrictions that browsers such as Chrome and Safari have in place. It allows Meta to collect sensitive user information, including “every button and link tapped, text selections, screenshots, as well as any form inputs, like passwords, addresses and credit card numbers”.</p>
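As an illustration of the general technique (not Meta’s actual implementation), an in-app browser that renders pages it has fetched can splice a script tag into every page before display. The `tracker.example` URL below is made up for demonstration; only the `pcm.js` filename comes from the published findings:

```python
# Illustrative only: a toy "in-app browser" that splices a tracking
# script into every page before rendering it. (Not Meta's actual code.)
TRACKER = '<script src="https://tracker.example/pcm.js"></script>'

def inject_tracker(page_html: str) -> str:
    """Insert the tracking script just before </body>, mimicking how an
    in-app browser can modify third-party pages before displaying them."""
    if "</body>" in page_html:
        return page_html.replace("</body>", TRACKER + "</body>", 1)
    return page_html + TRACKER  # fallback: append to pages without a body tag

page = "<html><body><h1>Checkout</h1></body></html>"
print(inject_tracker(page))
```

Because the browser controls the page before the user sees it, the injected script runs with full access to that page, which is why it can observe taps, text selections and form inputs.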
<p>Krause published his <a href="https://krausefx.com/blog/ios-privacy-instagram-and-facebook-can-track-anything-you-do-on-any-website-in-their-in-app-browser?utm_source=tldrnewsletter">findings</a> online on August 10, including samples of the <a href="https://connect.facebook.net/en_US/pcm.js">actual code</a>.</p>
<p>In response, Meta has said it isn’t doing anything users didn’t consent to. A Meta spokesperson said:</p>
<blockquote>
<p>We intentionally developed this code to honour people’s [Ask to track] choices on our platforms […] The code allows us to aggregate user data before using it for targeted advertising or measurement purposes.</p>
</blockquote>
<p>The “code” in question is <a href="https://connect.facebook.net/en_US/pcm.js">pcm.js</a> – a script that acts to aggregate a user’s browsing activities. Meta says the script is inserted based on whether users have given consent – and information gained is used only for advertising purposes. </p>
<p>So is it acting ethically? Well, the company has done due diligence by informing users of its intention to collect <a href="https://www.facebook.com/privacy/policy">an expanded range</a> of data. However, it stopped short of making clear what the full implications of doing so would be. </p>
<p>People might give their consent to tracking in a more general sense, but “informed” consent implies full knowledge of the possible consequences. And, in this case, users were not explicitly made aware their activities on other sites could be followed through a code injection. </p>
<h2>Why is Meta doing this?</h2>
<p>Data are the central commodity of Meta’s business model. There is astronomical value in the amount of data Meta can collect by injecting a tracking code into third-party websites opened through the Instagram and Facebook apps.</p>
<p>At the same time, Meta’s business model is being threatened – and events from the recent past can help shed light on why it’s doing this in the first place.</p>
<p>It boils down to the fact that Apple (which owns the Safari browser), Google (which owns Chrome) and the Firefox browser are all actively placing restrictions on Meta’s ability to collect data. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/stuff-up-or-conspiracy-whistleblowers-claim-facebook-deliberately-let-important-non-news-pages-go-down-in-news-blackout-182673">Stuff-up or conspiracy? Whistleblowers claim Facebook deliberately let important non-news pages go down in news blackout</a>
</strong>
</em>
</p>
<hr>
<p>Last year, Apple’s iOS 14.5 update came alongside a <a href="https://www.apple.com/au/privacy/control/">requirement</a> that all apps hosted on the Apple app store must get users’ explicit permission to track and collect their data across apps owned by other companies.</p>
<p>Meta has <a href="https://krausefx.com/blog/ios-privacy-instagram-and-facebook-can-track-anything-you-do-on-any-website-in-their-in-app-browser?utm_source=tldrnewsletter">publicly</a> said this single iPhone alert is costing its Facebook business US$10 billion each year. </p>
<p>Apple’s Safari browser also applies a default setting to block all third-party “cookies”. These are small pieces of <a href="https://www.trendmicro.com/vinfo/us/security/definition/cookies">tracking data</a> that websites deposit in your browser and that let their owner recognise you on later visits. </p>
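The cross-site problem with third-party cookies can be sketched with a toy model: two unrelated sites embed content from the same tracker, so the tracker’s single cookie identifies one visitor on both. All domains here are hypothetical:

```python
# A toy model of third-party cookies: both news.example and shop.example
# embed content from tracker.example, so the tracker's single cookie
# identifies the same visitor on both sites. (All domains are made up.)
import uuid

tracker_cookies: dict[str, str] = {}  # cookie jar held by tracker.example

def visit(first_party_site: str, browser_id: str) -> str:
    """Simulate a page view where the embedded tracker reads or sets its cookie."""
    if browser_id not in tracker_cookies:
        # First visit anywhere: the tracker issues a Set-Cookie with a fresh ID.
        tracker_cookies[browser_id] = uuid.uuid4().hex
    visitor = tracker_cookies[browser_id]
    return f"{first_party_site} -> tracker sees visitor {visitor}"

a = visit("news.example", "my-browser")
b = visit("shop.example", "my-browser")
print(a)
print(b)  # same visitor ID on a different site: cross-site tracking
```

Blocking third-party cookies breaks exactly this link: each site the tracker is embedded on would see a different (or no) cookie, so the two visits could no longer be joined.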
<p>Google will also soon be phasing out third-party cookies. And Firefox recently announced “total cookie protection” to prevent cross-site tracking. </p>
<p>In other words, Meta is being flanked by browsers introducing restrictions on extensive user data tracking. Its response was to create its own browser that circumvents these restrictions. </p>
<h2>How can I protect myself?</h2>
<p>On the bright side, users concerned about privacy do have some options. </p>
<p>The easiest way to stop Meta tracking your external activities through its in-app browser is to simply not use it; make sure you’re opening web pages in a trusted browser of choice such as Safari, Chrome or Firefox (via the screen shown below).</p>
<figure class="align-right ">
<img alt="" src="https://images.theconversation.com/files/478879/original/file-20220812-20-6je7m8.png?ixlib=rb-1.1.0&q=45&auto=format&w=237&fit=clip" srcset="https://images.theconversation.com/files/478879/original/file-20220812-20-6je7m8.png?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=548&fit=crop&dpr=1 600w, https://images.theconversation.com/files/478879/original/file-20220812-20-6je7m8.png?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=548&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/478879/original/file-20220812-20-6je7m8.png?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=548&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/478879/original/file-20220812-20-6je7m8.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=689&fit=crop&dpr=1 754w, https://images.theconversation.com/files/478879/original/file-20220812-20-6je7m8.png?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=689&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/478879/original/file-20220812-20-6je7m8.png?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=689&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">Click ‘open in browser’ to open a website in a trusted browser such as Safari.</span>
<span class="attribution"><span class="source">screenshot</span></span>
</figcaption>
</figure>
<p>If you can’t find this screen option, you can manually copy and paste the web address into a trusted browser. </p>
<p>Another option is to access the social media platforms via a browser. So instead of using the Instagram or Facebook app, visit the sites by entering their URL into your trusted browser’s search bar. This should also solve the tracking problem.</p>
<p>I’m not suggesting you ditch Facebook or Instagram altogether. But we should all be aware of how our online movements and usage patterns may be carefully recorded and used in ways we’re not told about. Remember: on the internet, if the service is free, you’re probably the product. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/is-it-even-possible-to-regulate-facebook-effectively-time-and-again-attempts-have-led-to-the-same-outcome-169947">Is it even possible to regulate Facebook effectively? Time and again, attempts have led to the same outcome</a>
</strong>
</em>
</p>
<hr>
<img src="https://counter.theconversation.com/content/188645/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>David Tuffley does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>A privacy researcher found a ‘code injection’ that allows Instagram and Facebook to collect sensitive user data, including passwords and credit card details.David Tuffley, Senior Lecturer in Applied Ethics & CyberSecurity, Griffith UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1485192020-10-21T04:51:58Z2020-10-21T04:51:58ZThe US is taking on Google in a huge antitrust case. It could change the face of online search<p>The US Department of Justice (DoJ) has filed an <a href="https://www.justice.gov/opa/pr/justice-department-sues-monopolist-google-violating-antitrust-laws">antitrust lawsuit against Google</a> for unlawful monopolisation. The department says Google’s conduct harms competition and consumers, and reduces the ability of new innovative companies to develop and compete.</p>
<p>It’s the most important monopolisation case in the US since 1998, when the DoJ brought <a href="https://www.justice.gov/atr/us-v-microsoft-courts-findings-fact">proceedings against Microsoft</a>. </p>
<p>It’s possible the current proceedings, given their timing, are politically motivated. US President Donald Trump and other Republicans have repeatedly <a href="https://www.washingtonpost.com/technology/2019/08/06/trump-accuses-google-anti-conservative-bias-without-providing-evidence/">voiced</a> the <a href="https://www.theverge.com/2019/8/6/20756734/trump-google-anti-conservative-bias-claims-tweets">view</a> that Google is prejudiced against conservative beliefs. </p>
<p>But even if Democratic candidate Joe Biden is elected president, this action against Google is unlikely to go away.</p>
<p>The ramifications for Google, if the court rules against it, could ultimately be dramatic. The DoJ’s associate deputy attorney general, Ryan Shores, has refused to rule out seeking orders to break up the tech giant, <a href="https://www.nytimes.com/2020/10/20/technology/google-antitrust.html">saying</a> “nothing is off the table”.</p>
<h2>Google’s monopoly power</h2>
<p>Google’s economic power is no secret. Regulators around the world, including in the European Union, are investigating the company’s conduct and bringing actions under competition, consumer and privacy laws. </p>
<p>US Attorney General William Barr said the new DoJ action:</p>
<blockquote>
<p>[…] strikes at the heart of Google’s grip over the internet for millions of American consumers, advertisers, small businesses and entrepreneurs beholden to an unlawful monopolist. </p>
</blockquote>
<p>Specifically, the DoJ claims Google is illegally <a href="https://www.justice.gov/opa/press-release/file/1328941/download">monopolising the markets</a> for online search and search advertising (the advertising that appears alongside search results). </p>
<p>According to the DoJ, Google’s US market share is roughly:</p>
<ul>
<li><p>88% in the market for general search services</p></li>
<li><p>70% in the search advertising market. </p></li>
</ul>
<p>However, holding a dominant position isn’t against the law. A company is allowed to enjoy a dominant position or even a complete monopoly, as long as it doesn’t do so by unlawful means. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/the-accc-is-suing-google-for-misleading-millions-but-calling-it-out-is-easier-than-fixing-it-143447">The ACCC is suing Google for misleading millions. But calling it out is easier than fixing it</a>
</strong>
</em>
</p>
<hr>
<h2>So what has Google allegedly done wrong?</h2>
<p>The DoJ’s main complaint is Google has entered into several “exclusionary agreements” that preserve its monopoly power by <a href="https://www.accc.gov.au/business/anti-competitive-behaviour/exclusive-dealing">hindering competition</a> from rivals (and potential rivals). Exclusionary agreements are deals that restrict the ability of at least one party to deal with other players. </p>
<p>The DoJ says Google spends billions of dollars each year on: </p>
<ul>
<li><p>long-term agreements with Apple that require Google to be the default search engine on Apple’s Safari browser</p></li>
<li><p>exclusivity agreements that forbid pre-installation of competing search services by certain mobile device manufacturers and distributors</p></li>
<li><p>arrangements that force certain mobile device manufacturers and distributors to pre-install Google search applications in prime locations on mobile devices and make them undeletable, regardless of consumer preference</p></li>
<li><p>using monopoly profits to buy preferential treatment for its search engine on devices, web browsers and other search access points. </p></li>
</ul>
<p>The DoJ claims these agreements have created a “continuous and self-reinforcing cycle of monopolisation” in the market for online search and search advertising (which relies on Google’s dominance in online search).</p>
<p>Google has responded by describing the court action as “deeply flawed”. In a <a href="https://blog.google/outreach-initiatives/public-policy/response-doj">blog post</a> it said: </p>
<blockquote>
<p>[…] people don’t use Google because they have to, they use it because they choose to. </p>
</blockquote>
<p>It also said users are free to switch to other search engines. </p>
<p>But even if that’s technically true, Google’s agreements for pre-installation, default settings and preferential treatment give it a substantial advantage over its rivals. </p>
<h2>Does any of this matter when Google is ‘free’?</h2>
<p>Google provides services that are hugely valued the world over — and with no direct financial cost to the user. That said, “free” services can still cause harm. </p>
<p>According to the DoJ, by restricting competition Google has harmed search users, in part “by reducing the quality of search (including on dimensions such as privacy, data protection, and use of consumer data)”. This is an important recognition that price is not all that matters.</p>
<p>The logic behind this claim is that other search engines with better track records on privacy, such as <a href="https://duckduckgo.com/privacy">DuckDuckGo</a>, might otherwise be more successful than they are. </p>
<p>Or, to frame that another way, Google might actually have to compete vigorously on privacy, instead of allegedly imposing privacy-degrading terms on its users. </p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/364645/original/file-20201021-15-q4rsc5.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="DuckDuckGo logo" src="https://images.theconversation.com/files/364645/original/file-20201021-15-q4rsc5.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/364645/original/file-20201021-15-q4rsc5.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/364645/original/file-20201021-15-q4rsc5.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/364645/original/file-20201021-15-q4rsc5.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/364645/original/file-20201021-15-q4rsc5.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/364645/original/file-20201021-15-q4rsc5.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/364645/original/file-20201021-15-q4rsc5.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">DuckDuckGo says it ‘does not collect or share personal information’ from users.</span>
<span class="attribution"><span class="source">Shutterstock</span></span>
</figcaption>
</figure>
<h2>What might happen if the action succeeds?</h2>
<p>If Google is found to have contravened the prohibition against monopolisation under the <a href="https://www.justice.gov/atr/competition-and-monopoly-single-firm-conduct-under-section-2-sherman-act-chapter-1">US Sherman Act</a>, it could face substantial fines and damages claims.</p>
<p>But perhaps more concerning for Google would be the prospect of the DoJ seeking to break up Google’s various businesses. </p>
<p>Google owns a range of highly successful services, including Google search, Google Chrome, the Android operating system, and numerous ad tech (“advertising technology”) services. Google’s position and access to data in one business arguably give it advantages in its other businesses.</p>
<p>Eleven Republican attorneys general from various US states have joined the proceedings and could individually seek remedies.</p>
<p>The action won’t have a major impact any time soon, though. Google’s lawyers estimate the case won’t come before the US District Court for the District of Columbia for at least a year.</p>
<h2>Could our competition watchdog be taking notes?</h2>
<p>Google could contravene Australia’s misuse of market power law under the Competition and Consumer Act 2010, if it has engaged in conduct of the kind alleged by the DoJ that has an effect on Australian markets. </p>
<p>As part of its 2019 <a href="https://www.accc.gov.au/publications/digital-platforms-inquiry-final-report">Digital Platforms Inquiry</a>, the Australian Competition and Consumer Commission (ACCC) said Google has substantial market power in the general search and search advertising markets in Australia. It has a market share of about 95% in both cases. </p>
<p>If this is true, it would be unlawful for Google to engage in any conduct that substantially lessens competition in a market (or has the purpose or likely effect of doing so). This could include entering exclusionary agreements that affect Australian markets. </p>
<p>So far, the ACCC has twice brought <a href="https://www.abc.net.au/news/2019-10-29/google-faces-accc-federal-court-misleading-use-of-data/11649356">proceedings against Google</a>, alleging it misled users about how it collects and uses their data. It is also investigating the conduct of Google and Facebook, in particular, in digital advertising markets as part of its <a href="https://www.accc.gov.au/focus-areas/inquiries-ongoing/digital-advertising-services-inquiry/issues-paper">ad tech inquiry</a>. </p>
<p>While Australia’s consumer watchdog might wait and see how proceedings against Google fare in the US <a href="https://www.reuters.com/article/us-europe-tech-google-antitrust-analysis-idUSKBN242623">and the EU</a>, the recent DoJ action could encourage the ACCC in any action it might be contemplating under Australian law on misuse of market power.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/every-step-you-take-why-googles-plan-to-buy-fitbit-has-the-acccs-pulse-racing-141052">Every step you take: why Google's plan to buy Fitbit has the ACCC's pulse racing</a>
</strong>
</em>
</p>
<hr>
<img src="https://counter.theconversation.com/content/148519/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Katharine Kemp receives funding from The Allens Hub for Technology, Law and Innovation. She is a Member of the Advisory Board of the Future of Finance Initiative in India, the Centre for Law, Markets & Regulation and the Australian Privacy Foundation.</span></em></p>It’s the biggest monopolisation case since a 1998 lawsuit against Microsoft. But it may be several years before a settlement of any kind is reached.Katharine Kemp, Senior Lecturer, Faculty of Law, UNSW, and Academic Lead, UNSW Grand Challenge on Trust, UNSW SydneyLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1386132020-05-28T03:29:24Z2020-05-28T03:29:24ZDon’t be phish food! Tips to avoid sharing your personal information online<figure><img src="https://images.theconversation.com/files/337870/original/file-20200527-141320-1a7ikl1.jpg?ixlib=rb-1.1.0&rect=44%2C14%2C4947%2C3308&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">
</span> <span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/anonymous-mask-hide-identity-on-computer-518835055">Shutterstock</a></span></figcaption></figure><p>Data is the <a href="https://www.wired.com/insights/2014/07/data-new-oil-digital-economy/">new oil</a>, and online platforms will siphon it off at any opportunity. Platforms increasingly demand our personal information in exchange for a service. </p>
<p>Avoiding online services altogether can limit your participation in society, so the advice to just opt out is easier said than done. </p>
<p>Here are some tricks you can use to avoid giving online platforms your personal information. Some ways to <a href="https://www.scamwatch.gov.au/get-help/protect-yourself-from-scams">limit your exposure</a> include using “alternative facts”, guest check-out options, and burner emails.</p>
<h2>Alternative facts</h2>
<p>While “alternative facts” is a term coined by <a href="https://link.springer.com/chapter/10.1007/978-3-030-00813-0_4">White House press staff</a> to describe factual inaccuracies, in this context it refers to false details supplied in place of your personal information.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/hackers-are-now-targeting-councils-and-governments-threatening-to-leak-citizen-data-126190">Hackers are now targeting councils and governments, threatening to leak citizen data</a>
</strong>
</em>
</p>
<hr>
<p>This is an effective strategy to avoid giving out information online. Though platforms might insist you complete a user profile, they can do little to check whether that information is correct. For example, they can check whether a phone number contains the correct number of digits, or whether an email address has a valid format, but that’s about it.</p>
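To see why such checks are so shallow, here is a minimal sketch of format-only validation. The patterns and sample values are illustrative assumptions, not taken from any real platform’s code: the point is simply that fabricated details pass the same checks as real ones.

```python
import re

# A platform can check only the *shape* of the details you supply,
# not whether they are true. These patterns are illustrative only.
PHONE_RE = re.compile(r"^\d{10}$")                    # e.g. a 10-digit mobile number
EMAIL_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")  # loose "something@domain.tld" shape

def looks_valid(phone: str, email: str) -> bool:
    """True if both fields merely *look* plausible."""
    return bool(PHONE_RE.match(phone)) and bool(EMAIL_RE.match(email))

# Real details and made-up ones pass identically:
print(looks_valid("0412345678", "jane@example.com"))    # True
print(looks_valid("0400000000", "nobody@example.org"))  # True - fabricated, still "valid"
```

Nothing in either check contacts the phone number or mailbox, which is why “alternative facts” sail straight through.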
<p>When a website requests your date of birth, address, or name, consider how this information will be used and whether you’re prepared to hand it over. </p>
<p><div data-react-class="Tweet" data-react-props='{"tweetId":"1147173290181627904"}'></div></p>
<p>There’s a distinction to be made between which platforms <a href="https://www.wired.com/2014/04/why-we-need-online-alter-egos-now-more-than-ever/">do or don’t warrant</a> using your real information. If it’s an <a href="https://www.avg.com/en/signal/website-safety">official</a> banking or educational institute website, then it’s important to be truthful.</p>
<p>But an online shopping, gaming, or movie review site shouldn’t require the same level of disclosure, and using an alternative identity could protect you.</p>
<h2>Secret shopper</h2>
<p>Online stores and services often encourage users to set up a profile, offering convenience in exchange for information. Stores value your profile data, as it can provide them additional revenue through targeted advertising and emails. </p>
<p>But many websites also offer a guest checkout option to streamline the purchase process. After all, one thing as valuable as your data is your money. </p>
<p>So unless you’re making very frequent purchases from a site, use guest checkout and skip profile creation altogether. Even without disclosing extra details, you can still track your delivery, as tracking is provided by transport companies (and not the store). </p>
<p>Also consider your payment options. Many credit cards and payment merchants such as PayPal provide additional <a href="https://www.paypal.com/au/smarthelp/article/what-is-paypal-buyer-protection-faq1269">buyer protection</a>, adding another layer of separation between you and the website. </p>
<p>Avoid sharing your bank account details online, and instead use an intermediary such as PayPal, or a credit card, to provide additional protection. </p>
<p>If you use a credit card (even a prepaid one), any potential losses from compromised details are limited to the card balance. And with credit cards, this balance is effectively the bank’s funds, meaning you won’t be out of pocket for any fraudulent transactions.</p>
<h2>Burner emails</h2>
<p>An email address is usually the first item a site requests. </p>
<p>They also often require email verification when a profile is created, and that verification email is probably the only one you’ll ever want to receive from the site. So rather than handing over your main email address, consider a burner email.</p>
<p>This is a fully functional but disposable email address that remains active for about 10 minutes. You can get one for free from online services including <a href="https://maildrop.cc/">Maildrop</a>, <a href="https://www.guerrillamail.com/">Guerrilla Mail</a> and <a href="https://10minutemail.com/">10 Minute Mail</a>.</p>
<p>Just make sure you don’t forget your password, as you won’t be able to recover it once your burner email becomes inactive.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/337853/original/file-20200527-141287-1igcflj.png?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/337853/original/file-20200527-141287-1igcflj.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/337853/original/file-20200527-141287-1igcflj.png?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=276&fit=crop&dpr=1 600w, https://images.theconversation.com/files/337853/original/file-20200527-141287-1igcflj.png?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=276&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/337853/original/file-20200527-141287-1igcflj.png?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=276&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/337853/original/file-20200527-141287-1igcflj.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=346&fit=crop&dpr=1 754w, https://images.theconversation.com/files/337853/original/file-20200527-141287-1igcflj.png?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=346&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/337853/original/file-20200527-141287-1igcflj.png?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=346&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">The 10 Minute Mail website offers free burner emails.</span>
<span class="attribution"><a class="source" href="https://10minutemail.com/">screenshot</a></span>
</figcaption>
</figure>
<h2>The risk of being honest</h2>
<p>Every online profile containing your personal information is another potential target for attackers. The more profiles you make, the greater the chance of your details being breached.</p>
<p>A breach in one place can lead to others. Names and emails alone are sufficient for email <a href="https://www.staysmartonline.gov.au/protect-yourself/recover-when-things-go-wrong/phishing">phishing attacks</a>. And a phish becomes more convincing (and more likely to succeed) when paired with other details such as your recent purchasing history. </p>
<p><a href="https://www.infosecurity-magazine.com/news/google-survey-finds-two-users/">Surveys indicate</a> about <a href="https://blog.avast.com/strengthening-passwords-on-world-password-day">half of us</a> recycle passwords across multiple sites. While this is convenient, it means if a breach at one site reveals your password, then attackers can hack into your other accounts.</p>
<p>In fact, even just an email address is a valuable piece of intelligence, as emails are used as a login for many sites, and a login (unlike a password) can sometimes be impossible to change. </p>
<p>Obtaining your email could open the door for targeted attacks on your other accounts, such as social media accounts.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/the-ugly-truth-tech-companies-are-tracking-and-misusing-our-data-and-theres-little-we-can-do-127444">The ugly truth: tech companies are tracking and misusing our data, and there's little we can do</a>
</strong>
</em>
</p>
<hr>
<p>In “password spraying” <a href="https://www.microsoft.com/security/blog/2020/04/23/protecting-organization-password-spray-attacks/">attacks</a>, cybercriminals test common passwords against many emails/usernames in the hope of landing a correct combination.</p>
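The logic of spraying can be sketched in a few lines. This is a purely in-memory simulation with fabricated accounts and passwords, included only to show why the attack works: a handful of common passwords is tried across many accounts, rather than many passwords against one account (which would trigger lockouts).

```python
# Attacker's inputs: a short list of very common passwords and a list
# of leaked email addresses. All values here are made up for illustration.
COMMON_PASSWORDS = ["123456", "password", "qwerty"]
targets = ["alice@example.com", "bob@example.com", "carol@example.com"]

# Defender's credential store (fabricated) - Bob reuses a common password.
credentials = {"bob@example.com": "qwerty"}

def spray(targets, passwords, check):
    """Return the (email, password) pairs that succeed."""
    hits = []
    for password in passwords:   # outer loop: one password at a time...
        for email in targets:    # ...tried across *all* accounts
            if check(email, password):
                hits.append((email, password))
    return hits

hits = spray(targets, COMMON_PASSWORDS, lambda e, p: credentials.get(e) == p)
print(hits)  # [('bob@example.com', 'qwerty')]
```

Only the account with a common, reused password falls, which is why unique passwords per site defeat this attack.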
<p>The bottom line is, the safest information is the information you never release. And practising alternatives to disclosing your true details could go a long way to limiting your data being used against you.</p><img src="https://counter.theconversation.com/content/138613/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Nik Thompson does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>While some online services such as banking do warrant using your true information, many sites shouldn’t require the same level of disclosure. Here’s how to protect yourself in such cases.Nik Thompson, Senior Lecturer, Curtin UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1007572018-08-08T03:27:02Z2018-08-08T03:27:02ZUsing My Health Record data for research could save lives, but we must ensure it’s ethical<figure><img src="https://images.theconversation.com/files/230831/original/file-20180807-191041-sqr86j.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Big data research could improve medical outcomes and reduce the waste of resources.</span> <span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/doctor-psychiatrist-consulting-diagnostic-examining-stressful-1146221276?src=OGqu3Gu1FBo3vet-H9w0Wg-1-83">Shutterstock</a></span></figcaption></figure><p>There has been considerable debate about the <a href="http://theconversation.com/my-health-record-the-case-for-opting-in-99850">merits</a> and <a href="http://theconversation.com/my-health-record-the-case-for-opting-out-99302">risks</a> of the My Health Record (<a href="https://www.myhealthrecord.gov.au/">MHR</a>) scheme – ranging from the <a href="http://theconversation.com/opting-out-of-my-health-records-heres-what-you-get-with-the-status-quo-100368">deep inefficiencies</a> in the current system, to privacy issues and <a href="https://theconversation.com/what-could-a-my-health-record-data-breach-look-like-100090">control of data</a>. </p>
<p>There has been less discussion of some down-the-track intended uses of this data for secondary purposes – such as for research. </p>
<p>A rich dataset of health information could be used in studies that generate enormous benefits to society, but medical research is carried out under strict ethical guidelines. </p>
<p>Unfortunately, a consent process where people are required to opt out rather than opt in doesn’t meet ethical standards for research.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/my-health-record-the-case-for-opting-out-99302">My Health Record: the case for opting out</a>
</strong>
</em>
</p>
<hr>
<h2>Using MHR data for research</h2>
<p>The <a href="https://www.myhealthrecord.gov.au/about/privacy-policy">privacy policy</a> for My Health Record states: </p>
<blockquote>
<p>We are authorised under the My Health Records Act to prepare and provide de-identified data for research and other public health purposes. De-identified data is data that has had information removed that could reasonably identify any individuals such as name, date of birth or address.</p>
</blockquote>
<p>This access is governed by a <a href="http://www.health.gov.au/internet/main/publishing.nsf/Content/F98C37D22E65A79BCA2582820006F1CF/$File/MHR_2nd_Use_Framework_2018_ACC_AW3.pdf">framework to guide secondary use of My Health Record system data</a>. Under the access controls in My Health Record you can indicate that you are not willing for your data to be shared for research purposes. But, as with My Health Record itself, this is done on an opt-out basis. </p>
<p>Somewhat worryingly, the framework suggests that it may permit some research use of this data without ethical approval: </p>
<blockquote>
<p>For applications involving de-identified data, the Board may require ethics approval to be obtained before data can be accessed or released.</p>
</blockquote>
<h2>The social benefits of research</h2>
<p>There is great potential benefit in having such a rich dataset available for research – as long as it is done in a way that supports and encourages, rather than undermines, public trust in it. </p>
<p>In medicine, big data research could improve outcomes and reduce the waste of resources. For example, a recent UK study looking at the impact of <a href="https://www.theguardian.com/science/2018/jul/19/routine-treatment-for-cardiac-arrest-doubles-risk-of-brain-damage-study">adrenaline on the treatment of sufferers of cardiac arrest</a> reported that the standard response of giving adrenaline in these cases has minimal impact on survival rates, but a significant increase in potential of subsequent death. </p>
<p>A key driver for that research was a <a href="https://jamanetwork.com/journals/jama/fullarticle/1105081">retrospective study in Japan</a> that examined the results of more than 400,000 cases. That research will almost certainly change practice across the world, and increase the quality of life of countless survivors of cardiac arrests. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/my-health-record-the-case-for-opting-in-99850">My Health Record: the case for opting in</a>
</strong>
</em>
</p>
<hr>
<p>But the history of research ethics suggests that research requires public support to be conducted effectively. When public support is not present, and research is done out of step with the values of society more generally, great harm can be done to the reputation of both research and science. </p>
<p>The abuses in the <a href="https://www.washingtonpost.com/news/retropolis/wp/2017/05/16/youve-got-bad-blood-the-horror-of-the-tuskegee-syphilis-experiment/?utm_term=.10caa81e6c24">Tuskegee syphilis study</a>, in which researchers observed the progression of untreated syphilis in 622 poor, African-American men without telling them they had the disease, and without treating them, still has an ongoing impact on the <a href="https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4354806/">willingness of African-American people to participate in medical research</a>.</p>
<h2>Ethical problems with using MHR data</h2>
<p>Two main concerns threaten public support for research based on the health data held within My Health Record: the robustness of the consent process and the risks of re-identification. </p>
<h3>Consent</h3>
<p>When things go <a href="https://www.wired.com/2016/05/okcupid-study-reveals-perils-big-data-science/">awry</a> with big data research, it is often due to issues with the consent process. In <a href="https://theconversation.com/consent-and-ethics-in-facebooks-emotional-manipulation-study-28596">previous cases</a>, involved parties have <a href="https://www.wired.com/story/dropbox-sharing-data-study-ethics/">pointed to</a> the existence of privacy policies or end-user licence agreements to lend legitimacy to their use of data. But these policies and agreements do not count as informed consent for participation in medical research, nor can they be. </p>
<p>The same concerns apply to My Health Record. </p>
<p>It is extremely likely that there will be some Australians in a few months’ time who do not even know they have a My Health Record, let alone that their health data may be used in research. Even those who do know may not have the technological literacy needed to opt out successfully. </p>
<p>This is hard to reconcile with the National Health and Medical Research Council’s <a href="https://www.nhmrc.gov.au/book/chapter-3-2-databanks">National Statement on Ethical Conduct in Human Research</a>. The section on databanks states that when collecting data for deposit in a databank, participants should be given clear and comprehensive information. They also must explicitly consent to how the data will be stored, and to the purposes for which it will be used. </p>
<p>There is no reason why the ethical parameters should change simply because the My Health Record database is intended to be much larger than previous databases of this kind.</p>
<h3>Risks of re-identification</h3>
<p>One argument made in favour of sharing My Health Record data for research purposes without explicit consent rests on the practice of de-identification. The thinking goes that once data has been de-identified, it is no longer data about a specific person. So their need to control that data for their own protection is diminished. </p>
<p>But de-identification is difficult. In some cases involving large datasets, it is possible to re-identify people. For example, when the federal health department released a large dataset of de-identified data in 2016, <a href="https://www.itnews.com.au/news/health-breached-privacy-law-in-open-data-bungle-oaic-487936">researchers showed</a> that it was in fact re-identifiable. </p>
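A toy sketch shows how such re-identification works in principle: “de-identified” records are joined with a second, named dataset on quasi-identifiers such as postcode, birth year and sex. Every record below is fabricated; this is an illustration of the linkage technique, not of the 2016 incident itself.

```python
# "De-identified" health data: names removed, quasi-identifiers kept.
deidentified_health = [
    {"postcode": "2000", "birth_year": 1980, "sex": "F", "condition": "diabetes"},
    {"postcode": "3000", "birth_year": 1975, "sex": "M", "condition": "asthma"},
]

# A second, public or purchasable dataset that still carries names.
public_records = [
    {"name": "Jane Citizen", "postcode": "2000", "birth_year": 1980, "sex": "F"},
    {"name": "John Smith", "postcode": "6000", "birth_year": 1990, "sex": "M"},
]

def reidentify(health, public):
    """Link records whose quasi-identifiers match exactly."""
    keys = ("postcode", "birth_year", "sex")
    matches = []
    for h in health:
        for p in public:
            if all(h[k] == p[k] for k in keys):
                matches.append({"name": p["name"], "condition": h["condition"]})
    return matches

print(reidentify(deidentified_health, public_records))
# [{'name': 'Jane Citizen', 'condition': 'diabetes'}]
```

If a combination of quasi-identifiers is unique in the population, removing the name offers no protection at all once a second dataset is available to join against.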
<p>This is even more of a concern with the recent revelation that <a href="https://www.smh.com.au/healthcare/my-health-record-can-store-genomic-data-but-critics-say-it-s-not-ready-20180801-p4zuxz.html">genomic data may be included in My Health Record</a>. Genomic data cannot be permanently de-identified. </p>
<p>Even if your data is no longer about “you”, you still might be unhappy to have provided data for particular research projects – either because of their commercial nature, or perhaps because the research potentially impacts a group to which you belong (see the <a href="https://www.nzherald.co.nz/nz/news/article.cfm?c_id=1&objectid=10395491">Warrior Gene controversy</a> in New Zealand).</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/my-health-record-deleting-personal-information-from-databases-is-harder-than-it-sounds-100962">My Health Record: Deleting personal information from databases is harder than it sounds</a>
</strong>
</em>
</p>
<hr>
<h2>Getting buy-in from the public</h2>
<p>To achieve public support for research on My Health Record data, there should be an explicit opt-in, and ongoing public information campaigns to encourage participation. </p>
<p>This will almost certainly result in a smaller dataset, but it will be a much more ethically defensible one. </p>
<p>Unless the government gains public support for big data research, people will vote with their clicks and opt out of services where their data may be used without their explicit agreement.</p><img src="https://counter.theconversation.com/content/100757/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>David Hunter does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>If we want My Health Record data to be made available for medical research we need to make it opt in, not opt out. We’ll have a smaller dataset, but at least it will be ethically defensible.David Hunter, Associate Professor of Medical Ethics, Flinders UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/987452018-07-04T14:00:59Z2018-07-04T14:00:59ZCambridge Analytica used our secrets for profit – the same data could be used for public good<figure><img src="https://images.theconversation.com/files/225727/original/file-20180702-116139-4xho2n.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">How could we put the same strategy used by Cambridge Analytica to better use?</span> <span class="attribution"><a class="source" href="https://www.shutterstock.com/download/confirm/1077206666">AlexandraPopova/Shutterstock</a></span></figcaption></figure><p>Ever since it was revealed that Cambridge Analytica had taken data from 87m users via a Facebook app <a href="https://www.theguardian.com/uk-news/2018/apr/13/revealed-aleksandr-kogan-collected-facebook-users-direct-messages">that exploited</a> the social media site’s privacy settings, it has been suggested that anything from Donald Trump’s election in the US to the European Union referendum result in the UK could have been the result of the persuasive power of targeted advertisements based on voter preferences.</p>
<p>But Aleksandr Kogan, the University of Cambridge researcher whose data-collecting app formed the basis for Cambridge Analytica’s subsequent work for various political groups, appeared to pour cold water on this idea when <a href="https://www.theguardian.com/technology/2018/jun/19/aleksandr-kogan-facebook-cambridge-analytica-senate-testimony">speaking to a US Senate committee</a>. “The data is entirely ineffective,” he said. “If the goal of Cambridge Analytica was to show personalised advertisements on Facebook, then what they did was stupid.”</p>
<p>Even <a href="https://www.channel4.com/news/exposed-undercover-secrets-of-donald-trump-data-firm-cambridge-analytica">if the boasts</a> by former Cambridge Analytica CEO Alexander Nix and the statements of <a href="https://www.theguardian.com/news/2018/mar/17/data-war-whistleblower-christopher-wylie-faceook-nix-bannon-trump">whistleblower Christopher Wylie</a> of the company’s influence are overblown as Kogan claims, the firm nevertheless hit on something with its approach of harvesting data in order to influence voter behaviour. Before that approach becomes commonplace, we should survey the whole moral panic around the scandal and see what lessons can be learnt.</p>
<h2>Use and abuse of data</h2>
<p>The first issue is our misunderstanding of consent. Kogan’s data-scrape may have been unethical, but he didn’t steal the data from those that used the app – they gave it willingly. When you use a social media platform you, by definition, are publishing your private life. More so, you effectively sell your private life on an open market through giving your consent for it to be monetised by that platform. </p>
<p>Following <a href="https://www.cnbc.com/2018/04/06/facebook-sheryl-sandberg-users-would-have-to-pay-to-opt-out-targeted-ads.html">admissions</a> by Facebook chief operating officer Sheryl Sandberg, we now know that “online privacy” settings exist only as a means to allow Facebook users to believe they have a consumer’s right to privacy, when in fact they are not the consumer, but the product itself. If privatisation is a process of transferring ownership from the public to the private realm, this means privacy itself has been privatised. You publish your data, making it public, so that private companies can capitalise on what this data says about you by selling you things.</p>
<p>This leads to a paradoxical situation I call neoprivacy, following neoliberalism’s similar disregard for and exploitation of the private individual. In a neoprivate world privacy exists to be exploited financially. The neoprivate individual both values their personal life so much that they publish it, yet is so neglectful of their privacy that, well, they publish it.</p>
<p>Cambridge Analytica’s stroke of genius was to combine two different kinds of datasets, let’s call them deep and broad. The deep psychometric tests of a small sample (from Kogan’s app) were combined with the broad online behaviour of a massive sample. With this they claimed they could <a href="https://theconversation.com/psychographics-the-behavioural-analysis-that-helped-cambridge-analytica-know-voters-minds-93675">predict people’s behaviour simply by their actions on Facebook</a>. </p>
<p>The firm sold this to political campaigns and lobbyists as their “<a href="https://www.theguardian.com/news/2018/mar/17/data-war-whistleblower-christopher-wylie-faceook-nix-bannon-trump">secret weapon</a>”. This model shows a real understanding of social media by grounding it on people’s actions on Facebook – what they click on, read, and like – rather than their expressed statements. It’s <a href="http://www.mediamasters.fm/william-watkin">what you do that matters</a>, not what you say.</p>
<figure class="align-center ">
<img alt="" src="https://images.theconversation.com/files/225728/original/file-20180702-116126-1e23zoc.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/225728/original/file-20180702-116126-1e23zoc.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/225728/original/file-20180702-116126-1e23zoc.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/225728/original/file-20180702-116126-1e23zoc.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/225728/original/file-20180702-116126-1e23zoc.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/225728/original/file-20180702-116126-1e23zoc.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/225728/original/file-20180702-116126-1e23zoc.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">Putting information up for only commercial interests is a wasted opportunity.</span>
<span class="attribution"><a class="source" href="https://www.shutterstock.com/download/confirm/1066441838">pixinoo/Shutterstock</a></span>
</figcaption>
</figure>
<h2>Democratic data dividend</h2>
<p>I think this data-driven approach offers a democratic opportunity. Typically deep, expert research generates the evidence that informs policies. But data-driven governance appears increasingly disassociated from ordinary lives, with voters preferring crowd-pleasing factoids when it comes to major decisions. Indeed, suspicion of experts may even be a contributing factor in the <a href="https://www.bigissue.com/opinion/william-watkin-truth-post-truth/">rise of what could be called demagogcracy and fake news</a>. </p>
<p>In contrast, broad data is generated by people based on what they choose to do, not what an expert has asked them, or prompted them, to say. Neoprivate individuals feel a sense of ownership and investment when they share something on Facebook or Instagram. If anything online needs to be harvested, it is this sense of communal, social engagement. Yet our primal need for social engagement is both stymied by expert policy wonks with no grip of the grassroots, and monetised by the big platforms with no interest in civic society. </p>
<p>Evidence-based governance, instigated under former prime minister Tony Blair, was supposed to be a panacea for the uncertainties of political decision making. It has failed. In contrast, the activity-based influence of broad data is a political model that has been shown in the hands of Trump to be frighteningly effective. If we are to fix democracies then future leaders should engage with both – albeit more transparently than Cambridge Analytica did.</p>
<p>One final lesson: if we live in a neoprivate world, why couldn’t we monetise our own lives just as the big tech companies have? If Facebook knows enough about me to advise me on what sort of shelf brackets I need, why couldn’t this same level of insight be applied to more important, more technical, complex political decisions that need to be made by citizens, for their benefit?</p>
<p>If Cambridge Analytica can develop algorithms that are good predictors of our behaviour, shouldn’t that information be used to influence policy? Why shouldn’t politicians harvest it for the greater good rather than personal gain? Many <a href="http://criticallegalthinking.com/2017/05/10/michel-foucault-biopolitics-biopower/">biopolitical</a> theorists define our current age as that of <a href="https://theconversation.com/inside-view-prison-crisis-will-continue-until-we-hear-inmates-stories-83735">power through regulatory surveillance</a>; it is time that neoliberal democracies transitioned to power through participatory enhancement.</p>
<p>Two worlds remain an absolute mystery: Facebook’s algorithms and why we vote the way we do. Place both those secrets in the public realm rather than in the hands of the highest bidder, and maybe democracy can develop its own app and fix itself. Now that’s what I call neoliberalism.</p><img src="https://counter.theconversation.com/content/98745/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>William David Watkin does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>Something good could come from the Cambridge Analytica scandal if we used the same data to fix society, rather than profit from it.William David Watkin, Professor of Contemporary Philosophy and Literature, Brunel University LondonLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/959512018-05-25T16:56:55Z2018-05-25T16:56:55ZGDPR: ground zero for a more trusted, secure internet<figure><img src="https://images.theconversation.com/files/220465/original/file-20180525-51102-a4jra3.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">
</span> <span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/diversity-people-connection-digital-devices-browsing-392365027">Shutterstock</a></span></figcaption></figure><p>Most of us have recently been bombarded by emails from companies begging us to “stay in touch” or “opt in”, or informing us of a “policy update”. On May 25, an historic date for the internet, the EU’s <a href="https://gdpr-info.eu/">General Data Protection Regulation</a> (GDPR) comes into force. For some, it is the start of a more citizen-focused world; for others, it will mean the collapse of their digital marketing strategy.</p>
<p>The number and scope of serious data breaches have dramatically increased in the last few years. In 2013, around three billion Yahoo user accounts were <a href="http://www.bbc.co.uk/news/business-41493494">affected by a hacking attack</a>. More recently, it was revealed that 143m customers of the credit reporting agency <a href="https://www.theregister.co.uk/2018/05/08/equifax_breach_may_2018/">Equifax</a> were hit by a similar breach. On top of this, we see breaches of privacy in the mass harvesting of data from cloud service providers, as highlighted by the <a href="https://www.theguardian.com/news/2018/mar/17/cambridge-analytica-facebook-influence-us-election">Facebook/Cambridge Analytica</a> debacle. No wonder there is a growing lack of trust in how companies capture and process our data.</p>
<figure class="align-center ">
<img alt="" src="https://images.theconversation.com/files/220455/original/file-20180525-51130-104ymc.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/220455/original/file-20180525-51130-104ymc.png?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=344&fit=crop&dpr=1 600w, https://images.theconversation.com/files/220455/original/file-20180525-51130-104ymc.png?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=344&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/220455/original/file-20180525-51130-104ymc.png?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=344&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/220455/original/file-20180525-51130-104ymc.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=433&fit=crop&dpr=1 754w, https://images.theconversation.com/files/220455/original/file-20180525-51130-104ymc.png?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=433&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/220455/original/file-20180525-51130-104ymc.png?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=433&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">Even Spotify has got in on the GDPR action.</span>
<span class="attribution"><a class="source" href="https://www.spotify.com/uk/">Spotify</a></span>
</figcaption>
</figure>
<h2>A new dawn</h2>
<p>GDPR replaces the EU’s 1995 <a href="https://whatis.techtarget.com/definition/EU-Data-Protection-Directive-Directive-95-46-EC">Data Protection Directive</a>, which set out minimum standards for processing data. Under the new regulation, individuals can compel companies to reveal (or delete) any personal data they hold, and failure to adhere to the rules will result in <a href="https://www.gdpr.associates/what-is-gdpr/understanding-gdpr-fines/">stiff penalties</a>: a maximum fine of €20m or 4% of a company’s global annual turnover, whichever is greater. For a company like Facebook, this could mean around US$1.6 billion.</p>
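<p>The US$1.6 billion figure can be sanity-checked with a quick calculation – a sketch, assuming Facebook’s reported 2017 annual revenue of roughly US$40.65 billion (both figures are approximations):</p>

```python
# Rough check of the headline fine figure.
# Assumes Facebook's reported 2017 revenue of ~US$40.65 billion.
revenue_usd = 40.65e9          # global annual turnover (approximate)
max_fine = 0.04 * revenue_usd  # GDPR cap: 4% of global annual turnover

print(f"Maximum fine: ${max_fine / 1e9:.2f} billion")  # ≈ $1.63 billion
```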
<p>Many companies already work within audit/compliance regimes. In the finance industry, for example, this is typically the <a href="http://www.theukcardsassociation.org.uk/security/What_is_PCI%20DSS.asp">Payment Card Industry Data Security Standard</a> (PCI-DSS). But these regulations have often failed to stem the tide of data breaches within companies, necessitating more robust standards. At the core of GDPR there are four foundation elements:</p>
<h2>Consent and how your data is used</h2>
<p>GDPR makes consent explicit: the days of consent by default are over, and the need for users to opt out of mailing lists they never asked to join should become a thing of the past. Individuals have the right to withhold consent, to request access to their personal information, or to have it deleted altogether from a site. The general feeling so far is that few users are actually following up on the consent request emails, which means companies may see their contact lists collapse and their current digital marketing strategies falter.</p>
<h2>Response to breaches</h2>
<p>In the past, companies have often failed to respond promptly to data breaches – especially in the time taken to inform users – and have been vague about what they report. GDPR aims to overcome this by forcing companies to report a breach within 72 hours of becoming aware of it, and to investigate breaches faster. This is likely to drive the rise of 24/7 security operation centres (SOCs), which continually monitor the data infrastructure for signs of a breach. With the current average time to detect a data breach measured in months, this will be a significant challenge for many companies.</p>
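<p>The 72-hour rule reduces to a simple deadline computation – a minimal sketch, assuming the organisation records a timestamp for when it became aware of the breach (the function name and example times are hypothetical):</p>

```python
from datetime import datetime, timedelta

# GDPR notification window: within 72 hours of becoming aware of a breach.
REPORTING_WINDOW = timedelta(hours=72)

def notification_deadline(aware_at: datetime) -> datetime:
    """Latest time by which the supervisory authority must be notified."""
    return aware_at + REPORTING_WINDOW

# Example: breach discovered on the morning of May 28, 2018.
aware = datetime(2018, 5, 28, 9, 0)
print(notification_deadline(aware))  # 2018-05-31 09:00:00
```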
<figure class="align-center ">
<img alt="" src="https://images.theconversation.com/files/220308/original/file-20180524-51135-2vrpg2.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/220308/original/file-20180524-51135-2vrpg2.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=401&fit=crop&dpr=1 600w, https://images.theconversation.com/files/220308/original/file-20180524-51135-2vrpg2.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=401&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/220308/original/file-20180524-51135-2vrpg2.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=401&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/220308/original/file-20180524-51135-2vrpg2.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/220308/original/file-20180524-51135-2vrpg2.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/220308/original/file-20180524-51135-2vrpg2.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">Some of the focus areas of GDPR.</span>
<span class="attribution"><span class="source">Author Provided</span></span>
</figcaption>
</figure>
<h2>Encryption</h2>
<p>The real wake-up call for new regulation is that much of the internet is not trusted by users, who are concerned that it does not embed security properly. Personally sensitive data must now be protected by encryption – encoding information so that it can only be accessed by an authorised user. Companies must also understand the scope of a data breach in terms of the information that can be extracted from a leak: if the data is encrypted, revealing the underlying information becomes much more difficult.</p>
<p>In the past, the industry has generally struggled to implement encryption and rights protection on documents, but GDPR will force a move towards encryption by default and demand tighter controls on devices that access sensitive documents. The <a href="https://www.techrepublic.com/article/microsoft-outlook-rolling-out-end-to-end-encryption-to-protect-business-email/">recent moves</a> by Google and Microsoft to lock down their email systems with improved security are among the first proper attempts to integrate encryption and access control into documents accessed over the cloud.</p>
<h2>Protecting your privacy</h2>
<p>Under GDPR, a particular challenge for many companies will be the separation of personally identifiable information (PII) – name, date of birth, phone number – from other elements of data. Take the example of health records: a patient’s community health index (CHI) number will be stored under a “pseudo-anonymiser” – an identifier that disguises the connection between someone’s identity and their information.</p>
<p>This involves not just electronic separation on different databases, but physical separation on different systems. The merging of this information must then take place on a different system, and one that is highly trusted. But many see the concept of pseudo-anonymisation as weak from a privacy point of view, as details can often be mapped back to a particular ID if additional information is known.</p>
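<p>The pseudo-anonymiser idea can be sketched in a few lines – an illustration only, assuming a keyed hash (HMAC) is used to derive the identifier; the field names, sample values and key are hypothetical, and a real deployment would keep the key on a separate, trusted system:</p>

```python
import hashlib
import hmac

# Secret key held only on the trusted linking system (hypothetical value).
LINKING_KEY = b"keep-this-on-a-separate-trusted-system"

def pseudonymise(chi_number: str) -> str:
    """Derive a stable pseudonym from a patient's CHI number.

    A keyed hash (HMAC) means the mapping cannot be reversed, or rebuilt
    by an attacker, without the key - unlike a plain hash of the number.
    """
    return hmac.new(LINKING_KEY, chi_number.encode(), hashlib.sha256).hexdigest()

# PII and clinical data live in separate stores; only the pseudonym links them.
pii_store = {"chi": "1234567890", "name": "Jane Doe", "dob": "1970-01-01"}
record = {"patient_ref": pseudonymise(pii_store["chi"]), "diagnosis": "..."}

# The clinical record alone reveals nothing about who the patient is.
print(record["patient_ref"][:16])
```

<p>As the article notes, this protection is only as strong as the separation: anyone who obtains both the key and auxiliary data can map pseudonyms back to individuals.</p>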
<figure class="align-center ">
<img alt="" src="https://images.theconversation.com/files/220424/original/file-20180525-117628-13v0o9p.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/220424/original/file-20180525-117628-13v0o9p.png?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=378&fit=crop&dpr=1 600w, https://images.theconversation.com/files/220424/original/file-20180525-117628-13v0o9p.png?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=378&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/220424/original/file-20180525-117628-13v0o9p.png?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=378&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/220424/original/file-20180525-117628-13v0o9p.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=476&fit=crop&dpr=1 754w, https://images.theconversation.com/files/220424/original/file-20180525-117628-13v0o9p.png?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=476&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/220424/original/file-20180525-117628-13v0o9p.png?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=476&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">The pseudo-anonymiser process.</span>
</figcaption>
</figure>
<p>So will we see an improvement in our online world in terms of protection and security? Well, definitely some, though many of the changes will happen behind the scenes with improved processes and security methods. But what we should notice is companies taking computer security more seriously. We should see simpler terms and conditions, better reporting on data breaches and companies demonstrating a more reassuring and responsible attitude to our data.</p>
<p>GDPR moves us truly into a more equal information age, where we are finally moving away from the weaker legacy systems of the past to build a more trusted, secure and resilient internet for the future.</p>
<p class="fine-print"><em><span>Bill Buchanan does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>
<p>Will GDPR usher in a fresh start for the internet? A look at the four main foundation elements and how they affect you.</p>
<p>Bill Buchanan, Head, The Cyber Academy, Edinburgh Napier University. Licensed as Creative Commons – attribution, no derivatives.</p>