Online trust – The Conversation

How to protect yourself from cyber-scammers over the festive period
<figure><img src="https://images.theconversation.com/files/562490/original/file-20231129-26-z85wnz.jpg?ixlib=rb-1.1.0&rect=0%2C0%2C6134%2C3228&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">As online shopping increases over the festive period, so does the risk of cyber-scams. </span> <span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/merry-xmas-eve-online-shopping-store-2089436578">Chay Tee/Shutterstock</a></span></figcaption></figure><p>The festive season is a time for joy, family and festive cheer. However, it’s also a prime target for cybercriminals. As online shopping ramps up, so does the risk of falling prey to cyber-attacks. That’s why it’s crucial to be extra vigilant about your <a href="https://blog.tctg.co.uk/12-cyber-security-tips-of-christmas">cybersecurity</a> during this time. </p>
<p>Here are some essential tips to safeguard yourself and your data during the festive period:</p>
<h2>Phishing</h2>
<p>Phishing is when criminals use scam emails, text messages or phone calls to trick their victims. Their <a href="https://www.ncsc.gov.uk/collection/phishing-scams">goal</a> is often to make you visit a certain website, which may download a virus on to your computer, or steal bank details or other personal data. </p>
<p>This type of scam tends to <a href="https://www.egress.com/blog/phishing/holiday-phishing-scam-guide">increase</a> at this time of year, partly because of the number of people who have just bought or received new gadgets and technology. </p>
<p>Look out for messages that don’t address you by name, using wording such as “Dear Sir/Madam” or “valued customer” instead. Grammar and spelling mistakes are also common. </p>
<p>Be wary of any suspicious links or attachments within emails too, and don’t click them. It’s better to contact the company directly to check if the message is genuine. You can also <a href="https://www.ncsc.gov.uk/collection/phishing-scams">report</a> suspicious messages and phishing scams to the government’s National Cyber Security Centre. </p>
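The red flags above lend themselves to a simple illustration. The Python sketch below flags generic greetings and urgency wording; the keyword lists are invented for this example, and a real mail filter would be far more sophisticated:

```python
# Toy illustration of the phishing red flags described above.
# The keyword lists are invented for this sketch, not a real filter.
GENERIC_GREETINGS = ("dear sir/madam", "dear customer", "valued customer")
URGENT_PHRASES = ("act now", "verify your account", "account will be suspended")

def phishing_red_flags(message: str) -> list:
    """Return a list of human-readable warnings found in a message."""
    text = message.lower()
    flags = []
    if any(greeting in text for greeting in GENERIC_GREETINGS):
        flags.append("generic greeting instead of your name")
    if any(phrase in text for phrase in URGENT_PHRASES):
        flags.append("pressure or urgency wording")
    return flags
```

A message that triggers several of these warnings is worth double-checking with the company directly before clicking anything.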
<h2>Shopping safely online</h2>
<p>The convenience of online shopping is undeniable, especially during the festive season. However, it’s crucial to prioritise your security when buying online. </p>
<p>Before entering your personal and financial information on any website, ensure it’s legitimate and secure. Look for the “https” in the address bar and a <a href="https://theconversation.com/the-vast-majority-of-us-have-no-idea-what-the-padlock-icon-on-our-internet-browser-is-and-its-putting-us-at-risk-216581">padlock</a> icon, which indicates a secure and encrypted connection. </p>
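To see what the “https” part of the check amounts to, the address can be inspected programmatically. This Python sketch (using a hypothetical shop address) reports whether a URL explicitly declares the encrypted scheme; note that https proves the connection is encrypted, not that the seller is legitimate:

```python
from urllib.parse import urlparse

def uses_https(url: str) -> bool:
    # True only when the address explicitly declares the encrypted
    # https scheme. Encryption alone does not prove the site is
    # trustworthy, only that data in transit is protected.
    return urlparse(url).scheme == "https"
```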
<p>When creating passwords for online shopping accounts, use strong, unique combinations of letters, numbers and symbols. Avoid using the same password for multiple accounts, as a breach on one site could compromise all your others.</p>
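For those who don’t use a password manager’s built-in generator, a strong random combination of the kind described above can be produced with Python’s `secrets` module. The symbol set here is just one reasonable choice, not a requirement:

```python
import secrets
import string

# One reasonable mix of letters, numbers and symbols (an assumption of
# this sketch; any similarly large character set works).
ALPHABET = string.ascii_letters + string.digits + "!@#$%^&*-_"

def generate_password(length: int = 16) -> str:
    # secrets draws from a cryptographically secure random source,
    # unlike the general-purpose random module.
    return "".join(secrets.choice(ALPHABET) for _ in range(length))
```

Generating a fresh password per account, and storing them in a password manager, avoids the reuse problem described above.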
<p>As with shopping in the real world, be cautious when encountering offers that are significantly below usual prices or which make extravagant promises. Always conduct thorough research on the seller and product before making a purchase. If a deal seems too good to be true, it probably is. </p>
<p>And if you are out shopping in towns or city centres, there will often be a large number of public wifi options available to you. However, criminals can intercept the data that is transferred across such open and unsecured wifi. So, avoid using public wifi where possible, especially when conducting any financial transactions. </p>
<figure class="align-center ">
<img alt="A person sits at a laptop with a coffee surrounded by festive packages." src="https://images.theconversation.com/files/562672/original/file-20231130-21-u6r9en.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/562672/original/file-20231130-21-u6r9en.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/562672/original/file-20231130-21-u6r9en.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/562672/original/file-20231130-21-u6r9en.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/562672/original/file-20231130-21-u6r9en.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/562672/original/file-20231130-21-u6r9en.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/562672/original/file-20231130-21-u6r9en.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">Stay vigilant, exercise caution and don’t let your excitement for gifts and deliveries compromise your cybersecurity.</span>
<span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/christmas-online-shopping-top-view-female-520279837">Prostock-studio/Shutterstock</a></span>
</figcaption>
</figure>
<h2>Social media</h2>
<p>While social media platforms provide people with a means to keep in touch with family and friends over the festive period, they are often a goldmine for <a href="https://www.which.co.uk/consumer-rights/advice/how-to-spot-a-social-media-scam-aMtwF3u1XKGt">scams</a> and malware (software designed to disrupt, damage or gain unauthorised access to a computer). In the spirit of the festive season, people often share an abundance of personal information on social media, often without considering the potential consequences. </p>
<p>This trove of data can make people vulnerable to cyber-attacks. Scammers can exploit this information to gain unauthorised access to social media accounts, steal personal information, or even commit identity theft. To protect yourself, be mindful of what you share. </p>
<p>Be wary when interacting with posts and direct messages, especially if they contain suspicious links or attachments. Before clicking on anything, hover over the link to verify its destination. If it shows a website you don’t recognise or seems unrelated to the message, do not click on it. If you receive a message from someone you know but the content seems strange or out of character, contact them directly through a trusted channel to verify its authenticity. </p>
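The “hover and check the destination” advice boils down to comparing the link’s real hostname with the domain you expect. This Python sketch (with hypothetical example domains) shows why lookalike addresses such as `example.com.evil.net` should fail the check:

```python
from urllib.parse import urlparse

def link_matches_site(url: str, expected_domain: str) -> bool:
    host = urlparse(url).hostname or ""
    # Accept the domain itself or a genuine subdomain of it.
    # "example.com.evil.net" fails: it merely *starts* with the
    # trusted name, while its real domain is evil.net.
    return host == expected_domain or host.endswith("." + expected_domain)
```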
<p>Likewise, be wary of messages containing urgent requests for money or personal information from businesses. Genuine organisations will never solicit sensitive details through social media.</p>
<p>There are many buy and sell platforms available on social media. But while such platforms can be a great place to find a unique gift, it is also important to remember that not all sellers may be legitimate. So, it’s vital that you don’t share your bank details. If the seller sends a link to purchase the item, do not use it. When meeting to collect an item, it’s generally safer to use cash rather than transferring funds electronically.</p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/aO858HyFbKI?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
<figcaption><span class="caption">Advice for staying safe online.</span></figcaption>
</figure>
<h2>Package delivery scams</h2>
<p>As well as being a time for giving and receiving gifts, the festive season is also ripe for cybercriminals to exploit the excitement surrounding <a href="https://www.citizensadvice.org.uk/about-us/about-us1/media/press-releases/scams-linked-to-parcel-deliveries-come-top-in-2023/">package deliveries</a>. </p>
<p>Scammers often pose as legitimate delivery companies, sending emails or text messages claiming that a delivery attempt was unsuccessful or requiring additional fees for processing, or even customs clearance. Typically, these messages contain links or phone numbers that, when clicked or called, lead to fake websites or automated phone systems designed to collect personal information or payments.</p>
<p>To protect yourself, always verify the legitimacy of any delivery notifications you receive. Check the sender’s email address or phone number against the official contact information for the delivery company. If the information doesn’t match or seems suspicious, don’t click any links or provide personal details. </p>
<p>Legitimate delivery companies will never ask for upfront payment or sensitive information through unsolicited messages or calls. </p>
<p>Remember, cybercriminals are skilled at manipulating the festive spirit to their advantage. Stay vigilant, exercise caution, and don’t let your excitement for gifts and deliveries compromise your cybersecurity.</p>
<p class="fine-print"><em><span>Rachael Medhurst does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>
Cyber-scams tend to ramp up at this time of year, with criminals and scammers eager to exploit people’s generosity and excitement.
Rachael Medhurst, Course Leader and Senior Lecturer in Cyber Security NCSA, University of South Wales. Licensed as Creative Commons – attribution, no derivatives.

Hidden costs, manipulation, forced continuity: report reveals how Australian consumers are being duped online
<figure><img src="https://images.theconversation.com/files/467406/original/file-20220607-15990-frj1pn.jpeg?ixlib=rb-1.1.0&rect=59%2C35%2C3934%2C2958&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">
</span> <span class="attribution"><span class="source">Shutterstock</span></span></figcaption></figure><p>Australian consumers’ choices on websites and apps are being manipulated through online designs taking advantage of their weaknesses. That’s according to research on consumers’ online experiences and the presentation of websites and apps, <a href="https://cprc.org.au/wp-content/uploads/2022/06/CPRC-Duped-by-Design-Final-Report-June-2022.pdf">released today</a> by the Consumer Policy Research Centre (CPRC).</p>
<p>The research gives examples of consumers being manipulated or deceived into unintentionally buying items, paying more, or giving up more personal data than they meant to.</p>
<p>Examples include situations where an online store automatically added items to consumers’ carts, and “Hotel California” techniques which make it easy to subscribe to a service, but much harder to unsubscribe. </p>
<p>According to the CPRC’s findings, 83% of Australians surveyed had experienced one or more negative consequences – including financial harm or feeling manipulated – as a result of these “<a href="https://www.wired.com/story/how-to-spot-avoid-dark-patterns/#">dark patterns</a>”.</p>
<p>Some misleading designs breach the Australian Consumer Law. However, not all designs that have unfair consequences will necessarily be captured under the law. The latest report adds to existing calls to <a href="https://www.theguardian.com/australia-news/2019/sep/08/not-fair-why-judges-have-been-accused-of-failing-australian-consumers">amend consumer law</a> by introducing a ban on unfair trading practices. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/accc-world-first-australias-federal-court-found-google-misled-users-about-personal-location-data-159138">ACCC 'world first': Australia's Federal Court found Google misled users about personal location data</a>
</strong>
</em>
</p>
<hr>
<h2>What are dark patterns?</h2>
<p><a href="https://www.forbrukerradet.no/dark-patterns/">Experts</a> and <a href="https://www.ftc.gov/news-events/events/2021/04/bringing-dark-patterns-light-ftc-workshop">regulators</a> around the world have highlighted concerning online design techniques in recent years, labelling them “dark patterns” or “deceptive design”.</p>
<p>These designs often take advantage of a consumer’s recognised behavioural biases. For instance, “<a href="https://www.intereconomics.eu/contents/year/2018/number/1/article/using-behavioural-economics-for-rather-than-against-consumers-a-practitioners-perspective.html">default bias</a>” is consumers’ bias in favour of leaving default choices in place to avoid making complex decisions. Businesses take advantage of this by pre-ticking boxes in favour of the business’s preferences, despite consumer interests.</p>
<p>The <a href="https://www.accc.gov.au/system/files/DPB%20-%20DPSI%20-%20September%202021%20-%20Full%20Report%20-%2030%20September%202021%20%283%29_1.pdf">Australian Competition & Consumer Commission</a> has examined dark patterns, <a href="https://www.accc.gov.au/system/files/Digital%20platform%20services%20inquiry.pdf">defining</a> them as:</p>
<blockquote>
<p>The design of user interfaces intended to confuse users, make it
difficult for users to express their actual preferences, or manipulate
users into taking certain actions.</p>
</blockquote>
<p>The CPRC study conducted a randomised sweep of websites and apps to identify deceptive design features.</p>
<h2>Hidden costs: I bought what?</h2>
<p>The CPRC found several examples of online stores automatically adding items to consumers’ shopping carts, such as insurance or service plans.</p>
<p>For example, in one case a consumer buying a washing machine from a major online retailer for A$1,059 may or may not have noticed a single-line item, “3 Year Care Plan For Home - $160”, in the final steps of their purchase.</p>
<p>In other cases, customers were presented with offers of a product care plan at several points in the checkout process. The CPRC says:</p>
<blockquote>
<p>this design approach risks implying that […] a product care plan is required when most faults or problems are adequately covered by the consumer guarantees.</p>
</blockquote>
<p>For products sold in Australia, consumer guarantees about the quality of products are provided free of charge under the Australian Consumer Law.</p>
<h2>“Hotel California” or forced continuity</h2>
<p>Another concerningly common pattern is the relative difficulty consumers experience when trying to unsubscribe from a service, compared with how easy it is to sign up. CPRC labels this “Hotel California”, after the famous line in the Eagles’ song: “You can check out any time you like, but you can never leave”.</p>
<p>Examples from the CPRC’s findings included attempting to cancel an Amazon Music Unlimited subscription, which required a consumer to navigate more than five screens. Similarly, cancelling an eBay Plus subscription required four additional steps after selecting “cancel membership”.</p>
<p>The CPRC argues it should be as easy to opt-out of a service as it is to opt-in. While extra steps may not seem disastrous in isolation, they can especially disadvantage those already experiencing vulnerabilities, such as sudden illness, loss of a loved one, or low digital literacy.</p>
<p>This is sometimes combined with another manipulative design technique called “confirmshaming”. With this, consumers are asked to confirm a statement that makes them feel shamed or foolish, such as if they want to “lose their benefits” or if they “refuse to support” a good cause.</p>
<h2>Data grabs, colours and countdowns</h2>
<p>The CPRC also found the majority of consumers surveyed (89%) had experienced being asked for more personal information than was needed to access the relevant product or service. This was achieved in various ways, including by:</p>
<ul>
<li>pre-ticking the option to receive marketing communications</li>
<li>forcing the consumer to create a profile to browse or purchase a product, and</li>
<li>treating the mere use of a website as acceptance of data terms or conditions.</li>
</ul>
<p>Other examples of manipulative design included highlighting the business’s preference in a colour known to <a href="https://fil.forbrukerradet.no/wp-content/uploads/2018/06/2018-06-27-deceived-by-design-final.pdf">entice consumers to agree or act</a> (often green or blue), using a rapid countdown to create a false sense of urgency, and warning that a number of other customers are looking at a product. </p>
<p>Importantly, the research found consumers aged between 18 and 28 were more likely to suffer negative impacts from manipulative design, leading to substantial effects on their financial well-being and privacy. A significant proportion of consumers in this younger age bracket reported they:</p>
<ul>
<li>accidentally bought something (12%)</li>
<li>spent more than they intended (33%)</li>
<li>disclosed more personal information than they wanted to (27%)</li>
<li>created an online account when they didn’t want to (37%), and</li>
<li>accidentally signed up to something (39%).</li>
</ul>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/467413/original/file-20220607-18-5op3ky.jpeg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="Young man in a store peruses his phone, with a laptop open on a table in front of him" src="https://images.theconversation.com/files/467413/original/file-20220607-18-5op3ky.jpeg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/467413/original/file-20220607-18-5op3ky.jpeg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/467413/original/file-20220607-18-5op3ky.jpeg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/467413/original/file-20220607-18-5op3ky.jpeg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/467413/original/file-20220607-18-5op3ky.jpeg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/467413/original/file-20220607-18-5op3ky.jpeg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/467413/original/file-20220607-18-5op3ky.jpeg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">The research found young people in particular were vulnerable to manipulative techniques used by online businesses.</span>
<span class="attribution"><span class="source">Shutterstock</span></span>
</figcaption>
</figure>
<h2>We need to upgrade business practices and consumer law</h2>
<p>For businesses, using dark patterns to boost profit will likely lead to long-term losses in the form of consumer trust and loyalty. Almost one in three people surveyed said they stopped using a website or app (either temporarily or permanently) after experiencing dark patterns. </p>
<p>Misleading designs may also lead to penalties for businesses under the Australian Consumer Law. This happened last year when <a href="https://www.accc.gov.au/media-release/google-misled-consumers-about-the-collection-and-use-of-location-data">Google’s privacy settings</a> were found likely to mislead consumers.</p>
<p>However, other designs that have unfair consequences <a href="https://cprc.org.au/unfair-trading-practices-in-digital-market-evidence-and-regulatory-gaps-2/">might not fall foul of consumer laws</a>, if they don’t meet certain criteria set out by the law. </p>
<p>The CPRC’s research adds to evidence in support of the Australian Competition & Consumer Commission’s <a href="https://www.accc.gov.au/system/files/DPB%20-%20DPSI%20-%20September%202021%20-%20Full%20Report%20-%2030%20September%202021%20%283%29_1.pdf">existing recommendation</a> that our consumer law should include an unfair practices prohibition, similar to those in the European Union and the United Kingdom.</p>
<p class="fine-print"><em><span>Katharine Kemp receives funding from The Allens Hub for Technology, Law and Innovation. She is a Member of the Advisory Board of the Future of Finance Initiative in India, and the Australian Privacy Foundation.</span></em></p>
Younger people aged 18 to 28 were more likely to be negatively impacted by manipulative designs on websites and apps.
Katharine Kemp, Senior Lecturer, Faculty of Law & Justice, UNSW, UNSW Sydney. Licensed as Creative Commons – attribution, no derivatives.

Why online groups are parents’ best friends in getting ready for the school year
<figure><img src="https://images.theconversation.com/files/442169/original/file-20220124-25-66em69.jpg?ixlib=rb-1.1.0&rect=0%2C0%2C4889%2C3249&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">
</span> <span class="attribution"><span class="source">Shutterstock</span></span></figcaption></figure><p>If you’re a parent, chances are that, like me, you are frantically trying to get a head start on the new school year. In coping with the stress of COVID-19 lockdowns, restrictions, empty shelves in stores, working from home and minimal communications by schools over the holidays, we’ve turned to our virtual community of friends for help.</p>
<p>Let’s face it, most of us probably don’t have the time, or simply aren’t able, to pop in for a cuppa and a chat with one of the other parents. And there are pressing things to discuss, such as the school book list that has gone missing over the holidays, where to get the best deal on a headset with a microphone suitable for an eight-year-old, which brand of white sport shoes will last more than a week in the dusty schoolyard, or where to get the two boxes of facial tissues the teacher asked children to supply when there are none at the shops! </p>
<p>This is where our online friends can help.</p>
<h2>Our digital ‘tribes’</h2>
<p>People have formed tribes since the dawn of time. We are no different in this digital age. Members of a tribe typically share some similarities, which are like glue that holds the group together. Our online groups, or digital “tribes”, connect us based on a common interest, topic, location or school. They include:</p>
<ul>
<li><p>mum groups – for example, <a href="https://www.facebook.com/groups/australianschoolmums">Australian School Mums</a>, <a href="https://www.facebook.com/groups/761841587987059">School Mums Australia</a>, <a href="https://www.facebook.com/groups/2137302706589877">Organised Mums Australia</a></p></li>
<li><p>location-based groups – <a href="https://www.facebook.com/groups/525115141203861">6009 and 6010 Community Notice Board</a>, <a href="https://www.facebook.com/groups/community.mansfield/">Mansfield and District Community Noticeboard</a>, <a href="https://www.facebook.com/groups/brunswickheadscommunity/">Brunswick Heads Community</a></p></li>
<li><p>consumer groups – <a href="https://www.facebook.com/groups/158634060914178">Second Hand School Uniforms</a> and <a href="https://www.facebook.com/groups/sorschooluniforms">School Uniforms and Books Buy Swap and Sell</a></p></li>
<li><p>school-based groups.</p></li>
</ul>
<p>The pandemic has fuelled the rise in online tribes, as people have been restricted in their movement, locked down in their homes and limited in their access to family and friends. They now rely on their online connections for information, advice, help and friendship. </p>
<p>My team’s <a href="https://doi.org/10.1108/APJML-05-2020-0303">recent research</a> into online communities suggests these groups are rife with “prosumers”. Proactive consumers (“prosumers”) create and share online content, which makes them influential members of social networks. Our prosumer-friends are well informed, quick to respond and supportive when the school-work-life juggling act overwhelms us. </p>
<p>These are people like us. The digital tribe is much bigger than our real, physical community. We don’t have to know each member personally to be able to connect with them digitally. </p>
<p>And as our lives are so digitally integrated, we no longer differentiate between our real and virtual friends. Linda Thomas, who has two primary school-aged children, says:</p>
<blockquote>
<p>“As a full-time working mum, I’m often unable to keep in touch with my friends in person, which can be quite isolating, especially now during COVID. Facebook and WhatsApp groups have been so important to me in maintaining contact and community support by networking with parents similar to me.” </p>
</blockquote>
<figure class="align-center ">
<img alt="Mrs Linda Thomas" src="https://images.theconversation.com/files/442197/original/file-20220124-21-9zzmde.JPG?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/442197/original/file-20220124-21-9zzmde.JPG?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=650&fit=crop&dpr=1 600w, https://images.theconversation.com/files/442197/original/file-20220124-21-9zzmde.JPG?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=650&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/442197/original/file-20220124-21-9zzmde.JPG?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=650&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/442197/original/file-20220124-21-9zzmde.JPG?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=817&fit=crop&dpr=1 754w, https://images.theconversation.com/files/442197/original/file-20220124-21-9zzmde.JPG?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=817&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/442197/original/file-20220124-21-9zzmde.JPG?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=817&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">Linda Thomas says online networking with other parents has been very important to her as a mother of two children in primary school.</span>
<span class="attribution"><span class="source">Linda Thomas</span>, <span class="license">Author provided</span></span>
</figcaption>
</figure>
<h2>Online marketplaces help with the budget</h2>
<p>With the rise in online groups comes a rise in online consumer marketplaces. Facebook groups, such as <a href="https://www.facebook.com/SustainableSchoolShop/">Sustainable School Shop</a> and <a href="https://www.facebook.com/groups/947637151958609/about/">Perth Buy and Sell</a>, can help parents manage the return-to-school budget. </p>
<p>Items that are no longer needed or were never used, such as uniforms, books, electronics and stationery, are often given away, swapped or sold at a fraction of the original cost. An example is a Facebook local community group post by a mum giving away a spare laptop to someone who needs it for school.</p>
<p>In <a href="https://www.inderscience.com/info/ingeneral/forthcoming.php?jcode=ijasm">our research</a>, my colleagues and I found social media users’ exchanges have not been all negative during the pandemic – there has been a lot of positivity. The support, information and advice that social media users provide one another in these online groups have been invaluable for navigating purchasing at stores affected by supply disruptions.</p>
<p>Such positivity often reflects <a href="https://doi.org/10.1080/10696679.2017.1389246">online brand advocacy</a> (<a href="https://doi.org/10.1108/JPBM-10-2018-2090">OBA</a>), with online group members recommending brands they have tried to others. This sort of advocacy is authentic, as it is freely given and based on group members’ actual experience with the brand. It is also influential, as it is trusted more than brand-generated content, such as when a parent suggests trying Officeworks to find that headset for our eight-year-old.</p>
<p>Interestingly, targeted advertising is also rife online. When you interact with content on a school-related topic, be it <a href="https://www.facebook.com/ClarksAustralia">kids’ shoes</a>, <a href="https://www.facebook.com/BrightStarKids">school labels</a>, <a href="https://www.facebook.com/kumonanz/">tutoring</a> or kids’ sports, the platform’s algorithm will serve you ads that mirror your engagement. Such advertising is not necessarily a nuisance, as it can help us decide what to buy.</p>
<p>As parents, we are in this “get our child ready for school” mission together. Online groups provide support, information and friendships beyond what we have access to in real life during these trying times. </p>
<p>So, if you haven’t already, join a digital tribe! It might make the start of the new school year that little bit easier.</p>
<p class="fine-print"><em><span>Violetta Wilk does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>
People have formed tribes since the dawn of time and, in the digital age, online tribes are helping members deal with all the uncertainties and decisions involved in getting kids ready for school.
Violetta Wilk, Lecturer & Researcher in Digital Marketing, Edith Cowan University. Licensed as Creative Commons – attribution, no derivatives.

Fake news: bold visual warnings needed to stop people clicking – new research
<figure><img src="https://images.theconversation.com/files/378251/original/file-20210112-19-2eyj0z.jpg?ixlib=rb-1.1.0&rect=213%2C477%2C5196%2C2986&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">
</span> <span class="attribution"><a class="source" href="https://www.shutterstock.com/image-illustration/future-technology-smart-glass-red-touchscreen-201391274">Shutterstock/dencg</a></span></figcaption></figure><p>A senior doctor in charge of the NHS anti-disinformation campaign has said that language and cultural barriers could be causing people from ethnic minorities to reject the COVID-19 vaccine. Dr Harpreet Sood told the BBC it was “<a href="https://www.bbc.co.uk/news/uk-55666407">a big concern</a>” and officials were working hard to reach different groups “to correct so much fake news”. </p>
<p>Some of the disinformation is religiously targeted, with messages falsely claiming the vaccines contain animal products such as pork and beef, which goes against the religious beliefs of Muslims and Hindus respectively.</p>
<p>The issue of language is key because most warnings about misinformation online are in a written format. Take Facebook’s adoption of new alerts supported by independent <a href="https://about.fb.com/news/2019/12/helping-fact-checkers/">fact-checkers</a>, for example. These warn users about fake news and try to prevent them from sharing it unknowingly. It is certainly a step in the right direction. But text warnings can be easily misunderstood and ignored. And that’s the problem.</p>
<figure class="align-center ">
<img alt="A yellow and black 'stop danger' sign." src="https://images.theconversation.com/files/379066/original/file-20210115-19-1as6mnx.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/379066/original/file-20210115-19-1as6mnx.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=300&fit=crop&dpr=1 600w, https://images.theconversation.com/files/379066/original/file-20210115-19-1as6mnx.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=300&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/379066/original/file-20210115-19-1as6mnx.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=300&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/379066/original/file-20210115-19-1as6mnx.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=377&fit=crop&dpr=1 754w, https://images.theconversation.com/files/379066/original/file-20210115-19-1as6mnx.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=377&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/379066/original/file-20210115-19-1as6mnx.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=377&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">Warning signs in the real world use vivid imagery as well as text.</span>
<span class="attribution"><a class="source" href="https://www.shutterstock.com/image-vector/stop-danger-sign-397941181">Shutterstock/AleksandarLevai</a></span>
</figcaption>
</figure>
<p>Our research, which will be published later in the year, explores this issue and examines new, more visual ways to warn users about potential misinformation. For our study, we manipulated a standard Facebook page design to develop ten different visualisation effects. </p>
<p>These effects can be categorised as: colour-based or “block” techniques, where the text is essentially highlighted; blur effects, which play with and alter the focus of the text; and pictorial techniques, such as an image of shattered glass superimposed over the suspicious post. What mattered most to us was how imagery could be used to help people decide what is and isn’t misinformation. </p>
<figure class="align-center ">
<img alt="A shattered glass warning sign designed by researchers." src="https://images.theconversation.com/files/378365/original/file-20210112-19-1cdqtj3.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/378365/original/file-20210112-19-1cdqtj3.png?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=229&fit=crop&dpr=1 600w, https://images.theconversation.com/files/378365/original/file-20210112-19-1cdqtj3.png?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=229&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/378365/original/file-20210112-19-1cdqtj3.png?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=229&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/378365/original/file-20210112-19-1cdqtj3.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=288&fit=crop&dpr=1 754w, https://images.theconversation.com/files/378365/original/file-20210112-19-1cdqtj3.png?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=288&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/378365/original/file-20210112-19-1cdqtj3.png?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=288&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">A shattered glass warning sign designed by researchers.</span>
</figcaption>
</figure>
<p>In the physical world the design and use of warning signs is regulated by law and various <a href="https://www.iso.org/standard/72424.html">standards</a> must be followed. But online – and particularly in relation to misinformation – there are hardly any safety standards at all. So more attention needs to be given to the design of these warnings to support and motivate people to take more heed of the threat and its potential impact.</p>
<p>Our study with 550 adults found that people took more notice of warnings with assertive visuals highlighting the text, such as shattered glass or a block effect. </p>
<figure class="align-center ">
<img alt="A Facebook warning where text is covered in a block." src="https://images.theconversation.com/files/378362/original/file-20210112-21-1lpckhy.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/378362/original/file-20210112-21-1lpckhy.png?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=222&fit=crop&dpr=1 600w, https://images.theconversation.com/files/378362/original/file-20210112-21-1lpckhy.png?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=222&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/378362/original/file-20210112-21-1lpckhy.png?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=222&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/378362/original/file-20210112-21-1lpckhy.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=279&fit=crop&dpr=1 754w, https://images.theconversation.com/files/378362/original/file-20210112-21-1lpckhy.png?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=279&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/378362/original/file-20210112-21-1lpckhy.png?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=279&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">The block visual effect warning.</span>
</figcaption>
</figure>
<p>For many, the block effect clearly warned of impending danger, alarm or misfortune. When we asked which visualisation effect made people question the validity of what they were reading, the block visualisation was more effective for men while the blur visualisation worked better for women. </p>
<figure class="align-right ">
<img alt="" src="https://images.theconversation.com/files/378244/original/file-20210112-23-pzwzr6.JPG?ixlib=rb-1.1.0&q=45&auto=format&w=237&fit=clip" srcset="https://images.theconversation.com/files/378244/original/file-20210112-23-pzwzr6.JPG?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=309&fit=crop&dpr=1 600w, https://images.theconversation.com/files/378244/original/file-20210112-23-pzwzr6.JPG?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=309&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/378244/original/file-20210112-23-pzwzr6.JPG?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=309&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/378244/original/file-20210112-23-pzwzr6.JPG?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=388&fit=crop&dpr=1 754w, https://images.theconversation.com/files/378244/original/file-20210112-23-pzwzr6.JPG?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=388&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/378244/original/file-20210112-23-pzwzr6.JPG?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=388&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">The blur effect.</span>
<span class="attribution"><span class="license">Author provided</span></span>
</figcaption>
</figure>
<p>Interestingly, the blur effects raised participants’ suspicions and acted more like a caution, encouraging more careful and prudent behaviour on Facebook. </p>
<h2>Looking for clues</h2>
<p>People are still hugely reliant on clues and weaknesses in the presentation of online content as ways to detect misinformation. For example, many participants told us they watch for things like bad spelling and grammar and flaws in the interface (like unprofessional designs) as ways to identify if something is not quite right. Unfortunately, in the age of sophisticated and convincing misinformation attacks, this technique might not be as successful as it once was.</p>
<p>The participants in our study felt they needed more help to cope with misinformation and many mentioned the need for bold signs and warnings. They wanted help to recognise that something is not right and so not to believe it.</p>
<p>Misinformation is clearly not going away. In 2020 a massive outbreak of disinformation about COVID-19 <a href="https://www.oecd.org/coronavirus/policy-responses/combatting-covid-19-disinformation-on-online-platforms-d854ec48/">endangered lives</a> and hampered the recovery. So it is more crucial than ever that people are given the right visual tools to find important and reliable information online.</p>
<p>In the real world, there are bold signs that warn us of danger – whether it’s a red “no entry” sign on a road or an exclamation mark which shouts: keep clear. It’s time key players like Facebook, Google and Twitter considered how a simple tweak to their designs might just help people spot danger online too.</p><img src="https://counter.theconversation.com/content/153004/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>This research was funded by the Welsh Crucible, a consortium of Welsh Higher Education Institutions and the Higher Education Funding Council for Wales (HEFCW). I am very grateful to Dr James Kolasinski, Cubric, Cardiff University who was a collaborator on this research project and also to Bastian Bonkel who was a research assistant.</span></em></p>Prominent ‘danger’ signs are needed online to warn people about misinformation.Fiona Carroll, Senior Lecturer in Digital Media and Smart Technologies, Cardiff Metropolitan UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1386132020-05-28T03:29:24Z2020-05-28T03:29:24ZDon’t be phish food! Tips to avoid sharing your personal information online<figure><img src="https://images.theconversation.com/files/337870/original/file-20200527-141320-1a7ikl1.jpg?ixlib=rb-1.1.0&rect=44%2C14%2C4947%2C3308&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">
</span> <span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/anonymous-mask-hide-identity-on-computer-518835055">Shutterstock</a></span></figcaption></figure><p>Data is the <a href="https://www.wired.com/insights/2014/07/data-new-oil-digital-economy/">new oil</a>, and online platforms will siphon it off at any opportunity. Platforms increasingly demand our personal information in exchange for a service. </p>
<p>Avoiding online services altogether can limit your participation in society, so the advice to just opt out is easier said than done. </p>
<p>Here are some tricks you can use to avoid giving online platforms your personal information. Some ways to <a href="https://www.scamwatch.gov.au/get-help/protect-yourself-from-scams">limit your exposure</a> include using “alternative facts”, guest checkout options and burner emails.</p>
<h2>Alternative facts</h2>
<p>While “alternative facts” is a term coined by <a href="https://link.springer.com/chapter/10.1007/978-3-030-00813-0_4">White House press staff</a> to describe factual inaccuracies, in this context it refers to false details supplied in place of your personal information.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/hackers-are-now-targeting-councils-and-governments-threatening-to-leak-citizen-data-126190">Hackers are now targeting councils and governments, threatening to leak citizen data</a>
</strong>
</em>
</p>
<hr>
<p>This is an effective strategy to avoid giving out information online. Though platforms might insist you complete a user profile, they can do little to check whether that information is correct. For example, they can check whether a phone number contains the correct number of digits, or if an email address has a valid format, but that’s about it.</p>
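<p>To see just how shallow these checks are, here is a minimal sketch (the function names and validation rules are invented for this example, not taken from any real platform) of the kind of format-only validation a sign-up form typically performs:</p>

```python
import re

def looks_like_email(value: str) -> bool:
    # Checks only that the text has the *shape* of an address:
    # something@something.something. It cannot tell whether the
    # mailbox exists, let alone whether it belongs to you.
    return re.fullmatch(r"[^@\s]+@[^@\s]+\.[^@\s]+", value) is not None

def looks_like_phone(value: str, digits: int = 10) -> bool:
    # Checks only the digit count (10 digits assumed here),
    # nothing about ownership or whether the number is in service.
    return len(re.sub(r"\D", "", value)) == digits

print(looks_like_email("not.my.real.address@example.com"))  # passes
print(looks_like_phone("0400 000 000"))                     # passes
```

<p>Both invented values sail through, which is the point: a form like this learns nothing about whether the details you typed are real.</p>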
<p>When a website requests your date of birth, address, or name, consider how this information will be used and whether you’re prepared to hand it over. </p>
<p><div data-react-class="Tweet" data-react-props="{&quot;tweetId&quot;:&quot;1147173290181627904&quot;}"></div></p>
<p>There’s a distinction to be made between which platforms <a href="https://www.wired.com/2014/04/why-we-need-online-alter-egos-now-more-than-ever/">do or don’t warrant</a> using your real information. If it’s an <a href="https://www.avg.com/en/signal/website-safety">official</a> banking or educational institute website, then it’s important to be truthful.</p>
<p>But an online shopping, gaming, or movie review site shouldn’t require the same level of disclosure, and using an alternative identity could protect you.</p>
<h2>Secret shopper</h2>
<p>Online stores and services often encourage users to set up a profile, offering convenience in exchange for information. Stores value your profile data, as it can provide additional revenue through targeted advertising and emails. </p>
<p>But many websites also offer a guest checkout option to streamline the purchase process. After all, one thing as valuable as your data is your money. </p>
<p>So unless you’re making very frequent purchases from a site, use guest checkout and skip profile creation altogether. Even without disclosing extra details, you can still track your delivery, as tracking is provided by transport companies (and not the store). </p>
<p>Also consider your payment options. Many credit cards and payment merchants such as PayPal provide additional <a href="https://www.paypal.com/au/smarthelp/article/what-is-paypal-buyer-protection-faq1269">buyer protection</a>, adding another layer of separation between you and the website. </p>
<p>Avoid sharing your bank account details online, and instead use an intermediary such as PayPal, or a credit card, to provide additional protection. </p>
<p>If you use a credit card (even prepaid), then even if your details are compromised, any potential losses are limited to the card balance. Also, with credit cards this balance is effectively the bank’s funds, meaning you won’t be charged out of pocket for any fraudulent transactions.</p>
<h2>Burner emails</h2>
<p>An email address is usually the first item a site requests. </p>
<p>They also often require email verification when a profile is created, and that verification email is probably the only one you’ll ever want to receive from the site. So rather than handing over your main email address, consider a burner email.</p>
<p>This is a fully functional but disposable email address that remains active for about 10 minutes. You can get one for free from online services including <a href="https://maildrop.cc/">Maildrop</a>, <a href="https://www.guerrillamail.com/">Guerilla Mail</a> and <a href="https://10minutemail.com/">10 Minute Mail</a>.</p>
<p>Just make sure you don’t forget your password, as you won’t be able to recover it once your burner email becomes inactive.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/337853/original/file-20200527-141287-1igcflj.png?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/337853/original/file-20200527-141287-1igcflj.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/337853/original/file-20200527-141287-1igcflj.png?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=276&fit=crop&dpr=1 600w, https://images.theconversation.com/files/337853/original/file-20200527-141287-1igcflj.png?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=276&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/337853/original/file-20200527-141287-1igcflj.png?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=276&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/337853/original/file-20200527-141287-1igcflj.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=346&fit=crop&dpr=1 754w, https://images.theconversation.com/files/337853/original/file-20200527-141287-1igcflj.png?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=346&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/337853/original/file-20200527-141287-1igcflj.png?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=346&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">The 10 Minute Mail website offers free burner emails.</span>
<span class="attribution"><a class="source" href="https://10minutemail.com/">screenshot</a></span>
</figcaption>
</figure>
<h2>The risk of being honest</h2>
<p>Every online profile containing your personal information is another potential target for attackers. The more profiles you make, the greater the chance of your details being breached.</p>
<p>A breach in one place can lead to others. Names and emails alone are sufficient for email <a href="https://www.staysmartonline.gov.au/protect-yourself/recover-when-things-go-wrong/phishing">phishing attacks</a>. And a phish becomes more convincing (and more likely to succeed) when paired with other details such as your recent purchasing history. </p>
<p><a href="https://www.infosecurity-magazine.com/news/google-survey-finds-two-users/">Surveys indicate</a> about <a href="https://blog.avast.com/strengthening-passwords-on-world-password-day">half of us</a> recycle passwords across multiple sites. While this is convenient, it means if a breach at one site reveals your password, then attackers can hack into your other accounts.</p>
<p>In fact, even just an email address is a valuable piece of intelligence, as emails are used as a login for many sites, and a login (unlike a password) can sometimes be impossible to change. </p>
<p>Obtaining your email could open the door for targeted attacks on your other accounts, such as social media accounts.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/the-ugly-truth-tech-companies-are-tracking-and-misusing-our-data-and-theres-little-we-can-do-127444">The ugly truth: tech companies are tracking and misusing our data, and there's little we can do</a>
</strong>
</em>
</p>
<hr>
<p>In “password spraying” <a href="https://www.microsoft.com/security/blog/2020/04/23/protecting-organization-password-spray-attacks/">attacks</a>, cybercriminals test common passwords against many emails and usernames in the hope of landing a correct combination.</p>
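<p>The mechanics can be shown with a toy simulation (every account and password below is invented for illustration). Instead of hammering one account with many guesses, which would trip a lockout, the attacker tries one common password across many accounts at a time:</p>

```python
# Toy illustration of why spraying evades per-account lockout rules.
# All accounts and passwords here are fictional examples.
accounts = {
    "alice@example.com": "Winter2020!",
    "bob@example.com": "hunter2",
    "carol@example.com": "Password1",
}
common_passwords = ["Password1", "Winter2020!", "letmein"]

compromised = []
for guess in common_passwords:           # one guess per account per pass,
    for email, real_password in accounts.items():
        if real_password == guess:       # so each account sees only a few
            compromised.append(email)    # attempts, under lockout limits

print(compromised)  # accounts whose password was on the common list
```

<p>In this sketch every account is tried only three times, yet two fall, which is why unique passwords matter more than lockout rules alone.</p>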
<p>The bottom line is, the safest information is the information you never release. And practising alternatives to disclosing your true details could go a long way to limiting your data being used against you.</p><img src="https://counter.theconversation.com/content/138613/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Nik Thompson does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>While some online services such as banking do warrant using your true information, many sites shouldn’t require the same level of disclosure. Here’s how to protect yourself in such cases.Nik Thompson, Senior Lecturer, Curtin UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1274442019-11-26T04:33:36Z2019-11-26T04:33:36ZThe ugly truth: tech companies are tracking and misusing our data, and there’s little we can do<figure><img src="https://images.theconversation.com/files/303641/original/file-20191126-84268-9nsdjk.jpg?ixlib=rb-1.1.0&rect=66%2C5%2C3627%2C3074&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">While leaks and whistleblowers continue to be valuable tools in the fight for data privacy, we can't rely on them solely to keep big tech companies in check.</span> <span class="attribution"><a class="source" href="https://www.shutterstock.com/image-illustration/computer-keyboard-multiple-social-media-images-114119137?src=4760a9b5-01c2-4efd-b8ff-7d5518288498-1-2">SHUTTERSTOCK</a></span></figcaption></figure><p>As survey results pile up, it’s becoming clear Australians are sceptical about how their online data is tracked and used. But one question worth asking is: are our fears founded?</p>
<p>The short answer is: yes.</p>
<p>In <a href="https://privacyaustralia.net/online-privacy-survey-results/">a survey</a> of 2,000 people completed last year, Privacy Australia found 57.9% of participants weren’t confident companies would take adequate measures to protect their data. </p>
<p>Similar scepticism was noted in results from the 2017 <a href="https://www.oaic.gov.au/assets/engage-with-us/research/acaps-2017/acaps-2017-report.pdf">Australian Community Attitudes to Privacy Survey</a> of 1,800 people, which found:</p>
<p>• 79% of participants felt uncomfortable with targeted advertising based on their online activities</p>
<p>• 83% were uncomfortable with social networking companies keeping their information</p>
<p>• 66% believed it was standard practice for mobile apps to collect user information and</p>
<p>• 74% believed it was standard practice for websites to collect user information.</p>
<p>Also in 2017, the <a href="https://ses.library.usyd.edu.au/bitstream/handle/2123/17587/USYDDigitalRightsAustraliareport.pdf">Digital Rights in Australia</a> report, prepared by the University of Sydney’s <a href="http://digitalrightsusyd.net/">Digital Rights and Governance Project</a>, revealed 62% of 1,600 participants felt they weren’t in control of their online privacy. About 47% were also concerned the government could violate their privacy. </p>
<h2>The ugly truth</h2>
<p>Lately, a common pattern has emerged every time malpractice is exposed. </p>
<p>The company involved will provide an “opt-out” mechanism for users, or a dashboard to see what personal data is being collected (for example, <a href="https://myaccount.google.com/intro/privacycheckup">Google Privacy Checkup</a>), along with an apology.</p>
<p>If we opt out, does this mean they stop collecting our data? Would they reveal collected data to us? And if we requested to have our data deleted, would they do so? </p>
<p>To be blunt, we don’t know. And as end users there’s not much we can do about it, anyway. </p>
<p>When it comes to personal data, it’s extremely difficult to identify unlawful collections among legitimate collections, because multiple factors need to be considered, including the context in which the data is collected, the methodology used to obtain user consent, and country-specific laws.</p>
<p>Also, it’s almost impossible to know if user data is being misused within company bounds or in business-to-business interactions.</p>
<p>Despite ongoing public outcry to protect online privacy, last year we witnessed the <a href="https://www.wired.com/amp-stories/cambridge-analytica-explainer/">Cambridge Analytica scandal</a>, in which a third-party company was able to gather the personal information of millions of Facebook users and use it in political campaigns.</p>
<p>Earlier this year, both <a href="https://www.bloomberg.com/news/articles/2019-04-10/is-anyone-listening-to-you-on-alexa-a-global-team-reviews-audio">Amazon</a> and <a href="https://www.theguardian.com/technology/2019/aug/29/apple-apologises-listen-siri-recordings">Apple</a> were reported to be using human annotators to listen to personal conversations, recorded via their respective digital assistants Alexa and Siri. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/what-if-the-companies-that-profit-from-your-data-had-to-pay-you-100380">What if the companies that profit from your data had to pay you?</a>
</strong>
</em>
</p>
<hr>
<p>More recently, <a href="https://www.nytimes.com/2019/11/04/business/secret-consumer-score-access.html">a New York Times article</a> exposed how much fine-grained data is acquired and maintained by relatively unknown consumer scoring companies. In one case, a third-party company knew the writer <a href="https://www.nytimes.com/by/kashmir-hill">Kashmir Hill</a> used her iPhone to order chicken tikka masala, vegetable samosas, and garlic naan on a Saturday night in April, three years ago.</p>
<p>At this rate, without any action, scepticism towards online privacy will only increase.</p>
<h2>History is a teacher</h2>
<p>Early this year, we witnessed the <a href="https://www.gizmodo.com.au/2019/02/apple-is-removing-do-not-track-from-safari/">bitter end of the Do-Not-Track initiative</a>. This was proposed as a privacy feature where requests made by an internet browser contained a flag, asking remote web servers to not track users. However, there was no legal framework to force web server compliance, so many web servers ended up discarding this flag.</p>
<p>Many companies have made it too difficult to opt out of data collection, or to request the deletion of all data related to an individual. </p>
<p>For example, as a solution to the backlash on human voice command annotation, Apple <a href="https://www.theguardian.com/technology/2019/oct/30/apple-lets-users-opt-out-of-having-siri-conversations-recorded">provided an opt-out mechanism</a>. However, doing this for an Apple device is not straightforward, and the option isn’t prominent in the device settings. </p>
<p>Also, it’s clear tech companies don’t want <a href="https://www.securityweek.com/youre-opted-default-know-when-and-where-opt-out">opting out of tracking</a> to be users’ default setting. </p>
<p>It’s worth noting that since Australia doesn’t have its own social media or internet giants, much of the country’s privacy debate is focused on <a href="https://www.smh.com.au/technology/australians-are-rightly-questioning-my-health-record-says-privacy-commissioner-20180730-p4zui3.html">government legislation</a>.</p>
<h2>Are regulatory safeguards useful?</h2>
<p>But there is some hope left. Some recent events have prompted tech companies to think twice about the undeclared collection of user data.</p>
<p>For example, <a href="https://www.smh.com.au/world/north-america/facebook-fined-us5-billion-in-cambridge-analytica-privacy-probe-20190713-p526xb.html">a US$5 billion fine is on the table for Facebook</a>, for its role in the Cambridge Analytica incident, and related practices of sharing user data with third parties. The exposure of this event has forced Facebook to <a href="https://www.facebook.com/notes/mark-zuckerberg/a-privacy-focused-vision-for-social-networking/10156700570096634/">take measures</a> to improve its privacy controls and be forthcoming with users. </p>
<p>Similarly, <a href="https://www.bbc.com/news/technology-46944696">Google was fined €50 million under the General Data Protection Regulation</a> by French data regulator CNIL, for lack of transparency and consent in user-targeted ads. </p>
<p>Like Facebook, Google responded by taking measures to improve the privacy of users, by <a href="https://blog.google/products/gmail/g-suite-gains-traction-in-the-enterprise-g-suites-gmail-and-consumer-gmail-to-more-closely-align/">stopping reading our e-mails to provide targeted ads</a>, <a href="https://www.theverge.com/2017/9/8/16276000/google-dashboard-my-account-privacy-security-redesign">enhancing its privacy control dashboard</a>, and <a href="https://www.washingtonpost.com/technology/2019/05/07/google-vows-greater-user-privacy-after-decades-data-collection/">revealing its vision to keep user data in devices rather than in the cloud</a>. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/imagine-what-we-could-learn-if-we-put-a-tracker-on-everyone-and-everything-50123">Imagine what we could learn if we put a tracker on everyone and everything</a>
</strong>
</em>
</p>
<hr>
<h2>No time to be complacent</h2>
<p>While it’s clear current regulatory safeguards are having a positive effect on online privacy, there is ongoing debate about whether they are sufficient.</p>
<p><a href="https://thenextweb.com/contributors/2018/08/05/gdpr-privacy-eroding-bad/">Some have</a> argued about possible loopholes in the European Union’s General Data Protection Regulation, and the fact that <a href="https://medium.com/mydata/five-loopholes-in-the-gdpr-367443c4248b">some definitions of legitimate use of personal data</a> leave room for interpretation. </p>
<p>Tech giants are multiple steps ahead of regulators, and are in a position to exploit any grey areas in legislation they can find. </p>
<p>We can’t rely on accidental leaks or whistleblowers to hold them accountable.</p>
<p>Respect for user privacy and ethical usage of personal data must come intrinsically from within these companies themselves. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/if-youve-given-your-dna-to-a-dna-database-us-police-may-now-have-access-to-it-126680">If you've given your DNA to a DNA database, US police may now have access to it</a>
</strong>
</em>
</p>
<hr>
<img src="https://counter.theconversation.com/content/127444/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Suranga Seneviratne does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>Most of us are probably having our data tracked in some form. And while there are regulatory safeguards in place to protect user privacy, it’s hard to say whether these are enough.Suranga Seneviratne, Lecturer - Security, University of SydneyLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1260272019-10-31T18:53:28Z2019-10-31T18:53:28ZWould you notice if your calculator was lying to you? The research says probably not<figure><img src="https://images.theconversation.com/files/299628/original/file-20191031-187903-hakehj.jpg?ixlib=rb-1.1.0&rect=0%2C0%2C2700%2C1782&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">As our worlds become increasingly digitised, we're starting to rely more on machines and devices for everyday tasks. But in an age when even pacemakers can be hacked, how do we know when and whom to trust?</span> <span class="attribution"><span class="source">SHUTTERSTOCK</span></span></figcaption></figure><p>These days, it’s hard to know whom to <a href="https://www.routledge.com/Truth-Lies-and-Trust-on-the-Internet-1st-Edition/Whitty-Joinson/p/book/9780203938942">trust</a> online, and how to discern genuine content from fakery.</p>
<p>Some degree of trust in our devices is necessary, if we’re to embrace the growing number of technologies that could potentially enhance our lives. How many of us, however, bother trying to confirm the truth, and how many take their online communications at face value?</p>
<p>In a <a href="https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0223736">study published this week</a>, Texas Tech University researchers tested how university students reacted when unknowingly given incorrect calculator outputs. Some students were presented with an onscreen calculator that was programmed to give the wrong answers, whereas a second group was given a properly functioning calculator. </p>
<p>Participants could also opt not to use the calculator, but most chose to use it, even if they had good numeracy skills. Researchers found most participants raised few or no suspicions when presented with wrong answers, until the answers were quite wrong. In addition, those with higher numeracy skills were, unsurprisingly, more suspicious of incorrect answers than others.</p>
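<p>A rigged calculator of this kind can be sketched in a few lines. To be clear, the error rate and the size of the distortion below are invented for illustration, not the study’s actual parameters:</p>

```python
import random

def lying_calculator(a: float, b: float,
                     error_rate: float = 0.2,
                     distortion: float = 0.05) -> float:
    # Usually returns the true sum, but sometimes nudges the answer
    # away from it. Small nudges (5% here) are the kind participants
    # in the study tended not to notice.
    true_answer = a + b
    if random.random() < error_rate:
        return round(true_answer * (1 + distortion), 2)
    return true_answer
```

<p>Raising <code>distortion</code> makes the lie obvious; keeping it small is what lets the wrong answers slip past users who trust the tool.</p>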
<h2>Do the math</h2>
<p>To understand these results, we need to acknowledge calculators were created to make our lives easier, by reducing our mental burden. Also, there were no real consequences for participants who did not realise they were being duped. </p>
<p>Perhaps if they were completing their income tax forms, or applying for a loan, they may have been more thorough in checking their results. More importantly, there’s no reason an individual ought to feel suspicious about a calculator, so the participants were acting in accord with what we might expect.</p>
<p>People can’t spend their time deciding if they should trust every tool they use. This would consume too much time and energy. This study, however, was carried out with university students in a lab. What are the consequences of this in the real world, when much more is at stake? </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/lie-detectors-and-the-lying-liars-who-use-them-28167">Lie detectors and the lying liars who use them</a>
</strong>
</em>
</p>
<hr>
<p>The Internet and digital technologies have changed our lives for the better in so many ways. We can access information at super speeds, communicate regularly (and in fun ways) with our friends and family, and carry out mundane tasks such as banking and shopping with ease. </p>
<p>However, new technologies pose new challenges. Is the person you’re talking to online a real person or a <a href="https://www.aaai.org/ocs/index.php/ICWSM/ICWSM17/paper/viewPaper/15587">bot</a>? Are you developing a real romantic relationship on your dating app, or being conned in a <a href="https://academic.oup.com/bjc/article-abstract/53/4/665/396759">romance scam</a>? </p>
<p>To what extent do people blindly accept their technologies are safe, and that everyone online is who they claim to be?</p>
<h2>Hackers are often phishing for data</h2>
<p>The <a href="https://www.forbes.com/sites/jacobmorgan/2014/05/13/simple-explanation-internet-things-that-anyone-can-understand/#5e48e2931d09">Internet of Things</a> is already changing our lives in and outside the home. At home, there’s the constant threat that we’re being listened to and watched through our devices. In August, Apple publicly apologised for allowing contractors to <a href="https://www.theguardian.com/technology/2019/aug/29/apple-apologises-listen-siri-recordings">listen to voice recordings</a> of Siri users. </p>
<p>Similarly, as autonomous vehicles become the norm, they too <a href="https://ieeexplore.ieee.org/abstract/document/8038391">pose ethical concerns</a>. Not only do we need to be worried about the programmed moral choices on whom to harm if an accident becomes inevitable, but also whether criminals can hack into these vehicles and alter programmed decisions. </p>
<p>Also, there have been reports of benign-looking USB cables being rigged with small WiFi-enabled implants which, when plugged into a computer, let a nearby hacker run commands. We even need to think about the safety of health devices, such as pacemakers, which can <a href="https://www.wired.com/story/pacemaker-hack-malware-black-hat/">now be hacked</a>.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/with-usb-c-even-plugging-in-can-set-you-up-to-be-hacked-102296">With USB-C, even plugging in can set you up to be hacked</a>
</strong>
</em>
</p>
<hr>
<p>A major problem organisations and governments are trying to solve is stopping individuals from falling victim to phishing. A phish is an email or text which is made to appear authentic and trustworthy, but isn’t. </p>
<p>Cybercriminals use them to trick users into revealing secret information, such as bank account details, or clicking on a link that downloads malicious software onto their computer. This software can then steal passwords and other important personal data. </p>
<p>Clicking on a phishing message can have long-lasting detrimental effects on an individual or an organisation, as was the case with an Australian National University <a href="https://www.anu.edu.au/news/all-news/anu-releases-detailed-account-of-data-breach">data breach</a> last year.</p>
<p>We’re yet to effectively train people to recognise a phish. This is partly because phishing messages are often realistic and difficult to identify. However, it’s also because, as illustrated in the Texas Tech University study, people tend to place undue trust in technology and devices, without pausing to check the facts.</p>
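<p>The red flags commonly cited in security guidance – a generic greeting, artificial urgency, suspicious links – can be sketched as a toy rule-based checker. The phrase lists and the raw-IP-address pattern below are illustrative assumptions only, not a real filter, which would rely on far richer signals:</p>

```python
import re

# Illustrative phrase lists (assumptions, not drawn from any real spam filter).
GENERIC_GREETINGS = ("dear sir/madam", "dear customer", "valued customer")
URGENT_PHRASES = ("act now", "account suspended", "verify immediately")

def phishing_red_flags(message: str) -> list:
    """Return a list of heuristic warning signs found in a message.

    A toy illustration of the cues described in the text; real detection
    also uses sender reputation, link analysis and machine learning.
    """
    text = message.lower()
    flags = []
    if any(g in text for g in GENERIC_GREETINGS):
        flags.append("generic greeting instead of your name")
    if any(p in text for p in URGENT_PHRASES):
        flags.append("artificial urgency")
    # Links pointing to a bare IP address are a classic phishing tell.
    if re.search(r"https?://\d{1,3}(\.\d{1,3}){3}", text):
        flags.append("link points to a raw IP address")
    return flags

msg = "Dear customer, your account suspended! Verify at http://192.0.2.1/login"
print(phishing_red_flags(msg))
```

<p>Even a checklist this crude flags all three cues in the sample message – the point being that the signs are mechanical and learnable, which is why training can work.</p>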
<h2>Knowledge is power, and safety</h2>
<p>It’s incredibly difficult to strike the right balance between scepticism and trust in the digital age. Individuals need to function in the world, and the mental effort required to constantly check all information is perhaps more than we can expect of people. </p>
<p>That said, one positive takeaway from the calculator study is that training is critical if we want to improve people’s cybersecurity practices. This includes training individuals on what to do as online users, how to do it, and why it’s important. </p>
<p>As with all learning, this needs to be repetitive and the individual needs to be motivated to learn. Without effective learning methods, end-users, organisations and nation states will remain vulnerable to cybercriminals.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/payid-data-breaches-show-australias-banks-need-to-be-more-vigilant-to-hacking-123529">PayID data breaches show Australia's banks need to be more vigilant to hacking</a>
</strong>
</em>
</p>
<hr>
<img src="https://counter.theconversation.com/content/126027/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Monica Whitty does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>Research shows we’re pretty gullible as it is. And our increasing reliance on machines for completing everyday tasks makes us all-the-more vulnerable to being exploited.Monica Whitty, Chair in Human Factors in Cyber Security, The University of MelbourneLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1109992019-02-06T13:22:11Z2019-02-06T13:22:11ZDeepfake videos could destroy trust in society – here’s how to restore it<figure><img src="https://images.theconversation.com/files/257484/original/file-20190206-174880-42oqjz.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">
</span> <span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/woman-face-recognition-biometric-verification-concept-1028052145?src=j2YHKqXCPj1yiGwR-nCHbw-3-60">Andriano.cz/Shutterstock</a></span></figcaption></figure><p>It has the potential to ruin relationships, reputations and our online reality. “Deepfake” artificial intelligence technology promises to create doctored videos so realistic that they’re almost impossible to tell from the real thing. So far it has mostly been used to create altered pornographic clips <a href="https://www.washingtonpost.com/technology/2018/12/31/scarlett-johansson-fake-ai-generated-sex-videos-nothing-can-stop-someone-cutting-pasting-my-image/?utm_term=.4c76836a527a">featuring celebrity women’s faces</a> but once the techniques are perfected, deepfake revenge porn purporting to show people cheating on their partners won’t be far behind.</p>
<p>But more than becoming a nasty tool for stalkers and harassers, deepfakes threaten to undermine trust in political institutions and society as a whole. The White House recently justified temporarily banning a reporter from its press conferences using <a href="https://www.washingtonpost.com/nation/2018/11/12/kellyanne-conway-acosta-video-thats-not-altered-thats-sped-up-they-do-it-all-time-sports/?utm_term=.d26b27571b3b">reportedly sped up genuine footage</a> of an incident involving the journalist. Imagine the implications of seeing ultra-realistic but artificial footage of government leaders planning assassinations, CEOs colluding with foreign agents or a renowned philanthropist abusing children.</p>
<p>So-called fake news has already increased many people’s scepticism towards politicians, journalists and other public figures. It is becoming so easy to create entirely fictional scenarios that we can no longer trust any video footage at face value. This threatens our political, legal and media systems, not to mention our personal relationships. We will need to create new forms of consensus on which to base our social reality. New ways of checking and distributing power – some political, some technological – could help us achieve this.</p>
<h2>Fake scandals, fake politicians</h2>
<p>Deepfakes are scary because they allow anyone’s image to be co-opted, and call into question our ability to trust what we see. One obvious use of deepfakes would be to falsely implicate people in scandals. Even if the incriminating footage is subsequently proven to be fake, the damage to the victim’s reputation may be impossible to repair. And politicians could tweak old footage of themselves to make it appear as if they had always supported something that had recently become popular, updating their positions in real time.</p>
<p>There could even be public figures who are entirely imaginary, <a href="https://link.springer.com/article/10.1007/s13347-018-0325-3">original but not authentic</a>. Meanwhile, video footage could become useless as evidence in court. Broadcast news could be reduced to people debating whether clips were authentic or not, using ever more complex AI to try to detect deepfakes.</p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/cQ54GDm1eL0?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
</figure>
<p>But the arms race that already exists between fake content creators and those detecting or debunking disinformation (such as Facebook’s planned <a href="https://www.theguardian.com/technology/2019/jan/28/facebook-war-room-fight-fake-news-nick-clegg-eu-elections-dublin-operations-centre">fake news “war room”</a>) hides a deeper issue. The mere existence of deepfakes undermines confidence and trust, just as the possibility that an election was hacked brings the validity of the result into question.</p>
<p>While some people may be taken in by deepfakes, that is not the real problem. What is at stake is the underlying social structure in which we all agree that some form of truth exists, and the social realities that are based on this trust. It is not a matter of the end of truth, but the end of the belief in truth – a post-trust society. In the wake of massive disinformation, even honest public figures will be easily ignored or discredited. The traditional organisations that have supported and enabled consensus – government, the press – will no longer be fit for purpose.</p>
<h2>Blockchain trust</h2>
<p><a href="https://www.wired.co.uk/article/deepfake-app-ai-porn-fake-reddit">New laws</a> to regulate the use of deepfakes will be important for people who have damaging videos made of them. But policy and law alone will not save our systems of governance. We will need to develop new forms of consensus, new ways to agree on social situations based on alternative forms of trust.</p>
<p>One approach will be to decentralise trust, so that we no longer need a few institutions to guarantee whether information is genuine and can instead rely on multiple people or organisations with good reputations. One way to do this could be to <a href="https://theconversation.com/blockchain-could-challenge-the-accepted-ways-we-shape-and-manage-society-53647">use blockchain</a>, the technology that powers Bitcoin and other cryptocurrencies.</p>
<p>Blockchain works by creating a public ledger stored on multiple computers around the world at once and made tamper-proof by cryptography. Its algorithms enable the computers to agree on the validity of any changes to the ledger, making it much harder to record false information. In this way, trust is distributed between all the computers who can scrutinise each other, increasing accountability.</p>
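<p>The tamper-evidence idea described above can be sketched as a toy hash chain in Python: each block records the hash of the block before it, so altering any earlier entry invalidates every later link. This is a minimal illustration only (no networking, consensus algorithm or cryptocurrency machinery):</p>

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    """Hash a block's full contents, including the previous block's hash."""
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def add_block(chain: list, data: str) -> None:
    """Append a block that commits to the hash of the one before it."""
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"data": data, "prev_hash": prev})

def chain_is_valid(chain: list) -> bool:
    """Recompute every link; any edit to an earlier block breaks the chain."""
    for i in range(1, len(chain)):
        if chain[i]["prev_hash"] != block_hash(chain[i - 1]):
            return False
    return True

ledger: list = []
add_block(ledger, "video X published by outlet A")
add_block(ledger, "video X verified by outlet B")
print(chain_is_valid(ledger))   # True

ledger[0]["data"] = "video X published by outlet C"   # tamper with history
print(chain_is_valid(ledger))   # False
```

<p>A real blockchain additionally replicates this ledger across many computers, which must agree on every change – it is that replication, not the hashing alone, that distributes trust.</p>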
<h2>More democratic society</h2>
<p>We can also look to more democratic forms of government and journalism. For example, <a href="https://demtech.chathamhouse.org/liquid-democracy-could-help-answer-europes-legitimacy-crisis/">liquid democracy</a> allows voters to vote directly on each issue or temporarily assign their votes to delegates in a more flexible and accountable way than handing over full control to one party for years. This would allow the public to look to experts to make decisions for them where necessary but swiftly vote out politicians who disregarded their views or acted dishonestly, increasing trust and legitimacy in the political system.</p>
<p>In the press, we could move towards more <a href="https://www.theguardian.com/commentisfree/2016/apr/18/future-of-journalism-collaboration-panama-papers">collaborative and democratised news reporting</a>. Traditional journalists could use the positive aspects of social media to gather information from a more diverse range of sources. These contributors could then discuss and help scrutinise the story to build a consensus, improving the media’s reputation.</p>
<p>The problem with any system that relies on the reputation of key individuals to build trust is how to prevent that reputation from being misused or fraudulently damaged. Checks such as Twitter’s “blue tick” account verification for public figures can help, but better legal and technical protections are also needed: more protected rights to privacy, <a href="https://webrootsdemocracy.org/kinder-gentler-politics/">better responses</a> to antisocial behaviour online, and better privacy-enhancing technologies built in by design. </p>
<p>The potential ramifications of deepfakes should act as a call to action in redesigning systems of trust to be more open, more decentralised and more collective. And now is the time to start thinking about a different future for society.</p><img src="https://counter.theconversation.com/content/110999/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Garfield Benjamin does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>More democratic forms of politics, journalism and fact-checking will be needed when we can no longer trust any video footage.Garfield Benjamin, Postdoctoral Researcher, School of Media Arts and Technology, Solent UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/952622018-07-23T10:24:20Z2018-07-23T10:24:20ZAs emerging economies bring their citizens online, global trust in internet media is changing<p>Digital technology was dreamed of as the ultimate connector and leveler, the ideal <a href="https://www.nytimes.com/2005/04/03/magazine/its-a-flat-world-after-all.html">destroyer of borders and boundaries</a>. The <a href="https://digiday.com/marketing/media-trends-defining-world-cup-5-charts/">digital community that assembled itself</a> around this summer’s FIFA World Cup shows one example of a true global village, in which people share the same obsessions on the digital planet. That’s a significant contrast to the online communities leaning toward <a href="http://doi.org/10.1126/science.358.6361.317-g">nativism and anti-globalization</a>.</p>
<p>But that’s not the only split in what was imagined as a link to a true global community. In our multi-year study of digital evolution around the world, “<a href="https://sites.tufts.edu/digitalplanet/">The Digital Planet</a>,” my collaborators and I identified <a href="https://hbr.org/2017/07/60-countries-digital-competitiveness-indexed">divisions among internet users</a> in different countries – largely mirroring differences in economic development. More digitally evolved nations, like those in Western Europe, North America, Japan, Singapore or New Zealand, are in what we called the “Digital North.” Russia, China, India and others in South East Asia, Africa and the Middle East or Latin America are in what we called the “Digital South.” We found that the Digital South, broadly speaking, not only has greater momentum, in terms of embrace of digital technologies, but also greater trust in these technologies.</p>
<p>Some recent studies of users point to three emerging trends driving a deeper wedge between the North and the South.</p>
<h2>Privacy concerns are rising</h2>
<p>Around the world, <a href="http://www.pewresearch.org/fact-tank/2018/03/27/americans-complicated-feelings-about-social-media-in-an-era-of-privacy-concerns/">people are more worried about privacy</a> – which isn’t surprising, given the stream of revelations relating to <a href="https://www.nbcnews.com/tech/social-media/timeline-facebook-s-privacy-issues-its-responses-n859651">Facebook users’ data</a> and commercial security breaches. More than half of global internet users are <a href="https://www.cigionline.org/internet-survey-2018">more concerned about their online privacy</a> this year than they were a year ago – including threats from cybercriminals, governments and social media companies.</p>
<p>Yet <a href="https://www.cigionline.org/internet-survey-2018">privacy concerns climbed much higher in Digital South</a> countries than they did in the Digital North. For example, 58 percent of internet users in Brazil, Russia, India, China and South Africa were more concerned in 2018 than a year earlier; in France, Germany, Italy, Japan, the United Kingdom, the United States and Canada, only 43 percent were more concerned in 2018 than they had been in 2017. Part of this is because the Digital North already had a higher level of concern about privacy – the South is clearly catching up.</p>
<h2>Trading data for services</h2>
<p>Perhaps related, a recent Asia-focused survey found a clear divergence on the issue of <a href="http://www.experian.com.hk/wp-content/uploads/2018/04/Digital-Consumer-View-2018.pdf">users giving up their personal data in exchange for convenience</a> and free digital services. People in China, India, Vietnam, Indonesia and Thailand – all part of the Digital South – tend to be more willing to let companies collect and aggregate their data as part of using online services. In the Digital Northern countries of Japan, New Zealand, Singapore and Australia, however, people are less willing to make that trade-off.</p>
<p>In fact, 94 percent of Chinese customers said they would agree to let businesses share or reuse their personal data. But only 60 percent of New Zealanders agreed. Of course, many of those who say they wouldn’t share their data in exchange for online services are doing so – just less willingly.</p>
<h2>Shifting attitudes toward news on social media</h2>
<p>Beyond concerns about their own data are worries about truth and accuracy in online information. People in wealthy countries tend to get <a href="http://www.pewglobal.org/2018/01/11/people-in-poorer-countries-just-as-likely-to-use-social-media-for-news-as-those-in-wealthier-countries/pg_2018-01-11_global-media-habits_4-00/">more of their news online</a> more frequently than people in poorer nations. And <a href="http://www.digitalnewsreport.org/survey/2018/overview-key-findings-2018/">more than half of all people agree or strongly agree</a> that they are concerned about what is real and what is fake online.</p>
<p><a href="http://www.pewglobal.org/2018/01/11/people-in-poorer-countries-just-as-likely-to-use-social-media-for-news-as-those-in-wealthier-countries/pg_2018-01-11_global-media-habits_3-01/"><img width="417" height="845" src="http://assets.pewresearch.org/wp-content/uploads/sites/2/2018/01/09111608/PG_2018.01.11_Global-Media-Habits_3-01.png" class="attachment-large size-large" alt="Chart showing that people in emerging, developing economies are as likely to use social media for news as those in advanced ones"></a></p><a href="http://www.pewglobal.org/2018/01/11/people-in-poorer-countries-just-as-likely-to-use-social-media-for-news-as-those-in-wealthier-countries/pg_2018-01-11_global-media-habits_3-01/">
</a><p>Yet <a href="http://www.digitalnewsreport.org/survey/2018/overview-key-findings-2018/">only 23 percent of those surveyed</a> say they trust news they get from social media. And people in both the Digital North and the Digital South are <a href="http://www.pewglobal.org/2018/01/11/people-in-poorer-countries-just-as-likely-to-use-social-media-for-news-as-those-in-wealthier-countries/">equally likely to get news from social media</a>. That’s partly a result of <a href="http://www.pewglobal.org/2018/01/11/people-in-poorer-countries-just-as-likely-to-use-social-media-for-news-as-those-in-wealthier-countries/">decreasing social media use for news</a> in the Digital North, as well <a href="http://www.pewglobal.org/2018/01/11/people-in-poorer-countries-just-as-likely-to-use-social-media-for-news-as-those-in-wealthier-countries/">as a rise in news on social platforms like WhatsApp</a> and Instagram in many parts of the developing world.</p>
<p>The emergence of these new platforms is creating a host of new problems – which, in many ways, are more devastating than the problems created in the Digital North. For example, in India, rumors carried over WhatsApp have given rise to a <a href="https://indianexpress.com/article/opinion/columns/a-lynching-in-digital-south-whatsapp-rumours-facebook-5262350/">spate of lynchings</a>. Users in the Digital South are new to such media and have not yet had the opportunity to make distinctions between what is real and what is false. Because <a href="https://www.bbc.com/news/world-asia-india-44709103">WhatsApp messages are encrypted</a>, it is harder to track and control how these malicious forms of fake news spread. That comes with a real human cost: At least 25 people <a href="https://www.dw.com/en/india-engineer-latest-victim-of-mob-lynchings-fueled-by-whatsapp-rumors/a-44679902">have reportedly been killed</a> across India since May by mobs encouraged by rumors over WhatsApp.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/228075/original/file-20180717-44097-1mkm0nx.png?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/228075/original/file-20180717-44097-1mkm0nx.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/228075/original/file-20180717-44097-1mkm0nx.png?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=325&fit=crop&dpr=1 600w, https://images.theconversation.com/files/228075/original/file-20180717-44097-1mkm0nx.png?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=325&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/228075/original/file-20180717-44097-1mkm0nx.png?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=325&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/228075/original/file-20180717-44097-1mkm0nx.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=409&fit=crop&dpr=1 754w, https://images.theconversation.com/files/228075/original/file-20180717-44097-1mkm0nx.png?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=409&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/228075/original/file-20180717-44097-1mkm0nx.png?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=409&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Peru’s national soccer team had more Facebook engagement than any other World Cup team.</span>
<span class="attribution"><a class="source" href="https://www.facebook.com/federacionperuanadefutbol/">Screenshot by The Conversation</a></span>
</figcaption>
</figure>
<p>Collectively, these emerging trends suggest the Digital South’s online use is developing and evolving very differently from the path the Digital North has taken. The digital fervor around the recently concluded World Cup reflects this: The national soccer team of Peru, a part of the Digital South, had <a href="http://www.foxnews.com/tech/2018/06/15/world-cup-2018-who-are-social-media-winners.html">more Facebook profile</a> likes, comments and shares per post than any other World Cup team. And the Facebook and Twitter profiles of Digital Southerner <a href="http://www.foxnews.com/tech/2018/06/15/world-cup-2018-who-are-social-media-winners.html">Mohamed Salah of Egypt</a> had the most fan engagement among all the players. And neither Peru nor Egypt is a top-ranked team on the soccer pitch. </p>
<p>Then consider China and India, neither of which had a team in the World Cup. A <a href="https://digiday.com/marketing/media-trends-defining-world-cup-5-charts/">quarter of active internet users</a> around the world planned to watch the World Cup online – but that number was <a href="https://digiday.com/marketing/media-trends-defining-world-cup-5-charts/">nearly twice as high among internet users in China and India</a>. That’s the scale of change coming as the Digital South continues to come online.</p><img src="https://counter.theconversation.com/content/95262/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Bhaskar Chakravorti has founded and directs the Institute for Business in the Global Context at Fletcher/Tufts that receives funding from Mastercard, Microsoft and the Gates Foundation. He is a Non-Resident Senior Fellow at Brookings India and a Senior Advisor on Digital Inclusion at the Mastercard Center for Inclusive Growth.</span></em></p>Three trends suggest people in less developed nations – who are coming online in greater numbers – use and trust the internet very differently from those in more developed economies.Bhaskar Chakravorti, Dean of Global Business, The Fletcher School, Tufts UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/977012018-06-10T20:05:21Z2018-06-10T20:05:21ZLooking online for info on your child’s health? Here are some tips<figure><img src="https://images.theconversation.com/files/222096/original/file-20180607-137312-10mv1ui.png?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Most parents look online for information about their child's health.</span> <span class="attribution"><span class="source">from shutterstock.com</span></span></figcaption></figure><p>Many parents can be anxious when their child is sick. So looking online for health information can help them understand their child’s medical condition and take an active role in treatment. Seeking health information can also be a coping strategy for parents coming to terms with their child’s illness. </p>
<p>But parents <a href="https://www.ncbi.nlm.nih.gov/pubmed/18564080">have reported</a> being worried about whether the online health information they find is reliable and relevant, and are concerned about the possibility of misdiagnosis. They can also feel overwhelmed by the amount of information online, which can be difficult to understand. </p>
<p>Just over half of the parents we surveyed for a <a href="https://onlinelibrary.wiley.com/doi/abs/10.1111/jpc.14068">recent study</a> were hesitant to act on the information they found online, or to present it to the treating doctor. This was despite the fact that 73% believed the information influenced the questions they asked the doctor.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/dr-google-probably-isnt-the-worst-place-to-get-your-health-advice-73835">Dr Google probably isn't the worst place to get your health advice</a>
</strong>
</em>
</p>
<hr>
<p>Previous studies have highlighted barriers to parents discussing their online research with doctors. They include finding a suitable time, given the doctor’s <a href="https://www.ncbi.nlm.nih.gov/pubmed/21771145">high workload</a>, and a <a href="https://www.ncbi.nlm.nih.gov/pubmed/11556771">fear of being perceived</a> as “bossy”, “a whinger”, “difficult” or “pushy”.</p>
<p>Other <a href="https://www.ncbi.nlm.nih.gov/m/pubmed/27145497/">difficulties may arise</a> if the doctor lacks interest in the information parents find because they believe it lacks credibility or is irrelevant.</p>
<p>A <a href="https://www.ncbi.nlm.nih.gov/pubmed/24986308">2015 study</a> found that, of the 110 parents of children with cancer who searched for online medical information, only 47% shared it with their child’s oncologist, but around 86% would have liked to do so.</p>
<p>Unlike doctors, parents aren’t trained in how to verify the information they find. When seeking health information online or in parenting forums, it’s important to make sure it’s credible and discuss it with the doctor.</p>
<h2>Parents looking for information</h2>
<p>Our interviews with parents found online health information can provide reassurance and improve adherence to treatment. </p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/222116/original/file-20180607-137301-bdcrt2.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/222116/original/file-20180607-137301-bdcrt2.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/222116/original/file-20180607-137301-bdcrt2.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=352&fit=crop&dpr=1 600w, https://images.theconversation.com/files/222116/original/file-20180607-137301-bdcrt2.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=352&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/222116/original/file-20180607-137301-bdcrt2.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=352&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/222116/original/file-20180607-137301-bdcrt2.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=443&fit=crop&dpr=1 754w, https://images.theconversation.com/files/222116/original/file-20180607-137301-bdcrt2.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=443&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/222116/original/file-20180607-137301-bdcrt2.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=443&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Seeking health information can be a coping strategy for parents coming to terms with their child’s illness.</span>
<span class="attribution"><span class="source">from shutterstock.com</span></span>
</figcaption>
</figure>
<p>We surveyed 308 parents of sick children at The Children’s Hospital at Westmead. We found 90% of parents searched for health information online. Of these, almost all (95%) looked for information after seeing their child’s doctor and many (63%) did so beforehand. </p>
<p>Some parents, especially those aged under 45, used online parenting forums (29%) or social media such as Facebook (27%) for health information.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/what-it-means-when-kids-walk-on-their-toes-59081">What it means when kids walk on their toes</a>
</strong>
</em>
</p>
<hr>
<p>Most parents (88%) who went online for health information before seeing the doctor wanted to prepare questions. Most (84%) wanted to find out what their child’s medical condition might be. </p>
<p>Of the parents who searched for information after seeing their child’s doctor, 94% wanted to know more about their child’s condition and 90% had more questions after thinking about what the doctor said.</p>
<h2>Where to look</h2>
<p>Few parents (29%) believed the health information they found online was correct, and just 61% understood it. Only a little more than half (57%) checked whether a website, app or Facebook group was trustworthy before accepting or using the information. </p>
<p>Most parents said they wanted help searching for (69%) and assessing (77%) the trustworthiness of online health information. </p>
<figure class="align-right zoomable">
<a href="https://images.theconversation.com/files/222098/original/file-20180607-137322-12bw99t.png?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/222098/original/file-20180607-137322-12bw99t.png?ixlib=rb-1.1.0&q=45&auto=format&w=237&fit=clip" srcset="https://images.theconversation.com/files/222098/original/file-20180607-137322-12bw99t.png?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=191&fit=crop&dpr=1 600w, https://images.theconversation.com/files/222098/original/file-20180607-137322-12bw99t.png?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=191&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/222098/original/file-20180607-137322-12bw99t.png?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=191&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/222098/original/file-20180607-137322-12bw99t.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=240&fit=crop&dpr=1 754w, https://images.theconversation.com/files/222098/original/file-20180607-137322-12bw99t.png?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=240&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/222098/original/file-20180607-137322-12bw99t.png?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=240&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">The Mayo Clinic features the HONcode logo, showing it’s a trustworthy source of information.</span>
<span class="attribution"><a class="source" href="https://www.mayoclinic.org/">Screenshot/Mayo Clinic</a></span>
</figcaption>
</figure>
<p>Parents can ask their child’s doctor to recommend a website so they can find out more about their child’s medical condition. And they can discuss with the doctor whether the online information they find is trustworthy and relevant.</p>
<p>Some online health information or advice from parenting forums may be untrustworthy, irrelevant to the child’s condition or incorrect. This can lead to misinformation, potentially harming the child and increasing parents’ anxiety and guilt.</p>
<p>When looking up health information online, parents can remember it’s more likely to be trustworthy if it’s on websites that are:</p>
<ul>
<li>aimed at consumers and funded or supported by state and federal health departments (<a href="http://raisingchildren.net.au/">raisingchildren.net.au</a> and <a href="https://www.healthdirect.gov.au/">healthdirect</a>) </li>
<li>operated by public health institutions such as major teaching hospitals (<a href="http://www.schn.health.nsw.gov.au/">The Sydney Children’s Hospitals Network</a> and <a href="https://www.rch.org.au/home/">The Royal Children’s Hospital Melbourne</a>), state and federal health departments (<a href="http://www.health.nsw.gov.au/Pages/default.aspx">NSW Health</a>), government organisations (<a href="http://www.abc.net.au/health/">Australian Broadcasting Corporation</a>) and universities</li>
<li>operated by not-for-profit charities, foundations and professional societies (<a href="https://www.nationalasthma.org.au/">National Asthma Council Australia</a> and <a href="https://au.reachout.com/">ReachOut</a>)</li>
<li>approved by reputable online health accrediting organisations (<a href="https://www.hon.ch/en/">Health On the Net</a>) or featuring their logo (<a href="https://www.mayoclinic.org/">Mayo Clinic</a>).</li>
</ul>
<p>Also look to see if the health information is:</p>
<ul>
<li>written by qualified health professionals</li>
<li>based on evidence-based research or the work of an expert panel (it’s helpful if the website cites the source of its information)</li>
<li>aimed at giving consumers information (such as <a href="http://www.choosingwisely.org.au/home">Choosing Wisely Australia</a>)</li>
<li>balanced, unbiased and unemotional</li>
<li>up to date, listing a recent revision date</li>
<li>separated from advertising</li>
<li>transparent about any funding it has received.</li>
</ul>
<hr>
<p><em><strong>Read More: <a href="https://theconversation.com/au/topics/kids-health-series-28783">Children’s health series</a></strong></em></p>
<hr>
<p><em>The study was conducted by Griffith Medical School student Shruti Yardi while on a University of Sydney summer research scholarship in 2015.</em></p>
<p class="fine-print"><em><span>The authors do not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.</span></em></p>
<p>Parents aren’t taught how to verify the health information they find online. So here are some ways to ensure the sources are credible and trustworthy.</p>
<p>Karen Scott, Senior Lecturer, Discipline of Child and Adolescent Health, University of Sydney; Patrina Ha Yuen Caldwell, Senior Staff Specialist, Centre for Kidney Research, The Children's Hospital at Westmead; Associate Professor, Discipline of Child and Adolescent Health, University of Sydney</p>
<p>Licensed as Creative Commons – attribution, no derivatives.</p>
tag:theconversation.com,2011:article/95339 — 2018-06-07T10:53:48Z
Connected cars can lie, posing a new threat to smart cities
<figure><img src="https://images.theconversation.com/files/220363/original/file-20180524-51121-101woqz.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">What algorithm turned these lights red?</span> <span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/traffic-lights-over-blue-sky-293647550">monticello/Shutterstock.com</a></span></figcaption></figure><p>The day when <a href="https://www.its.dot.gov/cv_basics/index.htm">cars can talk to each other</a> – and to <a href="https://www.its.dot.gov/pilots/">traffic lights</a>, stop signs, guardrails and even pavement markings – is <a href="https://www.its.dot.gov/pilots/cv_pilot_apps.htm">rapidly approaching</a>. Driven by the promise of <a href="https://www.cts.virginia.edu/wp-content/uploads/2014/05/Task2.3._CONOPS_6_Final_Revised.pdf">reducing traffic congestion</a> and <a href="https://www.its.dot.gov/press/2015/ngv_tech_announcement.htm">avoiding crashes</a>, these systems are already rolling out on roads around the U.S.</p>
<p>For instance, the <a href="https://www.its.dot.gov/pilots/pilots_mobility.htm">Intelligent Traffic Signal System</a>, developed <a href="https://www.its.dot.gov/research_archives/dma/index.htm">with support from the U.S. Department of Transportation</a>, has been tested on public roads in Arizona and California and is being installed more widely in <a href="https://www.its.dot.gov/pilots/pilots_nycdot.htm">New York City</a> and <a href="https://www.its.dot.gov/pilots/pilots_thea.htm">Tampa, Florida</a>. It allows vehicles to share their real-time location and speed with traffic lights, which can then adjust signal timing to match actual traffic demand and <a href="https://www.cts.virginia.edu/wp-content/uploads/2014/05/Task2.3._CONOPS_6_Final_Revised.pdf">dramatically reduce vehicle waiting time at an intersection</a>.</p>
<p><a href="https://sites.google.com/view/cav-sec/congestion-attack">Our work</a>, from the <a href="https://vhosts.eecs.umich.edu/robustnet/">RobustNet Research Group</a> and the <a href="http://traffic.engin.umich.edu/">Michigan Traffic Laboratory</a> at the University of Michigan, focuses on making sure these next-generation transportation systems are secure and protected from attacks. So far we’ve found they are in fact relatively easy to trick. Just one car that’s transmitting fake data can cause enormous traffic jams, and several attack cars could work together to shut down whole areas. What’s particularly concerning is that our research has found the weakness is not in the underlying communication technology, but in the <a href="http://dx.doi.org/10.14722/ndss.2018.23222">algorithms actually used to manage the traffic flow</a>.</p>
<h2>Misleading an algorithm</h2>
<p>In general, algorithms are meant to take in a variety of inputs – such as how many cars are in various locations around an intersection – and calculate an output that meets a particular goal – such as minimizing their collective delay at traffic lights. Like most algorithms, the traffic control algorithm in Intelligent Traffic Signal System – nicknamed “I-SIG” – assumes the inputs it’s getting are honest. That’s not a safe assumption.</p>
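<p>To see why the honest-input assumption matters, here is a minimal, purely illustrative sketch of a timing routine that trusts every reported position. This is not the actual I-SIG code; the function name, constants and formula are all our assumptions:</p>

```python
# Illustrative sketch: a signal controller that, like I-SIG, takes
# vehicle-reported data at face value. Nothing here is verified.

def green_time_for_lane(reported_vehicles, saturation_headway=2.0, min_green=5.0):
    """Allocate enough green time to clear every vehicle the lane reports.

    reported_vehicles: list of (distance_to_stopline_m, speed_mps) tuples,
    taken directly from vehicle-to-infrastructure messages.
    """
    if not reported_vehicles:
        return min_green
    # Time for the farthest reported vehicle to reach the stop line,
    # plus a fixed discharge headway per queued vehicle.
    farthest_distance = max(d for d, _ in reported_vehicles)
    travel_time = farthest_distance / 11.0  # assume ~11 m/s approach speed
    discharge_time = saturation_headway * len(reported_vehicles)
    return max(min_green, travel_time + discharge_time)
```

<p>Because nothing in this routine checks a report against reality, a single fabricated entry in <code>reported_vehicles</code> directly changes the green time every other driver experiences.</p>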
<p>The hardware and software in modern cars can be modified, either <a href="https://doi.org/10.1109/SP.2010.34">physically through the car’s diagnostic ports</a> or <a href="http://static.usenix.org/events/sec11/tech/full_papers/Checkoway.pdf">over wireless connections</a>, to instruct a car to transmit false information. Someone who wanted to compromise the I-SIG system could hack her own car using such methods, drive to a target intersection and park nearby.</p>
<p>Once parked near the intersection, we’ve found that the attacker could take advantage of two weaknesses in the algorithm controlling the light to extend the time a particular lane of traffic gets a green light – and, similarly, the time other lanes get red lights.</p>
<p>The first vulnerability we found, which we call “last vehicle advantage,” is a way of extending the length of a green-light signal. The algorithm keeps an eye on approaching cars, estimates how long the line of cars is and determines how long it thinks it will take for all the vehicles in a line of traffic to get through the intersection. This logic helps the system serve as many vehicles as possible in each round of light changes, but it can be abused. An attacker can instruct her car to falsely report joining the line of cars very late. The algorithm will then hold the light green long enough for this nonexistent car to pass, leading to a green light – and correspondingly, red lights for other lanes – that is much longer than needed for the actual cars on the road.</p>
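<p>The effect can be sketched in a few lines of Python. The clearance formula below is an illustrative assumption, not I-SIG’s real logic, but it captures the weakness: green time is driven by the farthest reported vehicle, real or not:</p>

```python
# Illustrative sketch of "last vehicle advantage": one spoofed report of a
# vehicle joining the queue very late inflates the allocated green time.

def clearance_green(reported_distances_m, approach_speed=11.0, headway=2.0):
    # Hold the light green until the farthest reported vehicle can cross,
    # allowing a fixed discharge headway per queued vehicle.
    return max(reported_distances_m) / approach_speed + headway * len(reported_distances_m)

honest = [10.0, 18.0, 26.0]   # three real cars near the stop line
spoofed = honest + [300.0]    # attacker claims a car 300 m back

print(clearance_green(honest))   # short green: enough for the real queue
print(clearance_green(spoofed))  # far longer green for a car that isn't there
```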
<p>We called the second weakness we found the “curse of the transition period,” or the “ghost vehicle attack.” The I-SIG algorithm is built to accommodate the fact that not all vehicles can communicate yet. It uses the driving patterns and information of newer, connected cars to infer the real-time location and speed of older, noncommunicating vehicles. Therefore, if a connected car reports that it is stopped a long distance back from an intersection, the algorithm will assume there is a long line of older vehicles queuing ahead of it. Then the system would allocate a long green light for that lane because of the long queue it thinks is there, but really isn’t.</p>
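<p>A sketch of that inference shows how a single false “stopped far back” report conjures a phantom queue. The spacing constant and function are our illustrative assumptions, not the deployed estimator:</p>

```python
# Illustrative sketch of the "ghost vehicle" inference during the transition
# period: the controller estimates a queue of unequipped cars from where a
# connected car says it is stopped.

AVG_VEHICLE_GAP_M = 7.0  # assumed spacing of stopped vehicles (length + gap)

def estimated_queue_length(reported_stop_distance_m, reported_speed_mps):
    """Infer how many vehicles are queued, given one connected car's report."""
    if reported_speed_mps > 0.1:
        return 1  # a moving car implies no inferred queue ahead of it
    # A car stopped far from the stop line implies unequipped cars ahead of it.
    return 1 + int(reported_stop_distance_m // AVG_VEHICLE_GAP_M)

# An attacker reporting "stopped, 140 m back" conjures a 21-vehicle queue.
print(estimated_queue_length(140.0, 0.0))
```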
<p>These attacks happen by making a device lie about its own position and speed. That’s very different from known cyberattack methods, like injecting messages into <a href="https://www.techrepublic.com/article/no-surprise-iot-devices-are-insecure/">unencrypted communications</a> or having an unauthorized user logging in <a href="https://www.pcworld.idg.com.au/article/607908/iot-botnet-highlights-dangers-default-passwords/">with a privileged account</a>. Therefore, known protections against those attacks can do nothing about a lying device.</p>
<h2>Results from a misinformed algorithm</h2>
<p>Using either of these attacks, or both in concert with each other, can allow an attacker to give long periods of green lights to lanes with little or no traffic and longer red lights to the busiest lanes. That causes backups that grow and grow, ultimately building into massive traffic jams.</p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/3iV1sAxPuL0?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
<figcaption><span class="caption">A congestion attack on a traffic signal control system.</span></figcaption>
</figure>
<p>This sort of attack on traffic lights could be just for fun or for the attacker’s own benefit. Imagine, for example, a person who wants to have a faster commute adjusting his own traffic-light timing, at the expense of other drivers’ delays. Criminals, too, might seek to attack traffic lights to ease their getaways from crime scenes or pursuing police cars. </p>
<p>There are even political or financial dangers: A coordinated group could shut down several key intersections in a city and demand a ransom payment. It’s much more disruptive, and easier to get away with, than other ways of blocking intersections, like parking a car across traffic.</p>
<p>Because this type of attack exploits the smart traffic control algorithm itself, fixing it requires joint efforts from both transportation and cybersecurity fields. This includes taking into account one of the broadest lessons of our work: The sensors underlying interactive systems – such as the vehicles in the I-SIG system – aren’t inherently trustworthy. Before engaging in calculations, algorithms should attempt to validate the data they’re using. For example, a traffic-control system could use other sensors – like <a href="https://auto.howstuffworks.com/car-driving-safety/safety-regulatory-devices/question234.htm">in-road sensors</a> already in use across the nation – to double-check how many cars are really there.</p>
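<p>Such a cross-check could look roughly like the sketch below; the function name, tolerance and fallback policy are our illustrative assumptions about how a defense might work, not a deployed mechanism:</p>

```python
# Illustrative sketch of one defense: validate connected-vehicle reports
# against an independent in-road (loop-detector) count before trusting them.

def validated_vehicle_count(v2i_reported_count, loop_detector_count, tolerance=2):
    """Trust the connected-vehicle count only if it roughly agrees with the
    physically measured in-road count; otherwise fall back to the sensor."""
    if abs(v2i_reported_count - loop_detector_count) <= tolerance:
        return v2i_reported_count
    return loop_detector_count  # discard the suspicious reported data

print(validated_vehicle_count(5, 4))   # plausible report: accept it
print(validated_vehicle_count(21, 3))  # phantom queue: fall back to sensors
```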
<p>This is just the beginning of our research into new types of security problems in the smart transportation systems of the future, which we hope will both discover weaknesses and identify ways to protect the roads and the drivers on them.</p>
<p class="fine-print"><em><span>Qi Alfred Chen receives funding from NSF and University of Michigan. </span></em></p><p class="fine-print"><em><span>Z. Morley Mao receives funding from NSF, ONR, and University of Michigan.</span></em></p>
<p>New research has uncovered a previously unknown weakness in smart city systems: devices that trust each other. That could lead to some pretty terrible traffic, among other problems.</p>
<p>Alfred Chen, Assistant Professor in Computer Science, University of California, Irvine; Z. Morley Mao, Professor of Electrical Engineering and Computer Science, University of Michigan</p>
<p>Licensed as Creative Commons – attribution, no derivatives.</p>
tag:theconversation.com,2011:article/93776 — 2018-03-23T10:30:44Z
Don’t quit Facebook, but don’t trust it, either
<figure><img src="https://images.theconversation.com/files/211597/original/file-20180322-54863-q9ou8p.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">What is this man doing with your data?</span> <span class="attribution"><a class="source" href="http://www.apimages.com/metadata/Index/Facebook-Publishers/d401f9e4e5fe4fdabe15923bf17bafad/9/0">AP Photo/Jeff Roberson</a></span></figcaption></figure><p>Is it time to <a href="https://www.ft.com/content/bc8a642a-2b6a-11e8-9b4b-bc4b9f08f381">give up on social media</a>? Many people are thinking about that in the wake of revelations regarding <a href="https://www.theguardian.com/news/series/cambridge-analytica-files">Cambridge Analytica’s questionable use</a> of personal data from over 50 million Facebook users to support the Trump campaign. 
Not to mention the troubles with <a href="https://slate.com/technology/2018/03/cambridge-analytica-demonstrates-that-facebook-needs-to-give-researchers-more-access.html">data theft</a>, <a href="https://datasociety.net/output/dead-reckoning/">trolling, harassment</a>, the <a href="https://datasociety.net/output/lexicon-of-lies/">proliferation of fake news</a>, <a href="https://datasociety.net/output/media-manipulation-and-disinfo-online/">conspiracy theories and Russian bots</a>. </p>
<p>The <a href="https://www.vox.com/policy-and-politics/2018/3/21/17144748/case-against-facebook">real societal problem</a> might be <a href="https://slate.com/technology/2018/03/the-real-scandal-isnt-cambridge-analytica-its-facebooks-whole-business-model.html">Facebook’s business model</a>. Along with other social media platforms, it makes money by nudging users to provide their data (without understanding the potential consequences), and then using that data in ways well beyond what people may expect.</p>
<p>As researchers who <a href="https://scholar.google.com/citations?user=xKR6oTIAAAAJ&hl=en">study social media</a> and the <a href="https://scholar.google.com/citations?user=iQaNa-kAAAAJ&hl=en">impact of new technologies on society</a> in both the past and the present, we share these concerns. However, we’re <a href="https://medium.com/@cfiesler/why-data-sharing-privacy-controversies-arent-killing-social-media-platforms-a3e3ecfdd801">not ready to give up</a> on the idea of social media just yet. A main reason is that, like all forms of <a href="https://mitpress.mit.edu/books/always-already-new">once “new” media</a> (including everything from the telegraph to the internet), social media has become an <a href="http://www.pewinternet.org/2018/03/01/social-media-use-in-2018/">essential conduit</a> for interacting with other people. We don’t think it’s reasonable for users to be told their only hope of <a href="https://www.nytimes.com/2018/03/19/opinion/facebook-cambridge-analytica.html">avoiding exploitation</a> is to isolate themselves. And for many vulnerable people, including members of <a href="https://mitpress.mit.edu/books/out-shadows-streets">impoverished, marginalized or activist communities</a>, leaving Facebook is <a href="https://slate.com/technology/2018/03/dont-deletefacebook-thats-not-good-enough.html">simply not possible</a> anyway.</p>
<p>As individuals, and society as a whole, come to better understand the role social media plays in life and politics, they’re wondering: Is it possible – or worthwhile – to trust Facebook?</p>
<h2>Designing for attention</h2>
<p>Of course, social media platforms don’t exist without their users. Facebook has grown from its origins serving only college students by exploiting the <a href="https://hbr.org/product/information-rules-a-strategic-guide-to-the-network-economy/863X-HBK-ENG">network effect</a>: If all your friends are socializing on the site, it’s tempting to join yourself. Over time this network effect has made Facebook not only more valuable, but also harder to leave. </p>
<p>However, now that Facebook and its ilk are under fire, it’s possible that those network effects might unravel the other way: Facebook’s <a href="https://techcrunch.com/2018/01/31/facebook-q4-2017-earnings/">number of active users continued to rise in 2017</a>, but in the final three months of the year, its growth showed signs of slowing. If all your friends are leaving Facebook, you might go with them.</p>
<p>The design of social media platforms like Facebook – and many other common apps, such as Uber – is intentionally engrossing. Some scholars go so far as to call it “<a href="https://press.princeton.edu/titles/9156.html">addictive</a>,” but we’re uncomfortable using the term so broadly in this context. Nevertheless, digital designers <a href="https://darkpatterns.org/">manipulate users’ behavior</a> with a wide array of interface elements and <a href="http://captology.stanford.edu/wp-content/uploads/2015/02/RSA-The-new-rules-of-persuasion.pdf">interaction strategies</a>, such as <a href="https://yalebooks.yale.edu/book/9780300122237/nudge">nudges</a> and cultivating routines and habits, to keep users’ attention.</p>
<p>Attention is at the center of the social media business model because it’s worth money: Media theorist Jonathan Beller has observed that “<a href="http://www.cabinetmagazine.org/issues/24/beller.php">human attention is productive of value</a>.”</p>
<h2>Playing tricks on users</h2>
<p>To attract users, keep them engaged and ensure they want to come back, companies manipulate the details of visual interfaces and user interaction. For example, the <a href="https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2686227">ride-sharing app Uber</a> shows customers <a href="https://motherboard.vice.com/en_us/article/mgbz5a/ubers-phantom-cabs">phantom cars</a> to trick them into thinking drivers are nearby. The company uses similar <a href="https://www.nytimes.com/interactive/2017/04/02/technology/uber-drivers-psychological-tricks.html">psychological tricks</a> when sending drivers text messages encouraging them to stay active.</p>
<p>This manipulation is particularly effective when app developers <a href="https://www.nngroup.com/articles/the-power-of-defaults/">set default options</a> for users that serve the company’s needs. For example, some privacy policies make <a href="https://doi.org/10.1023/A:1015044207315">users opt out of sharing their personal data, while others allow users to opt in</a>. This initial choice affects not only what information users end up disclosing, but also their overall trust in the <a href="https://doi.org/10.1108/14684520710832342">online platform</a>. Some of the <a href="https://www.vox.com/technology/2018/3/21/17148852/mark-zuckerberg-facebook-cambridge-analytica-breach">measures announced</a> by Facebook CEO Mark Zuckerberg in the wake of the Cambridge Analytica revelations – including tools showing users which third parties have access to their personal data – could further complicate the design of the site and discourage users even more. </p>
<h2>Frameworks of trust</h2>
<p>Was users’ trust in Facebook misplaced in the first place? Unfortunately, we think so. Social media companies have never been transparent about what they’re up to with users’ data. Without <a href="http://www.kellogg.northwestern.edu/trust-project/videos/grayson-ep-1.aspx">full information about what happens</a> to their personal data once it’s gathered, we recommend people default to not trusting companies until they’re convinced they should. Yet neither regulations nor third-party institutions currently exist to ensure that social media companies are trustworthy.</p>
<p>This is not the first time new technologies created social change that disrupted established mechanisms of trust. For example, in the industrial revolution, new forms of organization like factories, and major demographic shifts from migration, increased contact among strangers and across cultures. That altered established relationships and forced people to do business with unknown merchants.</p>
<p>People could <a href="https://doi.org/10.1086/228791">no longer rely</a> on interpersonal trust. Instead, <a href="http://psycnet.apa.org/record/1988-10420-001">new institutions</a> arose: Regulatory agencies like the Interstate Commerce Commission, trade associations like the American Railway Association, and other third parties like the American Medical Association’s Council on Medical Education established systematic <a href="https://www.russellsage.org/publications/cooperation-without-trust-0">rules for transactions</a>, standards for product quality and professional training. They also offered accountability if <a href="https://www.researchgate.net/publication/261707664_Solving_the_Problem_of_Trust">something went wrong</a>.</p>
<h2>A new need for protection</h2>
<p>There are <a href="https://doi.org/10.1515/auk-2004-0111">not yet similar standards</a> and accountability requirements for 21st-century technologies like social media. In the U.S., the <a href="https://yalebooks.yale.edu/book/9780300122237/nudge">Federal Trade Commission</a> is one of the few regulatory bodies working to hold digital platforms to account for business practices that are deceptive or potentially unfair. The <a href="https://www.bloomberg.com/news/articles/2018-03-20/ftc-said-to-be-probing-facebook-for-use-of-personal-data">FTC is now investigating</a> Facebook over the Cambridge Analytica situation.</p>
<p>There is <a href="https://pdfs.semanticscholar.org/d764/3e79b0a382ef535a2fcd49d351069690920f.pdf">plenty of demand</a> for <a href="https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2567476">more supervision</a> of <a href="https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2573181">social media platforms</a>. Several existing proposals could <a href="http://codev2.cc/">regulate</a> and <a href="http://www.hup.harvard.edu/catalog.php?isbn=9780674035072">support</a> trust online. </p>
<p>Other countries have rules, such as the EU’s <a href="https://www.eugdpr.org/">General Data Protection Regulation</a> and Canada’s <a href="https://www.priv.gc.ca/en/privacy-topics/privacy-laws-in-canada/the-personal-information-protection-and-electronic-documents-act-pipeda/">Personal Information Protection and Electronic Documents Act</a>. However, in the U.S., technology companies like Facebook have actively <a href="https://www.eff.org/deeplinks/2017/10/how-silicon-valleys-dirty-tricks-helped-stall-broadband-privacy-california">blocked</a> and resisted these efforts while <a href="https://theintercept.com/2018/03/21/ftc-facebook-chuck-schumer/">policymakers</a> and other tech gurus have convinced people they’re not necessary.</p>
<p>Facebook has the technical know-how to give users more control over their private data, but <a href="https://medium.com/@shanegreen/facebook-ignored-recommendations-from-2016-internal-study-on-their-data-and-privacy-problem-6dc7c5f75b6f">has chosen not to</a> – and that’s not surprising. No laws or other institutional rules require it, or provide necessary oversight to ensure that it does. Until a major social media platform like Facebook is <a href="https://www.npr.org/2018/03/21/595791380/sen-richard-blumenthal-weighs-in-on-how-congress-could-regulate-facebook">required</a> to reliably and transparently demonstrate that it is protecting the interests of its users – as distinct from its advertising customers – the calls to <a href="https://www.theguardian.com/commentisfree/2018/mar/22/restructure-facebook-ftc-regulate-9-steps-now">break the company up</a> and start afresh are only going to grow.</p>
<p class="fine-print"><em><span>Denise Anthony currently receives funding from the National Science Foundation (grant #1408730 “A Socio-Technical Approach to Privacy in a Camera-Rich World”).</span></em></p><p class="fine-print"><em><span>Luke Stark currently receives funding from the National Science Foundation (grant #1408730 “A Socio-Technical Approach to Privacy in a Camera-Rich World”).
</span></em></p>
<p>Users shouldn’t trust Facebook, but that doesn’t mean they should immediately abandon what has become a crucial platform for connectedness.</p>
<p>Denise Anthony, Professor of Sociology, Dartmouth College; Luke Stark, Postdoctoral Fellow in Sociology, Dartmouth College</p>
<p>Licensed as Creative Commons – attribution, no derivatives.</p>