tag:theconversation.com,2011:/us/topics/cyber-ethics-13533/articlesCyber ethics – The Conversation2021-05-26T12:13:17Ztag:theconversation.com,2011:article/1613832021-05-26T12:13:17Z2021-05-26T12:13:17ZColonial Pipeline forked over $4.4M to end cyberattack – but is paying a ransom ever the ethical thing to do?<figure><img src="https://images.theconversation.com/files/402686/original/file-20210525-19-16a6rzj.jpg?ixlib=rb-1.1.0&rect=0%2C0%2C6000%2C3997&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">What would happen if companies stopped paying ransoms?</span> <span class="attribution"><a class="source" href="https://www.gettyimages.com/detail/news-photo/photo-taken-on-may-11-2021-shows-a-colonial-pipeline-news-photo/1233000644?adppopup=true">Liu Jie/Xinhua via Getty Images</a></span></figcaption></figure><p>It took little over two hours for hackers to <a href="https://cybernews.com/editorial/darkside-strives-for-ethical-hacking-after-hitting-a-vital-fuel-pipeline-in-the-us/">gain control</a> of more than 100 gigabytes of information from Colonial Pipeline on May 7, 2021 – causing the firm to shut down its fuel distribution network and sparking widespread <a href="https://morningconsult.com/2021/05/19/gasoline-shortage-polling/">fears</a> of a gasoline shortage. The decision to pay off the attackers was also <a href="https://www.washingtonpost.com/business/2021/05/19/colonial-pipeline-ransom-joseph-blunt/">made with apparent speed</a>, but the ethical arguments involved are age old and the implications could reverberate well into the future.</p>
<p>Cyberattacks, including those on <a href="https://www.cisa.gov/critical-infrastructure-sectors">critical infrastructure</a> in the U.S., are nothing new. Ransomware, a type of <a href="http://dx.doi.org/10.2139/ssrn.3746754">malicious software</a> that locks access to a computer until a ransom is paid, has been a component of the cyberthreat landscape since the mid-2000s. But the Colonial Pipeline breach raised the stakes and highlighted the ability of ransomware to interrupt the vital services on which Americans rely.</p>
<p>As scholars of <a href="https://scholar.google.com/citations?user=YtgRGx0AAAAJ&hl=en">cybersecurity policy</a>, in particular <a href="https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2978305">critical infrastructure protection</a> and <a href="https://www.govtech.com/security/deal-with-ransomware-the-way-police-deal-with-hostage-situations.html">ransomware</a>, we think it important to consider the legal and ethical questions surrounding ransomware payments – just because paying off cyberattackers may be lawful in some contexts, that still doesn’t make it the morally correct thing to do.</p>
<h2>To pay or not to pay</h2>
<p>It has been widely reported that Colonial Pipeline CEO <a href="https://www.wsj.com/articles/colonial-pipeline-ceo-tells-why-he-paid-hackers-a-4-4-million-ransom-11621435636">Joseph Blount agreed</a> to pay a US$4.4 million ransom to <a href="https://www.bbc.com/news/business-57050690">DarkSide</a>, the Russia-based group behind the cyberattack. </p>
<p>In describing his decision, which he said did not come lightly, <a href="https://www.wsj.com/articles/colonial-pipeline-ceo-tells-why-he-paid-hackers-a-4-4-million-ransom-11621435636">Blount argued that it was justifiable</a> given that it was “the right thing to do for the country.”</p>
<p>Official guidance suggests otherwise. In October 2020, the Treasury Department <a href="https://home.treasury.gov/system/files/126/ofac_ransomware_advisory_10012020_1.pdf">warned that ransomware payments may violate its sanctions rules</a> and would only encourage future demands. Although there is no federal legislation on the issue, states such as California, Texas and Michigan have <a href="https://www.ibrc.indiana.edu/studies/State-of-Hoosier-Cybersecurity-2020.pdf">cyber-extortion laws</a> on the books that discourage ransomware payments.</p>
<p>Often, though, the decision of whether to pay falls in a legal and ethical gray area.</p>
<p>CEOs can turn to three main schools of ethics in guiding decisions about whether to pay ransoms based on virtues, duties and consequences. </p>
<p>Under <a href="https://plato.stanford.edu/entries/ethics-virtue/">virtue ethics</a>, which traces its origins to the philosophers Plato, Aristotle and Confucius, people make decisions based on a set of virtues or character traits such as honesty and loyalty. In and of itself, the tradition does not help in situations that require weighing one virtue against another – such as the wish not to reward criminal activity versus the duty to prevent disruption to the wider American public. For example, Colonial Pipeline CEO Blount <a href="https://www.wsj.com/articles/colonial-pipeline-ceo-tells-why-he-paid-hackers-a-4-4-million-ransom-11621435636">expressed moral distaste</a> for paying “people like this,” but ultimately decided to override that concern based on other factors. </p>
<p>Another way to approach challenging ethical decisions is through what is called the <a href="https://plato.stanford.edu/entries/ethics-deontological/">deontological approach</a>, which holds that actions are good or bad as determined by a clear set of rules. So another way to come at the question of whether to pay a ransom is to ask, “How does doing so align with recognized universal duties?”</p>
<p>The problem with cybersecurity is that, given the rapidly changing technological and regulatory environment, it is not always clear what the “golden rules” are, or even if any have been established. Some business leaders may even perceive a duty to pay as Blount did, especially in the case of critical infrastructure such as pipelines on which so many people rely. </p>
<p>The ethics of ransomware payments can also be viewed through the consequences of the decision for yourself, your family, your organization and, as Blount suggested, the country and the world. <a href="https://plato.stanford.edu/entries/utilitarianism-history/">Utilitarian philosophers</a> hold that what is important is promoting the greatest good for the greatest number of people. </p>
<p>This is often described in boardrooms and policy circles as cost-benefit analysis. Yet it’s not always clear where to put the next dollar of investment to maximize the good and minimize the harm in the long term. In dealing with ransomware, for example, backing up data is key, as is practicing <a href="https://theconversation.com/zero-trust-security-assume-that-everyone-and-everything-on-the-internet-is-out-to-get-you-and-maybe-already-has-160969">zero-trust security</a>, an approach in which companies assume that their networks are already compromised and act accordingly. But doing so can be complex, and such investments might yield fewer benefits than spending the money elsewhere.</p>
<h2>Pros of paying</h2>
<p>In practice, business leaders use all these ethical tools, and more, in deciding whether or not to pay – and there isn’t much time to weigh the options. Colonial Pipeline CEO Blount’s decision reportedly <a href="https://www.washingtonpost.com/business/2021/05/19/colonial-pipeline-ransom-joseph-blunt/">came almost immediately</a>. </p>
<p>And it isn’t universally accepted that Colonial Pipeline came to the right decision.</p>
<p>Some cybersecurity professionals want to ban paying out ransoms to <a href="https://www.washingtonpost.com/politics/2021/05/21/cybersecurity-202-cybersecurity-pros-are-split-banning-ransomware-payments/">halt the growing problem of malware attacks for profit</a>. Others say banning payments would be a “<a href="https://www.bbc.com/news/technology-57173096">horrific game of chicken</a>” in which cyberattackers up the stakes until the consequences of not breaking the law are greater for the companies involved than the impact of the breach. And banning ransom payments outright would place an impossible burden on smaller businesses or organizations that do not have the resources to protect against malicious actors. </p>
<p>The thinking behind banning payments is that attacks might stop if they don’t yield payments. Yet if an attack can paralyze an entire organization, paying up is often the <a href="https://www.bbc.com/news/technology-57173096">economically rational decision in the short term</a>. An attack on the Irish health care system in May, for example, is expected to cost <a href="https://www.nytimes.com/2021/05/20/technology/ransomware-attack-ireland-hospitals.html">tens of millions of euros to rebuild the network</a>. Cybersecurity experts estimate that companies hit by attacks take an average of <a href="https://www.washingtonpost.com/technology/2021/05/15/ransomware-colonial-darkside-cyber-security/">287 days to fully recover to normal operations</a>.</p>
<h2>Ransomware as a service</h2>
<p>The rapid proliferation of attacks has been fueled by a new business model known as “ransomware as a service.” Ransomware developers sell personalized variants to “<a href="https://us-cert.cisa.gov/ncas/alerts/aa21-131a">affiliates</a>” – cybercriminals who deploy the <a href="https://us-cert.cisa.gov/ncas/alerts/aa21-131a">ransomware</a>.</p>
<p>With the emergence of ransomware as a service, ransomware can be profitable for both the developers of the variant and the affiliates.</p>
<p>Not all affiliates and ransomware developers are governed by the same moral code. DarkSide, which conducted the Colonial Pipeline attack, has its own set of principles, which include not attacking certain targets, such as <a href="https://krebsonsecurity.com/2021/05/a-closer-look-at-the-darkside-ransomware-gang/">medical services, the educational establishment and nonprofit organizations</a>. </p>
<p>DarkSide has also been known to promise it will completely leave a network alone after <a href="https://krebsonsecurity.com/2021/05/a-closer-look-at-the-darkside-ransomware-gang/">ransom is paid</a>. </p>
<p>The FBI discourages payment, partly on the grounds that it is not a guarantee that a company will not be hit again.</p>
<p>But the message is mixed. Law enforcement agencies encourage victims not to pay, but paying ransom is not illegal, and even <a href="https://www.darkreading.com/attacks-breaches/police-pay-off-ransomware-operators-again/d/d-id/1319918">police departments</a> have been known to pay up when their systems have been compromised. And while the Treasury Department has been investigating new financial penalties against payment of ransoms, <a href="https://www.washingtonpost.com/technology/2021/05/15/ransomware-colonial-darkside-cyber-security/">to date none have been levied</a>. </p>
<p>But even without the threat of legal sanction, payment of ransomware will continue to pose a moral dilemma.</p><img src="https://counter.theconversation.com/content/161383/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Scott Shackelford is a principal investigator on grants from the Hewlett Foundation, Indiana Economic Development Corporation, and the Microsoft Corporation supporting both the Ostrom Workshop Program on Cybersecurity and Internet Governance and the Indiana University Cybersecurity Clinic.
</span></em></p><p class="fine-print"><em><span>Megan Wade is affiliated with the Ostrom Workshop for Cybersecurity and Internet Governance at Indiana University.</span></em></p>The FBI and Treasury Department frown on the idea of paying off cyber attackers. But there are enough ethical and legal gray areas to make it a real moral quandary for business leaders.Scott Shackelford, Associate Professor of Business Law and Ethics; Executive Director, Ostrom Workshop; Cybersecurity Program Chair, IU-Bloomington, Indiana UniversityMegan Wade, Research Affiliate at the Ostrom Workshop for Cybersecurity and Internet Governance, Indiana UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1584632021-04-07T15:14:43Z2021-04-07T15:14:43ZShould cyberwar be met with physical force? Moral philosophy can help us decide<figure><img src="https://images.theconversation.com/files/393840/original/file-20210407-13-9c2v2o.jpeg?ixlib=rb-1.1.0&rect=162%2C225%2C5013%2C3220&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">
</span> <span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/computer-security-concept-spyware-hacker-1164583075">seaonweb/Shutterstock</a></span></figcaption></figure><p>In conventional warfare, it’s accepted that if a state finds itself under attack, it’s entitled to respond – either with defensive force, or with a counterattack. But it’s less clear how countries should respond to cyber-attacks: state-backed hacks which often have dangerous real-world implications.</p>
<iframe id="noa-web-audio-player" style="border: none" src="https://embed-player.newsoveraudio.com/v4?key=x84olp&id=https://theconversation.com/should-cyberwar-be-met-with-physical-force-moral-philosophy-can-help-us-decide-158463&bgColor=F5F5F5&color=D8352A&playColor=D8352A" width="100%" height="110px"></iframe>
<p><a href="https://www.theguardian.com/us-news/2021/mar/08/microsoft-cyber-attack-biden-emergency-task-force">The 2020 SolarWinds hack</a>, attributed to state-backed Russian hackers, breached security at around 100 private companies. But it also infiltrated nine US federal agencies – including the <a href="https://www.chathamhouse.org/2021/02/solarwinds-hack-valuable-lesson-cybersecurity?gclid=Cj0KCQjwo-aCBhC-ARIsAAkNQivQecAKCMQKg23wXNavyLrz5r6xn9tFy2XUwmYK08r5GT0ReriiKOwaAqtKEALw_wcB">US Energy Department</a>, which oversees the country’s nuclear weapons stockpile.</p>
<p>Such attacks are expected to become more common. Recently, the UK’s <a href="https://www.gov.uk/government/collections/the-integrated-review-2021">2021 Strategic Defence Review</a> confirmed the creation of a “National Cyber Force” tasked with developing effective offensive responses to such cyber-attacks, which could even include <a href="https://www.theguardian.com/politics/2021/mar/16/defence-review-uk-could-use-trident-to-counter-cyber-attack">responding to them with nuclear weapons</a>.</p>
<p>Philosophers like myself would urge caution and restraint here. As cyber-attacks are new and ambiguous forms of threat, careful ethical consideration should take place before we decide upon appropriate responses.</p>
<h2>‘Just war’ theory</h2>
<p>We already have a millennia-old framework designed to regulate the use of physical force in wars. It’s called “<a href="https://philpapers.org/rec/FINIJW">just war theory</a>”, and its rules determine whether or not it’s morally justified to launch military operations against a target. Given how cyber systems can be weaponised, it seems natural for ethicists to build “<a href="https://eprints.whiterose.ac.uk/120228/3/Just%20Cyber%20War%20-%20Final%20.pdf">cyberwar</a>” into existing just war theory. </p>
<p>But not everyone is convinced. <a href="http://blog.practicalethics.ox.ac.uk/2012/06/cyberwarfare-no-new-ethics-needed/">Sceptics</a> doubt whether cyberwar requires new ethics, with some even questioning whether <a href="https://www.tandfonline.com/doi/abs/10.1080/01402390.2011.608939?journalCode=fjss20#:%7E:text=Cyber%2520war%2520does%2520not%2520take%2520place%2520in%2520the%2520present.,subversion%252C%2520espionage%252C%2520and%2520sabotage.">cyberwar is actually possible</a>. <a href="https://eprints.whiterose.ac.uk/120228/3/Just%20Cyber%20War%20-%20Final%20.pdf">Radicals</a>, meanwhile, believe cyberwar requires a wholesale rethink, and are building an entirely new theory of “<a href="https://link.springer.com/article/10.1007/s11245-014-9245-8">just information war</a>”.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/cyber-attacks-are-rewriting-the-rules-of-modern-warfare-and-we-arent-prepared-for-the-consequences-117043">Cyber attacks are rewriting the 'rules' of modern warfare – and we aren't prepared for the consequences</a>
</strong>
</em>
</p>
<hr>
<p>Lending credence to the radicals’ claim is the assumption that cyber-attacks are fundamentally different from physical force. After all, while conventional military force targets human bodies and their built environment, cyber-attacks chiefly harm data and virtual objects. Crucially, while physical attacks are “violent”, cyber-attacks seem to present – if anything – an alternative to violence.</p>
<p>On the other hand, some ethicists highlight the fact that cyber operations can sometimes lead to physical harm. For instance, when hackers <a href="https://www.scientificamerican.com/article/how-hackers-tried-to-add-dangerous-lye-into-a-citys-water-supply/">infiltrated the system</a> controlling the fresh water supply in Oldsmar, Florida, in February 2021, they weaponised physical infrastructure by attempting to poison the water. And a ransomware attack on a Düsseldorf hospital in September 2020 actually contributed to the <a href="https://www.bbc.co.uk/news/technology-54204356">death of a patient</a>.</p>
<h2>Espionage or attack?</h2>
<p>Clearly, cyber-attacks can result in grave harms that states have a responsibility to defend their citizens against. But cyber-attacks are <a href="https://www.newyorker.com/news/daily-comment/after-the-solarwinds-hack-we-have-no-idea-what-cyber-dangers-we-face">ambiguous</a> – US senator Mitt Romney characterised the SolarWinds hack as “<a href="https://www.reuters.com/article/usa-cyber-breach-idUSKBN28U0IK">an invasion</a>”, while Mark Warner of the US Senate Intelligence Committee placed it “<a href="https://www.reuters.com/article/usa-cyber-breach-idUSKBN28U0IK">in that grey area between espionage and an attack</a>”. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/we-arent-in-a-cyber-war-despite-what-britains-top-general-thinks-125578">We aren't in a cyber war – despite what Britain's top general thinks</a>
</strong>
</em>
</p>
<hr>
<p>For defence agencies, the difference matters. If they regard state-backed hacks as attacks, they may believe themselves entitled to launch offensive counterattacks. But if hacks are just espionage, they may be dismissed as <a href="https://www.newyorker.com/news/daily-comment/after-the-solarwinds-hack-we-have-no-idea-what-cyber-dangers-we-face">business as usual</a>, part of the everyday intelligence work of states.</p>
<p>In just war theory, some “<a href="https://oxford.universitypressscholarship.com/view/10.1093/acprof:oso/9780199609857.001.0001/acprof-9780199609857">revisionist</a>” philosophers find it useful to go back to basics. They analyse individual threats and acts of violence in isolation before carefully building up a robust theory of complex, <a href="https://www.cambridge.org/core/journals/ethics-and-international-affairs/article/abs/war-as-selfdefense/308F9B7076C9695E5A1B7E3F5FFAD64A">large-scale war</a>. Because cyber-attacks are new and ambiguous, the revisionist approach may help us decide how best to respond to them.</p>
<h2>Cyber violence</h2>
<p>I have argued previously that some cyber-attacks are <a href="https://link.springer.com/article/10.1007/s13347-017-0299-6">acts of violence</a>. That’s partially because, as noted above, cyber-attacks can cause grave physical harms just like conventional violence. </p>
<p>But the gravity of harms alone doesn’t help us categorise cyber-attacks as acts of violence. Think of the myriad ways that the often lethal harm of a coronavirus infection can be transmitted: through <a href="https://blog.petrieflom.law.harvard.edu/2020/04/14/coronavirus-negligence-liability-for-covid-19-transmission/">recklessness, negligence, or mischief</a>; by accident; and even sometimes <a href="https://www.theguardian.com/world/2021/mar/08/return-to-schools-could-alter-covid-roadmap-boris-johnson-warns">as a byproduct</a> of an otherwise legitimate policy. </p>
<p>We wouldn’t say these harms resulted from violence, and nor would we argue that defensive violence is an appropriate response to them. Instead, what seems to make some cyber operations violent attacks – rather than mere espionage – is that they express similar sorts of intention to those expressed in physical violence.</p>
<h2>Intentionality</h2>
<p>To explore how, consider an example of physical violence: someone shooting a distant, unwitting human target with a long-range rifle. </p>
<p>Like all agents of violence, the sniper seems to intend one thing, <a href="https://dro.dur.ac.uk/23594/">but really intends two</a>. First, she intends to harm her target. But second, and less obviously, she intends to dominate her target. The target has no means of evading or deflecting the threat of the bullet.</p>
<p>This relationship, of domination versus defencelessness, can be established by any number of technologies, from swinging a club to launching a rocket from a remote drone. In the latter case the threat is undetectable – like a cyber-attack on drinking water, you don’t know anything is wrong until it’s too late.</p>
<figure class="align-center ">
<img alt="A women in military clothing controls a drone via a computer screen" src="https://images.theconversation.com/files/393795/original/file-20210407-21-jkzlud.jpeg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/393795/original/file-20210407-21-jkzlud.jpeg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/393795/original/file-20210407-21-jkzlud.jpeg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/393795/original/file-20210407-21-jkzlud.jpeg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/393795/original/file-20210407-21-jkzlud.jpeg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/393795/original/file-20210407-21-jkzlud.jpeg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/393795/original/file-20210407-21-jkzlud.jpeg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">Drone strikes are a form of technical domination to which targets are especially vulnerable.</span>
<span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/female-military-drone-operator-wide-shot-539931541">Burlingham/Shutterstock</a></span>
</figcaption>
</figure>
<p>Many cyber-attacks <a href="https://link.springer.com/article/10.1007/s13347-017-0299-6">have a similar profile</a>. They establish technical domination by creating a vulnerability and positioning themselves to execute harm at the hacker’s will. Like boobytrap bombs, they leverage secrecy and surprise to keep their victims from acting until it’s too late.</p>
<p>If some cyber-attacks are acts of violence, then perhaps they could justify defensive violence or counterattack. That would depend on the degree of destruction threatened, and defenders would still have to satisfy age-old <a href="https://plato.stanford.edu/entries/war/">just war</a> rules. </p>
<p>But the same premise means that employing offensive cyber-attacks ought to be seen as a grave matter – as grave, in some cases, as physical attacks. It is vital, then, that the UK’s new National Cyber Force directs its operations with the same care and restraint as if they were using military weapons in a conventional war.</p><img src="https://counter.theconversation.com/content/158463/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Christopher J. Finlay does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>Cyber attacks have created new dilemmas for philosophers who determine the ethics of war.Christopher J. Finlay, Professor in Political Theory, Durham UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1475092020-10-21T15:45:32Z2020-10-21T15:45:32ZWe must make moral choices about how we relate to social media apps<figure><img src="https://images.theconversation.com/files/364456/original/file-20201020-14-nybccz.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">
</span> <span class="attribution"><span class="source">The Social Dilemma/Netflix</span></span></figcaption></figure><p>Recently a South African <a href="https://www.kfm.co.za/Show/kfm-breakfast">radio show</a> asked, “If you had to choose between your mobile phone and your pet, which would you choose?” Think about that for a moment. Many callers responded that they would choose their phone. I was shocked… But to be honest, I give more attention to my phone than to my beloved dogs!</p>
<p>Throughout history there have been discoveries that have changed society in unimaginable ways. Written language made it possible to communicate over space and time. The printing press, say historians, helped shape societies <a href="https://www.jstor.org/stable/24357082">through</a> the mass dissemination of ideas. New modes of transport <a href="https://hrcak.srce.hr/index.php?id_clanak_jezik=237992&show=clanak">radically transformed</a> social norms by bringing people into contact with new cultures.</p>
<p>Yet these pale in comparison to how the internet is shaping, and misshaping, our individual and social <a href="https://www.counterpointknowledge.org/social-media-as-religion-unexamined-desire-and-mis-information/">identities</a>. I remember the first time I heard a teenager speaking with an American accent and discovered she’d never been out of South Africa but picked up her accent from watching YouTube. We shape our technologies, but they also shape us. </p>
<p>The potentially negative impacts of social media have again been highlighted by <a href="https://www.imdb.com/title/tt11464826/"><em>The Social Dilemma</em></a> on Netflix. The documentary, which Facebook has <a href="https://www.indiewire.com/2020/10/facebook-response-the-social-dilemma-1234590361/">slammed</a> as sensational and unfair, shows how dominant and largely unregulated social media companies manipulate users by harvesting personal data, while using <a href="https://theconversation.com/do-social-media-algorithms-erode-our-ability-to-make-decisions-freely-the-jury-is-out-140729">algorithms</a> to push information and ads that can lead to social media addiction – and dangerous anti-social behaviour. Among other examples, the show highlights the conspiracy theory <a href="https://theconversation.com/how-qanon-uses-satanic-rhetoric-to-set-up-a-narrative-of-good-vs-evil-146281">QAnon</a>, which is <a href="https://www.dailymaverick.co.za/article/2020-09-26-qanon-originated-in-south-africa-now-that-the-global-cult-is-back-here-we-should-all-be-afraid/">increasingly</a> <a href="https://www.thedailybeast.com/qanon-targets-africa-with-new-conspiracy-that-democrats-are-stealing-local-children">targeting</a> Africans.</p>
<p>Despite its flaws, the doccie got me wondering what our relationship to social media should be. As an ethics professor, I’ve come to realise that we must make moral choices about how we relate to our technologies. This requires an honest evaluation of our needs and weaknesses, and a clear understanding of the intentions of these platforms. </p>
<h2>Tug-of-war with technology</h2>
<p><a href="https://www.ynharari.com">Yuval Noah Harari</a>, author of <a href="https://www.theguardian.com/books/2014/sep/11/sapiens-brief-history-humankind-yuval-noah-harari-review"><em>Sapiens</em></a>, contends it’s our ability to inhabit “fiction” that differentiates humans. <a href="https://www.harpercollins.com/products/sapiens-yuval-noah-harari?variant=32207215656994">He claims</a> you “could never convince a monkey to give you a banana by promising him limitless bananas after death in monkey heaven”. Humans have a capacity to believe in things we cannot see – which changes things that do exist. Ideas like prejudice and hatred, for example, are powerful enough to cause wars that displace thousands. </p>
<p>The wall between Israel and Palestine was conceived in people’s minds before being transformed into bricks and barbed wire. Philosopher Olivier Razac’s book <a href="https://thenewpress.com/books/barbed-wire"><em>Barbed Wire: A Political History</em></a> traces how this razor-sharp technology has been deployed from farms that displaced indigenous peoples to the trenches of World War I and the prisons of contemporary democracies. </p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/364455/original/file-20201020-22-1v3ttyf.jpg?ixlib=rb-1.1.0&rect=344%2C2%2C1572%2C778&q=45&auto=format&w=1000&fit=clip"><img alt="A young woman in a bathroom is engaged with her mobile phone, reflected in a mirror." src="https://images.theconversation.com/files/364455/original/file-20201020-22-1v3ttyf.jpg?ixlib=rb-1.1.0&rect=344%2C2%2C1572%2C778&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/364455/original/file-20201020-22-1v3ttyf.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=245&fit=crop&dpr=1 600w, https://images.theconversation.com/files/364455/original/file-20201020-22-1v3ttyf.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=245&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/364455/original/file-20201020-22-1v3ttyf.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=245&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/364455/original/file-20201020-22-1v3ttyf.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=308&fit=crop&dpr=1 754w, https://images.theconversation.com/files/364455/original/file-20201020-22-1v3ttyf.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=308&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/364455/original/file-20201020-22-1v3ttyf.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=308&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Sophia Hammons as Isla in <em>The Social Dilemma</em>.</span>
<span class="attribution"><span class="source">The Social Dilemma/Netflix</span></span>
</figcaption>
</figure>
<p>Technology is in a constant psychological, political and economic tug-of-war with humanity. Yet, some of today’s technologies are much more subtle than barbed wire. They are deeply <a href="https://books.google.co.za/books?hl=en&lr=&id=9wq9DwAAQBAJ&oi=fnd&pg=PA85&dq=info:gxEdWsbuE_0J:scholar.google.com&ots=5b6P23i9n9&sig=oonwZAiBsas7XNjTpP7e8pXq2XM&redir_esc=y#v=onepage&q&f=false">integrated into</a> our lives – they know us better than we know ourselves.</p>
<p>I have thousands of ‘friends’ on social media – far too many to relate to meaningfully. Yet, at times I can be more present to people that I have never met than I am to my family. This is not by chance – social media platforms are <a href="https://www.counterpointknowledge.org/social-media-as-religion-unexamined-desire-and-mis-information/">designed</a> to seek and hold our attention. They are businesses, intent on making money. Harvard University professor <a href="https://www.theguardian.com/books/2019/oct/04/shoshana-zuboff-surveillance-capitalism-assault-human-automomy-digital-privacy">Shoshana Zuboff</a>, who features in the documentary, explains in <a href="https://profilebooks.com/the-age-of-surveillance-capitalism.html"><em>The Age of Surveillance Capitalism</em></a> that social media “trades exclusively in human futures”.</p>
<h2>We are the product</h2>
<p>Zuboff says that social media platforms exploit our emotions and pre-cognitive needs – belonging, recognition, acceptance and pleasure – that are hard-wired into us to secure our survival.</p>
<p>Recognition relates to two of the primary <a href="https://books.google.co.za/books/about/The_Primal_Feast.html?id=TJF_xQAuLOYC&redir_esc=y">functions of the brain</a>: avoiding danger and finding ways to meet our basic survival needs (such as food, or a mate to perpetuate our gene pool). These corporations, she says, are hiring the smartest engineers, social psychologists, behavioural economists and artists to hold our attention, interspersing adverts between our videos, photos and status updates. They make money by selling advertisers predictions of what we will do next.</p>
<p>Or, as former Google and Facebook employee Justin Rosenstein says in <em>The Social Dilemma</em>:</p>
<blockquote>
<p>Our attention is the product being sold to advertisers. </p>
</blockquote>
<p>If our adult brains are so susceptible to this kind of manipulation, what effects are these platforms having on the developing minds of children?</p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/uaaC57tcci0?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
<figcaption><span class="caption">Trailer for <em>The Social Dilemma</em>.</span></figcaption>
</figure>
<p>The documentary also reminds the viewer that social media has a more subtle and powerful influence on our lives – shaping our social and political realities. </p>
<h2>Fake news and hate speech</h2>
<p>The documentary uses an example from 2017 in which Facebook use is linked to <a href="https://www.reuters.com/article/us-facebook-india-content-idUSKBN1X929F">violence</a> that led to the displacement of close to 700,000 Rohingya people in Myanmar. Something that doesn’t really exist (a social media platform) violently changed something that does exist (the safety of people). Facebook was a primary means of communication in Myanmar, and new phones came with Facebook pre-installed. What users were unaware of was a ‘third person’ – Facebook’s algorithms – feeding information that included hate speech and fake news into their conversations. In Africa, similar reports have emerged from <a href="https://www.buzzfeednews.com/article/jasonpatinkin/how-to-get-people-to-murder-each-other-through-fake-news-and#.cfxZRym4z">South Sudan</a> and <a href="https://theconversation.com/a-vicious-online-propaganda-war-that-includes-fake-news-is-being-waged-in-zimbabwe-99402">Zimbabwe</a>.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/netflixs-the-social-dilemma-highlights-the-problem-with-social-media-but-whats-the-solution-147351">Netflix's The Social Dilemma highlights the problem with social media, but what's the solution?</a>
</strong>
</em>
</p>
<hr>
<p>Another example used is the <a href="https://www.theguardian.com/technology/2019/mar/17/the-cambridge-analytica-scandal-changed-the-world-but-it-didnt-change-facebook">Cambridge Analytica</a> <a href="https://theconversation.com/why-facebook-is-the-reason-fake-news-is-here-to-stay-94308">scandal</a>, which also played out in <a href="https://qz.com/africa/1089911/bell-pottinger-and-cambridge-analyticas-work-in-south-africa-kenya-is-raising-questions/">Africa</a>, most notably in <a href="https://theconversation.com/how-the-nigerian-and-kenyan-media-handled-cambridge-analytica-128473">Nigeria and Kenya</a>. Facebook user information was mined and sold to nefarious political actors. This information (such as what people feared and what upset them) was used to spread misinformation and manipulate their voting decisions in important elections.</p>
<h2>What to do about it?</h2>
<p>So, what do we do? We can’t very well give up on social media completely, and I don’t think that is necessary. These technologies are already deeply intertwined with our daily lives, and we cannot deny that they have some value.</p>
<p>However, just as humans had to adapt to the responsible use of the printing press or long-distance travel, we will need to be more intentional about how we relate to these new technologies. We can begin by cultivating healthier social media <a href="https://books.google.co.za/books?hl=en&lr=&id=9wq9DwAAQBAJ&oi=fnd&pg=PA85&dq=info:gxEdWsbuE_0J:scholar.google.com&ots=5b6P23i9n9&sig=oonwZAiBsas7XNjTpP7e8pXq2XM&redir_esc=y#v=onepage&q&f=false">habits</a>.</p>
<p>We should also develop a greater awareness of the aims of these companies and how they achieve them, while understanding how our information is being used. This will allow us to make some simple commitments that align our social media use with our better values.</p>
<p class="fine-print"><em><span>Dion Forster receives funding from the South African National Research Foundation. </span></em></p>As more comes to light about the money-making tactics of social media platforms we need to reevaluate our relationship with them.Dion Forster, Head of Department, Systematic Theology and Ecclesiology, Stellenbosch UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1255782019-10-25T09:53:33Z2019-10-25T09:53:33ZWe aren’t in a cyber war – despite what Britain’s top general thinks<figure><img src="https://images.theconversation.com/files/298543/original/file-20191024-170493-kgf22x.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Cyber attacks aren't warfare.</span> <span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/government-surveillance-agency-military-joint-operation-669170761">Gorodenkoff/Shutterstock</a></span></figcaption></figure><p>The UK is “<a href="https://www.telegraph.co.uk/news/2019/09/29/britain-war-every-day-due-constant-cyber-attacks-chief-defence/?WT.mc_id=tmg_share_tw">at war every day</a>”, the country’s chief of the defence staff, General Sir Nick Carter, recently declared. The reason for Carter’s rather bleak assessment is the proliferation of cyber attacks against Britain’s information networks, and other aggressive but non-violent actions (such as disinformation campaigns) from rival states. He further claimed that the distinction between war and peace has broken down, as competitors increasingly ignore established norms of acceptable behaviour.</p>
<p>Although Carter is right that cyber attacks are a threat to national security, to describe them as war is problematic. War is a distinct activity, with a particular nature. But most cyber attacks are a kind of non-military activity that fall under the broad banner of “<a href="https://foreignpolicy.com/2009/04/08/what-is-grand-strategy-and-why-do-we-need-it/">grand strategy</a>”. </p>
<p>To be sure, Carter is right that warfare is always evolving in line with technology. But our common definitions of the nature of war still largely exclude cyber attacks. War is best defined by scholar of international relations <a href="https://link.springer.com/book/10.1007/978-1-349-24028-9">Hedley Bull</a>, as “organised violence carried on by political units against one another”.</p>
<p>And as 19th-century Prussian general <a href="http://clausewitz.com/">Carl von Clausewitz</a> wrote, war “is a clash … resolved by bloodshed – that is the only way it differs from other conflicts”. States engage in various forms of competition, even conflict. But without violence, they do not constitute war.</p>
<p>By violence, we mean acts of force that result in physical harm to someone or physical damage to something. In contrast to violent acts, most cyber attacks merely manipulate, steal or destroy digital information, causing, at most, economic costs and inconvenience.</p>
<p>That being said, it is theoretically possible for cyber attacks to result in casualties. An attack on air traffic control could produce many casualties. Alternatively, shutting down a power grid (as with <a href="https://www.wired.com/2016/01/everything-we-know-about-ukraines-power-plant-hack/">BlackEnergy, an attack on the Ukrainian grid in 2015</a>) could indirectly result in the deaths of vulnerable citizens. But to date there have been no recorded deaths resulting from cyber attacks.</p>
<p>There is one notable case that gives pause for thought, the <a href="https://spectrum.ieee.org/telecom/security/the-real-story-of-stuxnet">Stuxnet attack on the Iranian nuclear programme</a> (2009-2010). This involved a computer virus that destroyed centrifuges at the uranium enrichment facility in Natanz, Iran. Although there were no fatalities, this incident demonstrates that physical destruction can result from cyber attacks.</p>
<figure class="align-center ">
<img alt="" src="https://images.theconversation.com/files/298546/original/file-20191024-170467-1adav4.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/298546/original/file-20191024-170467-1adav4.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=412&fit=crop&dpr=1 600w, https://images.theconversation.com/files/298546/original/file-20191024-170467-1adav4.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=412&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/298546/original/file-20191024-170467-1adav4.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=412&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/298546/original/file-20191024-170467-1adav4.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=518&fit=crop&dpr=1 754w, https://images.theconversation.com/files/298546/original/file-20191024-170467-1adav4.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=518&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/298546/original/file-20191024-170467-1adav4.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=518&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">Cyber attacks are inconvenient but haven’t killed anyone.</span>
<span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/man-looking-astonished-network-data-center-95440231?src=5w3GYE4n5T3t0mcZS9DlTQ-1-10">Arjuna Kodisinghe</a></span>
</figcaption>
</figure>
<p>We also need to consider the notion of “<a href="http://www.ethikundmilitaer.de/en/full-issues/20142-cyberwar/taddeo-what-ethics-has-to-do-with-the-regulation-of-cyberwarfare/">cyberharm</a>”. In recent years, some legal experts <a href="https://theconversation.com/cyber-attacks-are-rewriting-the-rules-of-modern-warfare-and-we-arent-prepared-for-the-consequences-117043">have proposed</a> that attacks against information networks should be regulated under the international <a href="https://www.icrc.org/en/document/what-are-rules-of-war-Geneva-Conventions">laws of war</a>. Key to this argument is the idea that information networks are so essential to modern life, that to be without them causes harm – cyberharm. Should this principle be made part of the law, it would bring information networks into line with other essentials for life, such as water supplies, which are already protected by international humanitarian law.</p>
<p>And yet, despite Stuxnet and the rise of cyberharm, war still seems an inappropriate term to describe the vast majority of cyber attacks. Indeed, rather than identifying a new and ambiguous relationship between war and peace, Carter appears to be discussing different methods of grand strategy. Grand strategy has traditionally been based on four main tools available to states: diplomacy, intelligence, military and economic. Cyber power creates a fifth tool. Only the military operates in the realm of war, although the other tools can be used in support.</p>
<p>From a grand strategy perspective, many of the nefarious activities (including electoral interference) that use cyber means are best categorised as “covert operations”. These actions, which have a long history in international politics, are typically conducted by intelligence agencies. Indeed, cyber power has traditionally been under the control of intelligence agencies. In Britain, the National Cyber Security Centre (NCSC) exists within intelligence agency <a href="https://www.gchq.gov.uk/">GCHQ</a>, while in the US, <a href="https://warontherocks.com/2019/04/cyber-command-the-nsa-and-operating-in-cyberspace-time-to-end-the-dual-hat/">Cyber Command</a> is still closely associated with <a href="https://www.nsa.gov/">the National Security Agency</a>.</p>
<h2>Escalation danger</h2>
<p>Why does this all matter? What could be the consequences of expanding our understanding of war? First, there is the danger of escalation to physical forms of attack. If we define cyber attacks as acts of war, then we may feel justified in responding with physical violence. In this way, the threshold for resorting to violence is lowered. For example, in the <a href="https://media.defense.gov/2018/Feb/02/2001872886/-1/-1/1/2018-NUCLEAR-POSTURE-REVIEW-FINAL-REPORT.PDF">2018 Nuclear Posture Review</a>, the Trump administration threatened a nuclear response to non-nuclear strategic attacks against critical US infrastructure.</p>
<p>What’s more, war has consequences. In war, governments may feel justified in restricting certain freedoms, diverting resources and expecting certain sacrifices from the population (both in and out of uniform). There is also the danger of “alert fatigue”. If a society is constantly in a state of war, then people’s senses may become dulled to genuine existential threats when they appear. Cyber attacks are a threat, but they do not endanger the continued existence of the nation.</p>
<p>Treating cyber attacks as a form of warfare means seeing too much novelty in the new cyber domain. Certainly, the technology and techniques of statecraft are changing, but states have always conducted different competitive activities across the entire range of grand strategy. </p>
<p>Cyber attacks can be used in support of military operations. The <a href="https://www.theguardian.com/world/2018/mar/21/israel-admits-it-carried-out-2007-airstrike-on-syrian-nuclear-reactor">2007 Israeli air attack</a> on the Syrian nuclear facility at al-Kibar is <a href="https://www.researchgate.net/publication/296808631_Cyber-combat's_first_shot">suspected to have included</a> cyber attacks on Syria’s radar system. But cyber attacks are more commonly used in non-violent covert operations, including espionage, sabotage and propaganda. Certainly, cyber attacks are a security threat that must be addressed. But, in the absence of violence, they do not constitute acts of war. And so <a href="https://www.foreignaffairs.com/articles/2013-10-15/cyberwar-and-peace">cyberwar</a> is a term we should reject.</p>
<p class="fine-print"><em><span>David J. Lonsdale was part of a research team that received funding from the ESRC for the project 'Ethics and Rights in Cyber Security'. </span></em></p>Treating non-violent cyber attacks as warfare could lead to unnecessary escalation.David J. Lonsdale, Senior Lecturer in War Studies, University of HullLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1144382019-04-05T05:16:28Z2019-04-05T05:16:28ZArtificial intelligence in Australia needs to get ethical, so we have a plan<figure><img src="https://images.theconversation.com/files/267756/original/file-20190405-180036-d87u05.jpg?ixlib=rb-1.1.0&rect=1413%2C288%2C3055%2C1849&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Artificial intelligence needs to be developed with an ethical framework.</span> <span class="attribution"><span class="source">Shutterstock/Alexander Supertramp </span></span></figcaption></figure><p>The question of whether technology is good or bad depends on how it’s developed and used. Nowhere is that more topical than in technologies using artificial intelligence.</p>
<p>When developed and used appropriately, artificial intelligence (<a href="https://theconversation.com/au/topics/artificial-intelligence-90">AI</a>) has the potential to transform the way we live, work, communicate and travel. </p>
<p>New <a href="https://www.newscientist.com/article/2193361-ai-can-diagnose-childhood-illnesses-better-than-some-doctors/">AI-enabled medical technologies</a> are being developed to improve patient care. There are persuasive indications that autonomous vehicles will <a href="https://www.zdnet.com/article/how-autonomous-vehicles-could-save-over-350k-lives-in-the-us-and-millions-worldwide/">improve safety and reduce the road toll</a>. Machine learning and automation are streamlining workflows and allowing us to <a href="https://www.business.com/articles/9-ai-applications-to-streamline-business/">work smarter</a>.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/to-protect-us-from-the-risks-of-advanced-artificial-intelligence-we-need-to-act-now-107615">To protect us from the risks of advanced artificial intelligence, we need to act now</a>
</strong>
</em>
</p>
<hr>
<p>Around the world, AI-enabled technology is increasingly being adopted by individuals, governments, organisations and institutions. But along with the vast potential to improve our quality of life comes a risk to our basic human rights and freedoms.</p>
<p>Appropriate oversight, guidance and understanding of the way AI is used and developed in Australia must be prioritised.</p>
<p>AI gone wild may conjure images of the movies <a href="https://www.imdb.com/title/tt0088247/">The Terminator</a> and <a href="https://www.imdb.com/title/tt0470752/">Ex Machina</a>, but the issues that need to be addressed at present are much simpler and more fundamental, such as:</p>
<ul>
<li>how data is used to develop AI</li>
<li>whether an AI system is being used fairly</li>
<li>in which situations we should continue to rely on human decision-making.</li>
</ul>
<h2>We have an AI ethics plan</h2>
<p>That’s why, in partnership with government and industry, we’ve developed an <a href="https://consult.industry.gov.au/strategic-policy/artificial-intelligence-ethics-framework/">ethics framework for AI in Australia</a>. The aim is to catalyse the discussion around how AI should be used and developed in Australia.</p>
<p>The ethical framework looks at various case studies from around the world to discuss how AI has been used in the past and the impacts that it has had. The case studies help us understand where things went wrong and how to avoid repeating past mistakes.</p>
<p>We also looked at what was being done around the world to address ethical concerns about AI development and use.</p>
<iframe src="https://www.google.com/maps/d/embed?mid=1ORZdoiDSrGjhCT_6XRLizBr5oRrR5KPO" width="100%" height="480"></iframe>
<p>Based on the core issues and impacts of AI, eight principles were identified to support the ethical use and development of AI in Australia.</p>
<ol>
<li><p><strong>Generates net benefits:</strong> The AI system must generate benefits for people that are greater than the costs.</p></li>
<li><p><strong>Do no harm:</strong> Civilian AI systems must not be designed to harm or deceive people and should be implemented in ways that minimise any negative outcomes.</p></li>
<li><p><strong>Regulatory and legal compliance:</strong> The AI system must comply with all relevant international, Australian local, state/territory and federal government obligations, regulations and laws.</p></li>
<li><p><strong>Privacy protection:</strong> Any system, including AI systems, must ensure people’s private data is protected and kept confidential and prevent data breaches that could cause reputational, psychological, financial, professional or other types of harm.</p></li>
<li><p><strong>Fairness:</strong> The development or use of the AI system must not result in unfair discrimination against individuals, communities or groups. This requires particular attention to ensure the “training data” is free from bias or characteristics which may cause the algorithm to behave unfairly.</p></li>
<li><p><strong>Transparency and explainability:</strong> People must be informed when an algorithm is being used that impacts them and they should be provided with information about what information the algorithm uses to make decisions.</p></li>
<li><p><strong>Contestability:</strong> When an algorithm impacts a person there must be an efficient process to allow that person to challenge the use or output of the algorithm.</p></li>
<li><p><strong>Accountability:</strong> People and organisations responsible for the creation and implementation of AI algorithms should be identifiable and accountable for the impacts of that algorithm, even if the impacts are unintended.</p></li>
</ol>
<p>In addition to the core principles, the framework identifies various toolkit items that could be used to help support them. These include impact assessments, ongoing monitoring and public consultation.</p>
<h2>A plan, what about action?</h2>
<p>But principles and ethical goals can only go so far. At some point we will need to get to work on deciding how we are going to implement and achieve them. </p>
<p>There are various complexities to consider when discussing the ethical use and development of AI. The technology’s vast reach means it has the potential to affect every facet of our lives.</p>
<p>AI applications are already in use across <a href="https://www.theverge.com/circuitbreaker/2018/9/30/17914022/smart-speaker-40-percent-us-households-nielsen-amazon-echo-google-home-apple-homepod">households</a>, <a href="https://www.theaustralian.com.au/business/technology/australian-bosses-embrace-artificial-intelligence/news-story/336a20c3a1df43d21947d57be780e7d1">businesses</a> and <a href="https://www2.deloitte.com/insights/us/en/focus/cognitive-technologies/artificial-intelligence-government.html">governments</a>, so most Australians are already being affected by them.</p>
<p>There is a pressing need to examine the effects that AI has on the vulnerable and on minority groups, making sure we protect these individuals and communities from bias, discrimination and exploitation. (Remember <a href="https://theconversation.com/microsofts-racist-chatbot-tay-highlights-how-far-ai-is-from-being-truly-intelligent-56881">Tay, the racist chatbot</a>?)</p>
<p>There is also the fact that AI used in Australia will often be developed in other countries, so how do we ensure it adheres to Australian standards and expectations?</p>
<h2>Your say</h2>
<p>The framework explores these issues and forms some of Australia’s first steps on the journey towards the positive development and use of AI. But true progress needs input from stakeholders across government, business, academia and broader society.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/careful-how-you-treat-todays-ai-it-might-take-revenge-in-the-future-112611">Careful how you treat today's AI: it might take revenge in the future</a>
</strong>
</em>
</p>
<hr>
<p>That’s why the ethical framework discussion paper is now <a href="https://consult.industry.gov.au/strategic-policy/artificial-intelligence-ethics-framework/">open to public comment</a>. You have until May 31, 2019, to have your say in Australia’s digital future.</p>
<p>With a proactive approach to the ethical development of AI, Australia can do more than just mitigate risks. If we can build AI for a fairer go, we can secure a competitive advantage as well as safeguard the rights of Australians.</p>
<p class="fine-print"><em><span>Emma Schleiger receives funding from the Australian Government</span></em></p><p class="fine-print"><em><span>Stefan Hajkowicz receives funding from the Australian Government</span></em></p>Artificial intelligence has the potential to transform the way we live, work, communicate and travel. So long as it’s designed that way.Emma Schleiger, Research Scientist, CSIROStefan Hajkowicz, Senior Principal Scientist, Strategy and Foresight, Data61Licensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1114152019-03-22T10:45:05Z2019-03-22T10:45:05ZCars are regulated for safety – why not information technology?<figure><img src="https://images.theconversation.com/files/265128/original/file-20190321-93063-ouhsqj.jpg?ixlib=rb-1.1.0&rect=152%2C6%2C4096%2C2816&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Modern cars are safer than this – but not because auto companies got more ethical.</span> <span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/bakersfield-ca-oct-24-beautifully-restored-39559930">Richard Thornton/Shutterstock.com</a></span></figcaption></figure><p>As the computing industry grapples with its role in society, many people, both <a href="https://gzconsulting.org/2018/06/04/salesforce-there-is-a-crisis-of-trust-concerning-data-privacy-and-cybersecurity/">in the field</a> and <a href="https://www.wsj.com/articles/in-praise-of-hierarchy-1515175338">outside it</a>, are talking about a <a href="https://www.bostonglobe.com/ideas/2018/03/22/computer-science-faces-ethics-crisis-the-cambridge-analytica-scandal-proves/IzaXxl2BsYBtwM4nxezgcP/story.html">crisis</a> of <a href="https://www.wsj.com/articles/the-culture-of-deathand-of-disdain-1507244198">ethics</a>. </p>
<p>There is a massive rush to hire <a href="https://www.nytimes.com/2018/10/21/opinion/who-will-teach-silicon-valley-to-be-ethical.html">chief ethics officers</a>, retool <a href="https://theconversation.com/programmers-need-ethics-when-designing-the-technologies-that-influence-peoples-lives-100802">codes of professional ethics</a> and <a href="https://news.harvard.edu/gazette/story/2019/01/harvard-works-to-embed-ethics-in-computer-science-curriculum/">teach ethics to students</a>. But as a <a href="https://scholar.google.com/citations?user=DQaARsgAAAAJ&hl=en">scholar of computing</a> – and a teacher of <a href="https://www.cs.rice.edu/%7Evardi/COMP301-2019.pdf">a course on computing, ethics and society</a> at Rice University – I am skeptical of the assumptions that what ails technology is a lack of ethics, and that the best fix is to teach technologists about ethics.</p>
<p>Instead, in my view, the solution is government action, which aims at balancing regulation, ethics and markets. This isn’t a radical new idea: It’s how society treats cars and driving.</p>
<p>Consider, for instance, the Ford Model T, the first mass-produced and mass-consumed automobile. Its debut in 1908 launched the automobile age, a time of great mobility – and widespread death. Car crashes kill <a href="https://www.who.int/gho/road_safety/mortality/en/">more than a million people worldwide each year</a> – but the fatality rate per mile driven <a href="https://en.wikipedia.org/wiki/Transportation_safety_in_the_United_States">has been dropping</a> almost since the first Model T rolled off the assembly line. </p>
<p><iframe id="m9zdG" class="tc-infographic-datawrapper" src="https://datawrapper.dwcdn.net/m9zdG/1/" height="400px" width="100%" style="border: none" frameborder="0"></iframe></p>
<p>The reason for that improving safety record is not that people learning to drive studied the ethics of responsible and safe driving. Rather, they were taught, and tested on, the rules of the road, in order to obtain a driving license. Other regulations <a href="https://www.fhwa.dot.gov/programadmin/standards.cfm">improved how roads were built</a>, required car makers to adopt <a href="https://www.npr.org/2015/10/16/449090584/why-arent-auto-safety-standards-universal">new safety features</a>, mandated <a href="https://www.nerdwallet.com/blog/insurance/car-insurance/">accident insurance</a>, and <a href="https://www.nhtsa.gov/laws-regulations/impaired-driving">outlawed drunk driving</a> and <a href="https://en.wikipedia.org/wiki/Texting_while_driving">other unsafe behaviors</a>. I believe a similar approach – regulation, in addition to ethics education for technologists, as well as market competition – is needed today to make modern technology safe for society as a whole.</p>
<h2>Flaws in the basic business model</h2>
<p>In the 1980s, internet pioneers adopted a philosophy that “<a href="https://en.wikipedia.org/wiki/Information_wants_to_be_free">information wants to be free</a>” – so website owners didn’t charge readers for access to the content. Instead, internet companies used advertising to support their efforts. That led them to collect personal data on their users and offer <a href="https://theconversation.com/solving-the-political-ad-problem-with-transparency-85366">micro-targeted advertising</a> to make money, which social scientist Shoshana Zuboff calls “<a href="https://www.publicaffairsbooks.com/titles/shoshana-zuboff/the-age-of-surveillance-capitalism/9781610395694/">surveillance capitalism</a>.”</p>
<p>This business model is <a href="https://www.macrotrends.net/stocks/charts/GOOG/alphabet/revenue">enormously profitable</a>, so it’s unlikely internet companies will abandon it on their own as a result of ethical qualms. Even in the face of <a href="https://techcrunch.com/2018/10/24/apples-tim-cook-makes-blistering-attack-on-the-data-industrial-complex/">blistering critiques</a> and <a href="https://theconversation.com/understanding-facebooks-data-crisis-5-essential-reads-94066">Facebook’s Cambridge Analytica</a> scandal, the massive profits are compelling.</p>
<p><iframe id="19C8Z" class="tc-infographic-datawrapper" src="https://datawrapper.dwcdn.net/19C8Z/1/" height="400px" width="100%" style="border: none" frameborder="0"></iframe></p>
<p>The real problem with surveillance capitalism is not that it is unethical – which I believe it is – but that it is completely legal in most countries. It is unreasonable to expect for-profit corporations to avoid profitable businesses that are legal. In my view, it is not enough to simply criticize internet companies’ ethics. If society finds the surveillance business model offensive, then the remedy is not ethical outrage, but laws and regulations that govern it, or even prevent it altogether.</p>
<p>Of course, public policy cannot be divorced from ethics. Selling human organs <a href="https://doi.org/10.1097%2FTP.0000000000001778">is banned in the U.S.</a> in part because society finds it ethically repugnant to profit from life itself. But the ban is enforced by laws, not by an ongoing ethics debate. As Chief Justice Earl Warren remarked: “<a href="https://www.brainyquote.com/quotes/earl_warren_112607">In civilized life, law floats in a sea of ethics</a>.”</p>
<h2>Regulation has benefits</h2>
<p>For decades, the information-technology industry has <a href="https://www.wired.com/story/the-case-against-elon-musk-will-chill-innovation/">successfully lobbied</a> against attempts to legislate or regulate it, arguing that “<a href="https://bigthink.com/peter-thiel-regulation-stifles-innovation">regulation stifles innovation</a>.” Of course, that assumes all innovation is good. It has become evidently clear that this is not always the case: Some of the internet giants’ <a href="https://theconversation.com/facebook-is-killing-democracy-with-its-personality-profiling-data-93611">innovation has harmed democratic society</a> in the U.S. and around the world. </p>
<p>In fact, one purpose of regulation is to chill certain kinds of innovation – specifically, those that the public finds wrong, distasteful or unhelpful to the advancement of society. Regulation can also encourage innovation in ways society deems beneficial. There is no question that regulations on the automobile industry encouraged innovation in <a href="https://en.wikipedia.org/wiki/Automotive_safety">safety</a> and <a href="https://www.ucsusa.org/clean-vehicles/fuel-efficiency/fuel-economy-basics.html">fuel efficiency</a>.</p>
<p>Some members of Congress have proposed a number of <a href="https://www.documentcloud.org/documents/4620765-PlatformPolicyPaper.html#document/p1">ambitious plans</a> to tackle <a href="https://theconversation.com/weaponized-information-seeks-a-new-target-in-cyberspace-users-minds-100069">information warfare</a>, <a href="https://theconversation.com/fragmented-us-privacy-rules-leave-large-data-loopholes-for-facebook-and-others-94606">consumer protection</a>, <a href="https://theconversation.com/big-tech-isnt-one-big-monopoly-its-5-companies-all-in-different-businesses-92791">competition in digital technology</a> and the <a href="https://theconversation.com/artificial-intelligence-must-know-when-to-ask-for-human-help-112207">role of artificial intelligence</a> in society. But much simpler – and more widely supported – rules could make a huge difference for individual customers and society as a whole.</p>
<p>For instance, federal regulators could require that software terms and licenses include plain language that’s easily understood by anyone – perhaps modeled on the longstanding “<a href="https://www.sec.gov/rules/final/33-7497.txt">plain English rule</a>” for corporate financial filings to the U.S. Securities and Exchange Commission. Laws or rules could also require companies to <a href="https://finance.yahoo.com/news/need-federal-law-protecting-consumers-data-leaks-195017782.html">disclose data breaches quickly</a>, both to officials and the public at large. That might even spark innovation as firms increase their efforts to prevent and detect network intrusions and data theft. Another relatively easy opportunity would be to regulate automated judicial decision systems, including requiring that they not be deployed before passing an independent audit showing that they are <a href="https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing">fair and unbiased</a>.</p>
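As a hypothetical sketch of one check such an independent audit might run, the code below tests a batch of automated decisions against a common fairness criterion, demographic parity. The group labels, decision data and the 0.8 threshold (the so-called “four-fifths rule”) are illustrative assumptions, not part of any proposed regulation.

```python
# Toy fairness audit: compare approval rates across groups.
# Data, group names and threshold are illustrative only.

def approval_rate(decisions):
    """Fraction of positive (approval) decisions in a list of 0/1 outcomes."""
    return sum(decisions) / len(decisions)

def passes_demographic_parity(by_group, ratio_threshold=0.8):
    """True if every group's approval rate is at least ratio_threshold
    times the highest group's approval rate."""
    rates = {group: approval_rate(d) for group, d in by_group.items()}
    highest = max(rates.values())
    return all(rate >= ratio_threshold * highest for rate in rates.values())

# Illustrative decisions (1 = approve, 0 = deny) for two groups.
audit_data = {
    "group_a": [1, 1, 1, 0, 1, 1, 0, 1],   # 75% approved
    "group_b": [1, 0, 0, 1, 0, 1, 0, 0],   # 37.5% approved
}
print(passes_demographic_parity(audit_data))  # 0.375 < 0.8 * 0.75 -> False
```

A real audit would of course examine many more criteria (and far more data), but even a check this simple makes the point that “fair and unbiased” can be given an operational, testable meaning.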
<p>Those straightforward regulations could pave the way for thinking and talking about whether and how to regulate <a href="https://www.cnn.com/2018/12/17/tech/big-tech-too-big-tim-wu/index.html">the sizes of these big technology firms</a>. But rule-making need not start with the hardest problems – there’s plenty to do that most people would agree on right away.</p>
<p>The bottom line is that technology advances have been moving <a href="https://en.wikipedia.org/wiki/Moore%27s_law">very fast</a>, while public policy has lagged behind. It is time for public policy to catch up with technology. If technology is driving the future, society should do the steering.</p><img src="https://counter.theconversation.com/content/111415/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Moshe Y. Vardi is affiliated with the Association for Computing Machinery, a professional association. </span></em></p>Of course people need ethics. But the current troubles in the technology industry are not evidence of an ethics crisis; it is a public-policy crisis.Moshe Y. Vardi, Professor of Computer Science, Rice UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/955232018-05-15T02:37:20Z2018-05-15T02:37:20ZThe ethics of ‘securitising’ Australian cyberspace<p><em>This article is the fifth in a five-part series exploring Australian national security in the digital age. Read parts <a href="https://theconversation.com/explainer-how-the-australian-intelligence-community-works-94422">one</a>, <a href="https://theconversation.com/trust-is-the-second-casualty-in-the-war-on-terror-94420">two</a>, <a href="https://theconversation.com/this-isnt-helter-skelter-why-the-internet-alone-cant-be-blamed-for-radicalisation-94825">three</a> and <a href="">four</a> here.</em></p>
<hr>
<p>As technology evolves and Australia becomes ever-more reliant on cyber systems throughout government and society, the threats that cyber attacks pose to the country’s national security are real – and significant.</p>
<p>Cyber weapons now exist that can be used to attack and exploit vulnerabilities in Australia’s national infrastructure. Many current cyber threats, such as website defacement, are relatively minor.</p>
<p>But more nefarious attacks on software systems have the potential to damage <a href="https://theconversation.com/the-public-has-a-vital-role-to-play-in-preventing-future-cyber-attacks-95141">critical infrastructure</a> and threaten people’s lives. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/since-boston-bombing-terrorists-are-using-new-social-media-to-inspire-potential-attackers-94944">Since Boston bombing, terrorists are using new social media to inspire potential attackers</a>
</strong>
</em>
</p>
<hr>
<p>The Australian Cyber Security Centre (ACSC) <a href="https://acsc.gov.au/publications/ACSC_Threat_Report_2017.pdf">Threat Report</a> addresses these concerns every year, highlighting the ubiquitous nature of cyber-crime in Australia, the potential for cyber-terrorism, and the vulnerability of data stored on government and commercial networks.</p>
<p>Governments now take these threats so seriously that they speak of potential military responses to cyber-attacks in the future. As one US military official <a href="https://www.wsj.com/articles/SB10001424052702304563104576355623135782718">told The Wall Street Journal</a>:</p>
<blockquote>
<p>If you shut down our power grid, maybe we will put a missile down one of your smokestacks.</p>
</blockquote>
<h2>A securitised internet</h2>
<p>Such concerns have been a key part of Australia’s ambitions to revamp its national security to respond to future cyber-threats. <a href="https://cybersecuritystrategy.pmc.gov.au/assets/img/PMC-Cyber-Strategy.pdf">Australia’s Cyber Security Strategy</a>, for instance, states that:</p>
<blockquote>
<p>all of us – governments, businesses and individuals – need to work together to build resilience to cybersecurity threats and to make the most of opportunities online. </p>
</blockquote>
<p>An important ethical concern with such a focus, however, is the risk that Australia’s cyberspace becomes “securitised”.</p>
<p>When we securitise an issue, we frame the activity as being conducted in a state of emergency. A state of emergency is when a government temporarily changes the conditions of its political and social institutions in response to a particularly serious emergency. This might be a natural disaster, war or rioting, for example. Importantly, due process constraints on government officials, such as <a href="https://en.wikipedia.org/wiki/Habeas_corpus"><em>habeas corpus</em></a>, are suspended. </p>
<p>An ethical problem with a securitised or militarised cyberspace, especially if it becomes a permanent measure, is that it can quickly erode fundamental human rights such as privacy and freedom of speech. </p>
<h2>Ethical problems in a brave new world</h2>
<p>For instance, what are the ethical implications of conducting military activities against terrorist propaganda online, by conducting psychological operations on social media platforms, say, or simply shutting them down? </p>
<p>Using social media in this way would be counter to the social and civil function of these channels of communication. Trying to deny audiences the ability to speak freely on social media could also undermine the internet’s effectiveness as a tool for social and economic good. This is especially problematic in Australia, where human rights such as privacy and freedom of speech are taken for granted as fundamental civic values.</p>
<p>There is also potential for a militarised cyberspace to increase the likelihood of conflict between states. As cyber-attacks are a relatively new threat, it’s unclear what actions might lead to escalation and constitute an act of war.</p>
<p>The perception that cyber-attacks are not as harmful as, say, a missile attack could lead to their increased use. This opens the door to potentially more serious forms of conflict. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/the-cyber-security-strategy-is-only-a-small-step-in-the-right-direction-58208">The Cyber Security Strategy is only a small step in the right direction</a>
</strong>
</em>
</p>
<hr>
<p>Another important ethical consideration is the enhanced government surveillance of a securitised internet. The fall-out from the Edward Snowden disclosures, for instance, <a href="https://www.theguardian.com/world/2013/jun/06/nsa-phone-records-verizon-court-order">revealed the intrusiveness of US security agencies’ activities online</a>. This in turn had the effect of undermining the <a href="https://theconversation.com/we-need-to-fix-the-way-we-talk-about-national-intelligence-32170">public’s trust</a> in the government. </p>
<p>Such a loss of trust in one segment of the government can have potentially dire impacts on other areas. For example, in response to public suspicions of the actions of security agencies, governments might overreact and cut worthwhile surveillance programmes. Or disgruntled government employees (like Snowden) might leak other types of confidential or sensitive information to the detriment of the public good. </p>
<p>A recent example of this occurred when highly sensitive correspondence between Home Affairs Secretary Mike Pezzullo and Defence Secretary Greg Moriarty <a href="https://www.dailytelegraph.com.au/news/nsw/spying-shock-shades-of-big-brother-as-cybersecurity-vision-comes-to-light/news-story/bc02f35f23fa104b139160906f2ae709">was leaked</a> to the media. The communications detailed plans to give the Australian Signals Directorate new domestic surveillance powers. Mark Dreyfus, the national security shadow minister, <a href="https://twitter.com/markdreyfusQCMP/status/991226168094310400">labelled the leak</a> “a deeply worrying signal of internal struggles”.</p>
<p>So it is important that Australian government agencies tasked with managing national security in cyberspace consistently act in a trustworthy manner. As such, there should be guarantees that decisions related to cyber-security oversight and governance are not driven by short-term political gains. </p>
<p>In particular, government decision-makers <a href="https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2894490">should seek to promote</a> an informed and public debate about the standards required for “minimum transparency, accountability and oversight of government surveillance practices.”</p>
<p>Anything short of that could make the country’s cyber-infrastructure less secure – a frightening prospect in an increasingly hostile and volatile digital world.</p><img src="https://counter.theconversation.com/content/95523/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Dr Shannon Brandt Ford receives funding from the Australian Army Research Scheme. </span></em></p>Framing cyberspace as a national security concern can quickly erode fundamental human rights.Dr Shannon Brandt Ford, Lecturer, Curtin UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/805692017-07-10T12:21:33Z2017-07-10T12:21:33ZAsimov’s Laws won’t stop robots harming humans so we’ve developed a better solution<figure><img src="https://images.theconversation.com/files/177524/original/file-20170710-29710-1p3gdyz.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">
</span> <span class="attribution"><span class="source">Shutterstock</span></span></figcaption></figure><p>How do you stop a robot from hurting people? Many existing robots, such as those assembling cars in factories, shut down immediately when a human comes near. But this quick fix wouldn’t work for something like a self-driving car that might have to move to avoid a collision, or a care robot that might need to catch an old person if they fall. With robots set to become our servants, companions and co-workers, we need to deal with the increasingly complex situations this will create and the ethical and safety questions this will raise.</p>
<p>Science fiction already envisioned this problem and has suggested various potential solutions. The most famous was author Isaac Asimov’s Three Laws of Robotics, which are designed to prevent robots harming humans. But since 2005, my colleagues and I at the University of Hertfordshire have been working on <a href="http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0004018">an idea</a> that could be an alternative. </p>
<p>Instead of laws to restrict robot behaviour, we think robots should be empowered to maximise the possible ways they can act so they can pick the best solution for any given scenario. As we describe in a new paper <a href="http://journal.frontiersin.org/article/10.3389/frobt.2017.00025/full">in Frontiers</a>, this principle could form the basis of a new set of universal guidelines for robots to keep humans as safe as possible.</p>
<h2>The Three Laws</h2>
<p>Asimov’s Three Laws are as follows:</p>
<ul>
<li>A robot may not injure a human being or, through inaction, allow a human being to come to harm.</li>
<li>A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.</li>
<li>A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.</li>
</ul>
<p>While these laws sound plausible, <a href="http://io9.gizmodo.com/why-asimovs-three-laws-of-robotics-cant-protect-us-1553665410">numerous arguments</a> have demonstrated why they are inadequate. <a href="https://en.wikipedia.org/wiki/The_Complete_Robot">Asimov’s own stories</a> are arguably a deconstruction of the laws, showing how they repeatedly fail in different situations. <a href="https://www.epsrc.ac.uk/research/ourportfolio/themes/engineering/activities/principlesofrobotics/">Most attempts</a> to draft <a href="https://futureoflife.org/ai-principles">new guidelines</a> follow a similar principle, aiming to create safe, compliant and robust robots.</p>
<p>One problem with any explicitly formulated robot guidelines is the need to translate them into a format that robots can work with. Understanding the full range of human language and the experience it represents is a very hard job for a robot. Broad behavioural goals, such as preventing harm to humans or protecting a robot’s existence, can mean different things in <a href="https://theconversation.com/after-75-years-isaac-asimovs-three-laws-of-robotics-need-updating-74501">different contexts</a>. Sticking to the rules might end up leaving a robot helpless to act as its creators might hope.</p>
<figure class="align-center ">
<img alt="" src="https://images.theconversation.com/files/177530/original/file-20170710-23474-1ywk3bs.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/177530/original/file-20170710-23474-1ywk3bs.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=401&fit=crop&dpr=1 600w, https://images.theconversation.com/files/177530/original/file-20170710-23474-1ywk3bs.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=401&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/177530/original/file-20170710-23474-1ywk3bs.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=401&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/177530/original/file-20170710-23474-1ywk3bs.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/177530/original/file-20170710-23474-1ywk3bs.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/177530/original/file-20170710-23474-1ywk3bs.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">Here to help.</span>
<span class="attribution"><span class="source">Shutterstock</span></span>
</figcaption>
</figure>
<p>Our alternative concept, empowerment, stands for the opposite of helplessness. Being empowered means having the ability to affect a situation and being aware that you can. We have been developing ways to translate this social concept into a quantifiable and operational technical language. This would endow robots with the drive to keep their options open and act in a way that increases their influence on the world.</p>
<p>When we tried simulating how robots would use the empowerment principle in various scenarios, we found they would often act in surprisingly “natural” ways. It typically only requires them to model how the real world works but doesn’t need any specialised artificial intelligence programming designed to deal with the particular scenario.</p>
<p>But to keep people safe, the robots need to try to maintain or improve human empowerment as well as their own. This essentially means being protective and supportive. Opening a locked door for someone would increase their empowerment. Restraining them would result in a short-term loss of empowerment. And significantly hurting them could remove their empowerment altogether. At the same time, the robot has to try to maintain its own empowerment, for example by ensuring it has enough power to operate and it does not get stuck or damaged.</p>
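The authors’ actual formulation is information-theoretic, but a toy sketch can convey the intuition. Under simplifying assumptions of our own (a deterministic grid world, a five-move action set and a short planning horizon, none of which come from the paper), empowerment can be approximated as the logarithm of the number of distinct states an agent can reach within a few steps:

```python
# Toy empowerment sketch: in a deterministic grid world, approximate
# empowerment as log2 of the number of distinct cells reachable in
# `steps` moves. (The real measure is the channel capacity between
# the agent's actions and its future sensor states.)
import math

MOVES = [(0, 1), (0, -1), (1, 0), (-1, 0), (0, 0)]  # up, down, right, left, stay

def reachable(state, steps, blocked):
    """All distinct cells reachable from `state` within `steps` moves,
    never entering cells in `blocked`."""
    frontier = {state}
    for _ in range(steps):
        frontier = {
            (x + dx, y + dy)
            for (x, y) in frontier
            for (dx, dy) in MOVES
            if (x + dx, y + dy) not in blocked
        }
    return frontier

def empowerment(state, steps, blocked):
    """log2 of the reachable-state count: a crude stand-in for capacity."""
    return math.log2(len(reachable(state, steps, blocked)))

# An agent in open space is more "empowered" than one walled in on three sides.
walls = {(0, 1), (1, 0), (0, -1)}
print(empowerment((5, 5), 2, set()) > empowerment((0, 0), 2, walls))  # True
```

On this reading, restraining a person shrinks their reachable-state set, and opening a door enlarges it, which is exactly the protective behaviour described above.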
<h2>Robots could adapt to new situations</h2>
<p>Using this general principle rather than predefined rules of behaviour would allow the robot to take account of the context and evaluate scenarios no one has previously envisaged. For example, instead of always following the rule “don’t push humans”, a robot would generally avoid pushing them but still be able to push them out of the way of a falling object. The human might still be harmed but less so than if the robot didn’t push them.</p>
<p>In the film I, Robot, based on several Asimov stories, robots create an oppressive state that is supposed to minimise the overall harm to humans by keeping them confined and “protected”. But our principle would avoid such a scenario because it would mean a loss of human empowerment.</p>
<p>While empowerment provides a new way of thinking about safe robot behaviour, we still have much work to do on scaling up its efficiency so it can easily be deployed on any robot and translate to good and safe behaviour in all respects. This poses a very difficult challenge. But we firmly believe empowerment can lead us towards a practical solution to the ongoing and highly debated problem of how to rein in robots’ behaviour, and how to keep robots – in the most naive sense – “ethical”.</p><img src="https://counter.theconversation.com/content/80569/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Christoph Salge is currently funded by the EU Horizon 2020 programme under the Marie Sklodowska-Curie grant 705643 (INTERCOGAM), and has previously funded by the EC H2020-641321 socSMCs FET Proactive project.</span></em></p>Robots should be empowered to pick the action that most helps humans.Christoph Salge, Marie Curie Global Fellow, University of HertfordshireLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/777882017-05-19T07:20:59Z2017-05-19T07:20:59ZAn ethical hacker can help you beat a malicious one<figure><img src="https://images.theconversation.com/files/170057/original/file-20170519-12226-1jxealn.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Not all hackers can be bad for an organisation: the white hat or ethical hacker can help.</span> <span class="attribution"><span class="source">Shutterstock/napocska </span></span></figcaption></figure><p>The <a href="http://www.abc.net.au/news/2017-05-18/adylkuzz-cyberattack-could-be-far-worse-than-wannacry:-expert/8537502">recent spate of cyber attacks</a> on computer systems across the world shows how some organisations are not doing enough to protect their systems against malicious hackers.</p>
<p>But if organisations had engaged the services of an ethical hacker then many of the vulnerabilities on their systems could have been found and fixed, rather than exploited.</p>
<p>There are many instances in which ethical hacking has successfully prevented a potential attack, but because of the sensitive nature of such information, few cases are made public. <a href="https://www.iansresearch.com/services/consulting/case-study---pen-testing">This anonymised example</a> highlights the type of issues that can be uncovered by an ethical hacker, which can then be addressed by the client.</p>
<h2>Putting on your hacker hat</h2>
<p>There are typically three types of hacker: “black hat”, “grey hat” and “white hat”. </p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/170058/original/file-20170519-12242-1jbohtm.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/170058/original/file-20170519-12242-1jbohtm.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/170058/original/file-20170519-12242-1jbohtm.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=256&fit=crop&dpr=1 600w, https://images.theconversation.com/files/170058/original/file-20170519-12242-1jbohtm.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=256&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/170058/original/file-20170519-12242-1jbohtm.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=256&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/170058/original/file-20170519-12242-1jbohtm.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=322&fit=crop&dpr=1 754w, https://images.theconversation.com/files/170058/original/file-20170519-12242-1jbohtm.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=322&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/170058/original/file-20170519-12242-1jbohtm.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=322&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Three types of hacker: Black hat, grey hat and white hat.</span>
<span class="attribution"><span class="source">Shutterstock/MatiasDelCarmine</span></span>
</figcaption>
</figure>
<p>Black hat hackers are typically malicious; they operate illegally and attempt to breach or bypass security controls. Their motivation can be for personal, political or financial gain, or simply to cause havoc. </p>
<p>Grey hat hackers also try to find vulnerabilities in an organisation, and may then alert the organisation or publish the vulnerability.</p>
<p>Grey hats can sometimes sell the vulnerabilities to government or law-enforcement agencies, who may use them for questionable purposes in conflict or enforcement. The activities of a grey hat are not only ethically questionable but also illegal, because they are conducted without permission.</p>
<p>White hat hackers use the same tools and techniques as their black and grey hat counterparts, but they are engaged and paid by organisations to find vulnerabilities. That’s why they are known as ethical hackers.</p>
<p>A contract and non-disclosure agreement (NDA) is usually signed between the ethical hacker and the organisation. This ensures that what they are doing is legal and that both parties are protected.</p>
<h2>The ethical hack-attack</h2>
<p>Ethical hackers will typically follow a phased approach to conducting their tests. Depending on their methods, this will usually begin with a reconnaissance phase in which information is gathered and potential target systems are identified. </p>
<p>From there the computer network will be scanned (externally, internally or both, depending on the engagement) to examine it in more depth so as to identify any known vulnerabilities.</p>
<p>If vulnerabilities are found, an attempt to exploit them may follow, and ultimately access may be gained. An ethical hacker would also attempt to break into systems that don’t necessarily have a known vulnerability, but are simply exposed.</p>
<p>Ethical hackers will then document their work and capture evidence to report back to the client. Hopefully they will find any vulnerabilities first, before they are exploited by others with less beneficent aims. </p>
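As a minimal, hypothetical illustration of the scanning phase described above, the sketch below performs a simple TCP “connect” scan to see which ports on a host accept connections. Real engagements use far more capable tooling and, as noted earlier, always require a signed contract; the host and port list here are placeholders.

```python
# Toy TCP connect scan: report which of the given ports accept a connection.
# For illustration only -- scan only machines you have written permission to test.
import socket

def scan_ports(host, ports, timeout=0.5):
    """Return the subset of `ports` on `host` that accept a TCP connection."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
            sock.settimeout(timeout)
            # connect_ex returns 0 on success rather than raising an exception
            if sock.connect_ex((host, port)) == 0:
                open_ports.append(port)
    return open_ports

# Example: check a few common service ports on the local machine.
print(scan_ports("127.0.0.1", [22, 80, 443, 8080]))
```

Each open port found this way would then be probed for known vulnerabilities in the deeper phases of the engagement.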
<h2>Becoming an ethical hacker</h2>
<p>Ethical hackers gain their skills mainly through experience. </p>
<p>There are also many courses and certifications that teach ethical hacking, including the <a href="https://www.crestaustralia.org/certification.html">CREST Certified Tester</a>, <a href="https://www.eccouncil.org/programs/certified-ethical-hacker-ceh/">EC-Council’s Certified Ethical Hacker</a>, <a href="https://www.giac.org/certification/penetration-tester-gpen">GIAC Penetration Tester</a> and <a href="https://www.offensive-security.com/information-security-certifications/oscp-offensive-security-certified-professional/">Offensive Security Certified Professional</a>.</p>
<p>But these courses can’t teach everything. Organisations can differ vastly from one another, and the way to penetration-test each organisation is different and by no means prescriptive. </p>
<p>A good ethical hacker requires a great deal of skill and experience, not just the ability to blindly run a tool or script – the mark of a “<a href="http://www.pcmag.com/encyclopedia/term/50927/script-kiddie">script kiddie</a>”.</p>
<p>Ethical hackers, like any other hacker, may also venture into the <a href="https://www.wired.com/2014/11/hacker-lexicon-whats-dark-web/">dark web</a> to gain intelligence and learn about new exploits.</p>
<h2>Asking for trouble</h2>
<p>One of the frustrations over this month’s ransomware attack on Microsoft’s Windows systems is that the software giant had already <a href="https://technet.microsoft.com/en-us/library/security/ms17-010.aspx">issued a patch</a> in March, to protect PCs from this type of attack.</p>
<p>Despite the warnings, several organisations had <a href="http://www.reuters.com/article/us-cyber-attack-liability-idUSKCN18B2SE">not installed the patch</a>, and others were <a href="https://theconversation.com/nhs-ransomware-cyber-attack-was-preventable-77674">running old Windows XP systems</a> that <a href="https://theconversation.com/the-end-is-nigh-for-windows-xp-are-you-ready-24104">Microsoft stopped supporting back in 2014</a>. Windows 2003 systems were also vulnerable, having been <a href="https://www.microsoft.com/en-au/cloud-platform/windows-server-2003">unsupported since 2015</a>.</p>
<p>This left these systems <a href="https://theconversation.com/massive-global-ransomware-attack-highlights-faults-and-the-need-to-be-better-prepared-77673">open to attack by ransomware</a> known by a variety of names, including WannaCrypt and WannaCry. It encrypts files on infected systems and demands a ransom for their decryption.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/169178/original/file-20170513-3668-xajz7t.png?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/169178/original/file-20170513-3668-xajz7t.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/169178/original/file-20170513-3668-xajz7t.png?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=449&fit=crop&dpr=1 600w, https://images.theconversation.com/files/169178/original/file-20170513-3668-xajz7t.png?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=449&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/169178/original/file-20170513-3668-xajz7t.png?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=449&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/169178/original/file-20170513-3668-xajz7t.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=565&fit=crop&dpr=1 754w, https://images.theconversation.com/files/169178/original/file-20170513-3668-xajz7t.png?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=565&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/169178/original/file-20170513-3668-xajz7t.png?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=565&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Wana Decrypt0r 2.0 Ransomware Screen.</span>
<span class="attribution"><a class="source" href="https://blog.avast.com/ransomware-that-infected-telefonica-and-nhs-hospitals-is-spreading-aggressively-with-over-50000-attacks-so-far-today">Avast</a></span>
</figcaption>
</figure>
<h2>Another attack</h2>
<p>It has now been revealed that the same vulnerabilities that allowed this ransomware to infect systems has allowed the spread of a new threat, the <a href="https://www.proofpoint.com/us/threat-insight/post/adylkuzz-cryptocurrency-mining-malware-spreading-for-weeks-via-eternalblue-doublepulsar">Adylkuzz Cryptocurrency Mining Malware</a>. </p>
<p>This malware is thought to have gone largely undetected until now because it isn’t destructive. Instead, it <a href="https://www.lifewire.com/cryptocoin-mining-for-beginners-2483064">mines a cryptocurrency</a> called <a href="https://getmonero.org/home">Monero</a>, which generates income for the attackers.</p>
<p>Both outbreaks highlight the importance of practising diligent security and making sure that unsupported systems are upgraded or decommissioned.</p>
<p>The majority of advice so far has focused on appropriate defences such as the <a href="https://asd.gov.au/publications/protect/essential-eight-explained.htm">Australian Signals Directorate’s Essential Eight</a>. This covers issues such as patching, application white-listing, appropriate firewall configuration, and using vendor-supported platforms.</p>
<p>But having a vigilant IT department that follows such guidance may not be enough.</p>
<p>Some focus should be given to how an <a href="http://www.pcmag.com/encyclopedia/term/42791/ethical-hacker">ethical hacker</a> can be used to help protect organisations against malicious attacks.</p>
<h2>More than just an IT check</h2>
<p>This approach to using an ethical hacker differs from the traditional internal IT team approach, as the focus is shifted from a defensive to an offensive mindset. </p>
<p>While the importance of solid defences can’t be overstated, augmenting them with ethical hacking can greatly increase the resilience of an organisation’s networks. This approach tests the effectiveness of the controls in place and may identify previously unknown exposures.</p>
<p>But this approach is largely limited to organisations. Engaging the services of an ethical hacker can cost tens of thousands of dollars, depending on the size of the job.</p>
<p>A typical home user would not have the resources to hire such help. In that case, adequate security controls and awareness would still be the best way to stop many attacks. </p>
<p>Microsoft’s Windows 10, for example, installs updates automatically; unlike in previous versions, these can’t be deferred. Windows 8 and 10 also come with <a href="https://www.microsoft.com/en-us/safety/pc-security/windows-defender.aspx">Windows Defender</a> pre-installed.</p>
<p>People should also make sure not to open suspicious emails, including those from unknown senders. This will go a long way towards preventing infection.</p>
<h2>The future of hack attacks</h2>
<p><a href="https://www.telstra.com.au/business-enterprise/campaigns/cyber-security-report">Telstra’s latest security report</a> says that 59.6% of future potential attacks in Asia and 52.6% in Australia will be due to external hackers. These attackers will use vulnerabilities (known or unknown) to carry out their attacks. </p>
<p>So there is merit in further research to determine how an ethical hacker can help organisations prevent attacks and infections from unknown vulnerabilities. The ability of a penetration test to identify vulnerabilities before software vendors become aware of them and can release patches would be invaluable. </p>
<p>But there are certain ethical issues to consider, given that an ethical hacker may need to use questionable means, such as accessing the dark web. There is a fine line between what constitutes an ethical approach and an unethical one.</p><img src="https://counter.theconversation.com/content/77788/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Georg Thomas is completing his Doctorate at Charles Sturt University with a focus on ethical hacking. He is also a CISSP, CISM, C|EH, GIAC, MACS (Snr), CP, MCSE(Security) and is affiliated with the ACS, ISC2, ISACA, GIAC, and EC-Council. </span></em></p>Simply updating and patching an organisation’s computer software may not be enough to fend off another cyber attack. You could engage an ethical hacker to help out.Georg Thomas, PhD candidate in information technology, Charles Sturt UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/715982017-01-26T00:46:21Z2017-01-26T00:46:21ZFar beyond crime-ridden depravity, darknets are key strongholds of freedom of expression online<figure><img src="https://images.theconversation.com/files/154323/original/image-20170125-23875-go03tr.jpg?ixlib=rb-1.1.0&rect=753%2C509%2C2143%2C2003&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">
</span> <span class="attribution"><a class="source" href="https://www.shutterstock.com/image-vector/social-media-network-connection-concept-people-216055819">via shutterstock.com</a></span></figcaption></figure><p>The internet is much more than just the publicly available, Google-able web services most online users frequent – and that’s good for free expression. Companies frequently create private networks to enable employees to use secure corporate servers, for example. And free software allows individuals to create what are called “peer-to-peer” networks, connecting directly from one machine to another.</p>
<p><a href="https://theconversation.com/searching-deep-and-dark-building-a-google-for-the-less-visible-parts-of-the-web-58472">Unable to be indexed</a> by current search engines, and therefore less visible to the general public, subnetworks like these are often called “darknets,” or referred to collectively as the singular “darknet.” These networks typically use software, such as <a href="https://www.torproject.org">Tor</a>, that <a href="https://theconversation.com/securing-web-browsing-protecting-the-tor-network-56840">anonymizes the machines connecting</a> to them, and <a href="https://gnunet.org">encrypts the data</a> traveling through their connections. </p>
<p>Some of what’s on the darknet is alarming. A 2015 story from <a href="http://www.foxnews.com/tech/2015/04/23/darknet-danger-organs-murder-credit-card-info-all-for-sale-on-internet.html">Fox News</a> reads:</p>
<blockquote>
<p>“Perusing the darknet offers a jarring jaunt through jaw-dropping depravity: Galleries of child pornography, videos of humans having sex with animals, offers for sale of illegal drugs, weapons, stolen credit card numbers and fake identifications for sale. Even human organs reportedly from Chinese execution victims are up for sale on the darknet.”</p>
</blockquote>
<p>But that’s not the whole story – nor the whole content and context of the darknet. Portraying the darknet as primarily, or even solely, for criminals ignores the societal forces that push people toward these anonymous networks. Our research into the content and activity of <a href="https://www.academia.edu/30683064/Diagram_of_a_Darknet_Exploring_the_Characteristics_of_an_Anonymous_Space_Online">one major darknet, called Freenet</a>, indicates that darknets should be understood not as a crime-ridden “<a href="http://www.rollingstone.com/politics/news/the-battle-for-the-dark-net-20151022">Wild West</a>,” but rather as “wilderness,” spaces that by design are meant to remain unsullied by the civilizing institutions – law enforcement, governments and corporations – that have come to dominate the internet. </p>
<p>There is definitely illegal activity on the darknet, as there is on the open internet. However, many of the people using the darknet have a diverse range of motives and activities, linked by a common desire to reclaim what they see as major benefits of technology: privacy and free speech.</p>
<h2>Describing Freenet</h2>
<p>Our research explored <a href="https://freenetproject.org/">Freenet</a>, an anonymous peer-to-peer network accessed via a freely downloadable application. In this type of network, there are no centralized servers storing information or transferring data. Rather, each computer that joins the network takes on some of the tasks of sharing information. </p>
<p>When a user installs Freenet, her computer establishes a connection to a small group of existing Freenet users. Each of these is connected in turn to other Freenet users’ computers. Through these connections, the entire contents of the network are available to any user. This design allows Freenet to be decentralized, anonymous and resistant to surveillance and censorship.</p>
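<p>A toy model of that design, with a hypothetical four-node network and a naive flood search (Freenet’s real routing is key-based, not a flood), illustrates how a request can reach content through neighbours alone, with no central server:</p>

```python
# Each node knows only its own neighbours; a request hops outward until
# some node holding the key answers. Toy illustration only -- this is
# not Freenet's actual routing algorithm, and the network is made up.
neighbours = {
    "A": ["B", "C"], "B": ["A", "D"], "C": ["A"], "D": ["B"],
}
stores = {"D": {"k1": b"file contents"}}  # only D holds the file

def fetch(start, key):
    seen, frontier = set(), [start]
    while frontier:
        node = frontier.pop()
        if node in seen:
            continue
        seen.add(node)
        if key in stores.get(node, {}):
            return stores[node][key]        # found on some peer
        frontier.extend(neighbours[node])   # ask this node's neighbours
    return None                             # exhausted the network

result = fetch("A", "k1")  # A can reach D's content via B
```

Because every node only ever talks to its neighbours, no single machine has to know where content lives, which is what makes the design decentralised and hard to censor.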
<p>Freenet’s software requires users to donate a portion of their local hard drive space to store Freenet material. That information is automatically encrypted, so the computer’s owner does not know what files are stored or the contents of those files. Files shared on the network are stored on numerous computers, ensuring they will be accessible even if some people turn off their machines.</p>
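<p>A minimal sketch of how that works, assuming nothing about Freenet’s actual cipher or key format (the “stream cipher” below is a deliberately simplistic stand-in): deriving the encryption key from the content itself means a host stores only opaque ciphertext it cannot read.</p>

```python
import hashlib

def keystream(key: bytes, n: int) -> bytes:
    # Toy stream cipher: SHA-256 in counter mode. Illustration only,
    # not what Freenet actually uses.
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:n]

def store(data: bytes) -> tuple[str, bytes]:
    # Derive the encryption key from the plaintext itself, so only
    # someone who already knows the key can decrypt; the hosting
    # computer sees nothing but ciphertext.
    key = hashlib.sha256(data).digest()
    cipher = bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))
    routing_key = hashlib.sha256(cipher).hexdigest()  # locates the block
    return routing_key, cipher

routing_key, cipher = store(b"an uncensorable pamphlet")
```

The owner of the machine holding `cipher` can verify the block against `routing_key` but learns nothing about its contents, which is the property the article describes.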
<h2>Joining the network</h2>
<p>As researchers, we played the role of a novice Freenet user. The network allows many different types of interaction, including social networking sites and even the ability to build direct relationships with other users. But our goal was to understand what the network had to offer to a new user just beginning to explore the system.</p>
<p>There are several Freenet sites that have used web crawlers to index the network, offering a sort of directory of what is available. We visited one of these sites to download their list. From the 4,286 total sites in the index we chose, we selected a random sample of 427 sites to visit and study more closely. The sites with these indexes are a part of the Freenet network, and therefore can be accessed only by users who have downloaded the software. Standard search engines cannot be used to find sites on Freenet. </p>
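<p>The sampling step can be sketched in a few lines; the index size and sample size come from the study, while the site names here are hypothetical placeholders:</p>

```python
import random

# 4,286 sites in the crawled directory (placeholder names).
index = [f"site-{i:04d}" for i in range(4286)]

random.seed(42)  # fixed seed so the sketch is reproducible
sample = random.sample(index, k=427)  # simple random sample, no repeats
```

`random.sample` draws without replacement, so each of the 427 sites is distinct, matching a simple random sampling design.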
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/153685/original/image-20170120-5238-jk9zsa.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/153685/original/image-20170120-5238-jk9zsa.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/153685/original/image-20170120-5238-jk9zsa.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=268&fit=crop&dpr=1 600w, https://images.theconversation.com/files/153685/original/image-20170120-5238-jk9zsa.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=268&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/153685/original/image-20170120-5238-jk9zsa.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=268&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/153685/original/image-20170120-5238-jk9zsa.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=337&fit=crop&dpr=1 754w, https://images.theconversation.com/files/153685/original/image-20170120-5238-jk9zsa.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=337&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/153685/original/image-20170120-5238-jk9zsa.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=337&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">An introductory page on Freenet.</span>
<span class="attribution"><span class="source">Roderick Graham and Brian Pitman</span>, <a class="license" href="http://creativecommons.org/licenses/by-nd/4.0/">CC BY-ND</a></span>
</figcaption>
</figure>
<h2>Finding a ‘hacker ethic’</h2>
<p>What we found indicated that Freenet is dominated by what scholars call a “<a href="http://www.penguinrandomhouse.com/books/80240/the-hacker-ethic-by-pekka-himanen/9780375758782/">hacker ethic</a>.” This term encompasses a group of progressive and libertarian beliefs often espoused by hackers, which are primarily concerned with <a href="https://www.academia.edu/4469457/Computer_Hacking_Just_Another_Case_of_Juvenile_Delinquency">these ideals</a>:</p>
<ul>
<li>Access to information should be free;</li>
<li>Technology can, and should, improve people’s lives;</li>
<li>Bureaucracy and authority are not to be trusted;</li>
<li>Conventional and mainstream lifestyles should be resisted.</li>
</ul>
<p>Some of that may be because using darknet technology often requires <a href="http://www.techrepublic.com/article/why-email-encryption-is-failing-and-how-to-fix-it/">additional technical understanding</a>. In addition, <a href="http://www.stevenlevy.com/index.php/books/hackers">people with technical skills</a> may be more likely to want to find, use and even create services that have technological protections against surveillance.</p>
<p>Our reading of hacking literature suggests to us that the philosophical and ideological beliefs driving darknet users are not well-known. But without this context, what we observed on Freenet would be hard to make sense of.</p>
<p>There were Freenet sites for sharing music, e-books and video. Many sites were focused around personal self-expression, like regular internet blogs. Others were dedicated to promoting a particular ideology. For example, socialist and libertarian content was common. Still other sites shared information from whistle-blowers or government documents, including a copy of the Wikileaks website’s data, complete with its “Afghan War Diary” of classified documents about the United States military invasion of Afghanistan following the Sept. 11, 2001 terrorist attacks.</p>
<p>With the hacker ethic as a guide, we can understand that most of this content is from individuals who have a deep mistrust of authority, reject gross materialism and conformity, and wish to live their digital lives free of surveillance. </p>
<h2>What about crime?</h2>
<p>There is criminal activity on Freenet. About a quarter of the sites we observed either delivered or linked to child pornography. This is alarming, but must be seen in the proper context. Legal and ethical limits on researchers make it very hard to <a href="http://www.wsj.com/articles/SB114485422875624000">measure the magnitude of pornographic activity online</a>, and specifically child pornography.</p>
<p>Once we came upon a site that purported to have child pornography, we left the site immediately without investigating further. For example, we did not seek to determine whether there was just one image or an entire library or marketplace selling pornographic content. This was a good idea from the perspectives of both law and ethics, but did not allow us to gather any real data about how much pornography was actually present.</p>
<p>Other research suggests that the presence of child pornography is not a darknet or Freenet problem, but an internet problem. Work from the <a href="http://www.asacp.org/index.php?content=statistics">the Association for Sites Advocating Child Protection</a> points to <a href="http://www.sumall.org/child-pornography-data">pervasive sharing of child pornography</a> well beyond just Freenet or even the wider set of darknets. Evaluating the darknet should not stop just at the presence of illegal material, but should extend to its full content and context.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/153687/original/image-20170120-5238-16sz0j1.png?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/153687/original/image-20170120-5238-16sz0j1.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/153687/original/image-20170120-5238-16sz0j1.png?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=431&fit=crop&dpr=1 600w, https://images.theconversation.com/files/153687/original/image-20170120-5238-16sz0j1.png?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=431&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/153687/original/image-20170120-5238-16sz0j1.png?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=431&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/153687/original/image-20170120-5238-16sz0j1.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=542&fit=crop&dpr=1 754w, https://images.theconversation.com/files/153687/original/image-20170120-5238-16sz0j1.png?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=542&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/153687/original/image-20170120-5238-16sz0j1.png?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=542&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">A pie chart shows the share of Freenet sites devoted to particular types of content.</span>
<span class="attribution"><span class="source">Roderick Graham and Brian Pitman</span>, <a class="license" href="http://creativecommons.org/licenses/by-nd/4.0/">CC BY-ND</a></span>
</figcaption>
</figure>
<p>With this new information, we can look more accurately at the darknet. It contains many distinct spaces catering to a wide range of activities, from meritorious to abhorrent. In this sense, the darknet is no more dangerous than the rest of the internet. And darknet services do provide anonymity, privacy, freedom of expression and security, even in the face of a growing surveillance state.</p><img src="https://counter.theconversation.com/content/71598/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>The authors do not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.</span></em></p>The darknet, like the open internet, is not immune from illegal activity. But many darknet users are there in search of ‘hacker ethics’ values such as privacy and free speech.Roderick S. Graham, Assistant Professor of Sociology, Old Dominion UniversityBrian Pitman, Instructor in Criminology and Sociology, Old Dominion UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/643092016-08-29T06:59:35Z2016-08-29T06:59:35ZFactCheck Q&A: what has the Children’s eSafety Commissioner done in its first year to tackle cyberbullying?<figure><img src="https://images.theconversation.com/files/135450/original/image-20160825-30231-oh4g1a.JPG?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Minister for Communications and Arts, Mitch Fifield, speaking on Q&A on August 23, 2016. </span> <span class="attribution"><span class="source">Q&A</span></span></figcaption></figure><p><strong>The Conversation is fact-checking claims made on Q&A, broadcast Mondays on the ABC at 9.35pm. Thank you to everyone who sent us quotes for checking via <a href="http://www.twitter.com/conversationEDU">Twitter</a> using hashtags #FactCheck and #QandA, on <a href="http://www.facebook.com/conversationEDU">Facebook</a> or by <a href="mailto:checkit@theconversation.edu.au">email</a>.</strong></p>
<hr>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/5OTdRBgIa-0?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
<figcaption><span class="caption">Excerpt from Q&A, August 23, 2016.</span></figcaption>
</figure>
<blockquote>
<p>[The Children’s eSafety Commissioner] also is a cop on the beat when it comes to cyberbullying and they’ve investigated about 11,000 cases of cyberbullying. The eSafety Commissioner has the power to direct a social media organisation to take down offensive material and if they don’t, there are penalties of up to $17,000 per day for the social media organisation. <strong>– Minister for Communications and Arts, Mitch Fifield, <a href="http://www.abc.net.au/tv/qanda/txt/s4499755.htm">speaking on Q&A</a>, August 23, 2016.</strong> </p>
</blockquote>
<p>Revelations that boys have been <a href="http://www.smh.com.au/nsw/police-investigate-pornographic-website-targeting-nsw-schoolgirls-20160817-gquo0f.html">sharing pornographic pictures of underage girls online</a> have refocused attention on how best to tackle online harassment, bullying and abuse.</p>
<p>When asked about the issue on Q&A, Communications Minister Mitch Fifield said that the <a href="https://www.esafety.gov.au/about-the-office/role-of-the-office">Office of the Children’s eSafety Commissioner</a> had investigated about 11,000 cases of cyberbullying and can order social media organisations to take down offensive material – or face fines of up to $17,000 per day. </p>
<p>Is that correct?</p>
<h2>Checking the source</h2>
<p>When asked for a source to support his statement, a spokeswoman for the minister pointed to the commissioner’s <a href="https://www.esafety.gov.au/12-month-report/12-month-report-alternative">12-month report</a> and gave more detail on the agency’s capacity to issue penalties.</p>
<p>The spokesperson’s full response can be read <a href="http://theconversation.com/full-response-from-a-spokesperson-for-mitch-fifield-64439">here</a>. </p>
<p>When The Conversation asked the Office of the Children’s eSafety Commissioner how many fines had been issued since the passage of its enabling legislation, a spokesperson for the commissioner said:</p>
<blockquote>
<p>The Office of the children’s eSafety Commissioner handled over 11,000 complaints across its investigation functions, which include prohibited online content and cyberbullying. For more information on this please see our <a href="https://www.esafety.gov.au/about-the-office/newsroom/media-releases/child-sex-abuse-images-mainly-primary-schoolers">media release</a> issued at our 12-month mark. To date we have issued no penalty notices as we have worked collaboratively with our social media partners in getting material removed, without the need for formal powers.</p>
</blockquote>
<p>The commissioner’s website notes that in the 12 months to July 2016, there were 186 complaints of “serious cyberbullying” affecting under 18s, with 71% of these cases targeting girls. Fifteen-year-olds are the primary targets of reported cyberbullying material.</p>
<p>The agency <a href="https://www.esafety.gov.au/12-month-report/12-month-report-alternative">said</a> that reports involved factors such as:</p>
<ul>
<li>73% Nasty comments and/or serious name calling</li>
<li>26% Offensive or upsetting pictures or videos</li>
<li>21% Threats of violence</li>
<li>21% Fake and/or impersonator accounts</li>
<li>7% Hacking of social media accounts</li>
<li>7% Unwanted contact</li>
<li>3% Hate pages</li>
</ul>
<p>In its 12-month report card, the agency <a href="https://www.esafety.gov.au/12-month-report/12-month-report-alternative">said</a>:</p>
<blockquote>
<p>We conducted 11,121 online content investigations and worked with our global partners to remove 7,465 URLs of child sexual abuse material. All items actioned in one to two days.</p>
</blockquote>
<p>Of the child sexual abuse material, 95% of the victims were girls and 5% boys.</p>
<p>So Mitch Fifield’s figures are correct, but he mistakenly conflated the term “cyberbullying” with the Children’s eSafety Commissioner’s full range of investigative responsibilities. </p>
<p>The Office of the Children’s eSafety Commissioner looked at about 11,000 <a href="https://en.wikipedia.org/wiki/Uniform_Resource_Locator">URLs</a> (more commonly known as web addresses) in the last year and removed 7,465 URLs of child sexual abuse material.</p>
<p>It’s inaccurate to describe such cases as “cyberbullying”. In fact, there were 186 complaints of “serious cyberbullying” affecting under 18s in the 12-month period to July 2016.</p>
<h2>Penalties</h2>
<p>The <a href="https://www.legislation.gov.au/Details/C2015A00024/Controls/">legislation</a> that led to the creation of the Office of the Children’s eSafety Commissioner notes that:</p>
<blockquote>
<p>A person must comply with a requirement under a social media service notice to the extent that the person is capable of doing so. Civil penalty: 100 penalty units.</p>
</blockquote>
<p>It is true that the agency has the power to direct social media organisations to take offensive material down and <a href="https://www.legislation.gov.au/Details/C2015A00024/Controls/">issue daily penalties</a> until they do. </p>
<p>That said, Fifield’s figure of up to $17,000 per day is out of date. This figure was accurate when the bill was proposed, but <a href="http://www.aph.gov.au/Parliamentary_Business/Bills_Legislation/Bills_Search_Results/Result?bId=r5464">a change to the Crimes Act 1914</a> has since increased Commonwealth penalty units from $170 to $180. </p>
<p>The maximum daily penalty is now $18,000. The agency has said it is yet to actually issue any penalties. </p>
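<p>The arithmetic behind both figures is just the statutory formula: the 100 penalty units in the legislation multiplied by the current value of a Commonwealth penalty unit.</p>

```python
PENALTY_UNITS = 100      # civil penalty set in the enabling legislation
UNIT_VALUE_2014 = 170    # penalty unit value when the bill was proposed
UNIT_VALUE_2015 = 180    # value after the July 2015 Crimes Act amendment

old_max = PENALTY_UNITS * UNIT_VALUE_2014  # the $17,000/day Fifield cited
new_max = PENALTY_UNITS * UNIT_VALUE_2015  # the current $18,000/day maximum
```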
<p>Lastly, it is worth noting that Mitch Fifield’s comments on Q&A were in response to a question from the audience about what could be done to afford Australian women more protection from online harassment.</p>
<p>Whilst the questioner asked about adult women, the minister’s response relates to the Children’s eSafety Commissioner, whose powers of investigation are constrained to cases involving young people under the age of 18. </p>
<h2>The scale of cyberbullying and online harassment</h2>
<p>It can be challenging to get a sense of how pervasive cyberbullying and online harassment are, because these terms are quite broad. They encapsulate everything from name-calling to stalking and threats of sexual assault.</p>
<p>A <a href="https://www.communications.gov.au/publications/publications/research-youth-exposure-and-management-cyber-bullying-incidents-australia-synthesis-report-june-2014">2014 study</a> prepared for the Department of Communications estimated that: </p>
<blockquote>
<p>the prevalence for being cyberbullied ‘over a 12-month period’ would be in the vicinity of 20% of young Australians aged 8–17.</p>
</blockquote>
<p>A <a href="http://www.pewinternet.org/2014/10/22/online-harassment/">Pew Internet survey</a> of 2,489 internet users in the United States revealed that 40% of respondents had experienced a form of online harassment. This ranged from offensive name-calling to threats of harm, stalking, sustained and/or sexual harassment. This survey found that young women, in particular, experience severe forms of harassment at disproportionately high levels.</p>
<h2>The Children’s eSafety Commissioner’s supporting roles</h2>
<p>The Office of the eSafety Commissioner provides useful tools, support, and education for people who have been targeted by cyberbullies and online harassers. These resources are helpful and approachable, although they lack the level of technical detail provided by resources like <a href="https://onlinesafety.feministfrequency.com/en/">Speak Up & Stay Safe(r)</a> and <a href="http://www.crashoverridenetwork.com/resources.html">Crash Override Network</a>.</p>
<p>The Office has also delivered a range of teaching programs to young people and school teachers, as well as establishing advice and support portals with their <a href="https://www.esafety.gov.au/iparent">iParent</a> and <a href="https://www.esafety.gov.au/women">eSafetyWomen</a> initiatives. </p>
<p>While the Office of the Children’s eSafety Commissioner provides some resources for adults, <a href="https://www.esafety.gov.au/complaints-and-reporting/offensive-and-illegal-content-complaints/what-we-cant-investigate">they do not investigate reports relating to adults</a>. Adults are advised to report to another government initiative, the <a href="https://www.acorn.gov.au">Australian Cybercrime Online Reporting Network (ACORN)</a>.</p>
<p>Finally, the commissioner is also charged with the <a href="https://www.esafety.gov.au/complaints-and-reporting/offensive-and-illegal-content-complaints/what-we-can-investigate">removal of offensive and illegal content</a>, including child sexual abuse material, or gratuitous, exploitative and offensive depictions of violence or sexual violence.</p>
<h2>Verdict</h2>
<p>Mitch Fifield’s statement that the Children’s eSafety Commissioner “investigated about 11,000 cases of cyberbullying” is not accurate. The minister has mistakenly conflated the term “cyberbullying” with the Children’s eSafety Commissioner’s full range of investigative responsibilities. </p>
<p>In fact, the agency conducted 11,121 <em>online content investigations</em> and removed 7,465 URLs of child sexual abuse material. There were 186 complaints of “serious cyberbullying” affecting under 18s in the 12-month period to July 2016.</p>
<p>The minister’s statement that “The eSafety Commissioner has the power to direct a social media organisation to take down offensive material”, and impose fines of up to $17,000 per day for non-cooperation is essentially true, but the figure is outdated. As of <a href="http://www.aph.gov.au/Parliamentary_Business/Bills_Legislation/Bills_Search_Results/Result?bId=r5464">July 2015</a>, the commissioner can impose fines of up to $18,000 per day. </p>
<p>In practice, the agency is yet to impose any penalties. Organisations have opted to collaborate with the commissioner’s office to take down prohibited content. <strong>– Andrew Quodling</strong> </p>
<hr>
<h2>Review</h2>
<p>The minister is wrong to say there were investigations into 11,000 examples of cyberbullying. In fact, according to the commissioner’s own <a href="https://www.esafety.gov.au/12-month-report/12-month-report-alternative">website</a>, they helped to resolve 186 complaints of serious cyberbullying for under 18s.</p>
<p>And although the commissioner’s office now <a href="https://www.esafety.gov.au/women">provides advice for women</a> as well as children, it only investigates complaints concerning children. </p>
<p>The commissioner’s report <a href="https://www.esafety.gov.au/12-month-report/12-month-report-alternative">says</a> it conducted 11,121 online content investigations and removed 7,465 URLs of child sexual abuse material. This sounds like a lot of investigations, but remember that a single site can house many URLs. </p>
<p>And while the office of the eSafety Commissioner and the minister’s spokesperson both say the agency handled over 11,000 <em>complaints</em>, that does not line up with the language used in the Commissioner’s 12-month report card. </p>
<p>It says it conducted 11,121 content <em>investigations</em>. It is possible that all of these investigations originated as complaints from the public, but it’s also possible some or many were initiated by the office of the commissioner itself or its overseas partners. </p>
<p>The fact that the commissioner’s office received only 186 complaints of cyberbullying means either people don’t realise they can report the bullying to the office, or it’s not as widespread as might be suggested.</p>
<p>I helped conduct a <a href="http://www.criminologyresearchcouncil.gov.au/reports/1516/53-1112-FinalReport.pdf">large research project</a> on sexting and young people in Australia. There were two key findings that might be of interest here - and I do caution that sexting is very different to cyberbullying, but that an image that has been “sexted” can become a tool of a cyberbully. </p>
<p>The first is that sexting is widespread among young people; around 40-50% of 13 to 15 year olds have sent a nude or semi-nude selfie. But most happens consensually and only a small proportion of children who ever receive an image send it to a third party for whom it was not intended. </p>
<p>Also, few participants report being pressured to send an image, although there is a serious gendered double standard in the way girls who send images are treated (and shamed) compared with boys. This is not the way sexting is generally reported but dealing with these facts is important for minimising harm. </p>
<p>Without trying to understate the level of damage that sexting gone wrong (or cyberbullying) can have on young lives, we must stick to the facts and not overcook the danger, nature or prevalence of either. <strong>– Murray Lee</strong></p>
<hr>
<p><div class="callout"> Have you ever seen a “fact” worth checking? The Conversation’s FactCheck asks academic experts to test claims and see how true they are. We then ask a second academic to review an anonymous copy of the article. You can request a check at checkit@theconversation.edu.au. Please include the statement you would like us to check, the date it was made, and a link if possible.</div></p><img src="https://counter.theconversation.com/content/64309/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Murray Lee receives funding from the Australian Institute of Criminology and local government funding.
</span></em></p><p class="fine-print"><em><span>Andrew Quodling does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>Communications Minister Mitch Fifield told Q&A that the Children’s eSafety Commissioner has investigated 11,000 cases of cyberbullying and can fine social media firms $17,000 a day. Is that true?Andrew Quodling, PhD candidate researching governance of social media platforms, Queensland University of TechnologyLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/523002015-12-16T02:09:19Z2015-12-16T02:09:19ZThe drive towards ethical AI and responsible robots has begun<figure><img src="https://images.theconversation.com/files/105998/original/image-20151215-23210-1j2mqbk.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Robots and AI can be safe, if we make them that way.</span> <span class="attribution"><span class="source">Shutterstock</span></span></figcaption></figure><p>Roboticist <a href="https://theconversation.com/profiles/sabine-hauert-104798">Sabine Hauert</a>, from the Britain’s University of Bristol, <a href="http://www.nature.com/news/robotics-ethics-of-artificial-intelligence-1.17611#/hauert">wrote in Nature</a> earlier this year: </p>
<blockquote>
<p>My colleagues and I spend dinner parties explaining that we are not evil […] </p>
</blockquote>
<p>People are worried, she said.</p>
<blockquote>
<p>They hear a mostly one-sided discussion that leaves them worried that robots will take their jobs, fearful that AI poses an existential threat, and wondering whether laws should be passed to keep hypothetical technology ‘under control’. </p>
</blockquote>
<p>These fears are not helped by a continuing epidemic of artificial intelligence (AI) and robophobic screenplays emanating from Hollywood. </p>
<p>It is hard to give examples of recent mainstream films with robots and AIs in them that are not infected with AI-phobic (<a href="http://www.imdb.com/title/tt2209764/">Transcendence</a>, <a href="http://www.imdb.com/title/tt1059786/">Eagle Eye</a>) or robophobic (<a href="http://www.imdb.com/title/tt1483013/">Oblivion</a>, any of the <a href="http://www.imdb.com/title/tt1234721/">Robocop</a> movies, <a href="http://www.imdb.com/title/tt0470752/">Ex Machina</a>) scenes.</p>
<p>The indie caper picture <a href="http://www.imdb.com/title/tt1990314/">Robot and Frank</a>, and <a href="http://www.imdb.com/title/tt1798709/">Her</a>, in which the AI rather gently dumps her human and runs off to hang out with cooler Alan Watts-based superintelligences, are the only ones coming to mind that do not succumb to the prevailing moods of <a href="http://spectrum.ieee.org/computing/hardware/mocking-ai-panic">AI panic</a> and robophobia.</p>
<p>Robots and AIs do make good cinematic villains. However, in reality no one has much clue how to make a robot “want” or “feel” <em>anything</em> in a phenomenologically credible way as yet; let alone how to make them sociopaths hell bent on world domination and the extermination of <em>Homo sapiens</em>. </p>
<p>They are more likely to become innocently dangerous <em>idiot savants</em> than malevolent overlords seeking to get psychotic kicks by making humans “bend the knee”. </p>
<p>Rule-driven robots play a mean game of chess but <em>feel</em> nothing about winning or losing. They just pick moves that optimise a mathematical value function. Humans associate intelligence with desire but “desire” as <a href="https://en.wikipedia.org/wiki/Belief%E2%80%93desire%E2%80%93intention_software_model">formally modelled</a> in the rulebook of a Turing machine is a very different thing from the combustive forces of fury, jealousy and “star-crossed love” that drive humans.</p>
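<p>As a toy illustration of that point (hypothetical, not from the article): a game-playing program simply scores each candidate move with a hand-written evaluation function and picks the highest scorer. The piece values and candidate moves below are invented for the example; there is no “wanting” anywhere, only arithmetic.</p>

```python
# Toy sketch: a chess-like engine "picks moves that optimise a
# mathematical value function". It feels nothing about the result.

def evaluate(position):
    """Score a position as material balance: our piece values minus theirs.
    (Real engines add many more terms; the principle is the same.)"""
    values = {"pawn": 1, "knight": 3, "bishop": 3, "rook": 5, "queen": 9}
    ours, theirs = position
    return sum(values[p] for p in ours) - sum(values[p] for p in theirs)

def best_move(moves):
    """Pick the move whose resulting position has the highest value."""
    return max(moves, key=lambda m: evaluate(m["result"]))

# Two candidate moves: one trades into a worse position, one wins a rook.
moves = [
    {"name": "take pawn",
     "result": (["queen", "rook"], ["queen", "rook", "knight"])},
    {"name": "take rook",
     "result": (["queen", "rook", "knight"], ["queen", "pawn"])},
]
print(best_move(moves)["name"])  # -> "take rook"
```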
<h2>Responsible robots</h2>
<p>Two new AI and robotics nonprofits launched over the weekend. In different ways, both are responses to public concerns about the safety of AI and robotics.</p>
<p>The first is The Foundation for Responsible Robotics (<a href="http://responsiblerobotics.org/">FRR</a>), which wants to “promote responsibility for the robots embedded in our society”.</p>
<p>The FRR is headed by <a href="http://staffwww.dcs.shef.ac.uk/people/N.Sharkey/">Noel Sharkey</a> and various robotics experts. It aims to engage policymakers, create interdisciplinary teams of robotic, legal, ethical and societal scholars, work to explore what it means to be “responsible” as robotics researchers and designers, run workshops and engage the public. </p>
<p>Sharkey, who is known for his activism with the <a href="https://www.stopkillerrobots.org/">Campaign to Stop Killer Robots</a>, is concerned that “we are rushing headlong into the robotics revolution” without giving enough policy thought to the social problems that might arise.</p>
<p>He says governments are looking to robotics as a “powerful new economic driver” but “only lip service is being paid to a long list of potential societal hazards”.</p>
<p>New technologies could cause <a href="https://theconversation.com/australia-must-prepare-for-massive-job-losses-due-to-automation-43321">mass unemployment</a>, or robots and automation could accelerate social inequality, leading to a society divided between robot-owners (living in gated communities) and a robot-less underclass (struggling on a brown, burnt Earth), such as was depicted in <a href="http://www.imdb.com/title/tt1535108/">Elysium</a>. </p>
<p>The FRR wants to ensure that the public have confidence in robotics research and that robots will be developed with due regard for human rights and freedom of choice.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/105960/original/image-20151215-23166-1b168vt.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/105960/original/image-20151215-23166-1b168vt.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/105960/original/image-20151215-23166-1b168vt.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=283&fit=crop&dpr=1 600w, https://images.theconversation.com/files/105960/original/image-20151215-23166-1b168vt.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=283&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/105960/original/image-20151215-23166-1b168vt.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=283&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/105960/original/image-20151215-23166-1b168vt.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=356&fit=crop&dpr=1 754w, https://images.theconversation.com/files/105960/original/image-20151215-23166-1b168vt.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=356&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/105960/original/image-20151215-23166-1b168vt.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=356&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">We come in peace, if we’re designed that way.</span>
<span class="attribution"><span class="source">Shutterstock</span></span>
</figcaption>
</figure>
<h2>AI for everyone</h2>
<p>Open AI is backed by <a href="https://theconversation.com/topics/elon-musk">Elon Musk</a>, <a href="http://www.thielfoundation.org/peter">Peter Thiel</a> and various technology entrepreneurs.</p>
<p>Their focus is more on research and on making advanced AI freely available. They seek to develop innovations in “deep learning”, a technique where rather than “hand-code a new algorithm for each problem, you design architectures that can twist themselves into a wide range of algorithms based on the data you feed them”. </p>
<p>They aim to “advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return”.</p>
<p>When “human-level AI” arrives, Open AI feels that it is important that there be “a leading research institution which can prioritize a good outcome for all over its own self-interest”.</p>
<p>They say “our researchers will be strongly encouraged to publish their work, whether as papers, blog posts, or code, and our patents (if any) will be shared with the world”.</p>
<p>Open AI’s backers have committed a billion dollars in funding, though they expect to “only spend a tiny fraction of this in the next few years.”</p>
<p>Open AI is mainly about open access to advanced AI, thus reducing the risk of a world of AI haves and have-nots. This is a good idea. If the team can keep competitive, advanced AI open source, this should reduce the risk of people being excluded from advanced AI for financial reasons. </p>
<p>The Foundation for Responsible Robotics has a broader agenda of policy engagement and raising professional and public awareness of robot ethics issues. Again, this is a worthwhile endeavour. AI and robotics researchers tend to be hard scientists unused to ethical debate. </p>
<p>Scientists need to step out of the empirical and into the normative. As trusted thought leaders of the citizenry, they should cross the line between “is” and “ought” and participate in policy debate. </p>
<p>Hopefully both these groups will help provide cures for the current epidemics of AI panic and robophobia.</p><img src="https://counter.theconversation.com/content/52300/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Sean Welsh does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>There are growing concerns about robots, artificial intelligence and automation. Now two new organisations are seeking to produce responsible robots and advance beneficial AI.Sean Welsh, Doctoral Candidate in Robot Ethics, University of CanterburyLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/472922015-10-22T13:38:58Z2015-10-22T13:38:58ZHow big data and The Sims are helping us to build the cities of the future<figure><img src="https://images.theconversation.com/files/94672/original/image-20150914-4714-1gg4loa.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">
</span> <span class="attribution"><a class="source" href="https://www.flickr.com/photos/robertotaddeo/12458450145/sizes/l">Roberto Taddeo</a>, <a class="license" href="http://creativecommons.org/licenses/by-nc/4.0/">CC BY-NC</a></span></figcaption></figure><p>By 2050, the United Nations predicts that around <a href="https://www.un.org/development/desa/en/news/population/world-urbanization-prospects.html">66% of the world’s population</a> will be living in urban areas. It is expected that the greatest expansion will take place in developing regions such as Africa and Asia. Cities in these parts will be challenged to meet the needs of their residents, and provide sufficient housing, energy, waste disposal, healthcare, transportation, education and employment. </p>
<p>So, understanding how cities will grow – and how we can make them smarter and more sustainable along the way – is a high priority among researchers and governments the world over. We need to get to grips with the inner mechanisms of cities, if we’re to engineer them for the future. Fortunately, there are tools to help us do this. And even better, using them is a bit like playing SimCity.</p>
<h2>A whole new (simulated) world</h2>
<p>Cities are complex systems. Increasingly, <a href="http://link.springer.com/article/10.1140%2Fepjst%2Fe2012-01703-3">scientists studying cities</a> have gone from thinking about “cities as machines”, to approaching “cities as organisms”. Viewing cities as complex, adaptive organisms – similar to natural systems like termite mounds or slime mould colonies – allows us to gain unique insights into their inner workings. Here’s how. </p>
<p>Complex organisms are characterised by individual units that can be driven by a small number of simple rules. As these relatively simple things live and behave, the culmination of all their individual interactions and behaviours generate more widespread aggregate phenomena. For example, the beautiful and complex patterns made by flocking birds are not organised by a leader. They come about because each bird follows some very simple rules about how close to get to each other, which direction to fly in, and how to avoid predators. </p>
<p>Similarly, ant colonies can exhibit very sophisticated and seemingly intelligent behaviour. But this sophistication doesn’t come about as a result of a good leader. It is the result of lots of ants following relatively simple rules, without any regard for the bigger picture. It is easy to see how this perspective could <a href="https://mitpress.mit.edu/books/turtles-termites-and-traffic-jams">be applied</a> to human systems to explain phenomena like traffic jams.</p>
<p>So, if cities are like organisms, it follows that we should examine them from the bottom-up, and seek to understand how unexpected large-scale phenomena emerge from individual-level interactions. Specifically, we can simulate how the behaviour of individual “agents” – whether they are people, households, or organisations – affect the urban environment, using a set of techniques known as “agent-based modelling”. </p>
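<p>The flavour of agent-based modelling can be sketched in a few lines of code. In this hypothetical one-dimensional example (invented for illustration, not taken from any real urban model), each agent follows a single local rule: step towards the average position of its nearby neighbours. Clustering then emerges with no leader and no global plan.</p>

```python
# Minimal agent-based model sketch: one simple local rule per agent,
# aggregate behaviour (the group drawing together) emerges bottom-up.
import random

random.seed(1)
agents = [random.uniform(0, 100) for _ in range(20)]  # positions on a line

def step(agents, radius=25.0, speed=0.2):
    """Each agent moves a little towards the centre of its neighbourhood."""
    new = []
    for a in agents:
        neighbours = [b for b in agents if abs(b - a) <= radius]
        centre = sum(neighbours) / len(neighbours)  # always includes itself
        new.append(a + speed * (centre - a))        # the simple local rule
    return new

def spread(agents):
    """How dispersed the group is: distance between the two extremes."""
    return max(agents) - min(agents)

before = spread(agents)
for _ in range(50):
    agents = step(agents)
print(before, "->", spread(agents))  # the group has drawn together
```

Real urban models replace the one rule with empirically derived behaviours and the line with a city, but the bottom-up logic is the same.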
<figure class="align-center ">
<img alt="" src="https://images.theconversation.com/files/94705/original/image-20150914-4711-f6tywx.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/94705/original/image-20150914-4711-f6tywx.png?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=375&fit=crop&dpr=1 600w, https://images.theconversation.com/files/94705/original/image-20150914-4711-f6tywx.png?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=375&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/94705/original/image-20150914-4711-f6tywx.png?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=375&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/94705/original/image-20150914-4711-f6tywx.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=471&fit=crop&dpr=1 754w, https://images.theconversation.com/files/94705/original/image-20150914-4711-f6tywx.png?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=471&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/94705/original/image-20150914-4711-f6tywx.png?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=471&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">Using The Sims to build your own city.</span>
<span class="attribution"><a class="source" href="https://www.flickr.com/photos/haljackey/4009924651/sizes/l">haljackey/Flickr</a>, <a class="license" href="http://creativecommons.org/licenses/by/4.0/">CC BY</a></span>
</figcaption>
</figure>
<p>This is where it gets a bit like SimCity. It’s apt that the computer game was <a href="http://www.denofgeek.com/games/simcity/24753/the-path-to-simcity">originally based</a> on the work of Jay Forrester, a world-renowned system scientist with an interest in urban dynamics. In the game, individual agents are given their own characteristics and rules, and allowed to interact with other agents and the environment. Different behaviour emerges through these interactions and drives the next set of interactions. </p>
<p>But while computer games can use generalisations about how people and organisations behave, researchers have to mine available data sets to construct realistic and robust rule sets, which can be rigorously tested and evaluated. To do this effectively, we need lots of data at the individual level. </p>
<h2>Modelling from big data</h2>
<p>These days, increases in computing power and the proliferation of <a href="https://theconversation.com/explainer-what-is-big-data-13780">big data</a> give agent-based modelling unprecedented power and scope. One of the most exciting developments is the potential to incorporate people’s thoughts and behaviours. In doing so, we can begin to model the impacts of people’s choices on present circumstances, and the future. </p>
<p>For example, we might want to know how changes to the road layout might <a href="http://dx.doi.org/10.1186/s40163-015-0023-8">affect crime rates</a> in certain areas. By <a href="http://sim.sagepub.com/content/88/1/50">modelling the activities</a> of individuals who might try to commit a crime, we can see how altering the urban environment influences how people move around the city, the types of houses that they become aware of, and consequently which places have the greatest risk of becoming the targets of burglary.</p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/5ySbS075MyA?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
</figure>
<p>To fully realise the goal of simulating cities in this way, models need a huge amount of data. For example, to model the daily flow of people around a city, we need to know what kinds of things people spend their time doing, where they do them, who they do them with, and what drives their behaviour.</p>
<p>Without good-quality, high-resolution data, we have no way of knowing whether our models are producing realistic results. Big data could offer researchers a wealth of information to meet these twin needs. The kinds of data that are exciting urban modellers include:</p>
<ul>
<li>Electronic travel cards that tell us how people move around a city.</li>
<li>Twitter messages that provide insight into what people are doing and thinking.</li>
<li>The density of mobile telephones that hint at the presence of crowds.</li>
<li>Loyalty and credit-card transactions to understand consumer behaviour.</li>
<li>Participatory mapping of hitherto unknown urban spaces, such as <a href="http://www.openstreetmap.org/">Open Street Map</a>.</li>
</ul>
<p>These data can often be refined to the level of a single person. As a result, models of urban phenomena no longer need to rely on assumptions about the population as a whole – they can be tailored to capture the diversity of a city full of individuals, who often think and behave differently from one another.</p>
<h2>Missing people</h2>
<p>There are, of course, serious practical and ethical considerations to take into account, when integrating big data into urban models. The volume of background noise in new data sources can make it difficult to extract useful and reliable information. For example, it can often be difficult to distinguish Twitter messages posted by bots from those by real people. </p>
<figure class="align-left ">
<img alt="" src="https://images.theconversation.com/files/94709/original/image-20150914-4682-p2bpvx.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=237&fit=clip" srcset="https://images.theconversation.com/files/94709/original/image-20150914-4682-p2bpvx.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=902&fit=crop&dpr=1 600w, https://images.theconversation.com/files/94709/original/image-20150914-4682-p2bpvx.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=902&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/94709/original/image-20150914-4682-p2bpvx.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=902&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/94709/original/image-20150914-4682-p2bpvx.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=1133&fit=crop&dpr=1 754w, https://images.theconversation.com/files/94709/original/image-20150914-4682-p2bpvx.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=1133&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/94709/original/image-20150914-4682-p2bpvx.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=1133&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">Some of us still do things the old-fashioned way.</span>
<span class="attribution"><span class="source">from www.shutterstock.com</span></span>
</figcaption>
</figure>
<p>We must also make sure that we understand who is well-represented in our data, and who is not. The digital divide is alive and well, and <a href="http://www.sciencedirect.com/science/article/pii/S0304422X1100012X">research suggests</a> there is a class divide separating those who do and do not produce digital content. This means that there are probably large sections of the population missing from data sets.</p>
<p>We also need to find new ways of making these methods ethical. Traditionally, consumer and research ethics have been structured around informed consent. Before taking part in interviews or surveys, participants need to sign consent forms that give the researchers permission to use their data. But now, individuals are digitising aspects of their lives such as moods, thoughts, feelings, and behaviours that have historically gone undocumented. And, importantly, these are often released publicly on the internet.</p>
<p>And while an individual might have ticked a box that gives permission for their data to be used, that’s no guarantee that they’ve read and understood the terms. <a href="http://www.apple.com/legal/internet-services/itunes/us/terms.html">iTunes’ June 2015 terms and conditions</a>, for example, are more than 20,000 words long (20 times the length of this article). Researchers and service providers need to ask themselves how many people really get to grips with these documents, and whether their agreement fulfils our idea of consent. </p>
<p>We may never be able to simulate every individual in a city, and we’ll probably never want to. But we are getting closer to being able to simulate the richness of the fabric that weaves together to shape our cities. If we can do this, then we will be able to provide useful input on how best to shape cities in the future – perhaps even down to the last street light, bus and block of flats.</p><img src="https://counter.theconversation.com/content/47292/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Nick Malleson receives funding from the Economic and Social Research Council (ESRC).</span></em></p><p class="fine-print"><em><span>Alison Heppenstall does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>By simulating cities from the “bottom-up”, scientists can help us plan for the future.Alison Heppenstall, Professor in Geocomputation, University of LeedsNick Malleson, Associate Professor of Geographical Information Systems, University of LeedsLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/429042015-06-08T05:25:45Z2015-06-08T05:25:45ZUS hack shows data is the new frontier in cyber security conflict<figure><img src="https://images.theconversation.com/files/84131/original/image-20150605-8704-1gs5sj2.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Data mining</span> <span class="attribution"><span class="source">Shutterstock</span></span></figcaption></figure><p>More than four million personal records of US government workers are thought to have been <a href="http://edition.cnn.com/2015/06/04/politics/federal-agency-hacked-personnel-management/">hacked and stolen</a>, it has been revealed. With US investigators <a href="http://www.theguardian.com/technology/2015/jun/04/us-government-massive-data-breach-employee-records-security-clearances">blaming the Chinese</a> government (although the Chinese deny involvement), this incident shows how data could be the new frontier for those in cyberspace with a political agenda.</p>
<p>In April 2015, the US Office of Personnel Management (OPM) – the body that provides the human resources function for the federal government and is responsible for background checks for security clearances – realised its records had been hacked.</p>
<p>Along with the direct personnel details, there is a whole range of references and contacts contained in the OPM records. The sensitive data could be used to identify people with security clearances, and for the impersonation or blackmail of federal employees. Someone with security clearance could also be exposed to identity fraud, where an intruder gains access to sensitive information using the stolen identities.</p>
<p>The data could also be used to hack into other government sites. For example, intruders recently <a href="http://edition.cnn.com/2015/05/27/politics/irs-cyber-breach-russia/">attempted to breach</a> the Internal Revenue Service’s systems (this time it was blamed on Russia) using personal information taken from tax returns stolen during other commercial breaches.</p>
<p>Such attacks create a certain amount of national humiliation. The hacking of confidential data from Sony <a href="https://theconversation.com/credibility-at-risk-in-sony-hacking-scandal-1038">highlighted how embarrassing</a> it can be for information to leak. The contents of its sensitive emails are now searchable <a href="https://wikileaks.org/index.en.html">on Wikileaks</a>, and we have probably only seen the tip of the iceberg in terms of the data that was taken.</p>
<h2>How did the hackers beat the system?</h2>
<p>Aware of the threat of attack, the OPM said it has “<a href="http://www.opm.gov/news/latest-news/announcements/">undertaken an aggressive effort</a>” to improve its cybersecurity over the last year. So why, many might ask, did it take the government so long to detect the security breach?</p>
<p>Many large companies now use advanced intrusion detection systems (IDS) that raise alerts of possible security breaches, which are then collected, logged and analysed. At the OPM, the system that detected the breach was called EINSTEIN. It was developed by a division of the Department of Homeland Security to monitor the exit points of US government networks, examining the packets carried around a network for possible signs of intrusion.</p>
<p>The growing threat of attacks has led to the use of tools that gather all the event logs from IDS agents on a network. Human analysts then have to make sense of the incoming events in order to spot possible signs of an intrusion. To do this, advanced computer systems filter down the event logs and present only the most important ones to the analysts.</p>
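<p>A minimal sketch of that filtering step might look like the following: collapse a flood of low-level IDS events down to the few an analyst should see first. The event format, severity scale and thresholds here are invented for illustration, not taken from EINSTEIN or any real SIEM product.</p>

```python
# Hypothetical event-triage sketch: keep high-severity events, and
# promote low-severity events that repeat often enough from one source
# to look like a pattern (e.g. repeated login failures).
from collections import Counter

events = [
    {"src": "10.0.0.5", "type": "login_failure", "severity": 2},
    {"src": "10.0.0.5", "type": "login_failure", "severity": 2},
    {"src": "10.0.0.5", "type": "login_failure", "severity": 2},
    {"src": "10.0.0.9", "type": "port_scan",     "severity": 4},
    {"src": "10.0.0.7", "type": "dns_query",     "severity": 1},
]

def triage(events, min_severity=3, repeat_threshold=3):
    """Return the short list of events worth an analyst's attention."""
    alerts = [e for e in events if e["severity"] >= min_severity]
    repeats = Counter((e["src"], e["type"]) for e in events
                      if e["severity"] < min_severity)
    for (src, etype), n in repeats.items():
        if n >= repeat_threshold:
            alerts.append({"src": src, "type": etype + "_repeated",
                           "severity": min_severity, "count": n})
    return alerts

for alert in triage(events):
    print(alert["src"], alert["type"])  # two alerts survive from five events
```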
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/84120/original/image-20150605-8677-bzlc00.png?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/84120/original/image-20150605-8677-bzlc00.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/84120/original/image-20150605-8677-bzlc00.png?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=350&fit=crop&dpr=1 600w, https://images.theconversation.com/files/84120/original/image-20150605-8677-bzlc00.png?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=350&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/84120/original/image-20150605-8677-bzlc00.png?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=350&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/84120/original/image-20150605-8677-bzlc00.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=440&fit=crop&dpr=1 754w, https://images.theconversation.com/files/84120/original/image-20150605-8677-bzlc00.png?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=440&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/84120/original/image-20150605-8677-bzlc00.png?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=440&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Security Operations Centres (SOC) and SIEM (Security Information and Event Management)</span>
</figcaption>
</figure>
<p>Unfortunately, some of the tell-tale signs of an intrusion can be lost. In the case of EINSTEIN, the system has to monitor the gateway devices of each of the partner government agencies, and it might be difficult to detect an intruder who has remote access to the inside of one of the networks. </p>
<p>It is common for an IDS to detect high rates of data loss (where large amounts of data are siphoned off the network), so if the data loss is fairly slow, the IDS will often not detect it. The system must be tuned to flag standard signs of intrusion without triggering so many alerts that it swamps its human administrators. Cyber attackers, however, often understand these standard detection methods and will slow down the intrusion to avoid being noticed.</p>
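<p>A small sketch shows why a slow intrusion can evade this kind of check: a detector that only flags intervals moving more than a threshold of data will miss the same total volume spread thinly over many intervals. The threshold and traffic figures are invented for illustration.</p>

```python
# Hypothetical rate-based exfiltration check: flag any minute whose
# outbound volume exceeds a fixed threshold. A burst trips it; the same
# total volume, dripped out slowly, does not.

def flags(per_minute_mb, threshold_mb=50):
    """Return the indices of the minutes whose volume exceeds the threshold."""
    return [i for i, mb in enumerate(per_minute_mb) if mb > threshold_mb]

fast_leak = [0, 0, 400, 0, 0]   # 400 MB in one burst
slow_leak = [8] * 50            # the same 400 MB over 50 minutes

print(flags(fast_leak))  # -> [2]: the burst trips the alarm
print(flags(slow_leak))  # -> []: the slow drip stays under the radar
```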
<p>Many networks use a firewall to separate servers that can be accessed from untrusted networks from the main network infrastructure, which is protected on another network. In many large networks, IDS agents exist across the whole network and listen for possible intrusions. The problem is that an intruder can often get past the firewall and then remotely access the protected systems. Many organisations also allow employees to access their computers remotely through a secure network connection. With stolen access details, an intruder can use this remote access path in the same way.</p>
<p>The other major weakness of many IDSs is that they cannot examine the contents of encrypted data packets, such as when users visit secured websites starting with “https://”. To overcome this, many systems ban direct secure connections and route the data via a proxy, which can examine the packets between the user’s computer and the secure connection to the internet. Unfortunately, intruders can set up connections using what is known as an end-to-end encrypted tunnel, which bypasses this provision and in which data loss cannot be detected by the proxy or the IDS.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/84122/original/image-20150605-8719-g7frup.png?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/84122/original/image-20150605-8719-g7frup.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/84122/original/image-20150605-8719-g7frup.png?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=263&fit=crop&dpr=1 600w, https://images.theconversation.com/files/84122/original/image-20150605-8719-g7frup.png?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=263&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/84122/original/image-20150605-8719-g7frup.png?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=263&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/84122/original/image-20150605-8719-g7frup.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=331&fit=crop&dpr=1 754w, https://images.theconversation.com/files/84122/original/image-20150605-8719-g7frup.png?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=331&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/84122/original/image-20150605-8719-g7frup.png?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=331&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Secure tunnels with proxy and end-to-end.</span>
</figcaption>
</figure>
<p>While it has not been proven that the most recent attack was driven by a political agenda, information, once leaked from a site, can be sold on for the purposes of compromising nation states. Governments still need to understand the risks around their documents and make sure there are effective safeguards in place to restrict access to sensitive information. They often have a lot to learn from high-risk companies, such as those in the <a href="http://www.computerweekly.com/news/2240222263/UK-finance-industry-launches-cyber-security-framework">finance sector</a>, where there is often large-scale detection of intrusions and monitoring for data loss.</p>
<p>The US agencies say that all those affected by the hack of the OPM will be insured against any loss they might experience as a result. But data is the lifeblood of most organisations and probably one of their most important assets, so the need for improved security increases by the day.</p><img src="https://counter.theconversation.com/content/42904/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Bill Buchanan does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>The security breach of the Office of Personnel Management (OPM) demonstrates governments have a lot to learn about protecting their documents from cyber attacks.Bill Buchanan, Head, Centre for Distributed Computing, Networks and Security, Edinburgh Napier UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/349852014-12-09T06:14:43Z2014-12-09T06:14:43ZWe must be sure that robot AI will make the right decisions, at least as often as humans do<figure><img src="https://images.theconversation.com/files/66355/original/image-20141204-7252-nvlu15.jpg?ixlib=rb-1.1.0&rect=0%2C91%2C682%2C576&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Artificial intelligences with the power of life or death over humans – what could possibly go wrong?</span> <span class="attribution"><a class="source" href="http://commons.wikimedia.org/wiki/File:Shadow_Hand_Bulb_large.jpg">Richard Greenhill and Hugo Elias</a>, <a class="license" href="http://creativecommons.org/licenses/by-sa/4.0/">CC BY-SA</a></span></figcaption></figure><p>Your <a href="http://www.ft.com/cms/s/0/c1474e0a-335d-11e4-9607-00144feabdc0.html">autonomous vacuum cleaner</a> cleans your floors and there is no great harm if it occasionally bounces into things or picks up a button or a scrap of paper with a phone number. The latter case is irritating, though – it would be preferable if the machine were capable of noticing there was something written on the paper and alerting you. A human cleaner would do that. </p>
<p>If your child has a toy robot, you are not worried much about its wheels, arms or eyes going wild occasionally during play. It can be just more fun for the kids. You know the toy has been designed not to have enough force to cause any harm. </p>
<p>But what about a factory robot designed for picking up car parts and fitting them to a car? Clearly you’d not want to be nearby when it goes berserk. You know it’s been pre-programmed to do particular tasks, and it may take no account of your proximity. This kind of robot is often caged or barred-off, even for operating personnel. But what about some future autonomous robot with which you need to work in order to assemble something, or complete some other task? You may think that if it is powerful enough to be useful, it may also be powerful enough to do you an unexpected injury. </p>
<p>If you fly model aircraft, you may want to put a GPS-equipped computer on board and make it follow waypoints, perhaps to take a series of aerial photos. There are two points of concern. First, the legality of flying your aircraft when it is occasionally out of your sight – if trouble arose, you would not notice that the automatic control needed to be overridden for safety. Second, whether its on-board software has been written well enough to make a safe emergency landing if required. Might it endanger the public, or damage something else, airborne or otherwise?</p>
<p>Your latest luxury car, with its own intelligent sensor system for recognising the environment around it, may be forced to choose between two poor options: hit a car that suddenly appears in the street, or brake and cause the car behind to collide with you. As a passenger in an autonomous car travelling in a convoy of other autonomous vehicles, you may wonder what the car would do if the convoy arrived at a junction or road works, or if a vehicle in the convoy broke down: can the autonomous system be trusted to navigate through temporary barriers or sudden disruptions without harming the pedestrians or vehicles around it?</p>
<h2>The right choices at the right time</h2>
<p>These are questions that pose real challenges for those designing and programming our future semi-autonomous and autonomous robots. All possible dangerous situations need to be anticipated and accounted for, or resolved by the robots themselves. Robots also need to be able to recognise objects in their environment safely, perceive their functional relationship to those objects, make safe decisions about their next move, and judge when they are able to satisfy our requests. </p>
<p>For some applications, such as with humanoid robots, it’s not clear today where the responsibility lies: with the manufacturer, with the robot, or with its owner. In a case where damage or harm is caused, it may be that the user taught the robot the wrong thing, or requested something inappropriate of it. </p>
<p>A legal framework has yet to be introduced; at the moment one is entirely missing. If various software systems are used, how can we check that the robot’s decisions are safe? Do we need a UK authority to certify autonomous robots? What rules will robots need to keep to, and how will it be verified that they are safe in all practical situations?</p>
<p>The EPSRC-supported research that we have <a href="http://wordpress.csc.liv.ac.uk/va/">recently launched</a> at the universities of Sheffield, Liverpool and West of England in Bristol is trying to establish answers and solutions to these questions that will make autonomous robots safer. The three-year project will examine how to formally verify and ultimately legally certify robots’ decision-making processes. Laying down these methods will in fact help define a legal framework (in consultation with lawyers) that will hopefully allow the UK robotics industry to flourish.</p>
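<p>The kind of check such a project aims to formalise can be sketched in miniature. The toy model below is purely illustrative – the states, the controller rule and all names are invented for this example, not taken from the project – but it shows the core idea of formal verification of decision-making: exhaustively explore every state the system can reach and confirm that none of them violates a stated safety property.</p>

```python
# Illustrative sketch only: a toy version of the exhaustive check a
# formal verifier performs over a robot's decision logic. States,
# actions and the controller rule below are invented for the example.

TRACK_LENGTH = 5          # cells 0..4 on a one-dimensional track
HUMAN_CELL = 3            # a person stands in cell 3

def controller(pos, human_near):
    """Decision rule under test: advance unless a human is adjacent."""
    return "stop" if human_near else "advance"

def step(pos, action):
    """Transition function: advancing moves one cell along the track."""
    return pos + 1 if action == "advance" and pos < TRACK_LENGTH - 1 else pos

def verify_never_collides():
    """Explore every reachable state from cell 0 and check the safety
    property: the robot never occupies the human's cell."""
    frontier, seen = [0], set()
    while frontier:
        pos = frontier.pop()
        if pos in seen:
            continue
        seen.add(pos)
        if pos == HUMAN_CELL:
            return False          # safety violation found
        human_near = (pos + 1 == HUMAN_CELL)
        nxt = step(pos, controller(pos, human_near))
        if nxt not in seen:
            frontier.append(nxt)
    return True                   # property holds in all reachable states

print(verify_never_collides())    # prints True: the rule is provably safe here
```

<p>A real verifier would work over a formal model of the robot’s software rather than hand-written Python, and over vastly larger state spaces, but the principle – enumerate all reachable states and show that none breaches the safety property – is the same.</p>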
<p class="fine-print"><em><span>Sandor Veres receives funding from the EPSRC.</span></em></p>Sandor Veres, Professor and Director, Autonomous Systems and Robotics Research Group, University of SheffieldLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/339382014-11-16T19:19:33Z2014-11-16T19:19:33ZAbbott, Spurr, Peris: are we ready to use the internet unsupervised?<p>When Frances Abbott’s private <a href="https://newmatilda.com/2014/10/23/frances-abbott-scholarship-whistblower-has-sentencing-delayed">scholarship award was “exposed”</a>, when poetry professor Barry Spurr was <a href="https://theconversation.com/spurr-vs-new-matilda-case-pits-privacy-against-public-interest-33300">outed for his inflammatory emails</a> and when Senator Nova Peris was devastated by the <a href="https://theconversation.com/publish-and-be-slammed-33675">leaking of her private emails</a>, a media frenzy ensued about their behaviour and character. But less was said about how the information was obtained.</p>
<p>Every day we can read innumerable stories about the internet, published on the internet. Rather than portraying it as a medium that makes our lives better, the stories frequently and depressingly trumpet our abuse and misuse of the internet. But let’s backtrack for a bit.</p>
<p>Thanks to <a href="http://www.w3.org/People/Berners-Lee/">Tim Berners-Lee</a> in August 1991, we were suddenly presented with the World Wide Web. It took off spectacularly. As far as society’s ability to assimilate new developments is concerned, 1991 is not that long ago. We haven’t had much time to learn how to drive this amazing cyber vehicle, so cyber accidents are common: online head-ons, rear-enders, side-swipes and, more ominously, hit-and-runs.</p>
<h2>We’re still learning to use cars safely</h2>
<p>Many parallels can be drawn between society’s adoption of the internet and the motor vehicle. We’ve been driving cars for around five or six generations, but we still kill and maim each other in them every day. While this, at worst, indicates that we might be slow learners, it also shows how long it takes society to adapt to a new technology and normalise its use, if ever.</p>
<p>Information technology development, especially online innovation, is possibly the fastest-moving social phenomenon humans have had to contend with. But our human foibles will not allow society to realise the full potential of being online any time soon. Six generations later, it is still doubtful whether humans are competent enough to drive cars safely. That the vehicle industry is well-advanced in developing <a href="https://theconversation.com/self-driving-cars-will-not-help-the-drinking-driver-31747">self-drive cars</a> suggests we cannot be trusted behind the wheel.</p>
<p>The extraordinary pace of development of the internet, which is fast becoming the Internet of Things (the IOT – only one more syllable to IDIOT), seems to have knocked our integrity and moral and ethical judgement askew. As a society, we do not seem to know what is right or wrong online.</p>
<p>We have trouble establishing behavioural norms as we have done all our lives in almost every other field. Although we still cherish our privacy in a hard-copy kind of way, we are happy as individuals, organisations and governments to stretch the boundaries of acceptable online behaviour. We willingly sacrifice our own (and others’) privacy for functionality, connectivity and the right to have a say on any subject.</p>
<h2>Moral and ethical compasses go haywire online</h2>
<p>In our “real” lives of physical, face-to-face engagement with other people, we are conditioned by long-held societal norms and values to behave with politeness and mutual respect, at least on initial contact. Online we seem to smash our moral compasses, put on our ugly faces and trample the dignity and rights of all and sundry. We morph into trolls, bullies, bigots, vigilantes and morons – and that is without adding alcohol or other stimulants.</p>
<p>This online psychosis, which attests to the reality that we haven’t yet earned our internet L-plates, has been thrown into stark relief by the three incidents mentioned above.</p>
<p>The first was the hacking and subsequent guilty plea to unauthorised access by 21-year-old <a href="http://www.theguardian.com/australia-news/2014/oct/23/freya-newman-sentence-deferred-frances-abbott-scholarship-leak">Freya Newman</a> who, with the help of New Matilda, “exposed” Frances Abbott’s scholarship award by a private company. The second was the exposure of emails sent by Spurr. The third was the publication of Peris’ personal emails, which led her to make an impassioned statement in parliament. </p>
<h2>Do social rules and laws not apply?</h2>
<p>These incidents have kindled a huge amount of social and other media comment. Many commentators supported Newman’s actions as being <a href="https://newmatilda.com/2014/10/28/unanswered-questions-hunt-freya-newman">in “the public interest”</a> as she tried to hack down (pun intended) the tall-poppy figure of Frances Abbott and, by association, her father, prime minister Tony Abbott. Likewise, Spurr was considered to be justly outed given his role in <a href="http://www.abc.net.au/news/2014-10-16/university-of-sydney-professor-alleged-sent-racist-sexist-emails/5820220">reviewing the national education curriculum</a>. </p>
<p>The social media verdict is still out on Peris, but some think she has some explaining to do.</p>
<p>But are Abbott, Spurr and Peris guilty as charged or are they victims? Is the leaking of their personal details and missives an omen of an Orwellian future for our online existence? It is certainly a sign of how nasty and polarised the political game has become. </p>
<p>It is not my intent to argue the moral case of these three, but to question whether trust has become a serious casualty of our umbilical connection to the internet. We don’t yet know how Spurr’s and Peris’s emails were obtained: were their accounts hacked too, or were they the victims of a catastrophic failure of personal trust? In Abbott’s case, it is clear that Newman illegally accessed a company’s IT network, exposing more than 500 other students’ details, to find supposedly incriminating “evidence” about her scholarship.</p>
<p>Despite the claims of public interest, one trait that stands out about these “leaks” is their vindictive motive. Many social media commentators claim that Newman is merely <a href="http://www.smh.com.au/nsw/student-who-leaked-frances-abbott-scholarship-details-motivated-by-sense-of-injustice-court-told-20141023-11acza.html">a whistleblower</a> exposing fraud and corruption, despite her guilty plea and <a href="http://www.theaustralian.com.au/news/frances-forgives-apologetic-hacker/story-e6frg6n6-1227117631128">apology to Frances Abbott</a>. She clearly realises that a contrite defendant gets less time than an unrepentant one. </p>
<p>Fortunately, I’m sure there are many people who would not choose Newman as their paladin charged with protecting our morals and virtues. </p>
<h2>A crime is a crime in the ‘real’ and internet worlds</h2>
<p>The act of accessing an IT network without authority is a crime in Australia. Those who believe it is OK in the public interest might ask themselves if it would be OK for a thief to break into someone’s house to steal their personal correspondence. </p>
<p>There is no difference – just because it is done online, where there is no embarrassing or risky physical and personal contact, doesn’t make it any less repugnant. It is quite simply theft.</p>
<p>It was refreshing to see how American actress <a href="http://www.vanityfair.com/vf-hollywood/2014/10/jennifer-lawrence-cover">Jennifer Lawrence</a> dealt with the recent theft and exposure of her online nude photos:</p>
<blockquote>
<p>Just because I’m a public figure, just because I’m an actress, does not mean that I asked for this. It does not mean that it comes with the territory. It’s my body and it should be my choice, and the fact that it is not my choice is absolutely disgusting. I can’t believe that we even live in that kind of world … It is not a scandal. It is a sex crime, it is a sexual violation.</p>
</blockquote>
<p>Likewise, Abbott’s personal information was exposed by a criminal act. Her privacy as well as Spurr’s and Peris’ has been violated to create a scandal. It seems the internet has become the tool of choice for the jealous and zealous to better engage in Australia’s most unfortunate cultural pursuit of felling tall poppies.</p>
<p>As many cry out for law enforcement, intelligence and security agencies to be subjected to even stronger regulation and oversight in carrying out their legislated duties, we need to think very carefully about whether we want to give unconditional licence to so-called whistleblowers and their media collaborators to become investigator, prosecutor, judge, jury and executioner. We will learn which way the court falls on this question when Newman is sentenced on November 25.</p>
<p>We need to tread cautiously to avoid feeding oxygen to vigilantes who would make George Orwell proud. Who knows? You might be next.</p>
<p class="fine-print"><em><span>Tim Scully is a member of the Australian Information Security Association.</span></em></p>Tim Scully, PhD Candidate, Australian National UniversityLicensed as Creative Commons – attribution, no derivatives.