Technology ethics – The Conversation

Governments and industry must balance ethical concerns in the race for AI dominance<figure><img src="https://images.theconversation.com/files/530700/original/file-20230607-21-vijgj3.jpg?ixlib=rb-1.1.0&rect=50%2C12%2C8410%2C5619&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">OpenAI CEO Sam Altman speaks before a Senate Judiciary Subcommittee on Privacy, Technology and the Law hearing on artificial intelligence in Washington.</span> <span class="attribution"><span class="source">(AP Photo/Patrick Semansky)</span></span></figcaption></figure><p>The CEO of OpenAI, the company behind ChatGPT, <a href="https://www.wsj.com/articles/chatgpts-sam-altman-faces-senate-panel-examining-artificial-intelligence-4bb6942a">recently testified before United States senators</a> that AI “could go quite wrong” and his company wanted to “work with the government to prevent that from happening.” </p>
<p>Privacy concerns about AI are widespread. Along with <a href="https://www.bbc.com/news/technology-65431914">temporary bans of ChatGPT in Italy</a>, some <a href="https://www.cpomagazine.com/cyber-security/wave-of-employer-chatgpt-bans-continues-as-apple-restricts-internal-use-of-ai-tools/">private organizations</a> have started to restrict its use. These concerns are not limited to ChatGPT, either. </p>
<p>Studies have also demonstrated that WeChat — the most-used social app in China — <a href="https://citizenlab.ca/2020/05/wechat-surveillance-explained/">incorporates censorship algorithms</a>. </p>
<p><a href="https://www.telegraph.co.uk/business/2023/05/12/tiktok-propaganda-tool-chinese-communist-party/">TikTok has similarly been framed as a propaganda tool</a> for the Chinese government, leading to <a href="https://www.theguardian.com/technology/2023/mar/23/key-takeaways-tiktok-hearing-congress-shou-zi-chew">U.S. congressional hearings about privacy concerns</a>. Along with broader <a href="https://www.nytimes.com/article/tiktok-ban.html">international efforts by other lawmakers</a>, there is clearly concern about the role governments should play in the development and use of artificial intelligence.</p>
<p>Despite these growing concerns, there are few signs that investment in China-made AI has decelerated — or will — with <a href="https://www.reuters.com/technology/us-investors-have-plowed-billions-into-chinas-ai-sector-report-shows-2023-02-01/">U.S. venture capitalists continuing to invest heavily in the country’s AI sector</a>. </p>
<p><a href="https://asia.nikkei.com/Opinion/U.S.-has-no-moral-authority-over-China-in-tech-development">Some have claimed</a> that concerns over China are unwarranted — that oppression is unlikely and that others will simply step in to develop and distribute the technology if China doesn’t. </p>
<p>But we cannot disregard how the Chinese government — or any government — is deploying AI to achieve their goals.</p>
<h2>AI gold rush</h2>
<p>A speculative gold rush has followed the realization that AI — especially <a href="https://theconversation.com/generative-ai-like-chatgpt-reveal-deep-seated-systemic-issues-beyond-the-tech-industry-198579">large language models like ChatGPT</a> — has the potential to revolutionize business. </p>
<p>As <a href="https://wp.oecd.ai/app/uploads/2021/03/2021-AI-Index-Report.pdf">businesses seek to capitalize on these opportunities</a>, they must expand their portfolios to international markets. China is poised to provide a high return on investment to these businesses. </p>
<p>The Chinese government has prioritized innovation to <a href="https://www.globaltimes.cn/page/202303/1287981.shtml">counter American technological dominance</a>. Recent estimates suggest <a href="https://www.hurun.net/en-US/Info/Detail?num=3OEJNGKGFPDS">China has the fourth-largest number of AI “unicorns”</a> — private start-ups valued at over $1 billion. </p>
<p>But unlike in the West, the boundary between state-owned and private organizations in China is permeable, with many companies <a href="https://thediplomat.com/2019/12/politics-in-the-boardroom-the-role-of-chinese-communist-party-committees/">hosting Chinese Communist Party committees within their organizations</a>.</p>
<figure class="align-center ">
<img alt="An Asian man in a navy suit and tie speaks into a microphone from behind a desk." src="https://images.theconversation.com/files/530198/original/file-20230605-27-o0jist.jpg?ixlib=rb-1.1.0&rect=62%2C12%2C8223%2C5503&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/530198/original/file-20230605-27-o0jist.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/530198/original/file-20230605-27-o0jist.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/530198/original/file-20230605-27-o0jist.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/530198/original/file-20230605-27-o0jist.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/530198/original/file-20230605-27-o0jist.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/530198/original/file-20230605-27-o0jist.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">TikTok CEO Shou Zi Chew testifies during a hearing of the House Energy and Commerce Committee on the platform’s consumer privacy and data security practices last March 2023 in Washington.</span>
<span class="attribution"><span class="source">(AP Photo/Alex Brandon)</span></span>
</figcaption>
</figure>
<p>Given <a href="https://www.jstor.org/stable/26508115">social media’s potential</a> to help China achieve its goals, <a href="https://www.hrw.org/news/2023/03/24/problem-tiktoks-claim-independence-beijing">TikTok’s relationship with the Chinese government</a> raises concerns about <a href="https://www.theguardian.com/technology/2019/sep/25/revealed-how-tiktok-censors-videos-that-do-not-please-beijing">what content is presented on the platform</a>, <a href="https://www.wired.co.uk/article/tiktok-data-privacy">how user information is collected</a> and how it might be used to <a href="https://www.nytimes.com/2021/12/05/business/media/tiktok-algorithm.html">influence user beliefs and choices</a>.</p>
<h2>Ethical business of AI</h2>
<p>Protectionism, nationalism and racism undoubtedly play roles in concerns over technology consumption and adoption. Research has repeatedly demonstrated that <a href="https://doi.org/10.1016/S2212-5671(15)00383-4">a product’s country of origin affects consumers’ perception</a>. Yet, these factors must be carefully weighed against others. </p>
<p>Like many nations, China seeks global influence through soft power. Following the communist revolution, the Chinese state has attempted to <a href="https://www.hup.harvard.edu/catalog.php?isbn=9780674794757">guide technology development</a> for the purposes of <a href="https://www.google.ca/books/edition/We_Have_Been_Harmonized/g5DCDwAAQBAJ?hl=en&gbpv=0">monitoring and regulating society</a>. Such practices are <a href="https://www.cambridge.org/core/books/cosmology-and-political-culture-in-early-china/EF524D79C1EE401FE0E2D6ACC00E422D">deeply rooted in Chinese philosophy’s prioritization of harmony</a>.</p>
<p>Harmony for society can be costly for others. <a href="https://doi.org/10.1080/10670564.2019.1621529">Uyghurs</a>, <a href="https://doi.org/10.1111/ajps.12514">political dissidents</a> and <a href="https://www.reuters.com/world/china/arrests-tight-security-hong-kong-tiananmen-anniversary-2023-06-04/">non-compliant people and groups</a> have all been targeted by the Chinese government. The oppressive surveillance of the <a href="https://www.nytimes.com/interactive/2019/11/16/world/asia/china-xinjiang-documents.html">Uyghurs in Xinjiang province</a> has not only resulted in their detention in camps, but has also led to many <a href="https://www.ft.com/content/fa6bd0b0-1d87-11ea-9186-7348c2f183af">Han settlers leaving the province</a>.</p>
<h2>Western governments and AI</h2>
<p>No technology is value-neutral. Values inform the choices of AI designers, developers, and users. </p>
<p>We must be wary of virtue signalling that fixates on China’s problems while ignoring our own: the differences are ones of degree rather than kind. </p>
<p><a href="https://www.forbes.com/sites/forbestechcouncil/2020/09/25/the-state-of-mass-surveillance/?sh=7fa445e8b62d">Government mass surveillance of citizens</a>, <a href="https://www.ploughshares.ca/publications/no-canadian-leadership-on-autonomous-weapons">ill-defined policies about autonomous weapons in the military</a> and <a href="https://theconversation.com/privacy-violations-undermine-the-trustworthiness-of-the-tim-hortons-brand-184683">the collection of user data by private organizations</a> must all be reckoned with in North America.</p>
<p>As recent revelations over <a href="https://www.business-humanrights.org/en/latest-news/ukrainian-analysis-identifies-western-supply-chain-behind-irans-drones/">the components of a Russian drone used in an attack on Ukraine</a> have made clear, AI has both domestic and military applications. Three-quarters of the drone’s components were found to be made in the U.S. Investors cannot ignore <a href="https://www.routledge.com/Ethical-Artificial-Intelligence-from-Popular-to-Cognitive-Science-Trust/Schoenherr/p/book/9780367697983">the moral implications of global supply chains</a> when it comes to AI.</p>
<h2>Co-ordinated efforts are key</h2>
<p>Despite industry being the primary driver of AI development, all stakeholders have a role to play. While the Chinese government’s involvement in AI development might be too great, the hands-off approach of western governments has created its own problems. </p>
<p>These issues include <a href="https://doi.org/10.1073/pnas.1517441113">the spread of disinformation and polarization</a> and <a href="https://doi.org/10.1080/02673843.2019.1590851">increased anxiety and depression associated with social media use</a>.</p>
<p>Regulation is not the only answer, but it is a start. As the U.S. mulls over <a href="https://www.reuters.com/technology/us-begins-study-possible-rules-regulate-ai-like-chatgpt-2023-04-11/">legislation for systems like ChatGPT</a>, and <a href="https://ised-isde.canada.ca/site/innovation-better-canada/en/artificial-intelligence-and-data-act-aida-companion-document">Canada finalizes its own broad AI framework</a>, the Chinese government seeks to establish its own <a href="https://carnegieendowment.org/2023/05/16/what-chinese-regulation-proposal-reveals-about-ai-and-democratic-values-pub-89766">laws that will undoubtedly help it consolidate control</a>.</p>
<p>Industry leaders and academics are likely best positioned to understand the technology. However, governments can provide insight to users and investors who might be unaware of larger issues within technological ecosystems such as privacy and security. </p>
<p>Illustrating this, Sequoia Capital, one of the largest venture capital firms to invest in China, <a href="https://www.wsj.com/articles/sequoia-pares-back-china-tech-investments-as-u-s-national-security-concerns-grow-c17348b5">sought advice from national security agencies</a>. Its recent decision to <a href="https://www.reuters.com/business/finance/sequoia-separate-china-india-southeast-asia-by-march-2024-2023-06-06/">split its U.S. and China operations</a> has no doubt been influenced by this process.</p>
<p>Strengthening democratic values in the face of AI will require coordinated international efforts between industry, government and non-governmental organizations.</p>
<p class="fine-print"><em><span>Jordan Richard Schoenherr does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>Strengthening democratic values in the face of AI will require coordinated international efforts between industry, government and non-governmental organizations.Jordan Richard Schoenherr, Assistant Professor, Psychology, Concordia UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/2001342023-02-26T15:06:28Z2023-02-26T15:06:28ZBillions have been sunk into virtual reality. To make it worth it, the industry needs to grow beyond its walled gardens<figure><img src="https://images.theconversation.com/files/511843/original/file-20230222-14-d9y1sq.jpg?ixlib=rb-1.1.0&rect=29%2C7%2C4962%2C3315&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">While VR is still used primarily as a gaming device, it has the potential to move beyond the industry and revolutionize the way people interact with one another in the metaverse.</span> <span class="attribution"><span class="source">(Shutterstock)</span></span></figcaption></figure><iframe style="width: 100%; height: 100px; border: none; position: relative; z-index: 1;" allowtransparency="" allow="clipboard-read; clipboard-write" src="https://narrations.ad-auris.com/widget/the-conversation-canada/billions-have-been-sunk-into-virtual-reality--to-make-it-worth-it--the-industry-needs-to-grow-beyond-its-walled-gardens" width="100%" height="400"></iframe>
<p>Despite recent <a href="https://www.cnbc.com/2023/01/18/tech-layoffs-microsoft-amazon-meta-others-have-cut-more-than-60000.html">waves of Big Tech layoffs</a>, <a href="https://www.roadtovr.com/bytedance-pico-consumer-vr-us-jobs/">billions of dollars</a> <a href="https://venturebeat.com/business/hp-moves-into-vr-and-ar-with-investment-in-venture-reality-fund/">have been</a> <a href="https://venturebeat.com/business/hp-moves-into-vr-and-ar-with-investment-in-venture-reality-fund/">sunk into virtual reality (VR)</a> hardware and software over the past few years. </p>
<p>For this investment to be worthwhile, the VR industry needs to achieve sustainability and growth. To do this, it will have to explore many different applications of VR technology, including <a href="https://www.hp.com/us-en/workstations/learning-hub/vr-leading-manufacturing.html">manufacturing</a> and <a href="https://www.forbes.com/sites/cathyhackl/2020/08/30/social-vr-facebook-horizon--the-future-of-social-media-marketing">social VR</a>. Social VR is a type of virtual reality experience where users can meet and interact with one another in a virtual world.</p>
<p>As a <a href="https://www.utm.utoronto.ca/">University of Toronto Mississauga (UTM)</a> <a href="https://www.utm.utoronto.ca/iccit/people/bree-mcewan">associate professor</a> who researches social VR and teaches classes on virtual environments, I am often faced with the question of what will drive the adoption of social VR by broader society. </p>
<p>As the UTM lead of the University of Toronto’s <a href="https://datasciences.utoronto.ca/dsi-utm/">Responsible Data Science initiative</a>, I am also interested in the data collection, retention and deployment that is needed to build an <a href="https://datasciences.utoronto.ca/data-and-the-metaverse/">efficient and ethical metaverse</a>. </p>
<h2>Walled gardens</h2>
<figure class="align-right ">
<img alt="A book cover of Snow Crash by Neal Stephenson. It had a red sword against a navy swirly background dotted with yellow, red, blue and white circles." src="https://images.theconversation.com/files/511842/original/file-20230222-18-wteifr.jpeg?ixlib=rb-1.1.0&q=45&auto=format&w=237&fit=clip" srcset="https://images.theconversation.com/files/511842/original/file-20230222-18-wteifr.jpeg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=900&fit=crop&dpr=1 600w, https://images.theconversation.com/files/511842/original/file-20230222-18-wteifr.jpeg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=900&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/511842/original/file-20230222-18-wteifr.jpeg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=900&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/511842/original/file-20230222-18-wteifr.jpeg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=1131&fit=crop&dpr=1 754w, https://images.theconversation.com/files/511842/original/file-20230222-18-wteifr.jpeg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=1131&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/511842/original/file-20230222-18-wteifr.jpeg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=1131&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">Author Neal Stephenson coined the term ‘metaverse’ in his 1992 science-fiction novel <em>Snow Crash</em>.</span>
<span class="attribution"><span class="source">(Penguin Random House)</span></span>
</figcaption>
</figure>
<p>At the present moment, our cultural imagination of the metaverse surpasses the real thing. In books about the metaverse, you can speed across the <a href="https://www.penguinrandomhouse.com/books/172832/snow-crash-by-neal-stephenson/">world on a motorcycle</a> with katana in-hand, or <a href="https://www.penguinrandomhouse.ca/books/293994/neuromancer-by-william-gibson/9780441007462">slip in and out of cyberspace</a> on a mission for artificial intelligences.</p>
<p>In films and television shows about it, you can leave behind your everyday life <a href="https://www.warnerbroscanada.com/movies/ready-player-one">to embark on a scavenger hunt</a> through ‘80s nostalgia or <a href="https://www.warnerbros.com/movies/matrix">save the world</a> while bending your body around the trajectory of bullets. Or you can <a href="https://collider.com/best-star-trek-holodeck-episodes/">walk through a door in your workplace</a> and find yourself in Sherlock Holmes’s London or the wild west. In all these versions of the metaverse, we imagine leaving the physical world and entering a new, fully formed digital universe.</p>
<p>However, this is not the current state of VR technologies. Rather, we seem to be stuck in the <a href="https://www.cnbc.com/2022/08/18/web3-is-in-chaos-metaverses-in-walled-gardens-randi-zuckerberg.html">walled garden phase</a> of this potentially revolutionary interactive technology. Until the VR industry figures out how to move beyond these walled gardens, the metaverse may never live up to the hype.</p>
<p>A <a href="https://www.pcmag.com/encyclopedia/term/walled-garden">walled garden</a> is a mediated environment that restricts users to specific content within a website or social media platform. This is how the early internet worked — providers like <a href="https://www.wsj.com/articles/SB968104011203980910">AOL, CompuServe and Prodigy</a> kept users on affiliated sites.</p>
<p>This later changed when the true potential of the internet was realized and users began freely traversing sites and platforms. Users connected and drew on information from many different sources.</p>
<p>Today, information, memes, images, celebrity gossip and cultural moments all diffuse across the internet and are accessible from many different hardware devices, including cellphones, tablets and computers. </p>
<p>Today’s VR more closely resembles a <a href="https://doi.org/10.1109/MC.2021.3130480">walled garden environment</a> than the interconnected internet. There are only a handful of social software programs that are accessible from different headsets. </p>
<p>Software developers may find it difficult to <a href="https://doi.org/10.1007/978-3-030-23528-4_59">program for multiple headsets</a> at once, in part due to the lack of a <a href="https://xrbootcamp.com/the-best-5-vr-sdk-for-interactions/#headline-91-640">standard software development kit</a> across VR hardware devices. This leaves the current virtual reality market, despite its potential for immersive, interactive, social experiences, more similar to the gaming console market than to a communication channel.</p>
<p>For VR to become the next widely adopted communication channel, the industry needs to move beyond the walled garden phase. To do this, VR needs to increase its interoperability — the ability for programs and applications to integrate and for software to run across VR hardware.</p>
<p>Interoperability raises important questions about the data infrastructure of VR hardware and software, the sharing of consumer and corporate data and our ability to traverse to different parts of the metaverse.</p>
<h2>The tipping point</h2>
<p>Virtual reality adoption is often talked about as if it’s just about to take off. In 1992, VR visionary <a href="http://www.jaronlanier.com/">Jaron Lanier</a> predicted the possibility of home VR <a href="https://doi.org/10.1111/j.1460-2466.1992.tb00816.x">by the turn of the century</a>. </p>
<p>Researchers <a href="https://doi.org/10.1177/1461444820924623">Tony Liao and Andrew Iliadis found something similar in their research</a> on the <a href="https://dynamics.microsoft.com/en-us/mixed-reality/guides/what-is-augmented-reality-ar/">augmented reality</a> industry. Augmented reality was consistently talked about as if widespread adoption was just another five to 10 years out.</p>
<p>Yet, as author and researcher <a href="https://www.wired.com/story/virtual-reality-rich-white-kid-of-technology/">Dave Karpf succinctly lays out in WIRED</a>, while both augmented and virtual reality technologies keep advancing, they have yet to reach the tipping point necessary for widespread social adoption. </p>
<p>The technology, Karpf argues, is always “about to turn a corner, about to be more than just a gaming device, about to revolutionize other fields.” Yet virtual reality is still used <a href="https://www.meta.com/blog/quest/best-of-quest-2022/">primarily as a gaming device</a>. </p>
<p>Leaning into VR as a gaming platform could work for the industry — the <a href="https://www.roadtovr.com/oculus-quest-store-revenue-1-billion-milestone-growth-meta/">usage of VR as a gaming device is increasing</a> and gamers are used to buying consoles that can only run specific titles created for that console — but it misses the potential of virtual reality. VR has the ability to bring communicators together into shared spaces to engage, interact and share human social experiences. </p>
<figure class="align-center ">
<img alt="A person wearing a virtual reality headset standing in the middle of a virtual world surrounded by avatars" src="https://images.theconversation.com/files/511845/original/file-20230222-24-ru4v63.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/511845/original/file-20230222-24-ru4v63.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=338&fit=crop&dpr=1 600w, https://images.theconversation.com/files/511845/original/file-20230222-24-ru4v63.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=338&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/511845/original/file-20230222-24-ru4v63.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=338&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/511845/original/file-20230222-24-ru4v63.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=424&fit=crop&dpr=1 754w, https://images.theconversation.com/files/511845/original/file-20230222-24-ru4v63.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=424&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/511845/original/file-20230222-24-ru4v63.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=424&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">Social VR is a type of virtual reality experience where users can meet and interact with one another in a virtual world.</span>
<span class="attribution"><span class="source">(Shutterstock)</span></span>
</figcaption>
</figure>
<p>The creation of these shared VR spaces will likely require movement towards interoperable social spaces where users can move easily and freely from one social VR space to another. </p>
<p>Interoperability, in turn, requires open software standards and data sharing between entities that have traditionally kept a close hold on their data collection and analysis processes. Consumers deserve to have confidence in the safety and protection of the data generated by their social interactions. </p>
<h2>The future of VR</h2>
<p>If the VR industry is to experience the kind of growth that will make it worthy of the billions of dollars that have been invested in it, we need to view the metaverse as public infrastructure, much like the internet is. </p>
<p>Those of us in both the VR industry and the VR research community must turn our attention to how data can contribute to interoperability while protecting individual instances of social interaction from surveillance and commodification. </p>
<p>Striking the balance between the openness needed for interoperability and the protections necessary to maintain consumer confidence will be tough. Yet, without this balance, widely adopted social VR will continue to <a href="https://www.forbes.com/sites/qai/2023/01/06/vr-headset-sales-underperform-expectations-what-does-it-mean-for-the-metaverse-in-2023/">remain out of reach</a>.</p>
<p class="fine-print"><em><span>Bree McEwan is affiliated with the University of Toronto, University of Toronto - Mississauga, and the UofT Data Sciences Institute. </span></em></p>If the VR industry is to experience the kind of growth that will make it worthy of the billions of dollars that have been invested in it, we need to view the metaverse as public infrastructure.Bree McEwan, Associate Professor, Institute of Communication, Culture, Information and Technology, University of TorontoLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1850812022-06-24T12:02:22Z2022-06-24T12:02:22ZFive billion people can’t afford surgery – a team of innovators could soon change this<figure><img src="https://images.theconversation.com/files/470537/original/file-20220623-51718-lguur2.jpg?ixlib=rb-1.1.0&rect=33%2C42%2C5573%2C3690&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Gasless laparoscopic surgery performed by Dr Biju Islary using the RAIS system.</span> <span class="attribution"><span class="source">Dr. Hampher Kynjing, Nazareth Hospital, Shillong</span>, <span class="license">Author provided</span></span></figcaption></figure><p>Have you or a loved one ever needed surgery? Imagine what your life would be like if you couldn’t have it. Billions of people around the world lack access to surgery because equipment and general anaesthesia are too expensive or unsuitable in their region. </p>
<p>When we think about technological progress, we tend to picture faster, shinier, more hi-tech upgrades of what we already have. But sometimes developers can have more impact by remodelling technology into cheaper and simpler versions. </p>
<p>Our group at the University of Leeds is developing surgical technology for low-to-middle income countries and our first project was a simplified surgical tool for performing laparoscopic – or keyhole – surgery in low-resource settings where it was not possible before.</p>
<p>Surgical technology has never been more advanced. The NHS is <a href="https://www.sciencedirect.com/science/article/pii/S0168851022000562">adopting robotic surgical systems</a>, which give surgeons new levels of precision and skill to perform complex procedures for <a href="https://jamanetwork.com/journals/jamanetworkopen/fullarticle/2758472">prostate, gynaecology and bowel surgery</a>. </p>
<p>But while these advances are impressive, they highlight a stark inequality; an estimated 5 billion people (more than two-thirds of the global population) <a href="https://www.thelancet.com/journals/lancet/article/PIIS0140-6736(15)60160-X/fulltext">cannot afford surgery</a>. And yet, surgery is the primary treatment for one-third of diseases. Of the 313 million procedures undertaken worldwide each year, only 6% are performed in the poorest countries, where more than one-third of the world’s population lives. </p>
<h2>Why surgery is so hard to access</h2>
<p>A <a href="https://www.thelancet.com/journals/langlo/article/PIIS2214-109X(14)70349-3/fulltext">shortage of trained surgeons</a>, healthcare costs and cultural barriers (many people turn to traditional healers first) prevent access, but all too often there is not enough <a href="https://www.thelancet.com/journals/lancet/article/PIIS0140-6736(12)61127-1/fulltext">appropriate surgical equipment</a> available. By that we mean technology that fits the resources and services available in the <a href="https://gh.bmj.com/content/4/5/e001808">local healthcare setting</a> and does so at a cost affordable to local patients. </p>
<p>Technology designed for low-resource regions is in short supply because biotech firms focus on the major commercial markets in the EU, US and China, and are reluctant to undercut their more expensive, profitable technologies. </p>
<p>The solution is not as simple as providing low-income countries with the same surgical technology used in high-income countries. <a href="https://gh.bmj.com/node/135437.full">Well-meaning donations</a> of surgical equipment are often unused because they are too expensive to maintain. Communities struggle to source items such as air filters, cutting blades and CO₂ gas to make equipment work. </p>
<p>Research reveals 40-70% of medical devices in low-to-middle income countries are <a href="https://link.springer.com/article/10.1186/s12992-017-0280-2">broken, unused or unfit for purpose</a>.</p>
<h2>What we did</h2>
<p>We set out to develop new surgical equipment tailored to low- and middle-income countries, using “frugal innovation” as our guiding principle, meaning we were aiming <a href="https://academic.oup.com/bjs/article/106/2/e34/6120763?login=false">to do more with less</a>. <a href="https://journals.lww.com/ijsgh/Fulltext/2021/01010/Designing_devices_for_global_surgery__evaluation.7.aspx?context=LatestArticles">We also</a> involved clinical staff throughout the process. </p>
<p><a href="https://ieeexplore.ieee.org/document/9780179">Our project</a> helped surgeons practice vital keyhole surgery in remote areas of rural India. In laparoscopic surgery, the patient’s abdomen is inflated with CO₂ gas and the surgeon operates using long instruments which go through small incisions into the space created. The technique, <a href="https://pubmed.ncbi.nlm.nih.gov/11019611/">pioneered in 1901</a> in Germany, revolutionised modern surgery, reducing their risk of infections and dramatically lowering the recovery time for patients.</p>
<p>Unfortunately, it requires general anaesthesia, and a reliable CO₂ supply, both of which are too expensive in low-resource regions. General anaesthesia must be administered by an anaesthetist. An alternative technique, <a href="https://link.springer.com/article/10.1007/s00464-015-4433-1">gasless laparoscopy</a>, uses a mechanical retractor to lift the abdomen and create space. This method doesn’t require CO₂ and allows the use of readily available spinal anaesthesia instead. </p>
<p>Spinal anaesthesia can be carried out by the operating surgeon, removing the need for a specialist anaesthetist. It means that patients in poorer countries can be given essential surgical treatments such as appendectomy, gall bladder removal and gynaecological procedures. It also enables patients to return to work quickly, which is important because the longer patients are off work sick, the deeper they fall below the poverty line. </p>
<figure class="align-center ">
<img alt="" src="https://images.theconversation.com/files/469869/original/file-20220620-26-zat5h3.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/469869/original/file-20220620-26-zat5h3.png?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=337&fit=crop&dpr=1 600w, https://images.theconversation.com/files/469869/original/file-20220620-26-zat5h3.png?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=337&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/469869/original/file-20220620-26-zat5h3.png?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=337&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/469869/original/file-20220620-26-zat5h3.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=423&fit=crop&dpr=1 754w, https://images.theconversation.com/files/469869/original/file-20220620-26-zat5h3.png?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=423&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/469869/original/file-20220620-26-zat5h3.png?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=423&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">Gasless laparoscopic surgery: the abdomenal wall is lifted by a ‘retractor’ to create space for instruments and a camera.</span>
</figcaption>
</figure>
<p>There is <a href="https://www.sciencedirect.com/science/article/pii/S1072751520320986">huge potential for gasless surgery</a> but uptake has been limited because existing retractors are bulky, expensive and hard to use and maintain.</p>
<p>Our designers teamed with surgeons to create a modern retraction system. We worked together to understand their needs and develop better retractors through repeated testing. The result is <a href="https://ieeexplore.ieee.org/document/9780179">“RAIS” (Retractor for Abdominal Insufflation-less Surgery)</a> which is being produced by our commercial partner (<a href="https://www.xlo.in/">Ortho Life Systems</a>). It costs $980 (£802), about one-third of the price of the older retractors. </p>
<p>The response from our surgical partners has been encouraging. Dr Biju Islary, surgeon and expert in gasless laparoscopy at Crofts Memorial Christian Hospital, India, said: “I have been involved from the start … this is a very good device to use.”</p>
<p>It is being used in ten medical centres in rural Indian states and we are working to expand this to new areas in India and around the world. </p>
<figure class="align-center ">
<img alt="" src="https://images.theconversation.com/files/470538/original/file-20220623-60671-rxz6ee.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/470538/original/file-20220623-60671-rxz6ee.png?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=187&fit=crop&dpr=1 600w, https://images.theconversation.com/files/470538/original/file-20220623-60671-rxz6ee.png?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=187&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/470538/original/file-20220623-60671-rxz6ee.png?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=187&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/470538/original/file-20220623-60671-rxz6ee.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=235&fit=crop&dpr=1 754w, https://images.theconversation.com/files/470538/original/file-20220623-60671-rxz6ee.png?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=235&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/470538/original/file-20220623-60671-rxz6ee.png?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=235&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">The RAIS system has been used in clinical centres across India. Pictured left to right; Dr Biju Islary (Crofts Memorial Christian Hospital, Assam), Prof. Anurag Mishra (Maulana Azad Medical College, New Delhi), Dr Jesudian Gnanaraj (SEESHA, Coimbatore), Dr Gordon Rangad (Nazareth Hospital, Shillong)</span>
<span class="attribution"><span class="license">Author provided</span></span>
</figcaption>
</figure>
<h2>A problem close to home</h2>
<p>Even <a href="https://www.sciencedirect.com/science/article/pii/S2214999616307755">higher-income countries</a> struggle with <a href="https://research.one.surgery/beyond-technology-review-of-systemic-innovation-stories-in-global-surgery/">unequal access to surgical care</a>. Postcode lotteries create disparities in the availability of healthcare in places such as the US and UK. </p>
<p>The UK’s <a href="https://www.leedsth.nhs.uk/about-us/sustainability/news/2021/11/15/leeds-teaching-hospitals-nhs-trust-crowned-winners-of-the-green-surgery-challenge-2021">Green Surgery Challenge</a> has recently highlighted how frugal approaches could save the NHS money. For example, reusable instruments and surgical kit together with washable gowns and drapes, rather than single-use disposable items, are more environmentally friendly and cost effective.</p>
<p>Our aim is to form an international collaboration. We held the first <a href="https://surgicalinnovations.org/wpp/">International Congress for Innovation in Global Surgery</a> in April 2022. There is a lot of scope for improvement in access to gasless surgery and we will work together to improve the other technology involved, including camera systems and monitoring devices. </p>
<p>Technology innovation has an <a href="https://pubmed.ncbi.nlm.nih.gov/27890315/">important role to play in surgery</a>. People get excited about the release of a new video game or smartphone – but what could be more incredible than saving a life? Few products have as great an impact on people’s lives as accessible medical equipment. It is time for technology developers to think outside the box and create surgical products for low- and middle-income countries – a market of billions.</p>
<p class="fine-print"><em><span>Pete Culmer receives funding from the UK Engineering and Science Research Council (EPSRC) and the UK National Institute of Health Research (NIHR). He is affiliated with the Institute of Mechanical Engineering (IMechE) Biomedical Engineering Division.
The authors would like to acknowledge the research team whose dedication, passion and expertise have made this work possible, with thanks to everyone including: Association of Rural Surgeons of India and International Federation of Rural Surgeons, Anurag Mishra, Lovenish Bains and team at the Maulana Azad Medical College, New Delhi, India, Tim Beacon, Medical Aid International, Sundeep Singh Sawhney, Tamandeep Singh Kochhar and team at Ortho Life Systems, New Delhi, India; Richard Hall and Philippa Bridges at Pd-m International Ltd, Thirsk, UK; Millie Marriott Webb, Cheryl Harris and David Jayne at the University of Leeds, UK</span></em></p><p class="fine-print"><em><span>Noel Aruparayil worked as a clinical research fellow funded by NIHR Global Health. He is on the Global Surgery Foundation committee for the Royal College of Surgeons of Edinburgh and sits on the advisory board for GASOC (Global Anaesthesia, Surgery and Obstetric Collaboration). </span></em></p><p class="fine-print"><em><span>Jesudian Gnanaraj does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>
A team of doctors and academics worked together on back-to-basics surgical equipment that is already changing lives.
Pete Culmer, Associate Professor in Surgical Technologies, University of Leeds; Jesudian Gnanaraj, Professor of electronics and instrumentation engineering, Karunya Institute of Technology and Sciences; Noel Aruparayil, Clinical research fellow in global surgery, University of Leeds. Licensed as Creative Commons – attribution, no derivatives.

Machines can’t ‘personalize’ education, only people can<figure><img src="https://images.theconversation.com/files/400572/original/file-20210513-16-1h6flwv.jpg?ixlib=rb-1.1.0&rect=834%2C46%2C2989%2C1804&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Schools are facing accelerated COVID-19 pressures to integrate technology into children's education, and how they do so has far-reaching implications. </span> <span class="attribution"><span class="source">(Shutterstock)</span></span></figcaption></figure><p>In the past year, COVID-19 abruptly disrupted schooling and forced the question of <a href="https://globalnews.ca/news/7726753/covid-19-online-in-person-school-choice-2021-2022/">how much kindergarten to Grade 12 education should or will rely on online teaching in the near and distant future</a>. Education has taken a decided technological turn in its massive adaptation to online learning. This is precipitating a critical debate in education right now, with a most uncertain future and much depending on its outcome. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/ontarios-choice-of-fully-online-school-would-gamble-on-children-for-profit-158292">Ontario's ‘choice’ of fully online school would gamble on children for profit</a>
</strong>
</em>
</p>
<hr>
<p>One key concern when considering both online learning and the tech platforms teachers may rely on in classrooms is a long-standing issue of how education should accommodate student individuality. For at least 150 years, education in the western world has been <a href="https://books.google.ca/books/about/The_Underground_History_of_American_Educ.html?id=p55tQgAACAAJ">conflicted over this issue</a>. </p>
<p>Education advocates like homeschooling champion <a href="https://simplycharlottemason.com/what-is-the-charlotte-mason-method/">Charlotte Mason</a> and <a href="https://www.britannica.com/biography/John-Dewey">education reformer John Dewey</a> advocated for recognition of students as unique persons whose interests and backgrounds shaped them in particular ways. Writing in 1897, Dewey argued it was <a href="https://books.google.ca/books?id=EgwVAAAAIAAJ&printsec=frontcover&source=gbs_ge_summary_r&cad=0#v=onepage&q&f=false">critical for educators to note and consider students’ unique qualities when designing curriculum</a>. </p>
<p>Mason’s and Dewey’s philosophies and the schooling approaches they advocated helped spur <a href="https://journal.jctonline.org/index.php/jct/article/view/807">educational debates about the meaning of “personalized learning.”</a> These also pitted them against others like scientific management guru <a href="https://www.bl.uk/people/frederick-winslow-taylor">Frederick Taylor</a> who argued for mass standardization in education. </p>
<p>This conflict remains central to education debates unfolding today. For example, while some proponents of remote learning argue <a href="https://www.d2l.com/en-apac/blog/personalize-learning-digital-classroom/">teachers can still offer personalized learning online</a>, there are also industries built on the notion that <a href="https://www.edweek.org/technology/q-a-the-promise-and-pitfalls-of-artificial-intelligence-and-personalized-learning/2019/11">AI can “personalize” student experiences</a>. But machines aren’t persons.</p>
<p>Emerging research <a href="https://edsource.org/2020/disappointing-grades-technology-glitches-and-glimpses-of-learning-fun/641615">shows wide variability in student experiences</a> across technology-based approaches and platforms. Even when particular teachers are successful in delivering remote learning with students’ personal <a href="https://www.transformativelearningfoundation.org/faculty/michael-maser-v2/">and holistic interests</a> in mind, they are working in an educational context with <a href="https://theconversation.com/tax-pandemic-profiteering-by-tech-companies-to-help-fund-public-education-155705">increased marketing, uptake and profiting from educational technologies</a>. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/teachers-have-been-let-down-by-a-decade-of-inaction-on-digital-technologies-142938">Teachers have been let down by a decade of inaction on digital technologies</a>
</strong>
</em>
</p>
<hr>
<p>Specific tech “solutions” like buying particular software for schools are often “Taylorist” insofar as the school or classroom is now committed to a particular way of interacting and learning. In some cases, <a href="https://www.nytimes.com/2019/04/21/technology/silicon-valley-kansas-schools.html">school communities come to complain that personal contact has been replaced with computerization</a>. </p>
<p>Technology surely has a role in education, but determining what it will be, and whose interests it will really serve, is a critical public debate. To this end, here are three thinkers who can help guide parents, educators and administrators in considering how education can adapt to changing technological circumstances while centering students as people and fostering caring human communities. </p>
<h2>1. Nel Noddings</h2>
<p>In her <a href="https://www.ucpress.edu/book/9780520275706/caring">ground-breaking book, <em>Caring</em></a>, educational ethicist Nel Noddings describes the importance of seeing and “confirming” students as persons. Noddings says such “confirmation” elicits a practice of dialogue in which educators “see and receive the other” as they really are, as a teaching and moral responsibility. </p>
<p>I believe that truly “seeing” and acknowledging students is a feasible response in videoconferencing environments like Zoom and should be recognized as a best practice. The same is also true for how educators direct students to apps that enable them to pursue learning activities reflecting personal choices: for example, platforms like DIY.org, Khan Academy, YouTube and others. Teachers can and should validate students’ particular interests as they engage with these sources.</p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/vkmYzbwrufg?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
<figcaption><span class="caption">Interview with Ian McGilchrist on the divided brain and the search for meaning.</span></figcaption>
</figure>
<h2>2. Iain McGilchrist</h2>
<p>In a recent text, <a href="https://www.taylorfrancis.com/chapters/ways-attending-iain-mcgilchrist/e/10.4324/9781003049876-2">“Ways of attending: How our divided brain constructs the world</a>,” Scottish neuroscientist Iain McGilchrist asserts that technological thinking and <a href="https://www.youtube.com/watch?v=dFs9WO2B8uI">compartmentalization have come to dominate human thinking</a>. </p>
<p>This is thinking rooted in the brain’s left hemisphere and exemplified by mathematical reasoning and rationalization. He says the brain’s right hemisphere, responsible for whole-person, big-picture thinking, and moral decision-making, plays a secondary role. McGilchrist contends that new digital technologies driven by machine logic are effectively hijacking human attention, forcing us to become more machine-like. </p>
<p>McGilchrist advises everyone to study how we are interacting with technology to better understand how technology is influencing behaviours, including how it distracts us and channels our attention. If we don’t better perceive this, he warns, we risk becoming increasingly alienated from the feelings and moral decision-making that define our humanity. </p>
<h2>3. Ursula Franklin</h2>
<p>Scientist, <a href="https://alchetron.com/Ursula-Franklin">acclaimed humanitarian</a> and pacifist Ursula Franklin described in her <a href="https://www.cbc.ca/radio/ideas/the-humane-world-of-ursula-franklin-a-scientist-who-wanted-us-to-question-technology-1.5825485">1989 Massey Lecture series and book</a>, <a href="https://houseofanansi.com/products/the-real-world-of-technology-digital"><em>The Real World of Technology</em></a>, how the Industrial Revolution set in motion technological processes, like assembly lines, that ushered in sweeping societal changes.</p>
<figure class="align-left zoomable">
<a href="https://images.theconversation.com/files/400578/original/file-20210513-13-5o3tdl.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="A woman at a microphone." src="https://images.theconversation.com/files/400578/original/file-20210513-13-5o3tdl.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=237&fit=clip" srcset="https://images.theconversation.com/files/400578/original/file-20210513-13-5o3tdl.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=869&fit=crop&dpr=1 600w, https://images.theconversation.com/files/400578/original/file-20210513-13-5o3tdl.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=869&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/400578/original/file-20210513-13-5o3tdl.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=869&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/400578/original/file-20210513-13-5o3tdl.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=1092&fit=crop&dpr=1 754w, https://images.theconversation.com/files/400578/original/file-20210513-13-5o3tdl.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=1092&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/400578/original/file-20210513-13-5o3tdl.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=1092&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Ursula Franklin speaks in Ottawa in July 1970. Franklin argued that money spent by Canada on warfare research could be better spent on environmental research.</span>
<span class="attribution"><span class="source">CP PHOTO/Chuck Mitchell</span></span>
</figcaption>
</figure>
<p>She characterized such processes as “prescriptive” in how they engineered human behaviour through compliance and conditioning, resulting in an “enormous social mortgage.” Franklin contrasted prescriptive technologies with “holistic” technologies that are controlled by an individual user, as in personal craftsmanship. </p>
<p>To Franklin, holistic technologies enable people to enact caring gestures, and are spontaneous and flexible, whereas prescriptive technologies are rigid and mechanistic. Franklin’s philosophy points to the idea that we should recognize the limits and power of technology. </p>
<p>Franklin’s insights should lead us to remember that while <a href="https://link.springer.com/chapter/10.1007%2F978-3-030-13743-4_9">collaboration amongst students can be enhanced in technological environments</a>, some education researchers also caution that technological tools themselves don’t create holistic, inclusive or creative communities. Only humans can do this. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/digital-platforms-alone-dont-bridge-youth-divides-121222">Digital platforms alone don't bridge youth divides</a>
</strong>
</em>
</p>
<hr>
<h2>Serving people</h2>
<p>The insights of Noddings, McGilchrist, Franklin and others urge us to deeply consider the technologies we choose to use in our schools and what role they play. This does not mean that we reject the integration of technology into education. I believe many educators have demonstrated it is possible to strike a healthy balance when integrating technology with educational goals. </p>
<p>But future educational paths will reflect choices we make now. In facing today’s unprecedented challenges, educators and school administrators must continue to support education as an endeavour that holds at its core the mission of serving all people.</p>
<p class="fine-print"><em><span>michael maser has previously received funding from mitacs. </span></em></p>Insights of neuroscientist Ian McGilchrist, philosopher Nel Noddings and physicist Ursula Franklin help centre students and our collective future in debates about education and technology.Michael Maser, PhD candidate - Faculty of Education, Simon Fraser UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1545622021-02-08T05:32:38Z2021-02-08T05:32:38Z5 tips to figure out if a tech company on the stock market is an ethical investment<figure><img src="https://images.theconversation.com/files/382945/original/file-20210208-15-1kyxeyx.jpg?ixlib=rb-1.1.0&rect=36%2C36%2C4883%2C3216&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">
</span> <span class="attribution"><span class="source">Shutterstock</span></span></figcaption></figure><p>These days people trading on the stock market want more than just a strong financial return. They’re increasingly opting for investments that will also have a positive societal impact.</p>
<p>The coronavirus pandemic showed us even established tech companies can suffer downturns in the short term. <a href="https://edition.cnn.com/2020/01/30/tech/apple-coronavirus-response/index.html">Apple</a>, a tech behemoth, was left <a href="https://www.economist.com/business/2020/02/20/apples-chinese-troubles">reeling</a> when Chinese manufacturing hubs were temporarily <a href="https://edition.cnn.com/2020/01/30/tech/apple-coronavirus-response/index.html">shut down</a> last year. </p>
<p>In the longer term, however, technology stocks remain a first choice for many investors. <a href="https://www.fool.com/investing/2020/01/06/the-10-best-performing-stocks-of-the-decade.aspx">Historically</a>, they’ve dominated global stock markets and continue to grow at a remarkable rate.</p>
<p>Even during the downward spiral of the pandemic, tech stocks such as <a href="https://fortune.com/2020/08/31/zoom-stock-zm-shares-q2-earnings-customer-base/">Zoom</a> and <a href="https://www.afr.com/markets/equity-markets/microsoft-shares-surge-to-new-high-on-work-from-home-revenue-boost-20210127-p56x8f">Microsoft</a> soared in value as an influx of people started working from home. The question for many investors now is: how can one find profitable investments without supporting unethical activity?</p>
<h2>Growth of tech stocks</h2>
<p>According to investment advisers <a href="https://www.morningstar.com/articles/1011300/is-your-portfolio-too-heavy-on-technology-stocks">Morningstar</a>, technology stocks account for 24.2% of the top 500 stocks in the United States. Facebook, Apple, Amazon, Netflix and Alphabet (which owns Google) dominate the market, with a combined value of <a href="https://groww.in/blog/faang-stocks-performance-over-the-last-decade/">more than US$4 trillion</a>. </p>
<p>Tech stocks also take centre stage in Australia. We’ve seen the rapid rise of “buy now, pay later” companies such as Australian-owned Afterpay and Zip.</p>
<p>At the same time, we’ve seen an increase in the number of Australians moving to ethical superannuation funds and ethically-managed investment schemes. The latter let investors contribute money, which is pooled and managed by professional fund managers to produce collective gain.</p>
<p>It’s estimated indirect investment through these schemes has increased <a href="https://www.morningstar.com.au/etfs/article/sustainable-etfs-outpace-the-rest-during-mark/202652">by 79%</a> over the past six years.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/wall-street-isnt-just-a-casino-where-traders-can-bet-on-gamestop-and-other-stocks-its-essential-to-keeping-capitalism-from-crashing-154154">Wall Street isn't just a casino where traders can bet on GameStop and other stocks – it's essential to keeping capitalism from crashing</a>
</strong>
</em>
</p>
<hr>
<h2>What is ethical investing?</h2>
<p>While ethical investing is a broad concept, it can be understood simply as putting your money towards something that helps improve the world. This can range from companies that advocate for animal rights, to those aiming to limit the societal prevalence of gambling, alcohol or tobacco.</p>
<p>Although there is no strict definition of ethical investment in Australia, many managed funds and super funds seek accreditation by the <a href="https://responsibleinvestment.org/">Responsible Investment Association Australasia</a>. The “ethical” aspect can be grouped into three broad categories:</p>
<ol>
<li><p><strong>environmental</strong> — such as developing clean technology or engaging in carbon-neutral manufacturing</p></li>
<li><p><strong>social</strong> — such as supporting innovative technology, reducing social harms such as poverty or gambling, boosting gender equality, protecting human and consumer rights or supporting animal welfare</p></li>
<li><p><strong>corporate governance</strong> — such as being anti-corruption, promoting healthy employee relations or institutional transparency.</p></li>
</ol>
<p>As investors, we must pay close attention to the fine print of the funds and companies we invest in. For example, under accreditation guidelines a managed investment fund that excludes companies with “significant” ties to fossil fuels can still hold one that earns <em>up to</em> a certain share of its revenue from fossil fuels.</p>
<p>So while investment manager <a href="https://www.ampcapital.com/content/dam/capital/03-funds-files-only/aus-funds/responsible-investment-leaders/RIL_PDS_Class_A.pdf">AMP Capital</a> is accredited, it can still include companies earning up to 10% of their revenue from fossil fuel distribution and services. </p>
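<p>To see how such a threshold works in practice, consider the minimal sketch below. It is purely illustrative: the holdings, the revenue figures and the 10% cut-off are hypothetical, and real accreditation screens apply far more detailed criteria.</p>
<pre><code># A minimal, hypothetical "ethical" screen: keep holdings whose share of
# revenue from fossil fuels stays within an accreditation threshold.
FOSSIL_FUEL_REVENUE_THRESHOLD = 0.10  # illustrative figure only

holdings = [
    {"name": "CleanTech Co", "fossil_fuel_revenue_share": 0.00},
    {"name": "Diversified Energy Ltd", "fossil_fuel_revenue_share": 0.08},
    {"name": "Coal Services Plc", "fossil_fuel_revenue_share": 0.35},
]

passes_screen = [
    holding["name"] for holding in holdings
    if FOSSIL_FUEL_REVENUE_THRESHOLD >= holding["fossil_fuel_revenue_share"]
]

# "Diversified Energy Ltd" still passes the screen despite earning 8% of its
# revenue from fossil fuels, which is exactly the fine print investors can miss.
print(passes_screen)
</code></pre>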
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/382946/original/file-20210208-19-1px82kr.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="Wind turbines in a field" src="https://images.theconversation.com/files/382946/original/file-20210208-19-1px82kr.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/382946/original/file-20210208-19-1px82kr.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=352&fit=crop&dpr=1 600w, https://images.theconversation.com/files/382946/original/file-20210208-19-1px82kr.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=352&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/382946/original/file-20210208-19-1px82kr.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=352&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/382946/original/file-20210208-19-1px82kr.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=442&fit=crop&dpr=1 754w, https://images.theconversation.com/files/382946/original/file-20210208-19-1px82kr.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=442&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/382946/original/file-20210208-19-1px82kr.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=442&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">The terms ‘ethical’, ‘sustainable’ and ‘green’ are sometimes used interchangeably when referring to environmentally-responsible investing.</span>
<span class="attribution"><span class="source">Shutterstock</span></span>
</figcaption>
</figure>
<h2>5 tips for ethical tech investment</h2>
<p>Many technology stocks are well placed for ethical investment and you can choose to invest on your own, or indirectly via a managed investment fund. In either case, you should do some basic homework first.</p>
<p>1) <strong>Monitor the fund or company to ensure standards are maintained</strong></p>
<p>Any company listed on the Australian Securities Exchange (ASX) is a public company. It is therefore required to submit an annual audit report (audited by third-party auditors) to the Australian Securities and Investments Commission (ASIC), as per the <a href="https://asic.gov.au/regulatory-resources/financial-reporting-and-audit/preparers-of-financial-reports/lodgement-of-financial-reports/">Corporations Act 2001</a>.</p>
<p>You can also contact ASIC for <a href="https://asic.gov.au/regulatory-resources/financial-reporting-and-audit/users-of-financial-reports/">further information</a> about a company listed on the ASX. The equivalent body for American companies is the US <a href="https://www.sec.gov/">Securities and Exchange Commission</a>.</p>
<p>If a company backtracks on the very ethical standards that prompted your initial investing, you should consider withdrawing your investment.</p>
<p>2) <strong>Stay updated on reported ethical breaches</strong></p>
<p>Reputable news reports are useful on this front. <a href="https://slate.com/technology/2020/01/evil-list-tech-companies-dangerous-amazon-facebook-google-palantir.html">Amazon, Facebook and Alphabet</a> are recurring names in reports about <a href="https://theconversation.com/the-ugly-truth-tech-companies-are-tracking-and-misusing-our-data-and-theres-little-we-can-do-127444">unethical practices</a> in the tech sector. </p>
<p>While you can access plenty of information about a tech company from its own website and distribution channels, this is usually embellished and/or handpicked by the company itself. Make sure your information comes from diverse sources.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/google-is-leading-a-vast-covert-human-experiment-you-may-be-one-of-the-guinea-pigs-154178">Google is leading a vast, covert human experiment. You may be one of the guinea pigs</a>
</strong>
</em>
</p>
<hr>
<p>3) <strong>Consider how employees rate the company and why</strong></p>
<p>Keep in mind a technology company might be environmentally ethical but still fall down on other issues, such as gender pay parity. It’s important to listen to <a href="https://www.buzzfeednews.com/article/ryanmac/facebook-employee-leaks-show-they-feel-betrayed">employees’ claims</a> about a company’s internal workings, as such insight may otherwise <a href="https://www.theverge.com/2019/6/19/18681845/facebook-moderator-interviews-video-trauma-ptsd-cognizant-tampa">be unavailable</a>. </p>
<p>There are a number of independent sites reporting on corporate culture ratings, including <a href="https://www.glassdoor.com.au/index.htm">Glassdoor</a>. </p>
<p>4) <strong>Assess the environmental, social and corporate governance (ESG) score</strong> </p>
<p>One benefit of investing in large to medium-sized tech companies is the ability to analyse their ESG score, issued by agencies such as <a href="https://www.refinitiv.com/en/financial-data/company-data/esg-data">Refinitiv</a>. This score reflects how well the company adheres to ethical practice across environmental, social and corporate governance-related matters. </p>
<p>5) <strong>Watch out for buzzwords</strong></p>
<p>If you’re looking to invest in clean technology, watch out for buzzwords used in company reports. These are terms which at face value may seem to align with your own ethical investment values, without actually delivering. </p>
<p>For instance, “carbon net zero” and “carbon neutral” are <a href="https://www.climatecouncil.org.au/resources/what-does-net-zero-emissions-mean/">not the same thing</a>. This is an important distinction to consider if you want to make environmentally-responsible investments.</p><img src="https://counter.theconversation.com/content/154562/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>The authors do not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.</span></em></p>Historically, tech stocks have dominated global stock markets. Despite the pandemic, many continue to grow at remarkable rate.Angel Zhong, Senior Lecturer in Finance, RMIT UniversityBanita Bissoondoyal-Bheenick, Associate Professor and Associate Dean Finance, RMIT UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1541782021-01-29T05:13:21Z2021-01-29T05:13:21ZGoogle is leading a vast, covert human experiment. You may be one of the guinea pigs<figure><img src="https://images.theconversation.com/files/381255/original/file-20210129-17-4so5r3.jpg?ixlib=rb-1.1.0&rect=44%2C26%2C5847%2C3869&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">
</span> <span class="attribution"><span class="source">Shutterstock</span></span></figcaption></figure><p>On January 13 the <a href="https://www.afr.com/companies/media-and-marketing/google-blocks-australian-news-in-experiment-20210113-p56tqd">Australian Financial Review reported</a> Google had removed some Australian news content from its search results for some local users. </p>
<p>Speaking to <a href="https://www.theguardian.com/technology/2021/jan/13/google-admits-to-running-experiments-which-remove-some-media-sites-from-its-search-results">the Guardian</a>, a Google spokesperson confirmed the company was “running a few experiments that will each reach about 1% of Google Search users in Australia to measure the impacts of news businesses and Google Search on each other”.</p>
<p>So what are these “experiments”? And how concerned should we be about Google’s actions? </p>
<h2>Engineering our attention</h2>
<p>Google’s experiment (which is <a href="https://www.theguardian.com/technology/2021/jan/13/google-admits-to-running-experiments-which-remove-some-media-sites-from-its-search-results">supposed to run</a> until early February) involves displaying an “alternative” news website ranking for certain Australian users — at least 160,000, <a href="https://www.theguardian.com/technology/2021/jan/28/important-stories-hidden-in-googles-experiment-blocking-australian-news-sites">according to</a> The Guardian.</p>
<p>A Google spokesperson told The Conversation the experiment didn’t prevent users (being experimented on) from accessing a news story. Rather, they would not discover the story through Search and would have to access it another way, such as directly on a publisher’s website.</p>
<p>Google’s experiment is a form of “A/B testing”, which classically involves dividing a population randomly in half — into groups A and B — and subjecting each group to a different “stimulus”. </p>
<p>For example, in the case of web design, the two groups may be served different web layouts. This could be done to test changes to layout, the colour scheme or any other element. </p>
<p>Performance in A/B testing is judged on a range of factors, such as which links are clicked first, or the average time spent on a page. If group A perused the site longer than group B, the modification tested on group A may be considered favourable.</p>
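<p>As a rough illustration of the mechanics described above, here is a minimal A/B testing sketch in Python. Everything in it (the user IDs, the bucketing salt and the time-on-page figures) is invented for illustration; it is not a description of how Google’s experiment infrastructure actually works.</p>
<pre><code>import hashlib
from statistics import mean

def assign_group(user_id, salt="experiment-1"):
    # Deterministically bucket each user into group A or B (a 50/50 split),
    # so the same person always sees the same variant of the page.
    digest = hashlib.sha256(f"{salt}:{user_id}".encode()).hexdigest()
    return "A" if int(digest, 16) % 2 == 0 else "B"

# Hypothetical engagement log: seconds each user spent on the page they saw.
time_on_page = {"user-1": 42, "user-2": 18, "user-3": 75, "user-4": 11}

groups = {"A": [], "B": []}
for user, seconds in time_on_page.items():
    groups[assign_group(user)].append(seconds)

# Compare the average engagement of the two variants.
for name, values in groups.items():
    if values:
        print(name, round(mean(values), 1))
</code></pre>
<p>In a real experiment the comparison would also involve a statistical significance test rather than a simple average, but the basic structure of random assignment, different stimulus and metric comparison is the same.</p>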
<p>In Google’s case, we don’t know the motivation behind the tests. But we do know a small subset of users received different results to the majority and were not alerted.</p>
<p>The experiment has resulted in the promotion of dubious news sources over trusted ones, some of which have been <a href="https://www.nbcnews.com/tech/tech-news/trump-qanon-impending-judgment-day-behind-facebook-fueled-rise-epoch-n1044121">known to publish</a> disinformation (which intends to mislead) and misinformation (false claims that are spread regardless of intent). </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/the-accc-is-suing-google-for-misleading-millions-but-calling-it-out-is-easier-than-fixing-it-143447">The ACCC is suing Google for misleading millions. But calling it out is easier than fixing it</a>
</strong>
</em>
</p>
<hr>
<p>When asked about this ranking, Google’s spokesperson said it was a “single anecdotal screenshot” and the experiment didn’t “remove results that link to official government departments and agencies”. </p>
<h2>Intent to manipulate</h2>
<p>A/B testing is a widespread practice. It can range from being fairly benign — such as to determine the best location for an advertisement banner — to much more invasive, such as Facebook’s <a href="https://www.theatlantic.com/technology/archive/2014/06/everything-we-know-about-facebooks-secret-mood-manipulation-experiment/373648/">infamous mood experiment</a>. </p>
<p>In January 2012, Facebook conducted an experiment on 700,000 users without their knowledge or explicit consent. It adjusted users’ feeds to artificially boost either positive or negative news content. </p>
<p>One reported aim, according to Facebook’s own researchers, was to examine whether emotional states could spread from user to user on the platform. Results were reported in the <a href="https://www.pnas.org/content/111/24/8788.full">Proceedings of the National Academy of Sciences</a>.</p>
<p>Following the report’s publication, Facebook’s “experiment” was widely condemned by academics, journalists and the public as ethically dubious. It had a specific objective to emotionally manipulate users and didn’t obtain informed consent.</p>
<p>Similarly, it’s unlikely users caught in the midst of Google’s Australian news experiment would realise it. </p>
<p>And while the direct risk to those being tested may seem lower than with Facebook’s mood experiment, tweaking news results on Google Search introduces its own set of risks. As <a href="https://research.qut.edu.au/dmrc/2020/04/30/like-a-virus/">research</a> by my colleagues and me has shown, platforms and news media both play a large role in <a href="https://journals.sagepub.com/doi/full/10.1177/1329878X20946113">spreading conspiracy theories</a>. </p>
<p>Google tried to downplay the significance of the experiment, <a href="https://www.theguardian.com/technology/2021/jan/13/google-admits-to-running-experiments-which-remove-some-media-sites-from-its-search-results">noting that</a> it conducts “tens of thousands of experiments in Google Search” each year. </p>
<p>But this doesn’t excuse the company from scrutiny. If anything, it’s even more concerning.</p>
<p>Imagine if a police officer pulled you over for speeding and you said: “Well, I speed thousands of times each year, so why should I pay a fine just this one time I’ve been caught?”</p>
<p>If this is just one experiment among tens of thousands, as Google has admitted, in what other ways have we been manipulated in the past? Without basic disclosures, it’s difficult to know. </p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/381258/original/file-20210129-15-ljtpih.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/381258/original/file-20210129-15-ljtpih.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/381258/original/file-20210129-15-ljtpih.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=424&fit=crop&dpr=1 600w, https://images.theconversation.com/files/381258/original/file-20210129-15-ljtpih.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=424&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/381258/original/file-20210129-15-ljtpih.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=424&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/381258/original/file-20210129-15-ljtpih.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=533&fit=crop&dpr=1 754w, https://images.theconversation.com/files/381258/original/file-20210129-15-ljtpih.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=533&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/381258/original/file-20210129-15-ljtpih.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=533&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">A report from the Australian Financial Review said ‘anecdotal evidence’ suggested Google was ‘experimenting with its algorithm to remove stories from Australian news publishers from its search results’.</span>
<span class="attribution"><span class="source">Shutterstock</span></span>
</figcaption>
</figure>
<h2>A history of non-disclosure</h2>
<p>This isn’t the first time Google has been caught experimenting on users without adequate disclosure. In 2018, the company <a href="https://ai.googleblog.com/2018/05/duplex-ai-system-for-natural-conversation.html">released Google Duplex</a>, a speech-enabled digital assistant that could purportedly make restaurant and other personal service bookings on a user’s behalf.</p>
<p>In the Duplex <a href="https://ai.googleblog.com/2018/05/duplex-ai-system-for-natural-conversation.html">demos</a>, Google played audio of an AI-enabled speech agent making bookings via conversations with real service workers. What was missing from the calls, however, was a disclosure that the agent opening the call <a href="https://www.theverge.com/2018/11/26/18112807/google-duplex-robot-calls-restaurants-businesses-transparency">was a bot</a>, not a human. </p>
<p><a href="https://www.afr.com/technology/google-duplex-humanlike-voice-raises-ethical-concerns-20180510-h0zw3f">Critics</a> questioned the <a href="https://mashable.com/2018/05/10/google-duplex-disclosures-robot/">deceptiveness of the technology</a>, given its mimicry of human speech.</p>
<p>Google’s <a href="https://www.technologyreview.com/2020/12/04/1013294/google-ai-ethics-research-paper-forced-out-timnit-gebru/">controversial dismissal</a> in December of world-leading AI ethics researcher Timnit Gebru (former co-lead of its ethical AI team) cast a further shadow over the company’s internal culture.</p>
<h2>What needs to change?</h2>
<p>Digital media platforms including Google, Facebook, Netflix and Amazon (among others) exert enormous power over our lives. They also have vast <a href="https://theconversation.com/facebook-is-tilting-the-political-playing-field-more-than-ever-and-its-no-accident-148314">political influence</a>. </p>
<p>It’s no coincidence Google’s news ranking experiment took place against the backdrop of the escalating news media bargaining code debate, wherein the federal government wants Google and Facebook to negotiate with Australian news providers to pay for using their content. </p>
<p>Google’s spokesperson confirmed the experiment is “directly connected to the need to gather information for use in arbitration proceedings, should the code become law”.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/googles-open-letter-is-trying-to-scare-australians-the-company-simply-doesnt-want-to-pay-for-news-144573">Google's 'open letter' is trying to scare Australians. The company simply doesn't want to pay for news</a>
</strong>
</em>
</p>
<hr>
<p>While users benefit from the services big tech provides, we need to appreciate we’re more than mere consumers of these services. The data we forfeit are essential input for the massive algorithmic machinery that runs at the core of enterprises such as Google. </p>
<p>The result is what digital media scholars call an “<a href="https://journals.sagepub.com/doi/pdf/10.1177/1367549415577392">algorithmic culture</a>”. We feed these machines our data and in the process tune them towards our tastes. Meanwhile, they feed us back more things to consume, in a giant <a href="https://journals.sagepub.com/doi/full/10.1177/1461444815605463">human-machine algorithmic loop</a>. </p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/381262/original/file-20210129-17-179tdws.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/381262/original/file-20210129-17-179tdws.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/381262/original/file-20210129-17-179tdws.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=375&fit=crop&dpr=1 600w, https://images.theconversation.com/files/381262/original/file-20210129-17-179tdws.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=375&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/381262/original/file-20210129-17-179tdws.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=375&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/381262/original/file-20210129-17-179tdws.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=471&fit=crop&dpr=1 754w, https://images.theconversation.com/files/381262/original/file-20210129-17-179tdws.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=471&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/381262/original/file-20210129-17-179tdws.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=471&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Large tech enterprises such as Facebook and Google rely on user data to stay afloat.</span>
<span class="attribution"><span class="source">Shutterstock</span></span>
</figcaption>
</figure>
<p>Until recently, we have been uncritical participants in these algorithmic loops and experiments, willing to use “free” services in exchange for our data. But we need to rethink our relationship with platforms and must hold them to a higher standard of accountability. </p>
<p>Governments should mandate minimum standards of disclosure for platforms’ user testing. A/B testing by platforms can still be conducted properly with adequate disclosures, oversight and opt-in options.</p>
<p>In the case of Google, to “<a href="https://www.engadget.com/2015-10-02-alphabet-do-the-right-thing.html">do the right thing</a>” would be to adopt a <a href="https://techcrunch.com/2014/06/29/ethics-in-a-data-driven-world/">higher standard of ethical conduct</a> when it comes to user testing.</p><img src="https://counter.theconversation.com/content/154178/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Daniel Angus receives funding from Australian Research Council through Discovery projects DP200100519 ‘Using machine vision to explore Instagram’s everyday promotional cultures’, and DP200101317 ‘Evaluating the Challenge of ‘Fake News’ and Other Malinformation’.</span></em></p>If this is just one experiment among of tens of thousands, as Google has admitted, in what other ways might users have been manipulated in the past?Daniel Angus, Associate Professor in Digital Communication, Queensland University of TechnologyLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1114152019-03-22T10:45:05Z2019-03-22T10:45:05ZCars are regulated for safety – why not information technology?<figure><img src="https://images.theconversation.com/files/265128/original/file-20190321-93063-ouhsqj.jpg?ixlib=rb-1.1.0&rect=152%2C6%2C4096%2C2816&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Modern cars are safer than this – but not because auto companies got more ethical.</span> <span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/bakersfield-ca-oct-24-beautifully-restored-39559930">Richard Thornton/Shutterstock.com</a></span></figcaption></figure><p>As the computing industry grapples with its role in society, many people, both <a href="https://gzconsulting.org/2018/06/04/salesforce-there-is-a-crisis-of-trust-concerning-data-privacy-and-cybersecurity/">in the field</a> and <a href="https://www.wsj.com/articles/in-praise-of-hierarchy-1515175338">outside it</a>, are talking about a <a href="https://www.bostonglobe.com/ideas/2018/03/22/computer-science-faces-ethics-crisis-the-cambridge-analytica-scandal-proves/IzaXxl2BsYBtwM4nxezgcP/story.html">crisis</a> of <a href="https://www.wsj.com/articles/the-culture-of-deathand-of-disdain-1507244198">ethics</a>. </p>
<p>There is a massive rush to hire <a href="https://www.nytimes.com/2018/10/21/opinion/who-will-teach-silicon-valley-to-be-ethical.html">chief ethics officers</a>, retool <a href="https://theconversation.com/programmers-need-ethics-when-designing-the-technologies-that-influence-peoples-lives-100802">codes of professional ethics</a> and <a href="https://news.harvard.edu/gazette/story/2019/01/harvard-works-to-embed-ethics-in-computer-science-curriculum/">teach ethics to students</a>. But as a <a href="https://scholar.google.com/citations?user=DQaARsgAAAAJ&hl=en">scholar of computing</a> – and a teacher of <a href="https://www.cs.rice.edu/%7Evardi/COMP301-2019.pdf">a course on computing, ethics and society</a> at Rice University – I am skeptical of the assumptions that what ails technology is a lack of ethics, and that the best fix is to teach technologists about ethics.</p>
<p>Instead, in my view, the solution is government action, which aims at balancing regulation, ethics and markets. This isn’t a radical new idea: It’s how society treats cars and driving.</p>
<p>Consider, for instance, the Ford Model T, the first mass-produced and mass-consumed automobile. Its debut in 1908 launched the automobile age, a time of great mobility – and widespread death. Car crashes kill <a href="https://www.who.int/gho/road_safety/mortality/en/">more than a million people worldwide each year</a> – but the fatality rate per mile driven <a href="https://en.wikipedia.org/wiki/Transportation_safety_in_the_United_States">has been dropping</a> almost since the first Model T rolled off the assembly line. </p>
<p><iframe id="m9zdG" class="tc-infographic-datawrapper" src="https://datawrapper.dwcdn.net/m9zdG/1/" height="400px" width="100%" style="border: none" frameborder="0"></iframe></p>
<p>The reason for that improving safety record is not that people learning to drive studied the ethics of responsible and safe driving. Rather, they were taught, and tested on, the rules of the road, in order to obtain a driving license. Other regulations <a href="https://www.fhwa.dot.gov/programadmin/standards.cfm">improved how roads were built</a>, required car makers to adopt <a href="https://www.npr.org/2015/10/16/449090584/why-arent-auto-safety-standards-universal">new safety features</a>, mandated <a href="https://www.nerdwallet.com/blog/insurance/car-insurance/">accident insurance</a>, and <a href="https://www.nhtsa.gov/laws-regulations/impaired-driving">outlawed drunk driving</a> and <a href="https://en.wikipedia.org/wiki/Texting_while_driving">other unsafe behaviors</a>. I believe a similar approach – regulation, in addition to ethics education for technologists, as well as market competition – is needed today to make modern technology safe for society as a whole.</p>
<h2>Flaws in the basic business model</h2>
<p>In the 1980s, internet pioneers adopted a philosophy that “<a href="https://en.wikipedia.org/wiki/Information_wants_to_be_free">information wants to be free</a>” – so website owners didn’t charge readers for access to the content. Instead, internet companies used advertising to support their efforts. That led them to collect personal data on their users and offer <a href="https://theconversation.com/solving-the-political-ad-problem-with-transparency-85366">micro-targeted advertising</a> to make money, which social scientist Shoshana Zuboff calls “<a href="https://www.publicaffairsbooks.com/titles/shoshana-zuboff/the-age-of-surveillance-capitalism/9781610395694/">surveillance capitalism</a>.”</p>
<p>This business model is <a href="https://www.macrotrends.net/stocks/charts/GOOG/alphabet/revenue">enormously profitable</a>, so it’s unlikely internet companies will abandon it on their own as a result of ethical qualms. Even in the face of <a href="https://techcrunch.com/2018/10/24/apples-tim-cook-makes-blistering-attack-on-the-data-industrial-complex/">blistering critiques</a> and <a href="https://theconversation.com/understanding-facebooks-data-crisis-5-essential-reads-94066">Facebook’s Cambridge Analytica</a> scandal, the massive profits are compelling.</p>
<p><iframe id="19C8Z" class="tc-infographic-datawrapper" src="https://datawrapper.dwcdn.net/19C8Z/1/" height="400px" width="100%" style="border: none" frameborder="0"></iframe></p>
<p>The real problem with surveillance capitalism is not that it is unethical – which I believe it is – but that it is completely legal in most countries. It is unreasonable to expect for-profit corporations to avoid profitable businesses that are legal. In my view, it is not enough to simply criticize internet companies’ ethics. If society finds the surveillance business model offensive, then the remedy is not ethical outrage, but laws and regulations that govern it, or even prevent it altogether.</p>
<p>Of course, public policy cannot be divorced from ethics. Selling human organs <a href="https://doi.org/10.1097%2FTP.0000000000001778">is banned in the U.S.</a> in part because society finds it ethically repugnant to profit from life itself. But the ban is enforced by laws, not by an ongoing ethics debate. As Chief Justice Earl Warren remarked: “<a href="https://www.brainyquote.com/quotes/earl_warren_112607">In civilized life, law floats in a sea of ethics</a>.”</p>
<h2>Regulation has benefits</h2>
<p>For decades, the information-technology industry has <a href="https://www.wired.com/story/the-case-against-elon-musk-will-chill-innovation/">successfully lobbied</a> against attempts to legislate or regulate it, arguing that “<a href="https://bigthink.com/peter-thiel-regulation-stifles-innovation">regulation stifles innovation</a>.” Of course, that assumes all innovation is good. It has become abundantly clear that this is not always the case: Some of the internet giants’ <a href="https://theconversation.com/facebook-is-killing-democracy-with-its-personality-profiling-data-93611">innovation has harmed democratic society</a> in the U.S. and around the world. </p>
<p>In fact, one purpose of regulation is to chill certain kinds of innovation – specifically, those that the public finds wrong, distasteful or unhelpful to the advancement of society. Regulation can also encourage innovation in ways society deems beneficial. There is no question that regulations on the automobile industry encouraged innovation in <a href="https://en.wikipedia.org/wiki/Automotive_safety">safety</a> and <a href="https://www.ucsusa.org/clean-vehicles/fuel-efficiency/fuel-economy-basics.html">fuel efficiency</a>.</p>
<p>Some members of Congress have proposed a number of <a href="https://www.documentcloud.org/documents/4620765-PlatformPolicyPaper.html#document/p1">ambitious plans</a> to tackle <a href="https://theconversation.com/weaponized-information-seeks-a-new-target-in-cyberspace-users-minds-100069">information warfare</a>, <a href="https://theconversation.com/fragmented-us-privacy-rules-leave-large-data-loopholes-for-facebook-and-others-94606">consumer protection</a>, <a href="https://theconversation.com/big-tech-isnt-one-big-monopoly-its-5-companies-all-in-different-businesses-92791">competition in digital technology</a> and the <a href="https://theconversation.com/artificial-intelligence-must-know-when-to-ask-for-human-help-112207">role of artificial intelligence</a> in society. But much simpler – and more widely supported – rules could make a huge difference for individual customers and society as a whole.</p>
<p>For instance, federal regulators could require software terms and licenses include plain language that’s easily understood by anyone – perhaps modeled on the longstanding “<a href="https://www.sec.gov/rules/final/33-7497.txt">plain English rule</a>” for corporate financial filings to the U.S. Securities and Exchange Commission. Laws or rules could also require companies to <a href="https://finance.yahoo.com/news/need-federal-law-protecting-consumers-data-leaks-195017782.html">disclose data breaches quickly</a>, both to officials and the public at large. That might even spark innovation as firms increase their efforts to prevent and detect network intrusions and data theft. Another relatively easy opportunity would be to regulate automated judicial decision systems, including requiring that they not be deployed before passing an independent audit showing that they are <a href="https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing">fair and unbiased</a>.</p>
<p>Those straightforward regulations could pave the way for thinking and talking about whether and how to regulate <a href="https://www.cnn.com/2018/12/17/tech/big-tech-too-big-tim-wu/index.html">the sizes of these big technology firms</a>. But rule-making need not start with the hardest problems – there’s plenty to do that most people would agree on right away.</p>
<p>The bottom line is that technology advances have been moving <a href="https://en.wikipedia.org/wiki/Moore%27s_law">very fast</a>, while public policy has lagged behind. It is time for public policy to catch up with technology. If technology is driving the future, society should do the steering.</p><img src="https://counter.theconversation.com/content/111415/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Moshe Y. Vardi is affiliated with the Association for Computing Machinery, a professional association. </span></em></p>Of course people need ethics. But the current troubles in the technology industry are not evidence of an ethics crisis; it is a public-policy crisis.Moshe Y. Vardi, Professor of Computer Science, Rice UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1008022018-08-08T10:36:30Z2018-08-08T10:36:30ZProgrammers need ethics when designing the technologies that influence people’s lives<figure><img src="https://images.theconversation.com/files/230468/original/file-20180802-136655-83kzbw.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">What does this code do – and what does it mean?</span> <span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/developing-programming-coding-technologies-website-design-613463807">REDPIXEL.PL/Shutterstock.com</a></span></figcaption></figure><p>Computing professionals are on the front lines of almost every aspect of the modern world. They’re involved in the response when hackers <a href="https://theconversation.com/equifax-breach-is-a-reminder-of-societys-larger-cybersecurity-problems-84034">steal the personal information of hundreds of thousands</a> of people from a large corporation. Their work can protect – or jeopardize – critical infrastructure like <a href="https://theconversation.com/cybersecurity-of-the-power-grid-a-growing-challenge-73102">electrical grids</a> and <a href="https://theconversation.com/connected-cars-can-lie-posing-a-new-threat-to-smart-cities-95339">transportation lines</a>. And the algorithms they write may determine who gets a job, who is <a href="https://theconversation.com/did-artificial-intelligence-deny-you-credit-73259">approved for a bank loan</a> or who gets <a href="https://theconversation.com/we-need-to-know-the-algorithms-the-government-uses-to-make-important-decisions-about-us-57869">released on bail</a>.</p>
<p>Technological professionals are the first, and last, lines of defense against the misuse of technology. Nobody else understands the systems as well, and nobody else is in a position to protect specific data elements or ensure the connections between one component and another are appropriate, safe and reliable. As the role of computing continues its decades-long expansion in society, computer scientists are central to what happens next.</p>
<p>That’s why the world’s largest organization of computer scientists and engineers, the <a href="https://www.acm.org/">Association for Computing Machinery</a>, of which I am president, has issued a <a href="https://ethics.acm.org/">new code of ethics for computing professionals</a>. And it’s why ACM is taking other steps to help technologists engage with ethical questions. </p>
<h2>Serving the public interest</h2>
<p>A code of ethics is more than just a document on paper. There are <a href="http://ethicscodescollection.org/">hundreds of examples of the core values and standards</a> to which every member of a field is held – including for <a href="https://www.agohq.org/careers/codes-procedures/">organist guilds</a> and <a href="https://oaaa.org/AboutOAAA/WhoWeAre/OAAACodeofIndustryPrinciples.aspx">outdoor advertising associations</a>. The <a href="https://www.nlm.nih.gov/hmd/greek/greek_oath.html">world’s oldest code of ethics</a> is also its most famous: the <a href="http://www.pbs.org/wgbh/nova/body/hippocratic-oath-today.html">Hippocratic oath medical doctors</a> take, promising to care responsibly for their patients.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/230599/original/file-20180803-41360-1v5jppq.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/230599/original/file-20180803-41360-1v5jppq.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/230599/original/file-20180803-41360-1v5jppq.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/230599/original/file-20180803-41360-1v5jppq.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/230599/original/file-20180803-41360-1v5jppq.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/230599/original/file-20180803-41360-1v5jppq.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/230599/original/file-20180803-41360-1v5jppq.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/230599/original/file-20180803-41360-1v5jppq.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Medical professionals are ethically bound to put their patients’ needs and interests first.</span>
<span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/doctor-explaining-diagnosis-her-female-patient-112862074">Alexander Raths/Shutterstock.com</a></span>
</figcaption>
</figure>
<p>I suspect that one reason for the Hippocratic oath’s fame is how personal medical treatment can be, with people’s lives hanging in the balance. It’s important for patients to feel confident their medical caregivers have their interests firmly in mind.</p>
<p>Technology is, in many ways, similarly personal. In modern society computers, software and digital data are everywhere. They’re visible in laptops and smartphones, social media and video conferencing, but they’re also hidden inside the devices that help manage people’s daily lives, from thermostats to timers on coffee makers. New developments in autonomous vehicles, sensor networks and machine learning mean computing will play an even more central role in everyday life in coming years.</p>
<h2>A changing profession</h2>
<p>As the creators of these technologies, computing professionals have helped usher in the new and richly vibrant rhythms of modern life. But as computers become increasingly interwoven into the fabric of life, we in the profession must personally recommit to serving society through ethical conduct. </p>
<p>ACM’s last code of ethics was adopted in 1992, when many people saw computing work as purely technical. The internet was in its infancy and people were just beginning to understand the value of being able to aggregate and distribute information widely. It would still be years before artificial intelligence and machine learning had applications outside research labs.</p>
<p>Today, technologists’ work can affect the lives and livelihoods of people in ways that may be unintended, even unpredictable. I’m not an ethicist by training, but it’s clear to me that anyone in today’s computing field can benefit from guidance on ethical thinking and behavior.</p>
<h2>Updates to the code</h2>
<p>ACM’s new ethics code has several important differences from the 1992 version. One has to do with unintended consequences. In the 1970s and 1980s, technologists built software or systems whose effects were limited to specific locations or circumstances. But over the past two decades, it has become clear that as technologies evolve, they can be applied in contexts very different from the original intent. </p>
<p>For example, computer vision research has led to ways of <a href="https://arxiv.org/pdf/1612.00523v1.pdf">creating 3D models of objects</a> – and people – based on 2D images, but it was never intended to be used in conjunction with <a href="https://maliciousaireport.com/">machine learning in surveillance or drone applications</a>. The old ethics code asked software developers to be sure a program would actually do what they said it would. The new version also exhorts developers to explicitly evaluate their work to identify potentially harmful side effects or potential for misuse.</p>
<p>Another example has to do with human interaction. In 1992, most software was being developed by trained programmers to run operating systems, databases and other basic computing functions. Today, many applications rely on user interfaces to <a href="https://theconversation.com/how-universal-design-can-help-every-voter-cast-a-ballot-54373">interact directly with a potentially vast number of people</a>. The updated code of ethics includes more detailed considerations about the needs and sensitivities of very diverse potential users – including discussing discrimination, exclusion and harassment.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/230601/original/file-20180803-41320-1u54hxg.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/230601/original/file-20180803-41320-1u54hxg.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/230601/original/file-20180803-41320-1u54hxg.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/230601/original/file-20180803-41320-1u54hxg.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/230601/original/file-20180803-41320-1u54hxg.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/230601/original/file-20180803-41320-1u54hxg.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/230601/original/file-20180803-41320-1u54hxg.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/230601/original/file-20180803-41320-1u54hxg.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">When computers help bankers consider loan applications, the algorithms need to be treating customers ethically.</span>
<span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/serious-investment-broker-financial-advisor-bank-1075401797">fizkes/Shutterstock.com</a></span>
</figcaption>
</figure>
<p>More and more software is being developed to run with little or no input or human understanding, producing analytical results to guide decision-making, such as when to approve bank loans. The outputs can have completely unintended social effects, skewed against whole classes of people – like recent cases where data-mining predictions of who would default on a loan showed <a href="https://www.mckinsey.com/business-functions/risk/our-insights/controlling-machine-learning-algorithms-and-their-biases">biases against people who seek longer-term loans</a> or <a href="https://sloanreview.mit.edu/article/the-risk-of-machine-learning-bias-and-how-to-prevent-it/">live in particular areas</a>. There are also dangers of what are called “false positives,” when a computer links two things that shouldn’t be connected – as when facial recognition software recently <a href="https://www.nytimes.com/2018/07/26/technology/amazon-aclu-facial-recognition-congress.html">matched members of Congress to criminals’ mug shots</a>. The revised code exhorts technologists to take special care to avoid creating systems with the potential to oppress or disenfranchise whole groups of people.</p>
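<p>The kind of evaluation the revised code asks for can start very simply. The sketch below is a minimal, entirely hypothetical example of checking whether a decision system produces more false positives for one group of people than for another; the groups, predictions and outcomes are invented, and a genuine audit would be far more thorough.</p>
<pre><code>from collections import defaultdict

# Entirely hypothetical records: (group, model_flagged, actually_defaulted).
# A "false positive" here is a person the model flagged as a likely default
# who in fact did not default.
records = [
    ("group_1", True, False), ("group_1", False, False),
    ("group_1", True, True),  ("group_1", False, False),
    ("group_2", True, False), ("group_2", True, False),
    ("group_2", True, True),  ("group_2", False, False),
]

counts = defaultdict(lambda: {"false_pos": 0, "non_defaulters": 0})
for group, flagged, defaulted in records:
    if not defaulted:
        counts[group]["non_defaulters"] += 1
        if flagged:
            counts[group]["false_pos"] += 1

# A large gap between the groups' rates is a warning sign the system is skewed.
for group, c in counts.items():
    print(group, round(c["false_pos"] / c["non_defaulters"], 2))
</code></pre>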
<h2>Living ethics in technology</h2>
<p>The code was revised over the course of more than two years, including ACM members and people outside the organization and even outside the computing and technological professions. All these perspectives made the code better. For example, a government-employed weapons designer asked whether that job inherently required violating the code; the wording was changed to clarify that systems must be “consistent with the public good.”</p>
<p>Now that the code is out, there’s more to do. ACM has created a <a href="https://ethics.acm.org/code-of-ethics/using-the-code/">repository for case studies</a> showing how ethical thinking and the guidelines can be applied in a variety of real-world situations. The group’s <a href="https://ethics.acm.org/integrity-project/ask-an-ethicist/">“Ask An Ethicist” blog and video series</a> invites the public to submit scenarios or quandaries as they arise in practice. Work is also underway to develop teaching modules so the concepts can be integrated into computing education from primary school through university.</p>
<p>Feedback has been overwhelmingly positive. My personal favorite was the comment from a young programmer after reading the code: “Now I know what to tell my boss if he asks me to do something like that again.”</p>
<p>The ACM Code of Ethics and Professional Conduct begins with the statement, “Computing professionals’ actions change the world.” We don’t know if our code will last as long as the Hippocratic oath. But it highlights how important it is that the global computing community understands the impact our work has – and takes seriously our obligation to the public good.</p><img src="https://counter.theconversation.com/content/100802/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Cherri M. Pancake receives research funding from the US Federal Government. </span></em></p>Technological professionals are the first, and last, lines of defense against the misuse of technology.Cherri M. Pancake, Professor Emeritus of Electrical Engineering & Computer Science, Oregon State UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1010462018-08-03T16:06:29Z2018-08-03T16:06:29ZGoogle’s censored Chinese search engine: a catalogue of ethical violations?<figure><img src="https://images.theconversation.com/files/230621/original/file-20180803-41351-lgqdh0.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">
</span> <span class="attribution"><a class="source" href="https://www.shutterstock.com/image-vector/censorship-on-internet-restricted-limited-searching-730052776?src=a3mTqWZJRjnw0c08bbG1JA-1-99">Shutterstock</a></span></figcaption></figure><p>The Great Firewall of China is the <a href="https://www.theguardian.com/news/2018/jun/29/the-great-firewall-of-china-xi-jinpings-internet-shutdown">largest-scale internet censorship</a> operation in the world. The Chinese state says the firewall is there to promote societal harmony within a population of more than a billion people. It considers the internet in China part of its <a href="https://www.washingtonpost.com/world/asia_pacific/chinas-scary-lesson-to-the-world-censoring-the-internet-works/2016/05/23/413afe78-fff3-11e5-8bb1-f124a43f84dc_story.html">sovereign territory</a>. </p>
<p>Eight years ago, Google <a href="https://www.independent.co.uk/news/world/asia/google-set-to-pull-out-of-china-over-censorship-1925052.html">withdrew from China</a>, pulling its search and other services out because of the country’s limits on freedom of speech. But it is now planning to relaunch a heavily censored version of its services in China, according to a whistleblower who spoke to online news website <a href="https://theintercept.com/2018/08/01/google-china-search-engine-censorship/">The Intercept</a>. </p>
<p>This project, named Dragonfly, will encompass a new, heavily censored version of Google’s search services, including mobile apps, that will be run in partnership with a local company in China. <a href="https://informationisbeautiful.net/visualizations/what-does-china-censor-online/">Censorship in China includes</a> returning no results for searches that depict Chinese police or military brutality (such as the Tiananmen Square massacre), pro-democracy sites, sites linked with the Dalai Lama, and anything related to Taiwanese or Tibetan independence.</p>
<p>The whistleblower who spoke to The Intercept <a href="https://theintercept.com/2018/08/01/google-china-search-engine-censorship/">cited ethical concerns</a> over this project – and rightly so. There are several ethical dilemmas with Google’s move back into China. Should large Western companies such as Google give up ethical values to make money in China? Is it okay to design technology to assist the Chinese government in restricting the human rights of their citizens? Where does “respect for Chinese values” turn into “assistance in oppressing Chinese people through censorship”? Is Google being hypocritical by making money on the freedom of information available in most societies but then selling it out when they go into China? </p>
<p>The largest professional organisation for computing, the Association for Computing Machinery, recently updated its <a href="https://www.acm.org/code-of-ethics">code of ethics</a>, which includes some specific provisions that we can use to think through these issues. Many Google employees are members of the ACM, meaning they have agreed to abide by this code. Some of these employees may be working on Project Dragonfly, so they will need to evaluate their work in terms of the code. An initial analysis using the code (and this complex case requires more analysis than space here allows) offers three insights.</p>
<p>First, the primary goal of technology development should be to benefit the public good, “to contribute to society and to human well-being”, “promoting human rights and protecting each individual’s right to autonomy” (principle 1.1). Taking part in censorship at the Chinese state’s behest and censoring the topics mentioned above would appear to be inconsistent with this principle.</p>
<p>Individual freedom is heavily curtailed in China, and this is reflected in the censorship of the internet there. But, despite what the Chinese government argues, promoting social harmony doesn’t require the restriction of freedom or violation of human rights.</p>
<figure class="align-center ">
<img alt="" src="https://images.theconversation.com/files/230582/original/file-20180803-41369-1cdqpnb.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/230582/original/file-20180803-41369-1cdqpnb.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/230582/original/file-20180803-41369-1cdqpnb.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/230582/original/file-20180803-41369-1cdqpnb.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/230582/original/file-20180803-41369-1cdqpnb.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/230582/original/file-20180803-41369-1cdqpnb.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/230582/original/file-20180803-41369-1cdqpnb.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">Digital oppression.</span>
<span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/woman-work-night-203069269?src=YdoY9eubkiHEsSFLvNJp2g-1-3">Shutterstock</a></span>
</figcaption>
</figure>
<p>Second, there are specific provisions within the code against assisting in the oppression of a population. “Computing professionals should take action to avoid creating systems or technologies that disenfranchise or oppress people” (principle 1.4). In developing technology to censor sites related to democracy and Chinese-committed atrocities, Google employees would arguably be violating this as well.</p>
<p>But surely this is a case for respecting “local, regional, national, and international laws and regulations” (principle 2.3)? The code of ethics expects computing professionals to challenge unethical rules – and break them if a rule “has an inadequate moral basis or causes recognisable harm”.</p>
<p>It’s also one thing to respect local customs and laws, and another to actively implement them, as Google will be doing. By collaborating, Google, as a large Western company, stands accused of giving credence to these oppressive laws, providing the Chinese state with political weight and propaganda for its policies.</p>
<h2>Betraying its values?</h2>
<p>It would be highly hypocritical of Google to take advantage of the values that have allowed it to grow to the behemoth it is today in much of the world – democracy, freedom of speech, personal autonomy – and then drop these when moving into the Chinese market. Instead of being a values-driven company, it seems from this move that it is purely profit-driven.</p>
<p>So what should Google do? One way of responsibly dealing with this would be to open up Project Dragonfly to input from the rest of the company, not just the hundred or so working directly on it. Let the Google employee base and other non-shareholder stakeholders decide where the red lines for Google’s values should be.</p>
<p><a href="https://www.sciencedirect.com/science/article/pii/S2340943615000791">Research has indicated</a> that ethical companies are more profitable, <a href="https://www.fastcompany.com/1679578/what-do%20es-it-take-to-be-one-of-the-worlds-most-ethical-companies">retaining employees</a> who are proud to work for the company, and <a href="https://link.springer.com/article/10.1023/A:1023238525433">earning respect</a> and loyalty from the public. Standing up and showing China the value of democratic participation in company value identification will likely earn Google more respect both home and abroad.</p><img src="https://counter.theconversation.com/content/101046/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Catherine Flick receives funding from the European Union Horizon 2020 Framework Programme under grant agreements 710543 and 787991. She is affiliated with the Association of Computing Machinery as a member of its Committee on Professional Ethics and a member of the steering committee of the Code of Ethics update taskforce. </span></em></p>Google’s secret plan to comply with Chinese censorship laws betrays the values that helped create the tech giant.Catherine Flick, Reader in Computing & Social Responsibility, De Montfort UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/910402018-04-03T10:44:49Z2018-04-03T10:44:49ZIt’s not my fault, my brain implant made me do it<figure><img src="https://images.theconversation.com/files/212717/original/file-20180329-189810-cbug78.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Probes that can transmit electricity inside the skull raise questions about personal autonomy and responsibility.</span> <span class="attribution"><a class="source" href="https://commons.wikimedia.org/wiki/File:Tiefe_Hirnstimulation_-_Sonden_RoeSchaedel_seitl.jpg">Hellerhoff</a>, <a class="license" href="http://creativecommons.org/licenses/by-sa/4.0/">CC BY-SA</a></span></figcaption></figure><p><a href="https://www.theguardian.com/music/2014/may/27/johnny-cash-deep-brain-stimulation-urge-listen">Mr. B loves Johnny Cash</a>, except when he doesn’t. Mr. X has <a href="http://www.sciencemag.org/news/2014/04/scienceshot-deep-brain-stimulation-triggers-hallucinations">watched his doctors morph into Italian chefs</a> right before his eyes.</p>
<p>The link between the two? Both Mr. B and Mr. X received deep brain stimulation (<a href="https://doi.org/10.1038/507290a">DBS</a>), a procedure involving an implant that sends electric impulses to specific targets in the brain to alter neural activity. While brain implants aim to <a href="https://doi.org/10.1038/nature.2017.23031">treat neural dysfunction</a>, cases like these demonstrate that they may influence an individual’s perception of the world and behavior in undesired ways. </p>
<p>Mr. B received DBS as treatment for his severe obsessive compulsive disorder. He’d never been a music lover until, <a href="https://doi.org/10.3389/fnbeh.2014.00152">under DBS</a>, he developed a distinct and entirely new music preference for Johnny Cash. When the device was turned off, the preference disappeared. </p>
<p>Mr. X, an epilepsy patient, received DBS as part of an investigation to locate the origin of his seizures. During DBS, he hallucinated that his doctors had become chefs wearing aprons; when the stimulation ended, the scene faded. </p>
<p>In both of these real-world cases, DBS clearly triggered the changed perception. And that introduces a host of thorny questions. As neurotechnologies like this become more common, the behaviors of people with DBS and other kinds of brain implants might challenge current societal views on responsibility.</p>
<p>Lawyers, philosophers and ethicists have labored to define the conditions under which individuals are to be judged legally and morally responsible for their actions. The brain is generally regarded as the center of control, rational thinking and emotion – it orchestrates people’s actions and behaviors. As such, the brain is key to agency, autonomy and responsibility. </p>
<p>Where does responsibility lie if a person acts under the influence of their brain implant? As <a href="http://www.bioethics.msu.edu/73-people/300-cabrera">a neuroethicist</a> and <a href="http://www.law.msu.edu/faculty_staff/profile.php?prof=723">a legal expert</a>, we suggest that society should start grappling with these questions now, before they must be decided in a court of law. </p>
<h2>Who’s to blame if something goes wrong?</h2>
<figure class="align-right zoomable">
<a href="https://images.theconversation.com/files/212719/original/file-20180329-189824-17tsjlv.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/212719/original/file-20180329-189824-17tsjlv.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=237&fit=clip" srcset="https://images.theconversation.com/files/212719/original/file-20180329-189824-17tsjlv.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=900&fit=crop&dpr=1 600w, https://images.theconversation.com/files/212719/original/file-20180329-189824-17tsjlv.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=900&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/212719/original/file-20180329-189824-17tsjlv.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=900&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/212719/original/file-20180329-189824-17tsjlv.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=1131&fit=crop&dpr=1 754w, https://images.theconversation.com/files/212719/original/file-20180329-189824-17tsjlv.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=1131&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/212719/original/file-20180329-189824-17tsjlv.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=1131&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">An uncontrollable urge to aim right for them?</span>
<span class="attribution"><a class="source" href="https://www.flickr.com/photos/fabiovenni/2065036619">Fabio Venni</a>, <a class="license" href="http://creativecommons.org/licenses/by-sa/4.0/">CC BY-SA</a></span>
</figcaption>
</figure>
<p>Imagine that Ms. Q was driving one day and had a sudden urge to swerve into a crowded bus stop. As a result, she ended up injuring several people and damaging the bus stop. During their investigation, police found that Ms. Q had a brain implant to treat her Parkinson’s disease. This implant malfunctioned at the time the urge occurred. Furthermore, Ms. Q claims that the bus stop was not there when she acted on the impulse to swerve.</p>
<p>As brain stimulating technology advances, a hypothetical case like Ms. Q’s raises questions about moral and legal responsibility. Is Ms. Q solely responsible for her actions? Can we attribute any blame to the device? What about to the engineers who designed it or the manufacturer? The neurosurgeon who implanted it or the neurologist who programmed the device parameters?</p>
<p>Historically, moral and legal responsibility have largely focused on the autonomous individual – that is, someone with the capacity to deliberate or act on the basis of one’s own desires and plans, free of distorting external forces. However, with modern technological advances, many hands may be involved in the operation of these brain implants, <a href="https://doi.org/10.1038/nature.2017.23031">including artificial intelligence programs directly influencing the brain</a>. </p>
<p>This external influence raises questions about the degree to which someone with an implant can control their actions and behaviors. If brain implants influence someone’s decisions and behaviors, do they undermine the person’s autonomy? If autonomy is undermined, can we attribute responsibility to the individual? </p>
<p>Society needs to discuss what happens when science and technology start challenging those long-held assumptions.</p>
<h2>So many shades of gray</h2>
<p>The law distinguishes between different kinds of responsibility, such as causal responsibility and liability responsibility.</p>
<p>Using this distinction, one may say that the implant is causally responsible, but that Ms. Q still bears liability for her actions. One might be tempted to split liability in this way because Ms. Q still acted on the urge – especially if she knew the risk of brain implant side effects. Or perhaps Ms. Q still bears primary responsibility, but the influence of the implant should mitigate her punishment.</p>
<p>These are important gradations to reckon with, because the way we as a society divide liability may force patients to choose between potential criminal liability and treating a debilitating brain condition.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/212718/original/file-20180329-189830-pslzsp.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/212718/original/file-20180329-189830-pslzsp.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/212718/original/file-20180329-189830-pslzsp.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=450&fit=crop&dpr=1 600w, https://images.theconversation.com/files/212718/original/file-20180329-189830-pslzsp.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=450&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/212718/original/file-20180329-189830-pslzsp.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=450&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/212718/original/file-20180329-189830-pslzsp.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=566&fit=crop&dpr=1 754w, https://images.theconversation.com/files/212718/original/file-20180329-189830-pslzsp.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=566&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/212718/original/file-20180329-189830-pslzsp.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=566&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Would the surgeon bear some responsibility? Or the device manufacturer?</span>
<span class="attribution"><a class="source" href="https://en.wikipedia.org/wiki/File:Peds_DBS.jpg">Allurimd (talk)</a>, <a class="license" href="http://creativecommons.org/licenses/by-sa/4.0/">CC BY-SA</a></span>
</figcaption>
</figure>
<p>Questions also arise about product liability for companies, professional responsibility issues for researchers and technology developers, and medical malpractice for the health professionals who placed and programmed the device. Even if multiple actors share responsibility, the question of how to distribute that responsibility among them remains. </p>
<p>A further complication is the potential for criminals to maliciously interfere with these implants. Newer implants may have <a href="https://www.scientificamerican.com/article/wireless-brain-implant-allows-ldquo-locked-in-rdquo-woman-to-communicate/">wireless connectivity</a>. Hackers could attack such implants to use Ms. Q for their own (possibly nefarious) purposes, posing more challenges to questions of responsibility. </p>
<p>Insulin pumps and implantable cardiac defibrillators have already been hacked in real life. While there have not been any reports of malicious interference with brain implants, their increasing adoption brings greater opportunity for tech-savvy individuals <a href="https://doi.org/10.1016/j.wneu.2016.05.010">to potentially use the technology for evil</a>.</p>
<p>Considering the impact brain implants can have on moral and legal notions of responsibility, it’s time to discuss whether and when brain interventions should excuse people from responsibility for their actions. New technologies often require some modification or extension of existing legal mechanisms. For example, assisted reproductive technologies have required society to <a href="https://www.uscis.gov/news/uscis-expands-definition-mother-and-parent-include-gestational-mothers-using-assisted-reproductive-technology-art">redefine what it means to be a “parent.”</a></p>
<p>It’s possible that soon we will start hearing in courtrooms: “It’s not my fault. My brain implant made me do it.”</p><img src="https://counter.theconversation.com/content/91040/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>The authors do not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.</span></em></p>Where does responsibility lie if a person acts under the influence of their brain implant? As neurotechnologies advance, a neuroethicist and a legal expert write that now’s the time to hash it out.Laura Y. Cabrera, Assistant Professor of Neuroethics, Michigan State UniversityJennifer Carter-Johnson, Associate Professor of Law, Michigan State UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/923312018-03-27T10:41:23Z2018-03-27T10:41:23ZSelf-driving cars can’t be perfectly safe – what’s good enough? 3 questions answered<figure><img src="https://images.theconversation.com/files/212038/original/file-20180326-188622-1f09cip.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Is it going to stop?</span> <span class="attribution"><a class="source" href="https://www.shutterstock.com/image-vector/futuristic-self-driving-car-waiting-when-489690373">marat marihal/Shutterstock.com</a></span></figcaption></figure><p><em>Editor’s note: On March 19, an Uber self-driving vehicle being tested in Arizona <a href="https://www.nytimes.com/2018/03/19/technology/uber-driverless-fatality.html">struck and killed Elaine Herzberg</a>, who was walking her bike across the street. This is the first time a self-driving vehicle has killed a pedestrian,
and it raises questions about the ethics of developing and testing
emerging technologies. Some answers will need to wait until the full investigation is complete. Even so, <a href="https://scholar.google.com/citations?user=N_0jmg8AAAAJ&hl=en">Nicholas Evans</a>, a philosophy professor at the University of Massachusetts-Lowell who studies the ethics of autonomous vehicles’ decision-making processes, says some questions can be answered now.</em></p>
<h2>1. Could a human driver have avoided this crash?</h2>
<p>Probably so. It’s easy to think that most people would have trouble seeing a pedestrian crossing a road at night. But what’s already clear about this particular event is that the road <a href="https://arstechnica.com/cars/2018/03/police-chief-said-uber-victim-came-from-the-shadows-dont-believe-it/">was not as dark as the local police chief initially claimed</a>. </p>
<p>The chief also <a href="https://www.bloomberg.com/news/articles/2018-03-20/video-shows-woman-stepped-suddenly-in-front-of-self-driving-uber">originally said</a> Herzberg suddenly stepped out into traffic in front of the car. However, the <a href="https://arstechnica.com/tech-policy/2018/03/video-uber-driver-looks-down-for-seconds-before-fatal-crash/">disturbing and alarming video footage</a> released by Uber and local authorities shows this isn’t true: Rather, Herzberg had already walked across one lane of the two-lane road, and was in the process of continuing the road-crossing when the Uber hit her. (The safety driver also didn’t notice the pedestrian, but video suggests <a href="https://arstechnica.com/tech-policy/2018/03/video-uber-driver-looks-down-for-seconds-before-fatal-crash/">the driver was looking down</a>, not through the windshield.)</p>
<p>A normal human driver, someone actively paying attention to the road, would likely have had little problem avoiding Herzberg: With headlights on while <a href="https://www.nytimes.com/2018/03/19/technology/uber-driverless-fatality.html">traveling 40 mph</a> on an actually dark road, it’s not difficult to avoid obstacles on a straightaway when they’re <a href="https://www.wsj.com/video/experts-break-down-the-self-driving-uber-crash/1E24A9B7-0B7B-4FA6-96BD-AD1889B921C5.html">100 or more</a> <a href="http://www.kylesconverter.com/speed-or-velocity/miles-per-hour-to-feet-per-second">feet ahead</a>, including people or wildlife trying to get across. This crash was avoidable.</p>
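<p>As a rough sketch of the figures cited above (the 40 mph speed and a sight distance of 100 or more feet), the snippet below converts that speed into feet per second to show approximately how much reaction time an attentive driver would have had; the numbers are illustrative only:</p>
<pre><code># Back-of-the-envelope check (illustrative figures only): how much time does a
# vehicle travelling at 40 mph have before reaching an obstacle 100 feet ahead?

FEET_PER_MILE = 5280
SECONDS_PER_HOUR = 3600

def time_to_reach(distance_ft: float, speed_mph: float) -> float:
    """Seconds until a vehicle moving at speed_mph covers distance_ft."""
    speed_fps = speed_mph * FEET_PER_MILE / SECONDS_PER_HOUR  # 40 mph is about 58.7 ft/s
    return distance_ft / speed_fps

print(f"{time_to_reach(100, 40):.1f} seconds to cover 100 ft at 40 mph")  # about 1.7 s
</code></pre>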
<p>One tragic implication of that fact is clear: A self-driving car killed a person. But there is a public significance too. At least this one Uber car drove itself on populated streets while unable to perform the crucial safety task of detecting a pedestrian, and braking or steering so as not to hit the person.</p>
<p>In the wake of Herzberg’s death, the <a href="https://www.nytimes.com/2018/03/23/technology/uber-self-driving-cars-arizona.html">safety and reliability of Uber’s self-driving cars</a> has come into question. It’s also worth examining the ethics: Just as Uber has been criticized for <a href="https://www.independent.co.uk/voices/uber-drivers-employment-tribunal-never-existed-a7385691.html">exploiting its drivers for profits</a>, the company may arguably be <a href="https://www.recode.net/2018/3/24/17159936/uber-self-driving-arizona-crash-report">exploiting the driving, riding and walking public</a> for its own research purposes.</p>
<h2>2. Even if this crash was avoidable, are self-driving cars still generally safer than human-driven cars?</h2>
<p>Not yet. The death toll on U.S. roads is indeed alarming: approximately <a href="https://www.cdc.gov/vitalsigns/motor-vehicle-safety/index.html">32,000 deaths per year</a>. The <a href="https://www.nytimes.com/2018/03/19/technology/uber-driverless-fatality.html">federal estimate</a> is that 1.18 people die per 100 million road miles driven by humans. Uber’s cars, however, had <a href="https://www.nytimes.com/2018/03/23/technology/uber-self-driving-cars-arizona.html">driven only 3 million miles</a> before their first fatality. It’s not fair to do statistical analysis from a single data point, but it’s not a great start: Companies should be aiming to make their robots at least as good as human drivers, if not yet fulfilling the <a href="https://www.telegraph.co.uk/technology/2016/04/25/elon-musk-teslas-autopilot-makes-accidents-50pc-less-likely/">promise of being significantly better</a>.</p>
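<p>To put those figures side by side (a rough sketch using only the numbers cited above, and with the caveat that a single fatality is far too little data for firm conclusions), one can compare how many fatalities the human-driver rate would predict over Uber’s test mileage:</p>
<pre><code># Rough comparison of Uber's early record with the human-driver baseline cited
# above. A single fatality is statistically fragile evidence; this sketch only
# shows why that one data point is "not a great start".

HUMAN_DEATHS_PER_MILE = 1.18 / 100_000_000  # federal estimate: 1.18 deaths per 100M miles
UBER_TEST_MILES = 3_000_000                 # miles Uber's cars had driven before the fatality

expected_at_human_rate = HUMAN_DEATHS_PER_MILE * UBER_TEST_MILES
print(f"Fatalities expected at the human rate over 3 million miles: {expected_at_human_rate:.3f}")
print("Fatalities observed in Uber's test miles: 1")  # roughly 0.035 expected vs. 1 observed
</code></pre>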
<p>Even if Uber’s autonomous cars were better drivers, the numbers don’t tell the whole story. Of the 32,000 people who die on U.S. roads each year, <a href="https://www.cdc.gov/motorvehiclesafety/pedestrian_safety/index.html">5,000 to 6,000 are pedestrians</a>. When aiming for safety improvements, should the goal be to reduce overall deaths – or to put special emphasis on protecting the most vulnerable victims? It’s hypothetically possible to imagine a self-driving car system that cuts overall road deaths in half – to 16,000 – while doubling the pedestrian death rate – to 12,000. Overall, that might seem far better than human drivers – but not from the perspective of people walking along the nation’s roads!</p>
<p>My research group has been working to develop ethical decision frameworks for self-driving cars. One potential approach is called “<a href="https://doi.org/10.1007/s10676-017-9419-3">maximin</a>.” Most fundamentally, that way of thinking suggests people designing autonomous vehicles – both physically and in terms of software that runs them – should identify the worst possible outcomes of any decision, even if rare, and work to minimize their effects. Anyone who has been unfortunate enough to be hit by a car both as a pedestrian and while in a vehicle knows that being on foot is far worse. Under maximin, people should design and test cars, among other things, to prioritize pedestrian safety.</p>
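<p>To make the maximin rule concrete, here is a minimal sketch of how such a choice could be computed; the design options and outcome scores are entirely hypothetical and are not drawn from the cited research:</p>
<pre><code># Minimal sketch of a maximin decision rule: for each design option, find its
# worst possible outcome, then pick the option whose worst case is least bad.
# The options and utility scores below are hypothetical placeholders.

hypothetical_options = {
    # design option          : utilities of its possible outcomes (higher is better)
    "favor occupant safety"  : [9, 7, -10],  # worst case: severe harm to a pedestrian
    "favor pedestrian safety": [8, 6, -4],   # worst case: moderate harm to occupants
}

def maximin_choice(options: dict[str, list[float]]) -> str:
    """Return the option whose worst-case outcome has the highest utility."""
    return max(options, key=lambda name: min(options[name]))

print(maximin_choice(hypothetical_options))  # prints "favor pedestrian safety"
</code></pre>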
<p>Maximin probably isn’t the best possible – and certainly isn’t the only – moral decision theory to use. In some cases, the worst outcome could be avoided if a car never pulls out of its driveway! But maximin provides food for thought about how to integrate self-driving cars into daily life. Even if autonomous cars are always evaluated as safer than humans, what counts as “safer” matters very much.</p>
<h2>3. How much better should self-driving cars be than humans before the public accepts them?</h2>
<p>Even if people could agree on the ways in which self-driving cars should be safer than humans, it’s not clear that people should be okay with self-driving cars when they first become only barely better than humans. If anything, that’s when <a href="https://theconversation.com/before-hitting-the-road-self-driving-cars-should-have-to-pass-a-driving-test-90364">tests on city streets</a> should begin.</p>
<p>Consider a new drug developed by a pharmaceutical company. The company can’t market it as soon as it’s proven not to kill people who take it. Rather, the drug has to go through a <a href="https://www.fda.gov/Drugs/DevelopmentApprovalProcess/HowDrugsareDevelopedandApproved/">series of tests proving it is effective</a> at treating the symptom or condition it’s intended to. Increasingly, drug tests seek to prove a medication is <a href="https://dx.doi.org/10.3345%2Fkjp.2012.55.11.403">significantly better</a> than what’s already on the market. People should expect the same with self-driving cars before companies put the public at risk.</p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/O1qw2pqkqR8?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
<figcaption><span class="caption">How should self-driving cars make decisions?</span></figcaption>
</figure>
<p>The crash in Arizona wasn’t just a tragedy. The failure to see a pedestrian in low light was a basic, avoidable error for a self-driving car. Autonomous vehicles should be able to do much more than that before they’re allowed to be driven, even in tests, on the open road. Just like pharmaceutical companies, massive technology companies should be required to thoroughly – and ethically – test their systems before their self-driving cars serve or endanger the public.</p><img src="https://counter.theconversation.com/content/92331/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Nicholas G. Evans receives funding from the National Science Foundation for Award 1734521, "Ethical Algorithms in Autonomous Vehicles."</span></em></p>In the wake of a self-driving Uber car killing a pedestrian in Arizona, an ethicist examines the state of autonomous vehicle development.Nicholas G. Evans, Assistant Professor of Philosophy, UMass LowellLicensed as Creative Commons – attribution, no derivatives.