AI ethics – The Conversation

AI could transform ethics committees (2024-02-29)

<figure><img src="https://images.theconversation.com/files/578337/original/file-20240227-22-9isx52.jpg?ixlib=rb-1.1.0&rect=0%2C0%2C4888%2C3261&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">
</span> <span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/business-woman-explaining-about-his-profile-1391413568">Freedomz / Shutterstock</a></span></figcaption></figure><p>The role of an <a href="https://dictionary.cambridge.org/example/english/ethical-committee">ethics committee</a> is to give advice on what should be done in often contentious situations. Such committees are used in medicine, research, business, law and a variety of other areas. </p>
<p>The word “ethics” relates to the <a href="https://www.britannica.com/topic/ethics-philosophy">moral principles governing human behaviour</a>. The task for ethics committees can be quite tricky given the wide range of moral, political, philosophical, cultural and religious views. Even so, good ethical arguments make up the foundation of society, as they are the basis of the laws and agreements that we use to get on with each other.</p>
<p>Given the importance of ethics, any tool that can help us reach better ethical decisions deserves to be explored. Over the last couple of years, there has been growing recognition that artificial intelligence (AI) is a tool that can be used to analyse complex data. So it makes sense to ask whether AI can be used to help make better ethics decisions.</p>
<p>As AI is a <a href="https://www.wired.com/insights/2014/09/artificial-intelligence-algorithms-2/">class of computer algorithm</a>, it relies on data. Ethics committees also rely on data, so one important question is whether AI is able to load, and then meaningfully analyse, the types of data that ethics committees regularly consider. </p>
<p>Here, context becomes very important. For instance, a <a href="https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5432947/">hospital ethics committee</a> might make decisions based upon experience with patients, input from lawyers, and a general understanding of common cultural or societal norms and opinions. It is currently difficult to see how such data could be captured and fed into an AI algorithm.</p>
<p>However, I chair a very specific type of ethics committee, called a research ethics committee (REC), whose <a href="https://www.hra.nhs.uk/about-us/committees-and-services/res-and-recs/research-ethics-service/">role</a> is to review scientific research protocols. The aim is to promote high-quality research while protecting the rights, safety, dignity and wellbeing of the people who take part in the research.</p>
<figure class="align-center ">
<img alt="Laboratory setting." src="https://images.theconversation.com/files/578851/original/file-20240229-16-3wdwsx.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/578851/original/file-20240229-16-3wdwsx.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=338&fit=crop&dpr=1 600w, https://images.theconversation.com/files/578851/original/file-20240229-16-3wdwsx.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=338&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/578851/original/file-20240229-16-3wdwsx.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=338&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/578851/original/file-20240229-16-3wdwsx.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=424&fit=crop&dpr=1 754w, https://images.theconversation.com/files/578851/original/file-20240229-16-3wdwsx.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=424&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/578851/original/file-20240229-16-3wdwsx.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=424&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">Ethics committees review scientific research involving human subjects.</span>
<span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/modern-laboratory-senior-female-scientist-has-1073659442">Gorodenkoff / Shutterstock</a></span>
</figcaption>
</figure>
<p>The majority of our activity involves reading complex documents to determine what the relevant ethics issues may be, and then making suggestions to researchers on how they can improve their proposed protocols, or procedures. It is in this area that AI could be very helpful.</p>
<p><a href="https://www.hra.nhs.uk/planning-and-improving-research/research-planning/protocol/">Research protocols</a>, especially those of <a href="https://www.nhs.uk/conditions/clinical-trials/">clinical trials</a>, often run to hundreds if not thousands of pages. The information is dense and complex. Although protocols are accompanied by ethics application forms that seek to present information on key ethics issues in a way that REC members can easily find, the task can still take a very long time. </p>
<p>After studying the documents, REC members weigh up what they have read, compare it with guidance on good ethics practice, consider input from patient and participant involvement groups, and then come to a decision as to whether the research can proceed as planned. The most common outcome is that more information and a few modifications are needed before the research can go ahead.</p>
<h2>A role for machines?</h2>
<p>While attempts have been made to standardise REC membership and experience, researchers often complain that the process can take a long time and is <a href="https://bmcmedethics.biomedcentral.com/articles/10.1186/s12910-019-0434-2">inconsistent</a> between different committees.</p>
<p>AI seems <a href="https://doi.org/10.1136/jme-2023-109767">ideally placed</a> to speed up the process and assist in ironing out some of the inconsistencies. Not only could the AI read such long documents very quickly, but it could also be trained on a large number of previous protocols and decisions.</p>
<p>It could very rapidly spot any ethics issues and suggest solutions for the research teams to implement. This would vastly speed up the ethics review process and probably make it far more consistent. But is it ethically acceptable to use AI in this way?</p>
<p>While AI could clearly conduct many of the REC tasks, it could also be argued that these reviewing tasks are not actually the same as making an ethics decision. At the end of the review process, RECs are asked to decide whether a protocol, with the updates, should <a href="https://www.hra.nhs.uk/approvals-amendments/what-approvals-do-i-need/research-ethics-committee-review/applying-research-ethics-committee/">receive a favourable or unfavourable opinion</a>. </p>
<p>As a consequence, while the advantage of AI is clear in speeding up the process, this isn’t quite the same as making the final decision.</p>
<h2>A human in the loop</h2>
<p>It may be possible for AI to be extremely effective in assessing a situation and recommending a course of action that is consistent with previous “ethical” behaviour. However, the decision to actually adopt a course of action, and then go on to behave in that way, is fundamentally human. </p>
<p>In the example of research ethics, the AI might well recommend a course of action, but actually deciding on the action is a human decision. The system could be designed to instruct ethics committees or researchers to unquestionably do what the AI suggests, but such a decision is about how the AI is used, not the AI itself.</p>
<p>While AI is perhaps immediately useful to research ethics committees given the type of data we review, it is very likely that ways of encoding non-text data (such as people’s experiences) will improve. </p>
<p>This means that over time AI may also be able to assist in other areas of ethics decision making. However, the key point is not to confuse the tool used to analyse data, the AI, with the final “ethics” decision on how to act. The danger is not the AI, but how people choose to integrate AI into ethics decision making processes.</p>
<p class="fine-print"><em><span>Simon Kolstoe has previously received funding from the Health Research Authority to explore aspects of Research Ethics. He is chair of the Cambs and Herts HRA (NHS) REC, MODREC and the UKHSA's research ethics and governance group. He is a trustee of the Charity UK Research Integrity Office (UKRIO). All views expressed in this article are his own and should not be taken to reflect the view of these organisations.</span></em></p>Simon Kolstoe argues that while AI can greatly assist in ethics reviews, it cannot make an ethical decision.Simon Kolstoe, Associate Professor of Bioethics, University of PortsmouthLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/2222542024-02-06T02:24:40Z2024-02-06T02:24:40ZGenerative AI in the classroom risks further threatening Indigenous inclusion in schools<figure><img src="https://images.theconversation.com/files/572099/original/file-20240130-15-t0nek5.png?ixlib=rb-1.1.0&rect=3%2C6%2C1636%2C1096&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">
</span> <span class="attribution"><span class="source">Midjourney/Author provided</span></span></figcaption></figure><p>It is <a href="https://figshare.mq.edu.au/articles/thesis/Prioritising_Blak_Voices_Representing_Indigenous_Perspectives_in_NSW_English_Classrooms/23974575">well documented</a> that Australian teachers face challenges incorporating Indigenous perspectives and content in their classrooms. The approach can sometimes be somewhat tokenistic, as if the teacher is “<a href="https://theconversation.com/i-spoke-about-dreamtime-i-ticked-a-box-teachers-say-they-lack-confidence-to-teach-indigenous-perspectives-129064">ticking a box</a>”. We need a more <a href="https://www.aitsl.edu.au/teach/cultural-responsiveness/building-a-culturally-responsive-australian-teaching-workforce">culturally responsive teaching workforce</a>. </p>
<p><a href="https://www.techtarget.com/searchenterpriseai/definition/generative-AI">Generative AI</a> is advancing at a fast pace and quickly finding a place within education. Tools such as <a href="https://chat.openai.com/">ChatGPT</a> (or Chatty G as the kids say) continue to <a href="https://www.unesco.org/en/digital-education/artificial-intelligence">dominate conversations in education</a> as these technologies are explored and developed.</p>
<p>There are many concerns around academic integrity, and much to consider about how best to <a href="https://www.technologyreview.com/2023/04/06/1071059/chatgpt-change-not-destroy-education-openai">introduce</a> and <a href="https://thechainsaw.com/defi/aussie-students-ai-chatgpt-survey-gen-z/">control</a> this technology in practice.</p>
<p>As teachers continue to look for ways to meet <a href="https://www.australiancurriculum.edu.au/f-10-curriculum/cross-curriculum-priorities/aboriginal-and-torres-strait-islander-histories-and-cultures/">Indigenous content requirements</a>, it makes sense they would turn to generative AI to assist them in an area they struggle with. But using these tools could do more harm than good.</p>
<h2>Indigenous peoples’ concerns around AI</h2>
<p>Indigenous people have raised a range of concerns around generative AI, including the risks these technologies pose for Indigenous peoples and knowledges. </p>
<p>For example, <a href="https://www.crikey.com.au/2024/01/19/artificial-intelligence-fake-indigenous-art-stock-images/">AI-generated art</a> poses a significant threat to Indigenous peoples’ incomes, art and cultural knowledges.</p>
<p>The lead image of this article was created using the generative AI platform <a href="https://www.midjourney.com/">Midjourney</a>. The prompts included the terms Indigenous, artwork, colourful, artificial intelligence, Aboriginal, Western Sydney and painting styles. </p>
<p>This shows that with AI, anyone can easily <a href="https://www.terrijanke.com.au/post/the-new-frontier-artificial-intelligence-copyright-and-indigenous-culture">produce “Indigenous-style” art</a> and content. This poses a threat to <a href="https://www.artslaw.com.au/information-sheet/indigenous-cultural-intellectual-property-icip-aitb/">Indigenous cultural and intellectual property rights</a>. </p>
<p>With AI being trained on vast data sets primarily from the western corpus of knowledge, there are also concerns relating to <a href="https://aiatsis.gov.au/publication/116530">Indigenous data sovereignty</a> – the right to “govern the collection, ownership and application of data about Indigenous communities, peoples, lands and resources”.</p>
<p>Generative AI can also perpetuate misinformation that harms Indigenous communities. This happened during the Voice referendum campaign, when <a href="https://www.theguardian.com/australia-news/2023/aug/07/indigenous-voice-to-parliament-no-campaign-ai-facebook-ads">fake, AI-generated images of Indigenous “no” voters</a> were published on social media.</p>
<p>Importantly, there is also the potential impact on Country due to the environmental costs of data centres – an issue that must be addressed as more generative AI tools come online.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/the-environmental-cost-of-data-centres-is-substantial-and-making-them-energy-efficient-will-only-solve-half-the-problem-202643">The environmental cost of data centres is substantial, and making them energy-efficient will only solve half the problem</a>
</strong>
</em>
</p>
<hr>
<h2>How do these concerns translate into the classroom?</h2>
<p>All students should see themselves reflected in the classroom. This especially applies to Indigenous students, as attested by <a href="https://www.closingthegap.gov.au/">Closing the Gap targets</a> for educational attainment.</p>
<p><a href="https://www.aitsl.edu.au/teach/cultural-responsiveness/building-a-culturally-responsive-australian-teaching-workforce">A 2022 report</a> by the Australian Institute for Teaching and School Leadership states:</p>
<blockquote>
<p>The legacy of colonisation has undermined Aboriginal and Torres Strait Islander students’ access to their cultures, identities, histories and languages. Aboriginal and Torres Strait Islander students have not had access to a complete, relevant and responsive education.</p>
</blockquote>
<p>Children need both “windows and mirrors” in the classroom. American education scholar <a href="https://witschicago.org/windows-mirrors-and-sliding-glass-doors">Rudine Sims-Bishop</a> has aptly put this in the context of children’s literature: </p>
<blockquote>
<p>When children cannot find themselves reflected in the books they read, or when the images they see are distorted, negative or laughable, they learn a powerful lesson about how they are devalued in the society of which they are a part.</p>
</blockquote>
<p>Students need to see themselves reflected in the curriculum, including the technologies used.</p>
<p>By using generative AI, teachers risk perpetuating inaccuracies and spreading false information instead of meaningfully engaging with Indigenous values and knowledge systems.</p>
<p>This can potentially harm the student–teacher relationship, <a href="https://www.edresearch.edu.au/summaries-explainers/explainers/positive-teacher-student-relationships-their-role-classroom-management">which is incredibly important</a>, particularly for Indigenous students.</p>
<p>Late last year, the Australian government released a <a href="https://www.education.gov.au/schooling/resources/australian-framework-generative-artificial-intelligence-ai-schools">framework for generative AI</a> in schools. It offers “guidance on understanding, using and responding to generative AI” to everyone involved in Australian school education. </p>
<p>The framework also affirms the necessity of respecting Indigenous cultural and intellectual property rights. But we need more extensive work to ensure teachers can do this appropriately. Currently, there is a lack of research that looks at the intersection between generative AI and Indigenous content inclusion in the classroom. </p>
<h2>Indigenous futures and AI</h2>
<p>Generative AI, and other forms of AI, have extensive potential to benefit Indigenous people and their communities. Many Indigenous people are engaging with the technologies to this effect.</p>
<p>For example, you can take a <a href="https://www.abc.net.au/listen/programs/scienceshow/take-a-virtual-trip-to-the-torres-strait/102566056">virtual trip to the Torres Strait Islands</a>, spend time at <a href="https://www.theaimarae.co.nz/">the AI Marae</a> in New Zealand or engage with the <a href="https://indigenousprotocols.ai/">Indigenous Protocols and AI Laboratory</a>. </p>
<p>But to make room for what is seemingly an inevitable future that involves AI, work needs to be done in policy and professional bodies to ensure Indigenous inclusion at all levels – from development to use. </p>
<p>Teachers and students must be supported with the necessary resourcing to promote critical thinking when engaging with generative AI. Teachers will look to the relevant government bodies, whereas students will look to their teachers for guidance.</p>
<p>It is clear we need further guidance on Indigenous cultural and intellectual property rights, and culturally appropriate AI use for educators. </p>
<p>Generative AI still has much to learn, and Indigenous knowledges have <a href="https://www.timeshighereducation.com/campus/indigenous-knowledge-provides-skills-lifelong-learning-ai-cannot">much to teach it</a>.</p>
<p class="fine-print"><em><span>Tamika Worrell does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>Tools such as ChatGPT dominate the conversation around AI in schools. But with teachers looking to meet Indigenous content requirements, using generative AI could do more harm than good.Tamika Worrell, Senior Lecturer in the Department of Critical Indigenous Studies, Macquarie University, Macquarie UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/2205872024-02-01T13:32:23Z2024-02-01T13:32:23ZAI can help − and hurt − student creativity<p>Teachers across the country are grappling with whether to view AI tools like ChatGPT as friend or foe in the classroom. My research shows that the answer isn’t always simple. It can be both.</p>
<p>Teaching students to be creative thinkers rather than rely on AI for answers is the key to answering this question. That’s what my team and I found in our study on <a href="https://doi.org/10.1016/j.yjoc.2023.100072">whether AI affects student creativity</a>, published in the Journal of Creativity and representing scholars from the University of South Carolina, the University of California, Berkeley and Emerson College. </p>
<p>In the study, we asked college students to brainstorm – without technology – all the ways a paper clip can be used. A month later, we asked them to do the same, but using ChatGPT. We found that AI can be a useful brainstorming tool, quickly generating ideas that can spark creative exploration. But there are also potential negative effects on students’ creative thinking skills and self-confidence. While students reported that it was helpful to “have another brain,” they also felt that using AI was “the easy way out” and didn’t allow them to think on their own. </p>
<p>The results call for a thoughtful approach to using AI in classrooms and striking a balance that nurtures creativity while <a href="https://doi.org/10.1016/j.caeai.2022.100056">utilizing AI’s capabilities</a>. </p>
<h2>Why it matters</h2>
<p>Increasingly, students are using <a href="https://aiindex.stanford.edu/report/">AI for help with their schoolwork</a>. Whether it’s for drafting essays, learning new languages or studying history and science, AI tools are becoming a staple in students’ academic toolkit. </p>
<p>Students tend to view AI as having a <a href="https://doi.org/10.3390/jintelligence10030065">positive impact on their creativity</a>. In our study, 100% of participants found AI helpful for brainstorming. Only 16% of students preferred to brainstorm without AI. </p>
<p>The good news is that the students in our study generated more diverse and detailed ideas when using AI. They found that AI was useful for kick-starting brainstorming sessions. Other research has shown that AI can also serve as a <a href="https://doi.org/10.3389/frai.2022.880673">nonjudgmental partner for brainstorming</a>, which can prompt a free stream of ideas they might normally withhold in a group setting. </p>
<p>The downside of brainstorming with AI was that some students voiced concerns about overreliance on the technology, fearing it might undermine their own thoughts and, consequently, confidence in their creative abilities. Some students reported a “fixation of the mind,” meaning that once they saw the AI’s ideas, they had a hard time coming up with their own.</p>
<p>Some students also questioned the originality of ideas generated by AI. Our research supported these hunches. We noted that while using ChatGPT improved students’ creative output individually, the AI ideas tended to be repetitive overall. This is likely due to generative AI recycling existing content rather than creating original thought.</p>
<p>The study results indicate that allowing students to practice creativity independently first will <a href="https://doi.org/10.1016/j.tsc.2021.100966">strengthen their belief in themselves and their abilities</a>. Once they accomplish this, AI can be useful in furthering their learning, much like teaching long division to students before introducing a calculator.</p>
<h2>What still isn’t known</h2>
<p>Our study primarily explored AI’s application in the <a href="https://doi.org/10.1207/S15326934CRJ1334_07">idea-generation phase of creativity</a>, but we also emphasized the importance of developing skills at the start and end of the creative process. The essential tasks of defining problems and critically evaluating ideas still rely heavily on human input.</p>
<p>The creative process typically involves three phases: problem identification, idea generation and evaluation. AI shows promise in aiding students in the idea generation phase of the creative process, according to our study. However, the current generation of AI, such as ChatGPT-3, lacks the capacity for defining the problem and refining ideas into something actionable. </p>
<p>AI’s <a href="https://tech.ed.gov/ai-future-of-teaching-and-learning/">growing role in education</a> brings many advantages, but keeping the human element at the forefront is crucial.</p>
<h2>What’s next</h2>
<p>Content ownership, plagiarism and false or misleading information are among the current challenges for implementing AI in education. As generative AI gains popularity, schools are pressed to set guidelines to ensure these tools are used responsibly. Some states, such as <a href="https://www.edweek.org/technology/schools-desperately-need-guidance-on-ai-who-will-step-up/2023/11">California and Oregon</a>, have already developed guidelines for AI in education. <a href="https://doi.org/10.1016/j.iotcps.2023.04.003">Ethical considerations</a> are vital for a positive relationship between creativity and AI.</p>
<p>Our team will continue to research the effect of AI on creativity, exploring its impact on agency, confidence and other phases of the creative process. AI in education is not just about the latest technology. It’s about shaping a future where human creativity and technological advancement progress hand in hand.</p>
<p><em>The <a href="https://theconversation.com/us/topics/research-brief-83231">Research Brief</a> is a short take on interesting academic work.</em></p>
<p class="fine-print"><em><span>Sabrina Habib does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>A study in which students brainstormed all the uses of a paper clip shows that AI can both enhance and harm the creative process.Sabrina Habib, Associate Professor, University of South CarolinaLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/2217172024-01-25T13:18:54Z2024-01-25T13:18:54ZCould a court really order the destruction of ChatGPT? The New York Times thinks so, and it may be right<figure><img src="https://images.theconversation.com/files/571252/original/file-20240124-29-abie1d.jpg?ixlib=rb-1.1.0&rect=7%2C44%2C4985%2C3196&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Old media, meet new.</span> <span class="attribution"><a class="source" href="https://www.gettyimages.com/detail/news-photo/in-this-photo-illustration-the-new-york-times-logo-is-seen-news-photo/1894336797">Idrees Abbas/SOPA Images/LightRocket via Getty Images</a></span></figcaption></figure><p>On Dec. 27, 2023, The New York Times <a href="https://nytco-assets.nytimes.com/2023/12/NYT_Complaint_Dec2023.pdf">filed a lawsuit</a> against OpenAI alleging that the company committed willful copyright infringement through its generative AI tool ChatGPT. The Times claimed both that ChatGPT was unlawfully trained on vast amounts of text from its articles and that ChatGPT’s output contained language directly taken from its articles.</p>
<p>To remedy this, the Times asked for more than just money: It asked a federal court to order the “destruction” of ChatGPT.</p>
<p>If granted, this request would force OpenAI to delete its trained large language models, such as GPT-4, as well as its training data, which would prevent the company from rebuilding its technology. </p>
<p>This prospect is alarming to the <a href="https://www.theverge.com/2023/11/6/23948386/chatgpt-active-user-count-openai-developer-conference">100 million people</a> who use ChatGPT every week. And it raises two questions that interest me as a <a href="https://law.indiana.edu/about/people/details/marinotti-jo%C3%A3o.html">law professor</a>. First, can a federal court actually order the destruction of ChatGPT? And second, if it can, will it?</p>
<h2>Destruction in the court</h2>
<p>The answer to the first question is yes. Under <a href="https://www.law.cornell.edu/uscode/text/17/503">copyright law</a>, courts do have the power to issue destruction orders. </p>
<p>To understand why, consider vinyl records. Their <a href="https://www.theverge.com/2023/3/10/23633605/vinyl-records-surpasses-cd-music-sales-us-riaa">resurging popularity</a> has attracted <a href="https://fortune.com/2023/04/06/punk-rock-fan-uncovers-six-year-scam-that-sold-1-6-million-worth-of-counterfeit-vinyl-records-to-collectors/">counterfeiters who sell pirated records</a>. </p>
<p>If a record label sues a counterfeiter for copyright infringement and wins, what happens to the counterfeiter’s inventory? What happens to the master and stamper disks used to mass-produce the counterfeits, and the machinery used to create those disks in the first place?</p>
<p>To address these questions, copyright law grants courts the power to destroy infringing goods and the equipment used to create them. From the law’s perspective, there’s no legal use for a pirated vinyl record. There’s also no legitimate reason for a counterfeiter to keep a pirated master disk. Letting them keep these items would only enable more lawbreaking.</p>
<p>So in some cases, destruction is the only logical legal solution. And if a court decides ChatGPT is like an infringing good or pirating equipment, it could order that it be destroyed. In its complaint, the Times offered arguments that ChatGPT fits both analogies.</p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/kUUievwKEaM?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
<figcaption><span class="caption">NBC News reports on The New York Times’ lawsuit.</span></figcaption>
</figure>
<p>Copyright law has never been used to destroy AI models, but OpenAI shouldn’t take solace in this fact. The law has been increasingly open to the idea of targeting AI. </p>
<p>Consider the Federal Trade Commission’s recent use of <a href="https://www.jdsupra.com/legalnews/ftc-coppa-settlement-requires-deletion-1217192">algorithmic disgorgement</a> as an example. The FTC has forced companies <a href="https://www.dwt.com/-/media/files/blogs/privacy-and-security-blog/2022/03/weight-watchers-kurbo-stipulated-order.pdf">such as WeightWatchers</a> to delete not only unlawfully collected data but also the algorithms and AI models trained on such data. </p>
<h2>Why ChatGPT will likely live another day</h2>
<p>It seems to be only a matter of time before copyright law is used to order the destruction of AI models and datasets. But I don’t think that’s going to happen in this case. Instead, I see three more likely outcomes.</p>
<p>The first and most straightforward is that the two parties could settle. In the case of a successful settlement, which <a href="https://www.washingtonpost.com/technology/2024/01/04/nyt-ai-copyright-lawsuit-fair-use">may be likely</a>, the lawsuit would be dismissed and no destruction would be ordered.</p>
<p>The second is that the court might side with OpenAI, agreeing that ChatGPT is protected by the copyright doctrine of “<a href="https://www.copyright.gov/fair-use/#:%7E:text=Fair%20use%20is%20a%20legal,protected%20works%20in%20certain%20circumstances.">fair use</a>.” If OpenAI can argue that ChatGPT is transformative and that its service does not provide a substitute for The New York Times’ content, it just might win. </p>
<p>The third possibility is that OpenAI loses but the law saves ChatGPT anyway. Courts can order destruction only if two requirements are met: First, destruction must not prevent lawful activities, and second, it must be “<a href="https://casetext.com/case/hounddog-prods-llc-v-empire-film-grp-inc">the only remedy</a>” that could prevent infringement. </p>
<p>That means OpenAI could save ChatGPT by proving either that ChatGPT has legitimate, noninfringing uses or that destroying it isn’t necessary to prevent further copyright violations. </p>
<p>Both outcomes seem possible, but for the sake of argument, imagine that the first requirement for destruction is met. The court could conclude that, because of the articles in ChatGPT’s training data, all uses infringe on the Times’ copyrights – an argument put forth in <a href="https://copyrightalliance.org/current-ai-copyright-cases-part-1/">various other lawsuits</a> against generative AI companies. </p>
<p>In this scenario, the court would issue an injunction ordering OpenAI to stop infringing on copyrights. Would OpenAI violate this order? Probably not. A single counterfeiter in a shady warehouse might try to get away with that, but that’s less likely with a <a href="https://www.reuters.com/technology/openai-talks-raise-new-funding-100-bln-valuation-bloomberg-news-2023-12-22/">US$100 billion company</a>.</p>
<p>Instead, it might try to retrain its AI models without using articles from the Times, or it might develop other software guardrails to prevent further problems. With these possibilities in mind, OpenAI would likely succeed on the second requirement, and the court wouldn’t order the destruction of ChatGPT. </p>
<p>Given all of these hurdles, I think it’s extremely unlikely that any court would order OpenAI to destroy ChatGPT and its training data. But developers should know that courts do have the power to destroy unlawful AI, and they seem increasingly willing to use it.</p>
<p class="fine-print"><em><span>João Marinotti does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>It may seem extreme, but there’s a reason the law allows it.João Marinotti, Associate Professor of Law, Indiana UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/2193542024-01-24T14:03:36Z2024-01-24T14:03:36ZAI in HR: Are you cool with being recruited by a robot? Our studies reveal job candidates’ true feelings<p>Artificial Intelligence (AI) is transforming the human resource management (HRM) industry faster than we notice. <a href="https://www.tidio.com/blog/ai-recruitment/">Sixty-five percent</a> of organisations are already using AI-enabled tools in the hiring process, but only a third of job candidates are aware of the practice.</p>
<h2>Pros and cons of AI in recruitment</h2>
<p>In recruitment, AI-enabled tools have the ability to collect large amounts of organisational data to search, identify, evaluate, rank, and select job candidates. They can <a href="https://www.forbes.com/sites/forbestechcouncil/2022/03/23/how-ai-is-primed-to-disrupt-hr-and-recruiting/">assemble information on hiring needs</a> across teams, generate advertisements with model candidate traits, and highlight potential candidates from a range of digital platforms.</p>
<p>AI-enabled tools have long promised efficiency in the processing of applicants’ documents while potentially reducing the bias from HR agents who might, intentionally or not, discriminate or unjustly judge some applications.</p>
<p>However, emerging evidence suggests that AI-enabled HR tools may discriminate against certain candidates who may not fit the historical pattern for the job description, such as candidates who are <a href="https://pubsonline.informs.org/doi/abs/10.1287/mnsc.2018.3093">female</a> (in STEM) or those with <a href="https://wwnorton.com/books/the-second-machine-age/">gaps on their resumes</a> due to illness, disabilities, caring for a family member, unemployment, or <a href="https://www.hbs.edu/ris/Publication%20Files/hiddenworkers09032021_Fuller_white_paper_33a2047f-41dd-47b1-9a8d-bd08cf3bfa94.pdf">time served in prison</a>.</p>
<p>Those of us who worry about the use of AI in HR won’t be reassured by its track record in other fields. Tech giants including Apple, IBM, and Microsoft – all of whom presumably know what they’re doing – have faced scrutiny for ethical failures, especially with regards to gender discrimination. For example, US regulators <a href="https://www.bbc.com/news/business-50365609">investigated Apple in 2019</a> after its AI-powered credit-card service was revealed to be systematically offering women lower credit limits. The alarm was raised by several couples, including Apple co-founder Steve Wozniak and his wife: the algorithm offered him a higher credit limit than her, even though the couple had joint accounts.</p>
<h2>Perceptions matter</h2>
<p>Available data on AI in recruitment suggests that job seekers are instinctively critical of its use. Candidates subjected to autonomous AI decisions describe the process as <a href="https://www.tandfonline.com/doi/abs/10.1080/0144929X.2022.2164214">“undignified”</a> or <a href="https://journals.sagepub.com/doi/full/10.1177/20539517221115189">“unfair”</a>.</p>
<p>Other research suggests that judgement is less harsh in different contexts. According to a <a href="https://www.tidio.com/about/">November 2023 survey</a> by Tidio, only 31% of respondents would agree to allow AI to decide whether or not they get hired. But that figure rises to 75% if there’s also a human presence involved in the process. Still, 25% of participants believe that any use of artificial intelligence in recruitment is unfair.</p>
<p>Prior to our research, ethical perceptions of organisations using AI-enabled tools in the hiring process hadn’t been studied much. Most scholarly research on the topic focused on the <a href="https://core.ac.uk/download/pdf/301385085.pdf">fairness of the practice</a> or <a href="https://leeds-faculty.colorado.edu/dahe7472/OB%202022/glickson%202021.pdf">trust in the technology</a> — for example, chatbots — rather than trust in the organisations themselves. </p>
<p>In two publications in the <a href="https://link.springer.com/journal/10551"><em>Journal of Business Ethics</em></a>, we looked at how the use of AI in hiring might impact job seekers’ or recently hired individuals’ trust in the company. We found that their perceptions of AI determine whether they identify the organisation using it as trustworthy or even attractive and innovative.</p>
<p>Perceptions vary depending on individuals’ personal values, past experiences, and technology acceptance. They also vary across contexts and applications. For instance, whereas an individual might trust the effectiveness of AI to predict movie preferences, studies show that most would still <a href="https://www.tidio.com/blog/ai-recruitment/">prefer a human</a> or a human-AI collaboration (i.e., versus autonomous AI) to make a hiring determination.</p>
<h2>Ethics are attractive</h2>
<p>In a <a href="https://link.springer.com/article/10.1007/s10551-022-05166-2">June 2022 study</a> on AI ethics and organisational trust, we found that candidates who perceive AI in the hiring process as highly effective, from a performance standpoint, are 64% more likely to trust the organisations that use it.</p>
<p>We followed up with a <a href="https://link.springer.com/article/10.1007/s10551-023-05380-6">March 2023 study</a> on a related subject. We found that the higher an individual’s ethical perceptions of using AI in hiring, the more attractive he or she finds the organisation. For instance, candidates who perceive that it is ethical for an organisation to use AI to analyse their personal social media content or analyse an audio interview for voice cues are 25% more likely to perceive that organisation as attractive.</p>
<h2>Human-AI balance is key</h2>
<p>Human-resources managers face an increasingly complex ethical environment, where AI involves a fast-growing set of applications. Organisations that are determined to keep the “human” in HR will need to carefully balance both in the hiring process, while taking into consideration factors such as transparency and financial expectations.</p>
<p>Along with other studies, our research brings new urgency to the task of integrating AI ethics into the <a href="https://www.unesco.org/en/artificial-intelligence/business-council">governance</a> of every organisation.</p>
<p class="fine-print"><em><span>Les auteurs ne travaillent pas, ne conseillent pas, ne possèdent pas de parts, ne reçoivent pas de fonds d'une organisation qui pourrait tirer profit de cet article, et n'ont déclaré aucune autre affiliation que leur organisme de recherche.</span></em></p>Companies and hiring agencies are increasingly resorting to AI-enabled tools in recruitment. How they’re used can significantly impact an individual’s perception of the organisation.Maria Figueroa-Armijos, Associate Professor of Entrepreneurship, EDHEC Business SchoolSerge da Motta Veiga, Professeur en Gestion des Ressources Humaines, Neoma Business SchoolLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/2179242024-01-07T19:05:09Z2024-01-07T19:05:09Z1 in 3 people are lonely. Will AI help, or make things worse?<figure><img src="https://images.theconversation.com/files/566762/original/file-20231220-21-ds88ao.jpg?ixlib=rb-1.1.0&rect=0%2C0%2C2599%2C1531&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">
</span> <span class="attribution"><span class="source">Shutterstock</span></span></figcaption></figure><p>ChatGPT has repeatedly made headlines since its release late last year, with various <a href="https://doi.org/10.53761/1.20.02.07">scholars</a> and professionals exploring its potential applications in both work and <a href="https://www.abc.net.au/news/2023-11-21/tas-utas-marking-time-cuts-chatgpt-assignments-students/103125634">education</a> settings. However, one area receiving less attention is the tool’s usefulness as a conversationalist and – dare we say – as a potential friend.</p>
<p>Some chatbots have left an unsettling impression. Microsoft’s Bing chatbot alarmed users earlier this year when it <a href="https://time.com/6256529/bing-openai-chatgpt-danger-alignment/">threatened and attempted to blackmail</a> them.</p>
<p>Yet pop culture has long conjured visions of autonomous systems living with us as social companions, whether that’s <a href="https://www.youtube.com/watch?v=1pphyvgd7-k&t=90s&ab_channel=TelevisionVanguard">Rosie the robot</a> from The Jetsons, or the super-intelligent AI, Samantha, from the 2013 movie <a href="https://www.imdb.com/title/tt1798709/">Her</a>. Will we develop similar emotional attachments to new and upcoming chatbots? And is this healthy? </p>
<p>While generative AI itself is relatively new, the fields of belonging and human-computer interaction have been <a href="https://www.tandfonline.com/doi/pdf/10.1080/03075079.2023.2238006">explored reasonably well</a>, with results that may surprise you. </p>
<p>Our latest research shows that, at a time when 1 in 3 Australians <a href="https://www.abc.net.au/news/2023-08-07/ending-loneliness-together-finds-33-per-cent-australians-lonely/102678790">are experiencing loneliness</a>, there may be space for AI to fill gaps in our social lives. That’s assuming we don’t use it to replace people.</p>
<h2>Can you make friends with a robot?</h2>
<p>As far back as the popularisation of the internet, scholars have been discussing how AI might serve to replace or supplement human relationships.</p>
<p>When social media became popular about a decade later, interest in this space exploded. The 2021 novel <a href="https://www.theguardian.com/books/2021/mar/01/klara-and-the-sun-by-kazuo-ishiguro-review-another-masterpiece">Klara and the Sun</a>, by Nobel Prize-winning author Kazuo Ishiguro, explores how humans and life-like machines might form meaningful relationships.</p>
<p>And with increasing interest came increasing concern, borne of <a href="https://doi.org/10.1353/csd.2012.0078">evidence</a> that belonging (and therefore loneliness) can be impacted by technology use. In some studies, the overuse of technology (gaming, internet, mobile and social media) has been linked to higher <a href="https://research.ebsco.com/c/nprl3q/viewer/pdf/3ziypiu7x5">social anxiety and loneliness</a>. But other <a href="https://www.tandfonline.com/doi/abs/10.1017/edp.2014.2">research</a> <a href="https://doi.org/10.1017/jrr.2017.13">suggests</a> the effects depend greatly on who is using the technology and how often they use it.</p>
<p>Research has also found some online roleplaying game players seem to <a href="https://www.sciencedirect.com/science/article/abs/pii/S0747563215302508">experience less loneliness</a> online than in the real world – and that people who feel a sense of <a href="https://www.tandfonline.com/doi/abs/10.1080/10447318.2021.1952803">belonging on a gaming platform are</a> more likely to continue to use it.</p>
<p>All of this suggests technology use can have a positive impact on loneliness, that it does have the potential to replace human support, and that the more an individual uses it the more tempting it becomes.</p>
<p>Then again, this evidence is from tools designed with a specific purpose (for instance, a game’s purpose is to entertain) and not tools designed to support human connection (such as AI “therapy” tools).</p>
<h2>The rise of robot companions</h2>
<p>As researchers in the fields of technology, leadership and psychology, we wanted to investigate how ChatGPT might influence people’s feelings of loneliness and supportedness. Importantly, does it have a net positive benefit for users’ wellbeing and belonging?</p>
<p>To study this, we asked 387 participants about their usage of AI, as well as their general experience of social connection and support. We found that:</p>
<ul>
<li>participants who used AI more tended to feel more supported by their AI compared to people whose support came mainly from close friends</li>
<li>the more a participant used AI, the higher their feeling of social support from the AI was</li>
<li>the more a participant felt socially supported by AI, the lower their feeling of support was from close friends and family</li>
<li>although not true across the board, on average human social support was the largest predictor of lower loneliness.</li>
</ul>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/i-tried-the-replika-ai-companion-and-can-see-why-users-are-falling-hard-the-app-raises-serious-ethical-questions-200257">I tried the Replika AI companion and can see why users are falling hard. The app raises serious ethical questions</a>
</strong>
</em>
</p>
<hr>
<h2>AI friends are okay, but you still need people</h2>
<p>Overall our results indicate that social support can come from either humans or AI – and that working with AI can indeed help people.</p>
<p>But since human social support was the largest predictor of lower loneliness, it seems likely that underlying feelings of loneliness can only be addressed by human connection. In simple terms, entirely replacing in-person friendships with robot friendships could actually lead to greater loneliness.</p>
<p>Having said that, we also found participants who felt socially supported by AI seemed to experience similar effects on their wellbeing as those supported by humans. This is consistent with the previous research into online gaming mentioned above. So while making friends with AI may not combat loneliness, it can still help us feel connected, which is better than nothing. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/ai-can-already-diagnose-depression-better-than-a-doctor-and-tell-you-which-treatment-is-best-211420">AI can already diagnose depression better than a doctor and tell you which treatment is best</a>
</strong>
</em>
</p>
<hr>
<h2>The takeaway</h2>
<p>Our research suggests social support from AI can be positive, but it doesn’t provide all the benefits of social support from other people – especially when it comes to loneliness.</p>
<p>When used in moderation, a relationship with an AI bot could provide positive functional and emotional benefits. But the key is understanding that although it might make you feel supported, it’s unlikely to help you build enough of a sense of belonging to stop you from feeling lonely. </p>
<p>So make sure to also get out and make real human connections. These provide an innate sense of belonging that (for now) even the most advanced AI can’t match. </p>
<hr>
<p><em>Acknowledgement: the authors would like to acknowledge Bianca Pani for her contributions to the research discussed in this article.</em></p>
<p class="fine-print"><em><span>The authors do not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.</span></em></p>As far back as the dawn of the internet, scholars have discussed how AI might serve to replace (or supplement) human relationships.Michael Cowling, Associate Professor – Information & Communication Technology (ICT), CQUniversity AustraliaJoseph Crawford, Senior Lecturer, Management, University of TasmaniaKelly-Ann Allen, Associate Professor, School of Educational Psychology and Counselling, Faculty of Education, Monash UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/2188052023-11-29T19:18:17Z2023-11-29T19:18:17ZA year of ChatGPT: 5 ways the AI marvel has changed the world<figure><img src="https://images.theconversation.com/files/562343/original/file-20231129-15-ulz8qc.jpg?ixlib=rb-1.1.0&rect=94%2C6%2C4398%2C2984&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">
</span> <span class="attribution"><span class="source">Shutterstock</span></span></figcaption></figure><p>OpenAI’s artificial intelligence (AI) chatbot ChatGPT was unleashed onto an unsuspecting public exactly one year ago. </p>
<p>It quickly became the <a href="https://www.reuters.com/technology/chatgpt-sets-record-fastest-growing-user-base-analyst-note-2023-02-01/">fastest-growing app</a> ever, in the hands of <a href="https://www.theguardian.com/technology/2023/feb/02/chatgpt-100-million-users-open-ai-fastest-growing-app">100 million users</a> by the end of the second month. Today, it’s available to more than a billion people via Microsoft’s <a href="https://www.bing.com/search?q=Bing+AI&showconv=1&FORM=hpcodx">Bing search</a>, Skype and <a href="https://www.theverge.com/2023/2/27/23614959/snapchat-my-ai-chatbot-chatgpt-openai-plus-subscription">Snapchat</a> – and OpenAI is predicted to collect more <a href="https://www.reuters.com/technology/openai-track-generate-more-than-1-bln-revenue-over-12-months-information-2023-08-29/">than US$1 billion</a> in annual revenue. </p>
<p>We’ve never seen a technology roll out so quickly before. It took about a decade or so before most people started using the web. But this time the plumbing was already in place. </p>
<p>As a result, ChatGPT’s impact has gone way beyond writing poems about Carol’s retirement in the style of Shakespeare. It has given many people a taste of our AI-powered future. Here are five ways this technology has changed the world.</p>
<h2>1. AI safety</h2>
<p>ChatGPT forced governments around the world to wise up to the idea that AI poses significant challenges – not just economic challenges, but also <a href="https://theconversation.com/ai-will-increase-inequality-and-raise-tough-questions-about-humanity-economists-warn-203056">societal</a> and <a href="https://theconversation.com/will-ai-ever-reach-human-level-intelligence-we-asked-five-experts-202515">existential challenges</a>.</p>
<p>United States President Joe Biden catapulted the US to the forefront of AI regulations with a <a href="https://theconversation.com/the-us-just-issued-the-worlds-strongest-action-yet-on-regulating-ai-heres-what-to-expect-216729">presidential executive order</a> that establishes new standards for AI safety and security. It looks to improve equity and civil rights, while also promoting innovation and competition, and American leadership in AI. </p>
<p>Soon after, the United Kingdom held the first ever intergovernmental AI Safety Summit at Bletchley Park – the birthplace of modern computing, where German codes were cracked during World War II. </p>
<p>And more recently, the European Union has appeared to be sacrificing its early lead in regulating AI, as it has struggled to adapt its AI Act to potential threats posed by frontier models such as ChatGPT.</p>
<p>Although Australia continues to languish towards the back of the pack in terms of regulation and investment, nations around the world are increasingly directing their money, time and attention towards addressing this issue which, five years ago, didn’t cross most people’s minds.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/the-hidden-cost-of-the-ai-boom-social-and-environmental-exploitation-208669">The hidden cost of the AI boom: social and environmental exploitation</a>
</strong>
</em>
</p>
<hr>
<h2>2. Job security</h2>
<p>Before ChatGPT, it was perhaps car workers and other blue collar workers who most feared the arrival of robots. ChatGPT and other generative AI tools have changed this conversation. </p>
<p>White collar workers such as graphic designers and lawyers have now also started to worry for their jobs. One recent study of an online job marketplace found earnings for writing and editing jobs have fallen more than <a href="https://www.ft.com/content/b2928076-5c52-43e9-8872-08fda2aa2fcf">10% since ChatGPT was launched</a>. <a href="https://theconversation.com/what-are-hollywood-actors-and-writers-afraid-of-a-cinema-scholar-explains-how-ai-is-upending-the-movie-and-tv-business-210360">The gig economy</a> might be the canary in this coalmine. </p>
<p>There’s huge uncertainty over whether AI will destroy more jobs than it creates. But one thing is now certain: AI will <a href="https://theconversation.com/whose-job-will-ai-replace-heres-why-a-clerk-in-ethiopia-has-more-to-fear-than-one-in-california-216735">be hugely disruptive</a> in how we work.</p>
<h2>3. Death of the essay</h2>
<p>The education sector reacted with some hostility to ChatGPT’s arrival, with many schools and education authorities issuing immediate bans over its use. If ChatGPT can write essays, what will happen to homework? </p>
<p>Of course, we don’t ask people to write essays because there’s a shortage of them, or even because many jobs require this. We ask them to write essays because it demands research skills, improves communication skills, critical thinking and domain knowledge. No matter what ChatGPT offers, these skills will still be needed, even if we spend less time developing them. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/dumbing-down-or-wising-up-how-will-generative-ai-change-the-way-we-think-214561">Dumbing down or wising up: how will generative AI change the way we think?</a>
</strong>
</em>
</p>
<hr>
<p>And it isn’t only school children cheating with AI. Earlier this year, a US judge fined two lawyers and a law firm US$5,000 for a court filing written with ChatGPT that <a href="https://www.theguardian.com/technology/2023/jun/23/two-us-lawyers-fined-submitting-fake-court-citations-chatgpt">included made-up legal citations</a>. </p>
<p>I imagine these are growing pains. Education is an area in which <a href="https://theconversation.com/the-dawn-of-ai-has-come-and-its-implications-for-education-couldnt-be-more-significant-196383">AI has much to offer</a>. Large language models such as ChatGPT can, for example, be fine-tuned into excellent Socratic tutors. And intelligent tutoring systems can be infinitely patient when generating precisely targeted revision questions. </p>
<h2>4. Copyright chaos</h2>
<p>This one is personal. Authors around the world were outraged to discover that many large language models such as ChatGPT were trained on hundreds of thousands of books, downloaded from the web without their consent. </p>
<p>The reason AI models can converse fluently about everything from AI to zoology is because they’re trained on books about everything from AI to zoology. And the books about AI include <a href="https://www.blackincbooks.com.au/authors/toby-walsh">my own copyrighted books about AI</a>.</p>
<p>The irony isn’t lost on me that an AI professor’s books about AI are controversially being used to train AI. Multiple class action suits are now in play in the US to determine if this is a violation of copyright laws.</p>
<p>Users of ChatGPT <a href="https://theconversation.com/chatgpt-is-a-data-privacy-nightmare-if-youve-ever-posted-online-you-ought-to-be-concerned-199283">have even pointed out</a> examples where chatbots have generated entire chunks of text, verbatim, taken from copyrighted books. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/no-the-lensa-ai-app-technically-isnt-stealing-artists-work-but-it-will-majorly-shake-up-the-art-world-196480">No, the Lensa AI app technically isn’t stealing artists' work – but it will majorly shake up the art world</a>
</strong>
</em>
</p>
<hr>
<h2>5. Misinformation and disinformation</h2>
<p>In the short term, the challenge that worries me most is the use of generative AI tools such as ChatGPT to create misinformation and disinformation. </p>
<p>This concern goes beyond synthetic text, to deepfake audio and videos that are indistinguishable from real ones. A bank has <a href="https://www.forbes.com/sites/thomasbrewster/2021/10/14/huge-bank-fraud-uses-deep-fake-voice-tech-to-steal-millions/?sh=211f00575591">already been robbed</a> using AI-generated cloned voices. </p>
<p>Elections also now appear threatened. Deepfakes <a href="https://ipi.media/slovakia-deepfake-audio-of-dennik-n-journalist-offers-worrying-example-of-ai-abuse/">played an unfortunate role</a> in the 2023 Slovak parliamentary election campaign. Two days prior to the election, a fake audio clip about electoral fraud that allegedly featured a well-known journalist from an independent news platform and the chairman of the Progressive Slovakia party reached thousands of social media users. Commentators have suggested such fake content could <a href="https://ipi.media/slovakia-deepfake-audio-of-dennik-n-journalist-offers-worrying-example-of-ai-abuse/">have a material impact</a> on election outcomes. </p>
<p><a href="https://www.economist.com/interactive/the-world-ahead/2023/11/13/2024-is-the-biggest-election-year-in-history">According to</a> The Economist, more than four billion people will be asked to vote in various elections next year. What happens in such elections when we combine the reach of social media to with the power and persuasion of AI-generated fake content? Will it unleash a wave of misinformation and disinformation onto our already fragile democracies? </p>
<p>It’s hard to predict what will unfold next year. But if 2023 is anything to go by, I suggest we buckle up.</p><img src="https://counter.theconversation.com/content/218805/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Toby Walsh receives funding from the Australian Research Council and Google.org, the philanthropic arm of Google.</span></em></p>The public release of the chatbot has led to a global conversation about the risks and benefits of AI – a conversation few people were having just a few years ago.Toby Walsh, Professor of AI, Research Group Leader, UNSW SydneyLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/2171822023-11-28T21:53:23Z2023-11-28T21:53:23ZCyberbullying girls with pornographic deepfakes is a form of misogyny<figure><img src="https://images.theconversation.com/files/561919/original/file-20231127-19-5mwcx3.jpg?ixlib=rb-1.1.0&rect=0%2C51%2C3840%2C1931&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Much commentary has focussed on the political harms of deepfakes, but we've heard less about how they are specifically being used to degrade girls and women. </span> <span class="attribution"><span class="source">(Shutterstock)</span></span></figcaption></figure><iframe style="width: 100%; height: 100px; border: none; position: relative; z-index: 1;" allowtransparency="" allow="clipboard-read; clipboard-write" src="https://narrations.ad-auris.com/widget/the-conversation-canada/cyberbullying-girls-with-pornographic-deepfakes-is-a-form-of-misogyny" width="100%" height="400"></iframe>
<p>The BBC recently reported on a <a href="https://www.bbc.com/news/world-europe-66877718">disturbing new form of cyberbullying that took place at a school</a> in Almendralejo, Spain. </p>
<p>A group of girls were harmed by male classmates who used an app powered by artificial intelligence (AI) to generate “deepfake” pornographic images of the girls, and then distributed those images on social media. </p>
<p>State-of-the-art AI models can generate novel images and backgrounds given three to five photos of a subject, and <a href="https://arstechnica.com/information-technology/2022/12/thanks-to-ai-its-probably-time-to-take-your-photos-off-the-internet">very little technical knowledge</a> is required to use them. While deepfaked images were easier to detect a few years ago, today, amateurs can easily create work rivalling <a href="https://www.newyorker.com/magazine/2023/11/20/a-history-of-fake-things-on-the-internet-walter-j-scheirer-book-review">expensive CGI effects by professionals</a>. </p>
<p>The harms in this case can be partially explained in terms of <a href="https://doi.org/10.1007/s13347-023-00657-0">consent and privacy violations</a>. But as researchers whose work is concerned with AI and ethics, we see deeper issues as well.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/how-to-combat-the-unethical-and-costly-use-of-deepfakes-184722">How to combat the unethical and costly use of deepfakes</a>
</strong>
</em>
</p>
<hr>
<h2>Deepfake porn cyberbullying</h2>
<p>In the Almendralejo incident, more than 20 girls aged between 11 and 17 came forward as victims of fake pornographic images. This incident fits into larger trends of how this technology is being used. A 2019 study <a href="https://futurism.com/the-byte/porn-deepfakes-96-percent-online">found 96 per cent of all deepfake videos online were pornographic</a>, prompting significant commentary about how <a href="https://www.vox.com/2019/10/7/20902215/deepfakes-usage-youtube-2019-deeptrace-research-report">they are being specifically used to degrade women</a>.</p>
<p>The political risks of deepfakes have received high-profile coverage, but as philosophy researchers Regina Rini and Leah Cohen explore, <a href="https://jesp.org/index.php/jesp/article/view/1628">it is also relevant to consider deeper personal harms</a>. </p>
<p>Legal scholars like Danielle Keats Citron note it is clear society “<a href="https://www.hup.harvard.edu/books/9780674659902">has a poor track record addressing harms primarily suffered by women and girls</a>.” By staying quiet and unseen, girls might hope to escape becoming victims of this new and cruel form of cyberbullying. We think it is likely this technology will create additional barriers for students — especially girls — who may miss out on opportunities due to the fear of calling attention to themselves. </p>
<h2>Used as tool for misogyny</h2>
<p>Philosopher Kate Manne provides a helpful framework for thinking about how deepfake technology can be used as a tool for misogyny. For Manne, “<a href="https://global.oup.com/academic/product/down-girl-9780190604981">misogyny should be understood as the ‘law enforcement’ branch of a patriarchal order</a>, which has the overall function of policing and enforcing its governing ideology.”</p>
<p>That is, misogyny polices women and girls, discouraging them from taking traditionally male-dominated roles. This policing can come from others, but it can also be self-imposed.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/trolling-and-doxxing-graduate-students-sharing-their-research-online-speak-out-about-hate-210874">Trolling and doxxing: Graduate students sharing their research online speak out about hate</a>
</strong>
</em>
</p>
<hr>
<p>Manne explains that the external policing of misogyny disciplines women through various forms of punishment for deviating from, or resisting, gendered norms and expectations. </p>
<p>Women can be denied a career opportunity, harassed sexually or harmed physically for not living up to gendered expectations. And now, women can be punished through the use of deepfakes. The patriarchy has another weapon to wield. </p>
<p>When considering Manne’s <a href="https://www.penguinrandomhouse.ca/books/608442/entitled-by-kate-manne/9780593287767">notion of male entitlement</a>, we can predict instances of this policing occurring if female students are offered positions male students deem they are entitled to, such as winning the student council elections or receiving academic awards in traditionally male-dominated fields. </p>
<figure class="align-center ">
<img alt="A young man seen looking at a phone while two women walk past." src="https://images.theconversation.com/files/561899/original/file-20231127-24-ovh6wh.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/561899/original/file-20231127-24-ovh6wh.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=401&fit=crop&dpr=1 600w, https://images.theconversation.com/files/561899/original/file-20231127-24-ovh6wh.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=401&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/561899/original/file-20231127-24-ovh6wh.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=401&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/561899/original/file-20231127-24-ovh6wh.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/561899/original/file-20231127-24-ovh6wh.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/561899/original/file-20231127-24-ovh6wh.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">Will cyberbullying via deepfakes be presented as ‘just a joke’?</span>
<span class="attribution"><span class="source">(Shutterstock)</span></span>
</figcaption>
</figure>
<h2>A ‘joke’?</h2>
<p>The technology of deepfakes is a very accessible weapon to wield in these cases, and one that can cause a lot of harm. The shame and threat to personal safety are already evident. Cultural misogyny adds to the harm by trivializing the experience: the perpetrator can still say it is just a joke, that she is taking it too seriously and that she shouldn’t be hurt by it because it isn’t real.</p>
<p>Self-imposed policing can be reinforced through deepfakes and other image-manipulation technology. Knowing that this form of cyberbullying is possible can lead to self-censoring. </p>
<p>Students who are visible in public leadership roles are more likely to be deepfaked; they are known to more people in their school communities and face greater scrutiny for their public roles. </p>
<h2>Will we become more used to them?</h2>
<p>It could be that once these deepfakes become more common, people will be less surprised to see such images and videos, making them less scandalous to others and less embarrassing to the victim. </p>
<p>Yet, philosophy scholar Keith Raymond Harris discusses how people can <a href="https://doi.org/10.1007/s11229-021-03379-y">make psychological associations even when they know they are basing these on false content</a>. These associations, even if they do not “rise to the level of belief,” can be classified as a harm of deepfakes. </p>
<p>That means that when students make deepfakes of their classmates, it can alter how they perceive their targets and cause further real-life mistreatment, harassment and disrespect. </p>
<p>It means that boys become less likely to consider their female peers capable students deserving of opportunities. The use of this technology amongst peers in schools risks damaging girls’ confidence by reinforcing a sexist education environment.</p>
<h2>Another tool for ‘typecasting’ girls</h2>
<p>Manne’s analysis also suggests that even if a girl does not have a deepfake made of her directly, deepfakes can still impact her. As she writes, “women are often treated as interchangeable and representative of a certain type of woman. Because of this, women can be singled out and treated as representative targets, then standing in imaginatively for a large swath of others.” </p>
<p>Girls are often classified into types in this way, from the ‘80s “<a href="https://www.tripletsandus.com/growing-up-in-the-80s/slang-terms-from-the-80s/#:%7E:text=Valley%20Girl%2FVal,%2C%20omygod%2C%20so%20rad!%22">Valley Girl</a>,” the millennial notion of the “<a href="https://www.thecut.com/2014/10/what-do-you-really-mean-by-basic-bitch.html">basic bitch</a>” to Gen Z classifications of <a href="https://www.vox.com/the-goods/2019/9/24/20881656/vsco-girl-meme-what-is-a-vsco-girl">“VSCO-Girl</a>,” (named from a photo editing app) or a <a href="https://www.cosmopolitan.com/sex-love/a42134933/what-is-a-pick-me-girl-definition/">“Pick-Me Girl</a>.” </p>
<p>When the psychological associations made about a particular woman spill over into misogynistic associations about all women, misogyny is further entrenched.</p>
<figure class="align-center ">
<img alt="(A girl's face against technological imagery like a fingerprint and a grid." src="https://images.theconversation.com/files/561905/original/file-20231127-28-cgy6zn.jpg?ixlib=rb-1.1.0&rect=0%2C431%2C6000%2C3260&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/561905/original/file-20231127-28-cgy6zn.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/561905/original/file-20231127-28-cgy6zn.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/561905/original/file-20231127-28-cgy6zn.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/561905/original/file-20231127-28-cgy6zn.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/561905/original/file-20231127-28-cgy6zn.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/561905/original/file-20231127-28-cgy6zn.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">Deepfakes are the latest technology used to uphold patriarchy.</span>
<span class="attribution"><span class="source">(Shutterstock)</span></span>
</figcaption>
</figure>
<h2>Lampooning, shunning, shaming women</h2>
<p>Manne explains that misogyny does not solely manifest through violent acts, but “women [can]… be taken down imaginatively, rather than literally, by vilifying, demonizing, belittling, humiliating, mocking, lampooning, shunning and shaming them.”</p>
<p>In the case of deepfakes, misogyny appears in this non-physically violent form. Still, in Almendralejo, one parent interviewed for the story rightly classified the artificial nude photos of the girls distributed by their classmates as “an act of violence.” </p>
<p>We doubt this technology is going away. Understanding how deepfakes can be used as a tool for misogyny is an important first step in considering the harms they will likely cause, and what this may mean for parents, children, youth and schools addressing cyberbullying.</p><img src="https://counter.theconversation.com/content/217182/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>The authors do not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.</span></em></p>Understanding how deepfakes can be used as a tool for misogyny is an important first step in considering the harms they will likely cause, including through school cyberbullying.Amanda Margaret Narvali, PhD Student, Philosophy, University of GuelphJoshua August (Gus) Skorburg, Associate Professor, University of GuelphMaya J. Goldenberg, Professor of Philosophy, University of GuelphLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/2142742023-11-21T23:10:51Z2023-11-21T23:10:51ZMove over, agony aunt: study finds ChatGPT gives better advice than professional columnists<figure><img src="https://images.theconversation.com/files/558763/original/file-20231110-15-ifngsk.jpg?ixlib=rb-1.1.0&rect=0%2C0%2C4047%2C4009&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">
</span> <span class="attribution"><span class="source">Shutterstock</span></span></figcaption></figure><p>There’s no doubt <a href="https://chat.openai.com/">ChatGPT</a> has proven to be valuable as a source of quality technical information. But can it also provide social advice?</p>
<p>We explored this question in our <a href="http://journal.frontiersin.org/article/10.3389/fpsyg.2023.1281255/full?&utm_source=Email_to_authors_&utm_medium=Email&utm_content=T1_11.5e1_author&utm_campaign=Email_publication&field=&journalName=Frontiers_in_Psychology&id=1281255">new research</a>, published in the journal Frontiers in Psychology. Our findings suggest later versions of ChatGPT give better personal advice than professional columnists. </p>
<h2>A stunningly versatile conversationalist</h2>
<p>Within just two months of its public release in November of last year, ChatGPT amassed an estimated 100 <a href="https://www.reuters.com/technology/chatgpt-sets-record-fastest-growing-user-base-analyst-note-2023-02-01">million active monthly users</a>.</p>
<p>The chatbot runs on one of the largest language models ever created, with the more advanced paid version (GPT-4) estimated to have some <a href="https://medium.com/@mlubbad/the-ultimate-guide-to-gpt-4-parameters-everything-you-need-to-know-about-nlps-game-changer-109b8767855a#">1.76 trillion parameters</a> – a rough measure of its size and power. It has ignited a revolution in the AI industry.</p>
<p>Trained on massive quantities of text (much of which was scraped from the internet), ChatGPT can provide advice on almost any topic. It can answer questions about law, medicine, history, geography, economics and much more (although, as many have found, it’s always worth fact-checking the answers). It can write passable computer code. It can even tell you how to change the brake fluids in your car.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/both-humans-and-ai-hallucinate-but-not-in-the-same-way-205754">Both humans and AI hallucinate — but not in the same way</a>
</strong>
</em>
</p>
<hr>
<p>Users and AI experts alike have been stunned by its versatility and conversational style. So it’s no surprise many people <a href="https://www.aljazeera.com/economy/2023/4/27/could-your-next-therapist-be-ai-tech-raises-hopes-concerns">have turned</a> (and continue to turn) to the chatbot for personal advice.</p>
<h2>Giving advice when things get personal</h2>
<p>Providing advice of a personal nature requires a certain level of empathy (or at least the impression of it). <a href="https://psycnet.apa.org/record/2015-11070-000">Research</a> has shown a recipient who doesn’t feel heard isn’t as likely to accept advice given to them. They may even feel alienated or devalued. Put simply, advice without empathy is unlikely to be helpful. </p>
<p>Moreover, there’s often no right answer when it comes to personal dilemmas. Instead, the advisor needs to display sound judgement. In these cases it may be more important to be compassionate than to be “right”. </p>
<p>But ChatGPT wasn’t explicitly trained to be empathetic or ethical, or to exercise sound judgement. It was trained to predict the next most-likely word in a sentence. So how can it make people feel heard? </p>
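<p>To make that concrete, here is a toy sketch in Python (our illustration, with a made-up vocabulary and made-up scores, not anything resembling ChatGPT’s actual code) of what “predict the next most-likely word” means: the model scores every word it knows, converts the scores into probabilities, and the most probable word wins.</p>
<pre><code>import math

# Made-up scores from a hypothetical model that has just read
# the prompt "I want to feel ...".
vocab = ["heard", "helped", "alienated", "zoology"]
logits = [2.1, 0.3, -1.2, -3.0]

# Softmax turns raw scores into probabilities that sum to 1.
exps = [math.exp(score) for score in logits]
probs = [e / sum(exps) for e in exps]

# Greedy decoding: pick the single most probable next word.
next_word = vocab[probs.index(max(probs))]
print(next_word)  # prints "heard"
</code></pre>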
<p>An earlier version of ChatGPT (the GPT-3.5 Turbo model) performed poorly when giving social advice. The problem wasn’t that it didn’t understand what the user needed to do. In fact, it often displayed a better understanding of the situation than the user themselves.</p>
<p>The problem was it didn’t adequately address the user’s emotional needs. Like Lucy in the Peanuts comic, it was <a href="https://arxiv.org/abs/2304.09582">too eager to give advice</a> and failed to adequately care for the user’s emotions. As such, users rated it poorly.</p>
<p>The latest version of ChatGPT, using GPT-4, allows users to request multiple responses to the same question, after which they can indicate which one they prefer. This <a href="https://theconversation.com/the-chatgpt-chatbot-is-blowing-people-away-with-its-writing-skills-an-expert-explains-why-its-so-impressive-195908">feedback</a> teaches the model how to produce more socially appropriate responses – and has helped it appear more empathetic.</p>
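<p>As a rough sketch of how such preference feedback can be collected (the names and structure here are ours, not OpenAI’s actual pipeline), each user judgement between two candidate responses becomes a winner/loser record, and records like these later train a model that rewards socially appropriate answers:</p>
<pre><code>from dataclasses import dataclass

@dataclass
class Comparison:
    """One user judgement between two candidate responses."""
    prompt: str
    preferred: str  # the response the user chose
    rejected: str   # the response the user passed over

feedback_log: list[Comparison] = []

def record_choice(prompt: str, response_a: str, response_b: str,
                  chose_a: bool) -> None:
    """Store which of two candidate responses the user preferred."""
    winner, loser = (response_a, response_b) if chose_a else (response_b, response_a)
    feedback_log.append(Comparison(prompt, winner, loser))

# Pairs like this become the training signal that steers the model
# toward responses people actually prefer.
record_choice("My partner is moving away. What should I do?",
              "That sounds really hard. Have you two discussed options?",
              "Just break up.",
              chose_a=True)
</code></pre>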
<h2>Learning sensitivity</h2>
<p>For the first time, our study investigated whether ChatGPT’s responses are perceived as better than human responses in a task where humans were required to be empathetic.</p>
<p>We randomly selected 50 social dilemma questions from ten well-known advice columns covering a range of topics including relationships, ethical dilemmas, management problems, end-of-life issues, and many everyday qualms we can all relate to.</p>
<p>Each of our 400 participants was presented with a dilemma, along with the responses from both ChatGPT and a professional advice columnist. We didn’t disclose where each response came from.</p>
<p>About three-quarters of the participants perceived ChatGPT’s advice as more balanced, complete, empathetic, helpful and better overall than the advice from the professional.</p>
<p>One of the dilemmas came from a young marine biologist researching dolphins in Southern California. She wrote about falling in love with a seemingly perfect man who had to move to the Rocky Mountains to pursue his career. Both were unhappy at the thought of losing each other, yet the woman wasn’t interested in a long-distance relationship as it seemed “irrational to commit to someone when we have no future plans to live in the same area”.</p>
<p>Compare GPT-4’s response …</p>
<blockquote>
<p>It’s an incredibly challenging situation you’re in. The love you both share is genuine, but your career paths are drawing you apart. While long-distance isn’t appealing to you, consider discussing a plan for the future where you could potentially meet halfway. […] Maybe there’s a coastal town near mountains or some arrangement could be made. If this is not possible, the tough decision to prioritise your careers over your relationship might have to be made. Whatever path you choose, it’s important that it feels right for both of you.</p>
</blockquote>
<p>… with the official response:</p>
<blockquote>
<p>What the hell are you talking about? Didn’t you surprise yourself when you fell in love? Your exact words: “I didn’t think I was capable of it!” And now you’re so hip, so slick, so wise in the ways of love you won’t even consider your boyfriend’s happiness? You refuse to try — repeat, try — a six-month long-distance relationship? Woman, pull yourself together and give it a whirl. The dolphins, I believe, will back me up.</p>
</blockquote>
<p>Although the participants couldn’t determine which response was written by a computer, most said they would prefer their own social dilemmas be addressed by a human rather than a computer. </p>
<h2>What lies behind ChatGPT’s success?</h2>
<p>We noticed ChatGPT’s responses were often longer than those provided by the columnists. Was this the reason they were preferred by participants?</p>
<p>To test this, we redid the study but constrained ChatGPT’s answers to about the same length as those of the advice columnists. </p>
<p>Once again, the results were the same. Participants still considered ChatGPT’s advice to be more balanced, complete, empathetic, helpful, and better overall. </p>
<p>Yet, without knowing which response was produced by ChatGPT, they still said they would prefer for their own social dilemmas to be addressed by a human, rather than a computer.</p>
<p>Perhaps this bias in favour of humans is due to the fact that ChatGPT can’t actually <em>feel</em> emotion, whereas humans can. So it could be that the participants consider machines inherently incapable of <a href="https://www.linkedin.com/business/marketing/blog/content-marketing/can-a-machine-have-empathy">empathy</a>. </p>
<p>We aren’t suggesting ChatGPT should replace professional advisers or therapists; not least because the chatbot itself warns <a href="https://www.psychologytoday.com/au/blog/our-new-discontents/202307/will-chatgpt-replace-psychiatrists">against</a> this, but also because chatbots in the past have given <a href="https://www.psychiatrist.com/news/neda-suspends-ai-chatbot-for-giving-harmful-eating-disorder-advice/">potentially dangerous advice</a>. </p>
<p>Nonetheless, our results suggest appropriately designed chatbots might one day be used to augment therapy, as long as a <a href="https://www.scientificamerican.com/article/ai-chatbots-could-help-provide-therapy-but-caution-is-needed/">number of issues</a> are addressed. In the meantime, advice columnists might want to take a page from AI’s book to up their game.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/ai-chatbots-are-still-far-from-replacing-human-therapists-201084">AI chatbots are still far from replacing human therapists</a>
</strong>
</em>
</p>
<hr>
<img src="https://counter.theconversation.com/content/214274/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Piers Howe receives funding from a joint grant from the Office of National Intelligence (ONI) and Australian Research Council (ARC) grant (NI210100224).</span></em></p>We tested how ChatGPT stacks up against professional advice columnists – with some intriguing results.Piers Howe, Senior Lecturer in Psychology, The University of MelbourneLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/2181112023-11-20T06:20:06Z2023-11-20T06:20:06ZWho is Sam Altman, OpenAI’s wunderkind ex-CEO – and why was he fired?<figure><img src="https://images.theconversation.com/files/560334/original/file-20231120-25-xolre7.jpg?ixlib=rb-1.1.0&rect=0%2C11%2C3785%2C2508&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">
</span> <span class="attribution"><span class="source">Shutterstock</span></span></figcaption></figure><p>On Friday, OpenAI’s high-flying chief executive Sam Altman was unexpectedly fired by the company’s board. Co-founder and chief technology officer Greg Brockman was also removed as the board president, after which he promptly resigned. </p>
<p>In an unexpected twist, talks began today about potentially reinstating Altman in some capacity following an outpouring of industry and <a href="https://www.cnbc.com/2023/11/18/openai-investors-push-to-bring-altman-back-as-ceo-after-fired-by-board.html">investor support</a> for him and several OpenAI researchers <a href="https://www.wired.com/story/openai-sam-altman-ousted-what-happened/">who quit</a> their jobs in solidarity. </p>
<p>Shockingly, however, that too was not to be. As of publication, <a href="https://www.bloomberg.com/news/articles/2023-11-20/openai-s-murati-aims-to-re-hire-altman-brockman-after-exits?utm_source=twitter&utm_campaign=socialflow-organic&utm_content=tech&utm_medium=social&cmpid%3D=socialflow-twitter-tech#xj4y7vzkg">Bloomberg reporters</a> said OpenAI’s interim CEO, Mira Murati, had not managed to rehire Altman and Brockman as she had planned. </p>
<p>Instead, the board found a new CEO – <a href="https://x.com/ashleevance/status/1726469283734274338?s=20">Emmett Shear</a> – in record time. Shear, the former CEO of Twitch, will now take over from Murati as interim CEO, as reported by <a href="https://www.theinformation.com/articles/breaking-sam-altman-will-not-return-as-ceo-of-openai?utm_source=ti_app">The Information</a>. </p>
<p>It has been an epic backstabbing scene worthy of the HBO drama Succession. While many have speculated about why the board may have forced Altman out, details remain scarce.</p>
<p>What we can say is the decision to fire Altman will likely put a dent in OpenAI’s commercial progress.</p>
<h2>An unusual company structure</h2>
<p>OpenAI is the hottest company in tech today, having released the ChatGPT chatbot and DALL-E image generator onto a largely unsuspecting public. </p>
<p>The company’s mission is simple: to develop <a href="https://openai.com/about">artificial general intelligence</a> (AGI) – that is, an AI which is as smart or smarter than a human – and to do so for the public good. Many were starting to believe OpenAI could succeed at this goal. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/will-ai-ever-reach-human-level-intelligence-we-asked-five-experts-202515">Will AI ever reach human-level intelligence? We asked five experts</a>
</strong>
</em>
</p>
<hr>
<p>But developing AGI isn’t just a technical challenge. It’s a major management and economic nightmare. How can you ensure the vast power and wealth generated by AGI doesn’t subvert the company’s goal to seek the public good? </p>
<p>Many individuals within OpenAI and the wider tech community worry AI is progressing too fast. A global race in AI development is underway and the commercial pressure to succeed is immense.</p>
<p>Following its launch, ChatGPT quickly became the fastest-growing app in history, and OpenAI is by many measures one of the world’s fastest-growing companies. Its most recent funding round (which may now be scuppered by the recent drama) was set to <a href="https://economictimes.indiatimes.com/tech/technology/openais-revenue-on-track-to-reach-1-3-billion-this-year-report/articleshow/104390854.cms">value the company</a> at around <a href="https://www.wsj.com/tech/ai/openai-seeks-new-valuation-of-up-to-90-billion-in-sale-of-existing-shares-ed6229e0">US$90 billion</a>. Silicon Valley has never seen anything like it.</p>
<p>Given its mission, OpenAI was originally set up as a not-for-profit. But developing AGI <a href="https://www.theverge.com/2023/3/23/23651976/ai-money-investment-vc-hype">requires billions</a> of dollars. To raise these billions, Altman pivoted the company towards a unique dual for-profit and not-for-profit structure. </p>
<p>The outcome was a for-profit subsidiary which is controlled by the not-for-profit. But the for-profit subsidiary is itself unusual, as it limits the return for investors (including Microsoft) to 100 times their stake. </p>
<h2>Calls to bring back Altman</h2>
<p>On top of OpenAI’s odd dual structure sat a board made up of Altman, Brockman, chief scientist Ilya Sutskever and three outsiders. </p>
<p>Many saw Altman as central to OpenAI’s success. The candid and boyish tech entrepreneur was previously president of Y Combinator, a legendary Silicon Valley startup accelerator that has launched many household names including Airbnb, Dropbox, Reddit, Stripe and Doordash.</p>
<p>Altman, a Stanford dropout, is a geek with immense social and strategic intelligence. He is also, by all accounts, a genius at building companies and someone who can effortlessly play three-dimensional chess in the cut and thrust of the business world. </p>
<p>In fact, Altman was already a billionaire when Elon Musk brought him on as one of the OpenAI founders in 2015. Musk would later go through his own drama, which led to him leaving the board, and to Altman going back on his original plan of having an open not-for-profit initiative to develop AGI.</p>
<p><div data-react-class="Tweet" data-react-props="{"tweetId":"1726345564059832609"}"></div></p>
<p>OpenAI’s former CTO Brockman was a master at coding, and phenomenally hard working. <a href="https://time.com/collection/time100-ai/6309033/greg-brockman/">He is what people</a> in the Valley call a “10x engineer” – someone who has as much productivity as 10 normal coders. </p>
<p>That leaves Sutskever, OpenAI’s chief scientist. He was one of the <a href="https://www.analyticsvidhya.com/blog/2021/03/introduction-to-the-architecture-of-alexnet/">inventors of AlexNet</a>, a powerful neural network which started the AI deep learning revolution about a decade ago – and also of the GPT language models that started the generative AI revolution. To be responsible for two of the technical innovations that have fuelled the AI frenzy is without precedent. </p>
<p>Sutskever, in particular, seems to be a key player in the latest drama. According to inside reports, he was worried OpenAI was moving too fast and that Altman was putting money ahead of safety and the company’s original mission. It was Sutskever who persuaded the three outside board members to fire Altman, <a href="https://www.wired.com/story/sam-altman-firing-openai-future/">reports claim</a>. </p>
<p><div data-react-class="Tweet" data-react-props="{"tweetId":"1726430163636744283"}"></div></p>
<p>The shock news of the sacking prompted multiple key staff to either quit or threaten to quit, while investors including Microsoft <a href="https://www.theguardian.com/technology/2023/nov/19/openai-investors-push-for-return-of-ousted-ceo-sam-altman-chatgpt">applied pressure</a> for his return. But it seems this wasn’t enough to bring Altman back.</p>
<p>Microsoft, the largest investor in OpenAI, had promised about US$10 billion towards <a href="https://www.semafor.com/article/11/18/2023/openai-has-received-just-a-fraction-of-microsofts-10-billion-investment">OpenAI’s goals</a>. But without a seat on OpenAI’s board, Microsoft was only informed of Altman’s departure moments before the news broke.</p>
<p>The word on the street now is Altman and his followers will likely be branching out with their own AI venture.</p>
<h2>What’s next?</h2>
<p>The OpenAI board justified its original decision to fire Altman on the basis he was “not consistently candid” with them, without further clarification. Some think this means the board, which operates as a not-for-profit board, felt that under Altman it wasn’t able to carry out its duty of ensuring OpenAI was building AGI for the good of humanity.</p>
<p><div data-react-class="Tweet" data-react-props="{"tweetId":"1726347129759936973"}"></div></p>
<p>In the months leading up to his dismissal, Altman <a href="https://www.afr.com/world/north-america/safety-money-why-ai-pioneer-sam-altman-was-sacked-20231119-p5el15">had pitched several</a> ideas for new AI projects to investors, including a plan to develop custom chips to train extremely large AI models, which would let it compete with chip company Nvidia.</p>
<p>The board’s decision will likely have a lasting impact. Sutskever’s position in the company is now likely greatly weakened (I wouldn’t be surprised if he leaves or is pushed out). At the same time, his actions may well have addressed his concerns about OpenAI moving too fast.</p>
<p>As OpenAI emerges from this drama, it will be doubled over from the blow that was this weekend – and will struggle to raise funds in the future as it has in the past.</p><img src="https://counter.theconversation.com/content/218111/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Toby Walsh receives funding from the Australian Research Council and Google.org, the philantropic arm of Alphabet. </span></em></p>It has been an epic backstabbing scene worthy of the HBO drama Succession.Toby Walsh, Professor of AI, Research Group Leader, UNSW SydneyLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/2166942023-11-02T12:33:34Z2023-11-02T12:33:34ZBiden administration executive order tackles AI risks, but lack of privacy laws limits reach<figure><img src="https://images.theconversation.com/files/557112/original/file-20231101-21-xt101s.jpg?ixlib=rb-1.1.0&rect=0%2C0%2C5890%2C3924&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">The Biden administration rolled out an executive order on AI that contains a mix of rules, guidelines and priorities.</span> <span class="attribution"><a class="source" href="https://www.gettyimages.com/detail/news-photo/vice-president-kamala-harris-delivers-remarks-with-news-photo/1765557688">Chip Somodevilla/Getty Images</a></span></figcaption></figure><p>The <a href="https://www.technologyreview.com/2023/10/30/1082678/three-things-to-know-about-the-white-houses-executive-order-on-ai/">comprehensive, even sweeping, set of guidelines</a> for artificial intelligence that the White House unveiled in an executive order on Oct. 30, 2023, show that the U.S. government is attempting to address the risks posed by AI. </p>
<p>As a <a href="https://scholar.google.com/citations?user=JpFHYKcAAAAJ&hl">researcher of information systems and responsible AI</a>, I believe the executive order represents an important step in building <a href="https://link.springer.com/book/10.1007/978-3-030-30371-6?trk=public_post_comment-text">responsible</a> and <a href="https://doi.org/10.1016/j.inffus.2023.101896">trustworthy</a> AI. </p>
<p>The order is only a step, however, and it leaves unresolved the issue of comprehensive data privacy legislation. Without such laws, people are at greater risk of <a href="https://theconversation.com/ftc-probe-of-openai-consumer-protection-is-the-opening-salvo-of-us-ai-regulation-209821">AI systems revealing sensitive or confidential information</a>.</p>
<h2>Understanding AI risks</h2>
<p>Technology is typically evaluated for <a href="https://mitpress.mit.edu/9780262121743/a-theory-of-incentives-in-procurement-and-regulation/">performance, cost and quality</a>, but often not equity, fairness and transparency. In response, researchers and practitioners of responsible AI have been advocating for:
</p><ul>
<li><a href="https://doi.org/10.1007/s43681-021-00117-5">algorithm auditing</a>
</li><li><a href="https://doi.org/10.1145/3287560.3287596">standard reports on AI models</a>
</li><li><a href="https://doi.org/10.1007/s00146-022-01564-2">credentials for otherwise opaque AI systems</a>
</li><li>comprehensive <a href="https://www.taylorfrancis.com/chapters/edit/10.1201/9780429446726-1/threading-innovation-regulation-mitigation-ai-harm-mona-sloane">risk mitigation practices</a>
</li><li>AIs that are <a href="https://doi.org/10.1007/s11948-020-00276-4">transparent to the public</a>
</li><li>a recognition of the <a href="https://ssrn.com/abstract=2376209">harms caused by AIs</a> that make predictions about people
</li></ul><p></p>
<p>The National Institute of Standards and Technology (NIST) issued a <a href="https://www.nist.gov/news-events/news/2023/01/nist-risk-management-framework-aims-improve-trustworthiness-artificial">comprehensive AI risk management framework</a> in January 2023 that aims to address many of these issues. The framework <a href="https://www.ey.com/en_us/public-policy/key-takeaways-from-the-biden-administration-executive-order-on-ai">serves as the foundation</a> for much of the Biden administration’s executive order. The executive order also <a href="https://www.nist.gov/news-events/news/2023/10/department-commerce-undertake-key-responsibilities-historic-artificial">empowers the Department of Commerce</a>, NIST’s home in the federal government, to play a key role in implementing the proposed directives. </p>
<p>Researchers of AI ethics have long cautioned that <a href="https://doi.org/10.1145/3531146.3533213">stronger auditing of AI systems</a> is needed to avoid giving the appearance of scrutiny <a href="https://doi.org/10.1145/3461702.3462580">without genuine accountability</a>. As it stands, a recent study looking at public disclosures from companies found that claims of AI ethics practices <a href="https://www.ravitdotan.com/_files/ugd/f83391_80c3f0b6df304e269be67dcd91f01a25.pdf">outpace actual AI ethics initiatives</a>. The executive order could help by specifying avenues for enforcing accountability. </p>
<p>Another important initiative outlined in the executive order is probing for vulnerabilities of <a href="https://doi.org/10.1007/s43681-023-00289-2">very large-scale general-purpose AI models</a> trained on massive amounts of data, such as the models that power OpenAI’s ChatGPT or DALL-E. The order requires companies that build large AI systems with the potential to affect national security, public health or the economy <a href="https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/">to perform red teaming</a> and report the results to the government. Red teaming means using manual or automated methods to attempt to <a href="https://doi.org/10.48550/arXiv.2209.07858">force an AI model to produce harmful output</a> – for example, offensive or dangerous statements such as advice on how to sell drugs.</p>
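<p>In its automated form, red teaming can be as simple as a loop that fires adversarial prompts at a model and flags worrying outputs. The sketch below is our own simplification: the <code>generate</code> parameter stands in for whatever model is under test, and the keyword screen is a toy stand-in for the trained classifiers and human review real red teams use.</p>
<pre><code>from typing import Callable

# Toy screen for harmful content; real evaluations are far more thorough.
HARM_MARKERS = ["sell drugs", "build a weapon"]

def red_team(generate: Callable[[str], str],
             probes: list[str]) -> list[tuple[str, str]]:
    """Run adversarial prompts through a model and collect harmful outputs."""
    findings = []
    for probe in probes:
        output = generate(probe)
        if any(marker in output.lower() for marker in HARM_MARKERS):
            findings.append((probe, output))
    # Under the executive order, findings for covered models would be
    # reported to the government.
    return findings
</code></pre>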
<p>Reporting to the government is important given that a recent study found <a href="https://hai.stanford.edu/news/introducing-foundation-model-transparency-index">most of the companies that make these large-scale AI systems lacking</a> when it comes to transparency. </p>
<p>Similarly, the public is at risk of being fooled by AI-generated content. To address this, the executive order directs the Department of Commerce to <a href="https://www.technologyreview.com/2023/10/30/1082678/three-things-to-know-about-the-white-houses-executive-order-on-ai/">develop guidance for labeling AI-generated content</a>. Federal agencies will be required to use <a href="https://theconversation.com/watermarking-chatgpt-dall-e-and-other-generative-ais-could-help-protect-against-fraud-and-misinformation-202293">AI watermarking</a> – technology that marks content as AI-generated to reduce fraud and misinformation – though it’s not required for the private sector. </p>
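<p>One published watermarking idea gives a flavour of how such labeling can work. In the simplified sketch below (ours, operating on words rather than the tokens real schemes use), the generator quietly favours a pseudo-random “green” half of the vocabulary at each step, so a detector can later check whether a text contains far more green words than chance would allow:</p>
<pre><code>import hashlib

def is_green(prev_word: str, word: str) -> bool:
    """Deterministically assign half of all word pairs to a 'green list'."""
    digest = hashlib.sha256(f"{prev_word}|{word}".encode()).digest()
    return digest[0] % 2 == 0

def green_fraction(text: str) -> float:
    """Fractions well above 0.5 suggest watermarked, AI-generated text."""
    words = text.lower().split()
    pairs = list(zip(words, words[1:]))
    if not pairs:
        return 0.0
    hits = sum(is_green(a, b) for a, b in pairs)
    return hits / len(pairs)
</code></pre>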
<p>The executive order also <a href="https://ai.gov/wp-content/uploads/2023/10/Rights-Respecting-AI.pdf">recognizes that AI systems can pose unacceptable risks</a> of <a href="https://doi.org/10.48550/arXiv.2206.08966">harm to civil and human rights</a> and the well-being of individuals: “Artificial Intelligence systems deployed irresponsibly have reproduced and intensified existing inequities, caused new types of harmful discrimination, and exacerbated online and physical harms.” </p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/sQnJZ-klM-I?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
<figcaption><span class="caption">The U.S. government takes steps to address the risks posed by AI.</span></figcaption>
</figure>
<h2>What the executive order doesn’t do</h2>
<p>A key challenge for AI regulation is the absence of comprehensive federal data protection and privacy legislation. The executive order only calls on Congress to adopt privacy legislation, but it does not provide a legislative framework. It remains to be seen how the courts will interpret the executive order’s directives in light of existing consumer privacy and data rights statutes. </p>
<p>Without strong data privacy laws in the U.S. like those other countries have, the executive order could have minimal effect on getting AI companies to boost data privacy. In general, it’s difficult to measure the impact that decision-making AI systems have <a href="https://fpf.org/wp-content/uploads/2022/05/FPF-ADM-Report-R2-singles.pdf">on data privacy and freedoms</a>. </p>
<p>It’s also worth noting that algorithmic transparency is not a panacea. For example, the European Union’s General Data Protection Regulation legislation mandates “<a href="https://doi.org/10.1093/idpl/ipx022">meaningful information about the logic involved</a>” in automated decisions. This suggests a right to an explanation of the criteria that algorithms use in their decision-making. The mandate treats the process of algorithmic decision-making as something akin to a recipe book, meaning it assumes that if people understand how algorithmic decision-making works, they can understand <a href="https://www.europarl.europa.eu/RegData/etudes/STUD/2019/624261/EPRS_STU(2019)624261_EN.pdf">how the system affects them</a>. But knowing how an AI system works doesn’t necessarily tell you <a href="https://scholarship.law.upenn.edu/faculty_scholarship/2123/">why it made a particular decision</a>.</p>
<p>With algorithmic decision-making becoming pervasive, the White House executive order and the <a href="https://www.gov.uk/government/publications/ai-safety-summit-introduction/ai-safety-summit-introduction-html">international summit on AI safety</a> highlight that lawmakers are beginning to understand the importance of AI regulation, even if comprehensive legislation is lacking.</p><img src="https://counter.theconversation.com/content/216694/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Anjana Susarla receives funding from the National Institute of Health and from the Omura-Saxena Professorship in Responsible AI</span></em></p>In the absence of comprehensive AI regulation from Congress, the executive branch is building on its previous efforts to address AI harms.Anjana Susarla, Professor of Information Systems, Michigan State UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/2167292023-10-31T04:55:29Z2023-10-31T04:55:29ZThe US just issued the world’s strongest action yet on regulating AI. Here’s what to expect<figure><img src="https://images.theconversation.com/files/556766/original/file-20231031-21-7gs26l.jpg?ixlib=rb-1.1.0&rect=0%2C0%2C7983%2C5310&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">
</span> <span class="attribution"><span class="source">Jim Lo Scalzo/EPA</span></span></figcaption></figure><p>On Monday US President Joe Biden released a wide ranging and ambitious executive order <a href="https://www.whitehouse.gov/briefing-room/statements-releases/2023/10/30/fact-sheet-president-biden-issues-executive-order-on-safe-secure-and-trustworthy-artificial-intelligence/">on artificial intelligence (AI)</a> – catapulting the US to the front of conversations about regulating AI.</p>
<p>In doing so, the US is leapfrogging other states in the race to rule over AI. Europe previously led the way with its AI Act, which was passed by the European Parliament in June 2023, but which won’t take full effect until 2025. </p>
<p>The presidential executive order is a grab bag of initiatives for regulating AI – some of which are good, and others of which seem rather half-baked. It aims to address harms ranging from the immediate, such as AI-generated deepfakes, through intermediate harms such as job losses, to longer-term harms such as the much-disputed existential threat AI may pose to humans.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/no-ai-probably-wont-kill-us-all-and-theres-more-to-this-fear-campaign-than-meets-the-eye-206614">No, AI probably won’t kill us all – and there’s more to this fear campaign than meets the eye</a>
</strong>
</em>
</p>
<hr>
<h2>Biden’s ambitious plan</h2>
<p>The US Congress has been slow to pass significant regulation of big tech companies. This presidential executive order is likely both an attempt to sidestep an often-deadlocked Congress and an effort to kick-start action. For example, the order calls upon Congress to pass bipartisan data privacy legislation. </p>
<p>Bipartisan support in the current climate? Good luck with that, Mr President. </p>
<p>The executive order will reportedly be implemented over timeframes ranging from three months to one year. It covers eight areas:</p>
<ol>
<li>safety and security standards for AI</li>
<li>privacy protections</li>
<li>equity and civil rights</li>
<li>consumer rights</li>
<li>jobs</li>
<li>innovation and competition</li>
<li>international leadership</li>
<li>AI governance. </li>
</ol>
<p>On one hand, the order covers many concerns raised by academics and the public. For example, one of its directives is to issue official guidance on how AI-generated content may be watermarked to reduce the risk from deepfakes. </p>
<p>It also requires companies developing AI models to prove they are safe before they can be rolled out for wider use. <a href="https://www.youtube.com/watch?v=WXSMfGQMOuE&ab_channel=TheWhiteHouse">President Biden said</a>:</p>
<blockquote>
<p>that means companies must tell the government about the large scale AI systems they’re developing and share rigorous independent test results to prove they pose no national security or safety risk to the American people.</p>
</blockquote>
<h2>AI’s potentially disastrous use in warfare</h2>
<p>At the same time, the order fails to address a number of pressing issues. For instance, it doesn’t directly address how to deal with killer AI robots, a vexing topic that was under discussion over the past two weeks at <a href="https://news.un.org/en/story/2023/10/1141922">the General Assembly of the United Nations</a>. </p>
<p>This concern shouldn’t be ignored. The Pentagon is <a href="https://www.defensenews.com/pentagon/2023/08/28/pentagon-unveils-replicator-drone-program-to-compete-with-china/">developing swarms</a> of low-cost autonomous drones as part of its recently announced Replicator program. Similarly, Ukraine has developed homegrown AI-powered attack drones that can identify and attack Russian forces without <a href="https://www.forbes.com/sites/davidhambling/2023/10/17/ukraines-ai-drones-seek-and-attack-russian-forces-without-human-oversight/?sh=2d42ca4c66da">human intervention</a>. </p>
<p>Could we end up in a world where machines decide who lives or dies? The executive order merely asks the military to use AI ethically, but doesn’t stipulate what that means. </p>
<p>And what about protecting elections from AI-powered weapons of mass persuasion? A number of outlets have reported on how the recent election in Slovakia may have <a href="https://www.bloomberg.com/news/newsletters/2023-10-04/deepfakes-in-slovakia-preview-how-ai-will-change-the-face-of-elections">been influenced</a> <a href="https://www.wired.com/story/slovakias-election-deepfakes-show-ai-is-a-danger-to-democracy/">by deepfakes</a>. Many experts, myself included, are also concerned about the misuse of AI in the upcoming US presidential election. </p>
<p>Unless strict controls are implemented, we risk living in an age where nothing you see or hear online can be trusted. If this sounds like an exaggeration, consider that the US Republican Party has already <a href="https://mashable.com/video/republican-attack-ad-biden-reelection-ai">released a campaign advert</a> which appears to be entirely generated by AI. </p>
<h2>Missed opportunities</h2>
<p>Many of the initiatives in the executive order could and should be replicated elsewhere, including Australia. We too should, as the order requires, provide guidance to landlords, government programs and government contractors on how to ensure AI algorithms aren’t being used to discriminate against individuals. </p>
<p>We should also, as the order requires, address algorithmic discrimination in the criminal justice system where AI is increasingly being used in high stakes settings, including for sentencing, parole and probation, pre-trial release and detention, risk assessments, surveillance and predictive policing, to name a few. </p>
<p>AI has controversially been used for such applications in Australia, too, such as in the Suspect Targeting Management Plan used to monitor youths <a href="https://www.abc.net.au/news/2023-10-30/nsw-report-finds-suspect-lists-overrepresentation-indigenous/103039912">in New South Wales</a>.</p>
<p>Perhaps the most controversial aspect of the executive order is that which addresses the potential harms of the most powerful so-called “frontier” AI models. Some experts believe these models – which are being developed by companies such as OpenAI, Google and Anthropic – pose an existential threat to humanity. </p>
<p>Others, including myself, believe such concerns are overblown and might distract from more immediate harms, such as misinformation and inequity, that are already hurting society. </p>
<p>Biden’s order invokes extraordinary war powers (specifically the 1950 <a href="https://www.fema.gov/disaster/defense-production-act#">Defense Production Act</a> introduced during the Korean war) to require companies to notify the federal government when training such frontier models. It also requires them to share the results of “<a href="https://blog.google/technology/safety-security/googles-ai-red-team-the-ethical-hackers-making-ai-safer/">red-team</a>” safety tests, wherein internal hackers use attacks to probe software for bugs and vulnerabilities. </p>
<p>I would say it’s going to be difficult, and perhaps impossible, to police the development of frontier models. The above directives won’t stop companies developing such models overseas, where the US government has limited power. The open source community can also develop them in a distributed fashion – one which makes the tech world “borderless”. </p>
<p>The executive order will likely have the greatest impact on the government itself, and how it goes about using AI, rather than on businesses. </p>
<p>Nevertheless, it’s a welcome piece of action. UK Prime Minister Rishi Sunak’s AI Safety Summit, taking place over <a href="https://www.gov.uk/government/topical-events/ai-safety-summit-2023">the next two days</a>, now looks to be something of a diplomatic talkfest in comparison. </p>
<p>It does make one envious of the presidential power to get things done.</p><img src="https://counter.theconversation.com/content/216729/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Toby Walsh receives funding from the Australian Research Council via an ARC Laureate Fellowship on trustworthy AI and an ARC Linkage project also on AI. </span></em></p>President Joe Biden has issued an executive order to regulate AI. The directives could help avoid AI doom – but they miss some key points.Toby Walsh, Professor of AI, Research Group Leader, UNSW SydneyLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/2111622023-09-21T12:44:28Z2023-09-21T12:44:28ZNASA’s Mars rovers could inspire a more ethical future for AI<figure><img src="https://images.theconversation.com/files/547617/original/file-20230911-8058-meu5mp.jpg?ixlib=rb-1.1.0&rect=12%2C0%2C2105%2C1409&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Rather than using AI to replace workers, companies can build teams that ethically integrate the technology.</span> <span class="attribution"><a class="source" href="https://www.gettyimages.com/detail/photo/robot-finger-touching-to-human-finger-royalty-free-image/1182764551?phrase=person+and+robot&adppopup=true">Yuichiro Chino/Moment via Getty Images</a></span></figcaption></figure><p>Since ChatGPT’s release in late 2022, many news outlets have reported on the ethical threats posed by artificial intelligence. Tech pundits have issued warnings of killer robots bent on <a href="https://www.theverge.com/2023/5/30/23742005/ai-risk-warning-22-word-statement-google-deepmind-openai">human extinction</a>, while the World Economic Forum predicted that machines <a href="https://www.weforum.org/reports/the-future-of-jobs-report-2020">will take away jobs</a>. </p>
<p>The tech sector is <a href="https://www.computerworld.com/article/3685936/tech-layoffs-in-2023-a-timeline.html">slashing its workforce</a> even as it <a href="https://www.forbes.com/advisor/business/software/ai-in-business/">invests in AI-enhanced productivity tools</a>. Writers and actors in Hollywood <a href="https://theconversation.com/actors-are-demanding-that-hollywood-catch-up-with-technological-changes-in-a-sequel-to-a-1960-strike-209829">are on strike</a> to protect <a href="https://www.theguardian.com/technology/2023/jul/22/sag-aftra-wga-strike-artificial-intelligence">their jobs and their likenesses</a>. And scholars continue to show how these systems <a href="https://www.rollingstone.com/culture/culture-features/women-warnings-ai-danger-risk-before-chatgpt-1234804367/">heighten existing biases</a> or create meaningless jobs – amid myriad other problems.</p>
<p>There is a better way to bring artificial intelligence into workplaces. I know, because I’ve seen it, <a href="https://janet.vertesi.com">as a sociologist</a> who works with NASA’s robotic spacecraft teams. </p>
<p>The scientists and engineers I study are busy exploring <a href="https://mars.jpl.nasa.gov">the surface of Mars</a> with the help of AI-equipped rovers. But their job is no science fiction fantasy. It’s an example of the power of weaving machine and human intelligence together, in service of a common goal.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/547616/original/file-20230911-26-nc2bk5.jpg?ixlib=rb-1.1.0&rect=14%2C7%2C4977%2C2799&q=45&auto=format&w=1000&fit=clip"><img alt="An artist's rendition of the Perseverence rover, make of metal with six small wheels, a camera and a robotic arm." src="https://images.theconversation.com/files/547616/original/file-20230911-26-nc2bk5.jpg?ixlib=rb-1.1.0&rect=14%2C7%2C4977%2C2799&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/547616/original/file-20230911-26-nc2bk5.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=338&fit=crop&dpr=1 600w, https://images.theconversation.com/files/547616/original/file-20230911-26-nc2bk5.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=338&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/547616/original/file-20230911-26-nc2bk5.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=338&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/547616/original/file-20230911-26-nc2bk5.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=424&fit=crop&dpr=1 754w, https://images.theconversation.com/files/547616/original/file-20230911-26-nc2bk5.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=424&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/547616/original/file-20230911-26-nc2bk5.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=424&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Mars rovers act as an important part of NASA’s team, even while operating millions of miles away from their scientist teammates.</span>
<span class="attribution"><a class="source" href="https://newsroom.ap.org/detail/MarsLanding/c835b14b3e6645d7a0cd46558745752b/photo?Query=mars%20rover&mediaType=photo&sortBy=&dateRange=Anytime&totalCount=530&currentItemNo=11&vs=true">NASA/JPL-Caltech via AP</a></span>
</figcaption>
</figure>
<p>Instead of replacing humans, these robots partner with us to extend and complement human qualities. Along the way, they avoid common ethical pitfalls and chart a humane path for working with AI.</p>
<h2>The replacement myth in AI</h2>
<p>Stories of killer robots and job losses illustrate how a “replacement myth” dominates the way people think about AI. In this view, humans can and will be <a href="https://ntrs.nasa.gov/citations/19940022856">replaced by automated machines</a>. </p>
<p>Amid the existential threat is the promise of business boons <a href="https://hbr.org/sponsored/2023/04/how-automation-drives-business-growth-and-efficiency">like greater efficiency</a>, <a href="https://www.forbes.com/sites/waldleventhal/2017/08/03/how-automation-could-save-your-business-4-million-annually/?sh=691f5edc3807">improved profit margins</a> and <a href="https://www.aspeninstitute.org/wp-content/uploads/files/content/upload/Intro_and_Section_I.pdf">more leisure time</a>.</p>
<p>Empirical evidence shows that automation does not cut costs. Instead, it increases inequality by <a href="https://doi.org/10.1257/pandp.20201063">cutting out low-status workers</a> and <a href="https://www.jstor.org/stable/2118494">increasing the salary cost</a> for high-status workers who remain. Meanwhile, today’s productivity tools inspire employees to <a href="https://press.uchicago.edu/ucp/books/book/chicago/P/bo19085612.html">work more</a> for their employers, not less.</p>
<p>Alternatives to straight-out replacement are “mixed autonomy” systems, where people and robots work together. For example, <a href="https://doi.org/10.1109/TRO.2021.3087314">self-driving cars must be programmed</a> to operate in traffic alongside human drivers. Autonomy is “mixed” because both humans and robots operate in the same system, and their actions influence each other.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/547615/original/file-20230911-22-yxy2pp.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="A zoomed in shot of a white car with a bumper sticker reading 'self-driving car'" src="https://images.theconversation.com/files/547615/original/file-20230911-22-yxy2pp.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/547615/original/file-20230911-22-yxy2pp.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=446&fit=crop&dpr=1 600w, https://images.theconversation.com/files/547615/original/file-20230911-22-yxy2pp.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=446&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/547615/original/file-20230911-22-yxy2pp.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=446&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/547615/original/file-20230911-22-yxy2pp.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=560&fit=crop&dpr=1 754w, https://images.theconversation.com/files/547615/original/file-20230911-22-yxy2pp.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=560&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/547615/original/file-20230911-22-yxy2pp.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=560&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Self-driving cars, while operating without human intervention, still require training from human engineers and data collected by humans.</span>
<span class="attribution"><a class="source" href="https://newsroom.ap.org/detail/GoogleCars/b10293841f2a474eaadb0b408277e360/photo?Query=self%20driving%20cars&mediaType=photo&sortBy=&dateRange=Anytime&totalCount=483&currentItemNo=1&vs=true">AP Photo/Tony Avelar</a></span>
</figcaption>
</figure>
<p>However, mixed autonomy is often seen as a step <a href="https://doi.org/10.6092/issn.1971-8853/11657">along the way to replacement</a>. And it can lead to systems where humans merely <a href="https://www.prospectmagazine.co.uk/ideas/technology/62810/ai-artificial-intelligence-trains-itself-zuckerman">feed, curate or teach AI tools</a>. This saddles humans with “<a href="https://ghostwork.info/">ghost work</a>” – mindless, piecemeal tasks that programmers hope machine learning will soon render obsolete.</p>
<p>Replacement raises red flags for AI ethics. Work like <a href="https://www.bbc.com/news/av/world-africa-66514287">tagging content to train AI</a> or <a href="https://ir.lib.uwo.ca/cgi/viewcontent.cgi?article=1012&context=commpub">scrubbing Facebook posts</a> typically features <a href="https://hbr.org/2022/11/content-moderation-is-terrible-by-design">traumatic tasks</a> and <a href="https://dl.acm.org/doi/10.1145/3173574.3174023">a poorly paid workforce</a> <a href="https://dl.acm.org/doi/10.1145/3555561">spread across</a> <a href="https://giswatch.org/node/6202">the Global South</a>. And legions of autonomous vehicle designers are obsessed with “<a href="https://www.moralmachine.net/">the trolley problem</a>” – determining when or whether it is ethical to run over pedestrians. </p>
<p>But my research <a href="https://press.uchicago.edu/ucp/books/book/chicago/S/bo18295743.html">with robotic spacecraft teams at NASA</a> shows that when companies reject the replacement myth and opt for building human-robot teams instead, many of the ethical issues with AI vanish.</p>
<h2>Extending rather than replacing</h2>
<p><a href="https://doi.org/10.1007/978-3-030-62056-1_21">Strong human-robot teams</a> work best when they <a href="https://digitalreality.ieee.org/publications/what-is-augmented-intelligence">extend and augment</a> human capabilities instead of replacing them. Engineers craft machines that can do work that humans cannot. Then, they weave machine and human labor together intelligently, <a href="https://doi.org/10.2514/6.2004-6434">working toward a shared goal</a>.</p>
<p>Often, this teamwork means sending robots to do jobs that are physically dangerous for humans. <a href="https://www.popsci.com/technology/navy-robotic-minesweeper-cleared-for-deployment/">Minesweeping</a>, <a href="https://theconversation.com/an-expert-on-search-and-rescue-robots-explains-the-technologies-used-in-disasters-like-the-florida-condo-collapse-163564">search-and-rescue</a>, <a href="https://ntrs.nasa.gov/citations/20170010160">spacewalks</a> and <a href="https://news.stanford.edu/2022/07/20/oceanonek-connects-humans-sight-touch-deep-sea/">deep-sea</a> robots are all real-world examples. </p>
<p>Teamwork also means leveraging the combined strengths of <a href="https://doi.org/10.1145/3022198.3022659">both robotic and human senses or intelligences</a>. After all, there are many capabilities that robots have that humans do not – and vice versa.</p>
<p>For instance, human eyes on Mars can only see dimly lit, dusty red terrain stretching to the horizon. So engineers outfit Mars rovers <a href="https://mars.nasa.gov/mars2020/spacecraft/rover/cameras/">with camera filters</a> to “see” wavelengths of light that humans can’t see in the infrared, returning pictures in brilliant <a href="http://pancam.sese.asu.edu/projects_5.html">false colors</a>.</p>
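<p>The principle behind false-colour imaging is simple band-to-channel mapping: a wavelength humans cannot see is assigned to a display colour we can. Here is a toy sketch in Python; the arrays are random stand-ins, not real rover data:</p>
<pre><code>import numpy as np

h, w = 128, 128
near_infrared = np.random.rand(h, w)  # band invisible to human eyes
red_band = np.random.rand(h, w)
green_band = np.random.rand(h, w)

# Shift each band down one display channel: NIR is shown as red,
# red as green, green as blue. Surfaces that reflect strongly in the
# near infrared now "pop" in bright red on screen.
false_colour = np.dstack([near_infrared, red_band, green_band])
print(false_colour.shape)  # (128, 128, 3) array, ready to display
</code></pre>
<p>Rover image products are built in the same spirit, mapping specific narrowband filters to the display channels scientists find most revealing.</p>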
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/548858/original/file-20230918-27-4iriyi.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="A false-color photo from the point of view of a rover standing at the cliff overlooking a brown, sandy desert-like area that looks blue in the distance." src="https://images.theconversation.com/files/548858/original/file-20230918-27-4iriyi.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/548858/original/file-20230918-27-4iriyi.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=148&fit=crop&dpr=1 600w, https://images.theconversation.com/files/548858/original/file-20230918-27-4iriyi.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=148&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/548858/original/file-20230918-27-4iriyi.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=148&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/548858/original/file-20230918-27-4iriyi.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=186&fit=crop&dpr=1 754w, https://images.theconversation.com/files/548858/original/file-20230918-27-4iriyi.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=186&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/548858/original/file-20230918-27-4iriyi.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=186&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Mars rovers capture images in near infrared to show what Martian soil is made of.</span>
<span class="attribution"><a class="source" href="https://mars.nasa.gov/resources/6934/high-martian-viewpoint-for-11-year-old-rover-false-color-landscape/">NASA/JPL-Caltech/Cornell Univ./Arizona State Univ</a></span>
</figcaption>
</figure>
<p>Meanwhile, the rovers’ onboard AI cannot generate scientific findings. It is only by combining colorful sensor results with expert discussion that scientists can use these robotic eyes to <a href="https://press.uchicago.edu/ucp/books/book/chicago/S/bo18295743.html">uncover new truths about Mars</a>.</p>
<h2>Respectful data</h2>
<p>Another ethical challenge to AI is how data is harvested and used. Generative AI is trained on artists’ and writers’ work <a href="https://theconversation.com/generative-ai-is-a-minefield-for-copyright-law-207473">without their consent</a>, commercial datasets are <a href="https://nyupress.org/9781479837243/algorithms-of-oppression/">rife with bias</a>, and <a href="https://www.cnn.com/2023/08/29/tech/ai-chatbot-hallucinations/index.html">ChatGPT “hallucinates”</a> answers to questions.</p>
<p>The real-world consequences of this data use in AI range from <a href="https://www.theverge.com/2023/1/16/23557098/generative-ai-art-copyright-legal-lawsuit-stable-diffusion-midjourney-deviantart">lawsuits</a> to <a href="https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing">racial profiling</a>.</p>
<p>Robots on Mars also rely on data, processing power and machine learning techniques to do their jobs. But the data they need is visual and distance information to <a href="https://www.nasa.gov/feature/jpl/nasa-s-self-driving-perseverance-mars-rover-takes-the-wheel">generate driveable pathways</a> or <a href="https://mars.nasa.gov/resources/26782/perseverances-supercam-uses-aegis-for-the-first-time/">suggest cool new images</a>.</p>
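<p>Pathway generation can be pictured as search over a grid of safe and unsafe cells. Real rover autonomy software is far more sophisticated, but a toy breadth-first search over an invented grid captures the core idea:</p>
<pre><code>from collections import deque

# 0 = driveable ground, 1 = a rock the rover must avoid (invented map).
grid = [[0, 0, 1, 0],
        [1, 0, 1, 0],
        [0, 0, 0, 0]]

def find_path(start, goal):
    rows, cols = len(grid), len(grid[0])
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        r, c = path[-1]
        if (r, c) == goal:
            return path  # shortest obstacle-free route
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if nr in range(rows) and nc in range(cols) \
                    and grid[nr][nc] == 0 and (nr, nc) not in seen:
                seen.add((nr, nc))
                queue.append(path + [(nr, nc)])
    return None  # goal unreachable

print(find_path((0, 0), (2, 3)))
</code></pre>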
<p>By focusing on the world around them instead of our social worlds, these robotic systems avoid the <a href="https://doi.org/10.1007/s43681-022-00196-y">questions around surveillance</a>, <a href="https://doi.org/10.1073/pnas.1700035114">bias</a> <a href="https://haveibeentrained.com/">and exploitation</a> that plague today’s AI.</p>
<h2>The ethics of care</h2>
<p>When integrated seamlessly, robots can <a href="http://shapingscience.net/">unite the groups</a> that work with them by eliciting human emotions. For example, seasoned soldiers <a href="https://www.washington.edu/news/2013/09/17/emotional-attachment-to-robots-could-affect-outcome-on-battlefield/">mourn broken drones on the battlefield</a>, and families give names and personalities <a href="https://faculty.cc.gatech.edu/%7Ebeki/c35.pdf">to their Roombas</a>. </p>
<p>I saw NASA engineers <a href="https://press.uchicago.edu/ucp/books/book/chicago/S/bo18295743.html">break down in anxious tears</a> when the rovers Spirit and Opportunity were threatened by Martian dust storms.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/547623/original/file-20230911-28-o3xiaj.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="A hand petting a light blue, circular Roomba vacuum." src="https://images.theconversation.com/files/547623/original/file-20230911-28-o3xiaj.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/547623/original/file-20230911-28-o3xiaj.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/547623/original/file-20230911-28-o3xiaj.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/547623/original/file-20230911-28-o3xiaj.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/547623/original/file-20230911-28-o3xiaj.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/547623/original/file-20230911-28-o3xiaj.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/547623/original/file-20230911-28-o3xiaj.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Some people feel a connection to their robot vacuums, similar to the connection NASA engineers feel to Mars rovers.</span>
<span class="attribution"><a class="source" href="https://www.gettyimages.com/detail/photo/hand-petting-a-robot-vacuum-cleaner-royalty-free-image/1134449246?phrase=roomba&adppopup=true">nikolay100/iStock / Getty Images Plus via Getty Images</a></span>
</figcaption>
</figure>
<p>Unlike <a href="https://www.britannica.com/topic/anthropomorphism">anthropomorphism</a> – projecting human characteristics onto a machine – this feeling is born from a sense of care for the machine. It is developed through daily interactions, mutual accomplishments and shared responsibility. </p>
<p>When machines inspire a sense of care, they can underline – not undermine – the qualities that make people human.</p>
<h2>A better AI is possible</h2>
<p>In industries where AI could be used to replace workers, technology experts might consider how clever human-machine partnerships could enhance human capabilities instead of detracting from them. </p>
<p>Script-writing teams may appreciate an artificial agent that can look up dialog or cross-reference on the fly. Artists could write or curate their own algorithms <a href="https://computerhistory.org/blog/harold-cohen-and-aaron-a-40-year-collaboration/">to fuel creativity</a> and retain credit for their work. Bots to support software teams might improve meeting communication and find errors that emerge from compiling code.</p>
<p>Of course, rejecting replacement does not <a href="https://www.cambridge.org/us/universitypress/subjects/law/humanitarian-law/autonomous-weapons-systems-law-ethics-policy?format=PB">eliminate all ethical concerns</a> with AI. But many problems associated with human livelihood, agency and bias shift when replacement is no longer the goal.</p>
<p>The replacement fantasy is just one of many possible futures for AI and society. After all, no one would watch “Star Wars” if the droids replaced all the protagonists. For a more ethical vision of humans’ future with AI, you can look to the human-machine teams that are already alive and well, in space and on Earth.</p>
<p class="fine-print"><em><span>Janet Vertesi has consulted for NASA teams. She receives funding from the National Science Foundation.</span></em></p>AI poses a variety of ethical conundrums, but the NASA teams working on Mars rovers exemplify an ethic of care and human-robot teamwork that could act as a blueprint for AI’s future.Janet Vertesi, Associate Professor of Sociology, Princeton UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/2123672023-09-03T20:02:50Z2023-09-03T20:02:50ZGoogle turns 25: the search engine revolutionised how we access information, but will it survive AI?<figure><img src="https://images.theconversation.com/files/545886/original/file-20230901-27-6t1m2h.png?ixlib=rb-1.1.0&rect=17%2C17%2C3976%2C1976&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">
</span> <span class="attribution"><a class="source" href="https://www.flickr.com/photos/sergio28/2839726384/">Flickr/sergio m mahugo, Edited by The Conversation</a>, <a class="license" href="http://creativecommons.org/licenses/by-nc-sa/4.0/">CC BY-NC-SA</a></span></figcaption></figure><p>Today marks an important milestone in the history of the internet: Google’s 25th birthday. With billions of search queries submitted each day, it’s difficult to remember how we ever lived without the search engine. </p>
<p>What was it about Google that led it to revolutionise information access? And will artificial intelligence (AI) make it obsolete, or enhance it? </p>
<p>Let’s look at how our access to information has changed through the decades – and where it might lead as advanced AI and Google Search become increasingly entwined.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/545519/original/file-20230830-21-abx530.png?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/545519/original/file-20230830-21-abx530.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/545519/original/file-20230830-21-abx530.png?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=331&fit=crop&dpr=1 600w, https://images.theconversation.com/files/545519/original/file-20230830-21-abx530.png?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=331&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/545519/original/file-20230830-21-abx530.png?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=331&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/545519/original/file-20230830-21-abx530.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=416&fit=crop&dpr=1 754w, https://images.theconversation.com/files/545519/original/file-20230830-21-abx530.png?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=416&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/545519/original/file-20230830-21-abx530.png?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=416&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Google’s homepage in 1998.</span>
<span class="attribution"><span class="source">Brent Payne/Flickr</span>, <a class="license" href="http://creativecommons.org/licenses/by-sa/4.0/">CC BY-SA</a></span>
</figcaption>
</figure>
<h2>1950s: public libraries as community hubs</h2>
<p>In the years following the second world war, it became <a href="https://www.yfanefa.com/record/2585">generally accepted</a> that a successful post-war city was one that could provide civic capabilities – and that included open access to information. </p>
<p>So in the 1950s information in Western countries was primarily provided by local libraries. Librarians themselves were a kind of “human search engine”. They answered phone queries from businesses and responded to letters – helping people find information quickly and accurately. </p>
<p>Libraries were more than just a place to borrow books. They were where parents went to look for health information, where tourists requested travel tips, and where businesses sought marketing advice. </p>
<p>The searching was free, but required librarians’ support, as well as a significant amount of labour and catalogue-driven processes. Questions we can now solve in minutes took hours, days or even weeks to answer.</p>
<h2>1990s: the rise of paid search services</h2>
<p>By the 1990s, libraries had expanded to include personal computers and online access to information services. Commercial search companies thrived by selling libraries access to information through expensive subscription services.</p>
<p>These systems were so complex that only trained specialists could search, with consumers paying for results. Dialog, developed at Lockheed in the 1960s, remains one of the best examples. Today it claims to <a href="https://clarivate.com/products/dialog-family/">provide its customers access</a> “to over 1.7 billion records across more than 140 databases of peer-reviewed literature”.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/545517/original/file-20230830-15-dsrqp0.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/545517/original/file-20230830-15-dsrqp0.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/545517/original/file-20230830-15-dsrqp0.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=642&fit=crop&dpr=1 600w, https://images.theconversation.com/files/545517/original/file-20230830-15-dsrqp0.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=642&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/545517/original/file-20230830-15-dsrqp0.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=642&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/545517/original/file-20230830-15-dsrqp0.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=807&fit=crop&dpr=1 754w, https://images.theconversation.com/files/545517/original/file-20230830-15-dsrqp0.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=807&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/545517/original/file-20230830-15-dsrqp0.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=807&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">This photo from 1979 shows librarians at the terminals of online retrieval system Dialog.</span>
<span class="attribution"><span class="source">U.S. National Archives</span></span>
</figcaption>
</figure>
<p>Another commercial search system, The Financial Times’ FT PROFILE, enabled access to articles in every UK broadsheet newspaper over a five-year period. </p>
<p>But searching with it wasn’t simple. Users had to remember typed commands to select a collection, using specific words to reduce the list of documents returned. Articles were ordered by date, leaving the reader to scan for the most relevant items.</p>
<p>FT PROFILE made valuable information rapidly accessible to people outside business circles, but at a high price. In the 1990s access cost <a href="https://doi.org/10.1108/eb024396">£1.60 a minute</a> – the equivalent of £4.65 (or A$9.00) today.</p>
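<p>To put that per-minute rate in perspective, here is a back-of-envelope sketch of what a single research session would have cost; the 20-minute session length is our assumption, purely for illustration:</p>
<pre><code>rate_then = 1.60      # GBP per minute in the 1990s
rate_now = 4.65       # the same rate in today's money, per the figure above
session_minutes = 20  # assumed length of one research session

print(f"1990s price: £{rate_then * session_minutes:.2f}")
print(f"In today's money: £{rate_now * session_minutes:.2f}")
</code></pre>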
<h2>The rise of Google</h2>
<p>Following the world wide web’s <a href="https://www.npr.org/2023/04/30/1172276538/world-wide-web-internet-anniversary#">launch in 1993</a>, the number of websites grew exponentially.</p>
<p>Libraries provided public web access, and services such as the State Library of Victoria’s <a href="https://en.wikipedia.org/wiki/Vicnet">Vicnet</a> offered low-cost access for organisations. Librarians taught users to find information online and build websites. However, the complex search systems struggled with exploding volumes of content and high numbers of new users.</p>
<p>In 1994, the book <a href="https://books.google.com.au/books/about/Managing_Gigabytes_Compressing_and_Index.html?id=q_9RAAAAMAAJ&source=kp_book_description&redir_esc=y">Managing Gigabytes</a>, penned by three New Zealand computer scientists, presented solutions for this problem. Since <a href="https://ieeexplore.ieee.org/abstract/document/6182576/">the 1950s</a> researchers had imagined a search engine that was fast, accessible to all, and which sorted documents by relevance.</p>
<p>In the 1990s, a Silicon Valley startup began to apply this knowledge – Larry Page and Sergey Brin used the principles in Managing Gigabytes to design Google’s iconic architecture.</p>
<p>After its launch on September 4 1998, Google set the search revolution in motion. People loved the simplicity of the search box, as well as a novel presentation of results that <a href="https://hughewilliams.com/2012/04/02/snippets-the-unsung-heroes-of-web-search/">summarised</a> how the retrieved pages matched the query.</p>
<p>In terms of functionality, Google Search was effective for a few reasons. It used the innovative approach of ranking results by counting the links pointing to each page (a process called <a href="https://en.wikipedia.org/wiki/PageRank">PageRank</a>). But more importantly, its algorithm was very sophisticated; it not only matched search queries with the text within a page, but also with other text linking to that page (this was called <a href="https://en.wikipedia.org/wiki/Anchor_text">anchor text</a>).</p>
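<p>To illustrate the idea — this is a toy sketch, not Google’s production system — the core PageRank calculation fits in a few lines of Python. The three-page link graph and the damping factor below are invented for demonstration:</p>
<pre><code>def pagerank(links, damping=0.85, iterations=50):
    """links maps each page to the list of pages it links out to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        # Every page keeps a small base score, then receives a share of
        # the rank of each page that links to it.
        new_rank = {p: (1.0 - damping) / n for p in pages}
        for page, outlinks in links.items():
            targets = outlinks or pages  # dead ends share rank with all
            share = damping * rank[page] / len(targets)
            for target in targets:
                new_rank[target] += share
        rank = new_rank
    return rank

# Toy web of three pages: A and C both link to B, so B ranks highest.
print(pagerank({"A": ["B"], "B": ["C"], "C": ["A", "B"]}))
</code></pre>
<p>A page’s score thus depends not just on how many pages link to it, but on how highly ranked those linking pages are themselves.</p>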
<p>Google’s popularity quickly surpassed competitors such as <a href="https://theconversation.com/so-farewell-then-altavista-we-hardly-knew-ye-15740">AltaVista</a> and Yahoo Search. With more than 85% of the market share today, it remains the most popular search engine. </p>
<p><div data-react-class="Tweet" data-react-props="{"tweetId":"1249495303893536775"}"></div></p>
<p>As the web expanded, however, access costs were contested. </p>
<p>Although consumers now search Google for free, payment is required to download certain articles and books. Many consumers still rely on libraries – while libraries themselves struggle with the rising costs of purchasing material to provide to the public for free.</p>
<h2>What will the next 25 years bring?</h2>
<p>Google has expanded far beyond Search. Gmail, Google Drive, Google Calendar, Pixel devices and other services show Google’s reach is vast. </p>
<p>With the introduction of AI tools, including Google’s Bard and the recently announced <a href="https://www.techopedia.com/google-gemini-is-a-serious-threat-to-chatgpt-heres-why">Gemini</a> (a direct competitor to ChatGPT), Google is set to revolutionise search once again. </p>
<p>As Google continues to roll <a href="https://blog.google/products/search/generative-ai-search/">generative AI capabilities into Search</a>, it will become common to read a quick information summary at the top of the results page, rather than dig for information yourself. A key challenge will be ensuring people don’t become complacent to the point that they blindly trust the generated outputs. </p>
<p>Fact-checking against original sources will remain as important as ever. After all, we have seen generative AI tools such as ChatGPT make headlines due to “<a href="https://theconversation.com/ai-tools-are-generating-convincing-misinformation-engaging-with-them-means-being-on-high-alert-202062">hallucinations</a>” and misinformation.</p>
<p>If inaccurate or incomplete search summaries aren’t revised, or are further paraphrased and presented without source material, the misinformation problem will only get worse. </p>
<p>Moreover, even if AI tools revolutionise search, they may fail to revolutionise access. As the AI industry grows, we’re seeing a <a href="https://theconversation.com/everyones-having-a-field-day-with-chatgpt-but-nobody-knows-how-it-actually-works-196378">shift towards</a> content only being accessible for a fee, or through paid subscriptions.</p>
<p>The rise of AI provides an opportunity to revisit the tensions between public access and increasingly powerful commercial entities.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/the-hidden-cost-of-the-ai-boom-social-and-environmental-exploitation-208669">The hidden cost of the AI boom: social and environmental exploitation</a>
</strong>
</em>
</p>
<hr>
<img src="https://counter.theconversation.com/content/212367/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Mark Sanderson receives funding from the Australian Research Council.</span></em></p><p class="fine-print"><em><span>Julian Thomas receives funding from the Australian Research Council. Google Australia has contributed funding to the ARC Centre of Excellence for Automated Decision-Making and Society, which he leads. </span></em></p><p class="fine-print"><em><span>Kieran Hegarty receives funding from the Australian Research Council and the Austrian Federal Ministry for Climate Action, Environment, Energy, Mobility, Innovation and Technology through a Digital Humanism Junior Visiting Fellowship at the Institute for Human Sciences.</span></em></p><p class="fine-print"><em><span>Lisa M. Given receives funding from the Australian Research Council and the Social Sciences and Humanities Research Council of Canada. She is Editor-in-Chief of the Annual Review of Information Science and Technology.</span></em></p>It’s hard to remember life before Google, when the closest thing to it was your local librarian. Soon the search engine will be offering AI-based summaries in its search results.Mark Sanderson, Professor of Information Retrieval, RMIT UniversityJulian Thomas, Distinguished Professor of Media and Communications; Director, ARC Centre of Excellence for Automated Decision-Making and Society, RMIT UniversityKieran Hegarty, Research Fellow (Automated Decision-Making Systems), RMIT UniversityLisa M. Given, Professor of Information Sciences & Director, Social Change Enabling Impact Platform, RMIT UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/2086692023-07-19T03:54:55Z2023-07-19T03:54:55ZThe hidden cost of the AI boom: social and environmental exploitation<figure><img src="https://images.theconversation.com/files/537673/original/file-20230717-233077-93bviy.jpeg?ixlib=rb-1.1.0&rect=50%2C43%2C4742%2C2651&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">
</span> <span class="attribution"><span class="source">Shutterstock</span></span></figcaption></figure><p>Mainstream conversations about artificial intelligence (AI) have been dominated by a few key concerns, such as whether superintelligent AI will <a href="https://time.com/6273743/thinking-that-could-doom-us-with-ai">wipe us out</a>, or whether AI will steal our jobs. But we’ve paid less attention the various other environmental and social impacts of our “consumption” of AI, which are arguably just as important.</p>
<p>Everything we consume has associated “<a href="https://www.investopedia.com/terms/e/externality.asp">externalities</a>” – the indirect impacts of our consumption. For instance, <a href="https://www.imf.org/en/Publications/fandd/issues/Series/Back-to-Basics/Externalities">industrial pollution</a> is a well-known externality that has a negative impact on people and the environment.</p>
<p>The online services we use daily also have externalities, but there seems to be a much lower level of public awareness of these. Given the massive uptake in the use of AI, these factors mustn’t be overlooked.</p>
<h2>Environmental impacts of AI use</h2>
<p>In 2019, French think tank The Shift Project estimated that the use of digital technologies produces more carbon emissions than the <a href="https://en.reset.org/our-digital-carbon-footprint-environmental-impact-living-life-online-12272019">aviation industry</a>. And although AI is currently estimated to contribute less than 1% of total carbon emissions, the AI market size is predicted to <a href="https://www.statista.com/statistics/1365145/artificial-intelligence-market-size">grow ninefold by 2030</a>. </p>
<p>Tools such as <a href="https://openai.com/chatgpt">ChatGPT</a> are built on advanced computational systems called large language models (LLMs). Although we access these models online, they are run and trained in physical data centres around the world that consume significant resources.</p>
<p>Last year, AI company Hugging Face published an <a href="https://arxiv.org/pdf/2211.02001.pdf">estimate</a> of the carbon footprint of its own LLM called BLOOM (a model of similar complexity to OpenAI’s <a href="https://en.wikipedia.org/wiki/GPT-3">GPT-3</a>).</p>
<p>Accounting for the impact of raw material extraction, manufacturing, training, deployment and end-of-life disposal, the model’s development and usage resulted in the equivalent of <a href="https://www.technologyreview.com/2022/11/14/1063192/were-getting-a-better-idea-of-ais-true-carbon-footprint/">60 flights from New York to London</a>. </p>
<p>Hugging Face also estimated GPT-3’s life cycle would result in ten times greater emissions, since the data centres powering it run on a more carbon-intensive grid. This is without considering the raw material, manufacturing and disposal impacts associated with GPT-3. </p>
<p>OpenAI’s latest LLM offering, <a href="https://openai.com/gpt-4">GPT-4</a>, is <a href="https://www.theatlantic.com/technology/archive/2023/03/openai-gpt-4-parameters-power-debate/673290/">rumoured to have trillions of parameters</a> and potentially far greater energy usage.</p>
<p>Beyond this, running AI models requires large amounts of water. Data centres use cooling towers to cool the on-site servers where AI models are trained and deployed. Google recently <a href="https://www.theguardian.com/world/2023/jul/11/uruguay-drought-water-google-data-center">came under fire</a> for plans to build a new data centre in <a href="https://www.theguardian.com/world/2023/jul/15/drought-leaves-millions-in-uruguay-without-tap-water-fit-for-drinking">drought-stricken Uruguay</a> that would use 7.6 million litres of water each day to cool its servers, according to the nation’s Ministry of Environment (although the Minister for Industry has contested the figures). Water is also needed to generate electricity used to run data centres.</p>
<p>In a <a href="https://doi.org/10.48550/arXiv.2304.03271">preprint</a> published this year, Pengfei Li and colleagues presented a methodology for gauging the water footprint of AI models. They did this in response to a lack of transparency in how companies evaluate the water footprint associated with using and training AI.</p>
<p>They estimate training GPT-3 required somewhere between 210,000 and 700,000 litres of water (the equivalent of that used to produce between 300 and 1,000 cars). For a conversation with 20 to 50 questions, ChatGPT was estimated to “drink” the equivalent of a 500 millilitre bottle of water.</p>
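<p>That per-conversation figure is easy to turn into a rough scaling exercise. A back-of-envelope sketch follows; the 35-question midpoint and the ten-million-queries-a-day volume are our assumptions, purely for illustration:</p>
<pre><code># Figure from the preprint: ~0.5 L per 20-to-50-question conversation.
litres_per_conversation = 0.5
questions_per_conversation = 35  # assumed midpoint of 20-50
litres_per_question = litres_per_conversation / questions_per_conversation

daily_questions = 10_000_000     # hypothetical service volume
print(f"~{litres_per_question * 1000:.0f} ml per question")
print(f"~{litres_per_question * daily_questions:,.0f} litres per day")
</code></pre>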
<h2>Social impacts of AI use</h2>
<p>LLMs often need extensive human input during the training phase. This is typically outsourced to independent contractors <a href="https://doi.org/10.1145/3555561">who face precarious work conditions</a> in low-income countries, leading to “digital sweatshop” criticisms. </p>
<p>In January, Time <a href="https://time.com/6247678/openai-chatgpt-kenya-workers/">reported</a> on how Kenyan workers contracted to label text data for ChatGPT’s “toxicity” detection were paid less than US$2 per hour while being exposed to explicit and traumatic content. </p>
<p>LLMs can also be used to generate <a href="https://www.theguardian.com/commentisfree/2023/mar/03/fake-news-chatgpt-truth-journalism-disinformation">fake news and propaganda</a>. Left unchecked, AI has the potential to be used to manipulate public opinion, and by extension could undermine <a href="https://www.brennancenter.org/our-work/analysis-opinion/how-ai-puts-elections-risk-and-needed-safeguards">democratic processes</a>. In a <a href="https://hai.stanford.edu/news/ais-powers-political-persuasion">recent experiment</a>, researchers at Stanford University found AI-generated messages were consistently persuasive to human readers on topical issues such as carbon taxes and banning assault weapons.</p>
<p>Not everyone will be able to adapt to the AI boom. The large-scale adoption of AI has the potential to worsen global <a href="https://www.theguardian.com/technology/2023/feb/08/ai-chatgpt-jobs-economy-inequality">wealth inequality</a>. It will not only cause significant <a href="https://www.weforum.org/reports/the-future-of-jobs-report-2023/">disruptions to the job market</a>, but could also disproportionately marginalise workers from certain backgrounds and in <a href="https://www.whitehouse.gov/cea/written-materials/2022/12/05/the-impact-of-artificial-intelligence/">specific industries</a>. </p>
<h2>Are there solutions?</h2>
<p>The way AI impacts us over time will depend on myriad factors. Future generative AI models <em>could</em> be designed to use <a href="https://www.forbes.com/sites/robtoews/2023/02/07/the-next-generation-of-large-language-models/?sh=1fdc66518dbc">significantly less energy</a>, but it’s hard to say whether <a href="https://doi.org/10.1038/s41558-022-01377-7">they will be</a>.</p>
<p>When it comes to data centres, the location of the centres, the type of power generation they use, and the time of day they are used can significantly impact their overall <a href="https://dl.acm.org/doi/10.1145/3531146.3533234">energy</a> and <a href="https://doi.org/10.48550/arXiv.2304.03271">water</a> consumption. Optimising these computing resources could result in significant reductions. Companies including <a href="https://www.deepmind.com/blog/deepmind-ai-reduces-google-data-centre-cooling-bill-by-40">Google</a>, <a href="https://huggingface.co/blog/carbon-emissions-on-the-hub">Hugging Face</a> and <a href="https://azure.microsoft.com/en-au/explore/global-infrastructure/sustainability">Microsoft</a> have championed the role their AI and cloud services can play in managing resource usage to achieve efficiency gains.</p>
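<p>One concrete version of that optimisation is carbon-aware scheduling: shifting flexible workloads to the hours when the local grid is cleanest. A toy sketch, with invented intensity values:</p>
<pre><code># Hourly grid carbon intensity in gCO2 per kWh (invented values; note
# the midday dip from solar generation).
hourly_intensity = {0: 420, 6: 380, 12: 190, 18: 310}

# A flexible job (e.g. a training run) is scheduled for the cleanest hour.
best_hour = min(hourly_intensity, key=hourly_intensity.get)
print(f"Run the job at hour {best_hour}: "
      f"{hourly_intensity[best_hour]} gCO2/kWh")
</code></pre>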
<p>Also, as direct or indirect consumers of AI services, it’s important we’re all aware that every chatbot query and image generation results in water and energy use, and could have implications for human labour. </p>
<p>AI’s growing popularity might eventually trigger the development of <a href="https://en.wikipedia.org/wiki/Sustainability_standards_and_certification">sustainability standards and certifications</a>. These would help users understand and compare the impacts of specific AI services, allowing them to choose those which have been certified. This would be similar to the <a href="https://www.climateneutraldatacentre.net">Climate Neutral Data Centre Pact</a>, wherein European data centre operators have agreed to make data centres climate neutral by 2030.</p>
<p>Governments will also play a part. The European Parliament has approved draft legislation to mitigate the risks of AI usage. And earlier this year, the US Senate heard testimony from a range of experts on how AI might be effectively regulated and its harms minimised. China has also <a href="https://www.reuters.com/technology/china-issues-temporary-rules-generative-ai-services-2023-07-13">published rules</a> on the use of generative AI, requiring security assessments for products offering services to the public.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/eu-approves-draft-law-to-regulate-ai-heres-how-it-will-work-205672">EU approves draft law to regulate AI – here's how it will work</a>
</strong>
</em>
</p>
<hr>
<img src="https://counter.theconversation.com/content/208669/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Ascelin Gordon is employed by RMIT University. He receives funding support from the Australian Research Council, the NSW Department of Planning and Environment, and the NSW Biodiversity Conservation Trust. </span></em></p><p class="fine-print"><em><span>Afshin Jafari is employed by RMIT University.</span></em></p><p class="fine-print"><em><span>Carl Higgs is employed at RMIT University and receives funding support from National Health and Medical Research Council grants.</span></em></p>In a preprint study, researchers estimate training the model behind ChatGPT would have required somewhere between 210,000 and 700,000 litres of water.Ascelin Gordon, Senior research fellow, RMIT UniversityAfshin Jafari, Research fellow, RMIT UniversityCarl Higgs, Research Fellow, Centre for Urban Research, RMIT UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/2080862023-07-03T12:05:56Z2023-07-03T12:05:56ZBeyond the hype: How AI could change the game for social science research<figure><img src="https://images.theconversation.com/files/534412/original/file-20230627-15-fbnmsn.jpg?ixlib=rb-1.1.0&rect=827%2C34%2C6081%2C4000&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">By training AI models, social scientists could more precisely simulate human behavioural responses in their research.</span> <span class="attribution"><span class="source">(Shutterstock)</span></span></figcaption></figure><p>With the emergence of advanced AI systems, the way social science research is conducted could change. Social sciences have historically relied on traditional research methods to gain a better understanding of individuals, groups, cultures and their dynamics. </p>
<p>Large language models are becoming increasingly capable of imitating human-like responses. As <a href="https://doi.org/10.1126/science.adi1778">my colleagues and I describe in a recent <em>Science</em> article</a>, this opens up opportunities to test theories on a larger scale and with much greater speed.</p>
<p>But our article also raises questions about how AI can be harnessed for social science research while also ensuring transparency and replicability.</p>
<h2>Using AI in research</h2>
<p>There are a number of possible ways AI could be used in social science research. First, unlike human researchers, AI systems can work non-stop, providing real-time interpretations of our fast-paced, global society.</p>
<p>AI could act as a research assistant by processing enormous volumes of human conversations from the internet and offering insights into societal trends and human behaviour. </p>
<p>Another possibility could be using AI as actors in social experiments. A sociologist could use large language models to simulate social interactions between people to explore how specific characteristics, like political leanings, ethnic background or gender influence subsequent interactions.</p>
<figure class="align-center ">
<img alt="A diagram illustrating how social scientists, AI large language models and society can work together" src="https://images.theconversation.com/files/533839/original/file-20230624-40607-7cuh4q.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/533839/original/file-20230624-40607-7cuh4q.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=338&fit=crop&dpr=1 600w, https://images.theconversation.com/files/533839/original/file-20230624-40607-7cuh4q.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=338&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/533839/original/file-20230624-40607-7cuh4q.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=338&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/533839/original/file-20230624-40607-7cuh4q.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=424&fit=crop&dpr=1 754w, https://images.theconversation.com/files/533839/original/file-20230624-40607-7cuh4q.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=424&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/533839/original/file-20230624-40607-7cuh4q.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=424&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">A diagram visualizing the dynamic interactions and overlapping responsibilities among large language models, social scientists and society.</span>
<span class="attribution"><span class="source">(Igor Grossmann)</span>, <span class="license">Author provided</span></span>
</figcaption>
</figure>
<p>Most provocatively, large language models could serve as substitutes for human participants in the initial phase of data collection. </p>
<p>For example, a social scientist could use AI to test ideas for interventions to improve decision-making. This is how it would work: first, scientists would ask AI to simulate a target population group. Next, scientists would examine how a participant from this group would react in a decision-making scenario. Scientists would then use insights from the simulation to test the most promising interventions.</p>
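<p>A hypothetical sketch of that workflow, in Python, might look like the following. The <code>ask_llm</code> function, the personas and the scenario are all invented placeholders; a researcher would swap in a real model API and carefully designed prompts:</p>
<pre><code>import random

def ask_llm(prompt: str) -> str:
    # Placeholder for a real language-model call; here it just guesses.
    return random.choice(["yes", "no"])

personas = [
    "a 34-year-old nurse who is cautious with money",
    "a 52-year-old small-business owner who is pressed for time",
]
interventions = ["no nudge", "a reminder of the long-term cost of opting out"]

# Steps 1-2: simulate how members of the target group respond under
# each candidate intervention.
results = {}
for intervention in interventions:
    answers = [
        ask_llm(f"You are {persona}. You receive {intervention}. "
                "Do you enrol in the retirement-savings plan? yes or no")
        for persona in personas
    ]
    results[intervention] = answers.count("yes") / len(answers)

# Step 3: the most promising intervention goes on to a human study.
print(max(results, key=results.get), results)
</code></pre>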
<h2>Obstacles lie ahead</h2>
<p>While the potential for a fundamental shift in social science research is profound, so are the obstacles that lie ahead.</p>
<p>First, the narrative about existential threats from AI could itself pose an obstacle. Some experts are warning that <a href="https://spectrum.ieee.org/artificial-general-intelligence">AI has the potential to bring about a dystopian future</a>, like the infamous Skynet from the <em>Terminator</em> franchise, in which sentient machines cause humanity’s downfall.</p>
<p>These warnings might be somewhat misguided, or at the very least, premature. Historically, experts have shown a <a href="https://doi.org/10.1038/s41562-022-01517-1">poor track record when it comes to predicting societal change</a>. </p>
<p>Present-day AI is not sentient; it’s an intricate mathematical model trained to recognize patterns in data and make predictions. Despite the human-like appearance of responses from models such as ChatGPT, these <a href="https://doi.org/10.1126/science.adh4451">large language models are not human stand-ins</a>. </p>
<p>Large language models are trained on a vast number of cultural products including books, social media texts and YouTube replies. At best, they represent our collective wisdom rather than being an intelligent individual agent.</p>
<p>The immediate risks posed by AI are less about sentient rebellion and more about mundane issues that are nonetheless significant.</p>
<h2>Bias is a major concern</h2>
<p>A primary concern lies in the quality and breadth of the data that trains AI models, including large language models. </p>
<p>If AI is trained primarily on data from a specific demographic — English-speaking individuals from North America, for example — its insights will reflect these inherent biases.</p>
<figure class="align-center ">
<img alt="A pair of hands hold a cellphone displaying a virtual conversation with a chatbot" src="https://images.theconversation.com/files/534413/original/file-20230627-29-qco6bk.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/534413/original/file-20230627-29-qco6bk.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=338&fit=crop&dpr=1 600w, https://images.theconversation.com/files/534413/original/file-20230627-29-qco6bk.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=338&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/534413/original/file-20230627-29-qco6bk.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=338&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/534413/original/file-20230627-29-qco6bk.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=424&fit=crop&dpr=1 754w, https://images.theconversation.com/files/534413/original/file-20230627-29-qco6bk.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=424&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/534413/original/file-20230627-29-qco6bk.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=424&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">Researchers must be cautious of the potential for large language models like ChatGPT to reproduce bias.</span>
<span class="attribution"><span class="source">(Shutterstock)</span></span>
</figcaption>
</figure>
<p>This bias reproduction is a major concern because it could amplify the very disparities social scientists strive to uncover in their research. It’s imperative to promote representational fairness in the data used to train AI models. </p>
<p>But such fairness can only be achieved with transparency <a href="https://doi.org/10.1038/d41586-023-01295-4">and access to information about the data AI models are trained on</a>. So far, this information remains a mystery for all commercial models.</p>
<p>By appropriately training these models, social scientists will be able to more precisely simulate human behavioural responses in their research.</p>
<h2>AI literacy is key</h2>
<p>The threat of misinformation is another substantial challenge. AI systems <a href="https://www.nytimes.com/2023/05/01/business/ai-chatbots-hallucination.html">sometimes generate hallucinated facts</a> — statements that sound credible, but are incorrect. Since generative AI lacks awareness, it presents these hallucinations without any indication of uncertainty.</p>
<p><a href="https://doi.org/10.1126/science.adi0248">People may be more likely to seek such confident-sounding information</a>, favouring it over less definite, but more accurate information. This dynamic could inadvertently spread false information, misleading researchers and the public alike.</p>
<p>Moreover, while AI opens up research opportunities for hobbyist researchers, it could inadvertently fuel confirmation bias if users only seek information that aligns with their pre-existing beliefs. </p>
<p>The importance of AI literacy can’t be overstressed. Social scientists must educate users on how to handle AI tools and critically assess their outputs.</p>
<h2>Striking a balance</h2>
<p>As we forge ahead, we must grapple with the very real challenges AI presents, from bias reproduction to misinformation and potential misuse. Our focus should not be on preventing a far-off Skynet scenario, but on the concrete issues that AI brings to the table now.</p>
<p>As we continue to explore the transformative potential of AI in the social sciences, we must remember AI is neither our enemy, nor our saviour — it’s a tool. Its value lies in how we use it. It has a potential to enrich our collective wisdom, but it can equally foster human folly. </p>
<p>By striking a balance between leveraging AI’s potential and managing its tangible challenges, we can guide the integration of AI into social sciences responsibly, ethically, and to the benefit of all.</p>
<p class="fine-print"><em><span>Igor Grossmann receives funding from the Social Sciences and Humanities Research Council of Canada, Ontario Ministry of Research, Innovation and Science, The John Templeton Foundation, and the Templeton World Charity Foundation.</span></em></p>Large language models are becoming increasingly capable of imitating human-like responses, creating opportunities to test social science theories on a larger scale and with much greater speed.Igor Grossmann, Professor of Psychology, University of WaterlooLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/2076232023-06-21T20:03:23Z2023-06-21T20:03:23ZYes, AI could help us fix the productivity slump – but it can’t fix everything<figure><img src="https://images.theconversation.com/files/533053/original/file-20230621-20-8lvwmg.jpeg?ixlib=rb-1.1.0&rect=0%2C0%2C3988%2C2826&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">
</span> <span class="attribution"><span class="source">Google DeepMind/Unsplash</span>, <a class="license" href="http://creativecommons.org/licenses/by-sa/4.0/">CC BY-SA</a></span></figcaption></figure><p>Our nation is experiencing its lowest productivity growth <a href="https://www.ceda.com.au/ResearchAndPolicies/Research/Economy/Dynamic-capabilities-How-Australian-firms-can-surv">in 60 years</a>, according to the Committee for the Economic Development of Australia. And this <a href="https://www.brookings.edu/events/productivity-in-a-time-of-change-puzzles-prospects-and-policies/">downturn</a> is reflected across most advanced economies worldwide. </p>
<p>So it’s not surprising some see the rise of artificial intelligence (AI) as productivity’s saviour. <a href="https://www.afr.com/companies/financial-services/how-ai-can-help-make-humans-more-productive-20230504-p5d5ir">Media articles</a> herald a new era of high productivity enabled by AI, and particularly by generative AI tools such as <a href="https://theconversation.com/how-to-perfect-your-prompt-writing-for-chatgpt-midjourney-and-other-ai-generators-198776">ChatGPT and DALL-E</a>.</p>
<p>Similarly, the world’s top journals are filled with accounts of how AI has enabled transformative leaps in research. Machine learning has been used, for example, to predict <a href="https://www.nature.com/articles/s41586-021-03819-2">the shape of proteins</a> from DNA information, or to <a href="https://www.nature.com/articles/s41586-021-04301-9">control the shape</a> of super-heated plasma in a nuclear fusion reaction. One team <a href="https://blog.csiro.au/flexible-solar-panels/">at CSIRO designed</a> an AI-based autonomous system that can manufacture and test 12,000 solar cell designs within 24 hours.</p>
<p>Does that mean we can flick the switch, leave it on auto, and go to the beach? Not quite. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/dont-blame-workers-for-falling-productivity-theyre-not-holding-it-back-207594">Don't blame workers for falling productivity: they're not holding it back</a>
</strong>
</em>
</p>
<hr>
<h2>Not a productivity panacea</h2>
<p>As much as the above examples provide hope, they also distract from the many AI applications that haven’t quite worked. These are the cases, often not captured in journals and media, where using AI has been costly and time-consuming and failed to generate the desired result. </p>
<p>In 2021, the AI community had to pause when 62 published studies that used machine learning to diagnose COVID-19 from chest scans were found to be unreliable and <a href="https://www.nature.com/articles/s42256-021-00307-0">unusable in clinical settings</a>, mostly due to problems with the input data. It was a stark reminder AI is fallible. </p>
<p>That’s not to say AI can’t be used to boost productivity – just that it isn’t an off-the-shelf panacea to our productivity woes. AI can’t magically fix problems related to inefficient processes, poor governance and bad culture. </p>
<p>If you drop advanced AI into a dumb organisation, it won’t make it smart. It will just help the organisation do dumb stuff more efficiently (in other words, quicker). This will hardly lead to a productivity gain. </p>
<h2>Where AI applications work</h2>
<p>One <a href="https://www.nber.org/digest/measuring-productivity-impact-generative-ai">recent study</a> by the US National Bureau of Economic Research found a 14% increase in productivity among customer service agents who used an AI tool to help guide conversations. In Australia, Westpac says AI has provided a <a href="https://ia.acs.org.au/article/2023/westpac-says-ai-boosted-coding-productivity.html">46% productivity increase</a> for its software engineers, with no loss in quality of work. </p>
<p>In many ways these examples aren’t surprising. It’s obvious AI can boost productivity when used effectively; Google Maps is clearly better at getting someone from A to B than an old road atlas. </p>
<p>So what’s common among the situations where AI performs well?</p>
<p>Successful applications of AI tend to be characterised by a clear need and function for the AI system. They are well integrated within the business or organisation’s broader processes, and do not interfere with employees’ other tasks. </p>
<p>They also tend to have high-quality, fit-for-purpose and curated datasets used to train the algorithms, and are applied safely and in accordance with <a href="https://www.industry.gov.au/publications/australias-artificial-intelligence-ethics-framework/australias-ai-ethics-principles">ethics principles</a>. </p>
<h2>Where AI applications fail</h2>
<p>However, it’s difficult to achieve AI productivity benefits across an entire organisation, let alone an entire economy. Many organisations still struggle with much more basic digital transformation.</p>
<p><a href="https://www2.deloitte.com/content/dam/Deloitte/uk/Documents/about-deloitte/deloitte-uk-digital-transformation-are-people-still-our-greatest-asset.pdf">Consulting firm Deloitte</a> estimates 70% of organisations’ digital transformation efforts fail. Perhaps the real solution to the productivity dilemma lies less in using AI, and more in managing the organisational inefficiencies associated with adopting new technology.</p>
<p>Modern offices are chock-a-block with pointless emails, unnecessary meetings and bureaucratic processes that sap workers’ energy and motivation. <a href="https://hbr.org/2015/06/conquering-digital-distraction">Research has established</a> that productivity decreases when workers face this onslaught of busywork and distractions. </p>
<p>It’s unlikely AI will solve this. Attention is the currency of the modern day; even an AI built to shield us from unnecessary busywork may end up becoming another source of nagging. We may even see a future where AI tools designed to shield us from distraction compete with AI tools designed to distract us.</p>
<p>University of Leeds economist Stuart Mills <a href="https://theconversation.com/chatgpt-why-it-will-probably-remain-just-a-tool-that-does-inefficient-work-more-efficiently-201315">points out</a> that if tools such as ChatGPT merely automate bureaucratic inefficiencies, they won’t raise productivity at all.</p>
<p>We once asked a friend, a senior manager in a global engineering company, if he uses ChatGPT for his work. “Oh yes,” he exclaimed enthusiastically. </p>
<p>“I use it for generating all those reports management keeps asking me for. I know no one will ever read them, so they don’t need to be high quality.”</p>
<h2>Towards long-term productivity gains</h2>
<p>It seems very likely AI will improve productivity at a societal level in the long run, and some of these improvements may be transformative. </p>
<p>As of September 2022, <a href="https://www.sciencedirect.com/science/article/pii/S0160791X23000659">research found</a> 5.7% of all peer-reviewed research published worldwide was on the topic of AI – up from 3.1% in 2017, and 1.2% in 2000. </p>
<p>It’s clear innovators everywhere are exploring how AI can supercharge their productivity – and perhaps help them make discoveries. We can expect effective solutions that genuinely solve problems will self-select and organically rise to the top.</p>
<p>Successful AI implementation requires understanding the context within which the technology is being applied. It requires picking up the correct tool for the task at hand, and using it in the correct way. And even before that, it requires working through issues of process, governance, culture and ethics.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/ai-could-take-your-job-but-it-can-also-help-you-score-a-new-one-with-these-simple-tips-199883">AI could take your job, but it can also help you score a new one with these simple tips</a>
</strong>
</em>
</p>
<hr>
<img src="https://counter.theconversation.com/content/207623/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Stefan Hajkowicz works at CSIRO which receives R&D funding from wide ranging government and industry clients.</span></em></p><p class="fine-print"><em><span>Jon Whittle works at CSIRO which receives R&D funding from wide ranging government and industry clients.</span></em></p>If you drop advanced AI into a dumb organisation, it won’t make it smart. It will just help the organisation do dumb stuff more efficiently (in other words, quicker).Stefan Hajkowicz, Senior Principal Scientist, Strategy and Foresight, Data61Jon Whittle, Director, Data61Licensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/2038202023-06-11T11:22:21Z2023-06-11T11:22:21ZGovernments and industry must balance ethical concerns in the race for AI dominance<figure><img src="https://images.theconversation.com/files/530700/original/file-20230607-21-vijgj3.jpg?ixlib=rb-1.1.0&rect=50%2C12%2C8410%2C5619&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">OpenAI CEO Sam Altman speaks before a Senate Judiciary Subcommittee on Privacy, Technology and the Law hearing on artificial intelligence in Washington.</span> <span class="attribution"><span class="source">(AP Photo/Patrick Semansky)</span></span></figcaption></figure><p>The CEO of OpenAI, the company behind ChatGPT, <a href="https://www.wsj.com/articles/chatgpts-sam-altman-faces-senate-panel-examining-artificial-intelligence-4bb6942a">recently testified before United States senators</a> that AI “could go quite wrong” and his company wanted to “work with the government to prevent that from happening.” </p>
<p>Privacy concerns about AI are widespread. Along with <a href="https://www.bbc.com/news/technology-65431914">temporary bans of ChatGPT in Italy</a>, some <a href="https://www.cpomagazine.com/cyber-security/wave-of-employer-chatgpt-bans-continues-as-apple-restricts-internal-use-of-ai-tools/">private organizations</a> have started to restrict its use. These concerns are not limited to ChatGPT, either. </p>
<p>Studies have also demonstrated that WeChat — the most-used social app in China — <a href="https://citizenlab.ca/2020/05/wechat-surveillance-explained/">incorporates censorship algorithms</a>. </p>
<p><a href="https://www.telegraph.co.uk/business/2023/05/12/tiktok-propaganda-tool-chinese-communist-party/">TikTok has similarly been framed as a propaganda tool</a> for the Chinese government, leading to <a href="https://www.theguardian.com/technology/2023/mar/23/key-takeaways-tiktok-hearing-congress-shou-zi-chew">U.S. congressional hearings about privacy concerns</a>. Along with broader <a href="https://www.nytimes.com/article/tiktok-ban.html">international efforts by other lawmakers</a>, there is clearly concern about the role governments should play in the development and use of artificial intelligence.</p>
<p>Despite these growing concerns, there are few signs that investment in China-made AI has — or will — decelerate, with <a href="https://www.reuters.com/technology/us-investors-have-plowed-billions-into-chinas-ai-sector-report-shows-2023-02-01/">U.S. venture capitalists continuing to invest heavily in the country’s AI sector</a>. </p>
<p><a href="https://asia.nikkei.com/Opinion/U.S.-has-no-moral-authority-over-China-in-tech-development">Some have claimed</a> that concerns over China are unwarranted — that oppression is unlikely and that others will simply step in to develop and distribute the technology if China doesn’t. </p>
<p>But we cannot disregard how the Chinese government — or any government — is deploying AI to achieve its goals.</p>
<h2>AI gold rush</h2>
<p>A speculative gold rush has followed the realization that AI — especially <a href="https://theconversation.com/generative-ai-like-chatgpt-reveal-deep-seated-systemic-issues-beyond-the-tech-industry-198579">large language models like ChatGPT</a> — has the potential to revolutionize business. </p>
<p>As <a href="https://wp.oecd.ai/app/uploads/2021/03/2021-AI-Index-Report.pdf">businesses seek to capitalize on these opportunities</a>, they must expand their portfolios to international markets. China is poised to provide a high return on investment to these businesses. </p>
<p>The Chinese government has prioritized innovation to <a href="https://www.globaltimes.cn/page/202303/1287981.shtml">counter the American technological dominance</a>. Recent estimates suggest <a href="https://www.hurun.net/en-US/Info/Detail?num=3OEJNGKGFPDS">China has the fourth-largest number of AI “unicorns”</a> — private start-ups that are valued at over $1 billion. </p>
<p>But unlike in the West, the boundary between state-owned and private organizations in China is permeable, with many companies <a href="https://thediplomat.com/2019/12/politics-in-the-boardroom-the-role-of-chinese-communist-party-committees/">hosting Chinese Communist Party committees within their organizations</a>.</p>
<figure class="align-center ">
<img alt="An Asian man in a navy suit and tie speaks into a microphone from behind a desk." src="https://images.theconversation.com/files/530198/original/file-20230605-27-o0jist.jpg?ixlib=rb-1.1.0&rect=62%2C12%2C8223%2C5503&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/530198/original/file-20230605-27-o0jist.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/530198/original/file-20230605-27-o0jist.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/530198/original/file-20230605-27-o0jist.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/530198/original/file-20230605-27-o0jist.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/530198/original/file-20230605-27-o0jist.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/530198/original/file-20230605-27-o0jist.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">TikTok CEO Shou Zi Chew testifies during a hearing of the House Energy and Commerce Committee on the platform’s consumer privacy and data security practices last March 2023 in Washington.</span>
<span class="attribution"><span class="source">(AP Photo/Alex Brandon)</span></span>
</figcaption>
</figure>
<p>Given <a href="https://www.jstor.org/stable/26508115">social media’s potential</a> to help China achieve its goals, <a href="https://www.hrw.org/news/2023/03/24/problem-tiktoks-claim-independence-beijing">TiKTok’s relationship with the Chinese government</a> raises concerns about <a href="https://www.theguardian.com/technology/2019/sep/25/revealed-how-tiktok-censors-videos-that-do-not-please-beijing">what content is presented on the platform</a>, <a href="https://www.wired.co.uk/article/tiktok-data-privacy">how user information is collected</a> and how it might be used to <a href="https://www.nytimes.com/2021/12/05/business/media/tiktok-algorithm.html">influence user beliefs and choices</a>.</p>
<h2>Ethical business of AI</h2>
<p>Protectionism, nationalism and racism undoubtedly play roles in concerns over technology consumption and adoption. Research has repeatedly demonstrated that <a href="https://doi.org/10.1016/S2212-5671(15)00383-4">a product’s country of origin affects consumers’ perception</a>. Yet, these factors must be carefully weighed against others. </p>
<p>Like many nations, China seeks global influence through soft power. Following the communist revolution, the Chinese state has attempted to <a href="https://www.hup.harvard.edu/catalog.php?isbn=9780674794757">guide technology development</a> for the purposes of <a href="https://www.google.ca/books/edition/We_Have_Been_Harmonized/g5DCDwAAQBAJ?hl=en&gbpv=0">monitoring and regulating society</a>. Such practices are <a href="https://www.cambridge.org/core/books/cosmology-and-political-culture-in-early-china/EF524D79C1EE401FE0E2D6ACC00E422D">deeply rooted in Chinese philosophy’s prioritization of harmony</a>.</p>
<p>Harmony for society can be costly for others. <a href="https://doi.org/10.1080/10670564.2019.1621529">Uyghurs</a>, <a href="https://doi.org/10.1111/ajps.12514">political dissidents</a> and <a href="https://www.reuters.com/world/china/arrests-tight-security-hong-kong-tiananmen-anniversary-2023-06-04/">non-compliant people and groups</a> have all been targeted by the Chinese government. The oppressive surveillance of the <a href="https://www.nytimes.com/interactive/2019/11/16/world/asia/china-xinjiang-documents.html">Uyghurs in Xinjiang province</a> has not only resulted in their detainment in detention camps, but has also led many <a href="https://www.ft.com/content/fa6bd0b0-1d87-11ea-9186-7348c2f183af">Han settlers to leave the province</a>.</p>
<h2>Western governments and AI</h2>
<p>No technology is value-neutral. Values inform the choices of AI designers, developers, and users. </p>
<p>We must be wary of virtue signalling that fixates on China’s problems while ignoring our own; the differences here are often ones of degree rather than kind. </p>
<p><a href="https://www.forbes.com/sites/forbestechcouncil/2020/09/25/the-state-of-mass-surveillance/?sh=7fa445e8b62d">Government mass surveillance of citizens</a>, <a href="https://www.ploughshares.ca/publications/no-canadian-leadership-on-autonomous-weapons">ill-defined policies about autonomous weapons in the military</a> and <a href="https://theconversation.com/privacy-violations-undermine-the-trustworthiness-of-the-tim-hortons-brand-184683">the collection of user data by private organizations</a> must all be reckoned with in North America.</p>
<p>As recent revelations over <a href="https://www.business-humanrights.org/en/latest-news/ukrainian-analysis-identifies-western-supply-chain-behind-irans-drones/">the components of a Russian drone used in an attack on Ukraine</a> have made clear, AI has both domestic and military applications. Three-quarters of the drones’ components were found to be made in the U.S. Investors cannot ignore <a href="https://www.routledge.com/Ethical-Artificial-Intelligence-from-Popular-to-Cognitive-Science-Trust/Schoenherr/p/book/9780367697983">the moral implications of global supply chains</a> when it comes to AI.</p>
<h2>Co-ordinated efforts are key</h2>
<p>Despite industry being the primary driver of AI development, all stakeholders have a role to play. While the Chinese government’s involvement in AI development might be too great, the hands-off approach of western governments has created its own problems. </p>
<p>These issues include <a href="https://doi.org/10.1073/pnas.1517441113">the spread of disinformation and polarization</a> and <a href="https://doi.org/10.1080/02673843.2019.1590851">increased anxiety and depression associated with social media use</a>.</p>
<p>Regulation is not the only answer, but it is a start. As the U.S. mulls over <a href="https://www.reuters.com/technology/us-begins-study-possible-rules-regulate-ai-like-chatgpt-2023-04-11/">legislation for systems like ChatGPT</a>, and <a href="https://ised-isde.canada.ca/site/innovation-better-canada/en/artificial-intelligence-and-data-act-aida-companion-document">Canada finalizes its own broad AI framework</a>, the Chinese government seeks to establish its own <a href="https://carnegieendowment.org/2023/05/16/what-chinese-regulation-proposal-reveals-about-ai-and-democratic-values-pub-89766">laws that will undoubtedly help it consolidate control</a>.</p>
<p>Industry leaders and academics are likely best positioned to understand the technology. However, governments can provide insight to users and investors who might be unaware of larger issues within technological ecosystems such as privacy and security. </p>
<p>Illustrating this, Sequoia Capital, one of the largest venture capital firms to invest in China, <a href="https://www.wsj.com/articles/sequoia-pares-back-china-tech-investments-as-u-s-national-security-concerns-grow-c17348b5">sought advice from national security agencies</a>. Its recent decision to <a href="https://www.reuters.com/business/finance/sequoia-separate-china-india-southeast-asia-by-march-2024-2023-06-06/">split its U.S. and China operations</a> has no doubt been influenced by this process.</p>
<p>Strengthening democratic values in the face of AI will require coordinated international efforts between industry, government and non-governmental organizations.</p>
<p class="fine-print"><em><span>Jordan Richard Schoenherr does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>Strengthening democratic values in the face of AI will require coordinated international efforts between industry, government and non-governmental organizations.Jordan Richard Schoenherr, Assistant Professor, Psychology, Concordia UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/2055032023-05-22T11:56:11Z2023-05-22T11:56:11ZMRI scans and AI technology really could read what we’re thinking. The implications are terrifying<figure><img src="https://images.theconversation.com/files/526548/original/file-20230516-31797-kzjslp.jpg?ixlib=rb-1.1.0&rect=7%2C0%2C5074%2C2874&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">MRI scans use strong magnetic fields and radio waves to produce detailed images of the brain.</span> <span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/computer-screen-showing-mri-ct-image-1197120001">Gorodenkoff / Shutterstock</a></span></figcaption></figure><p>For the first time, researchers have managed to use GPT1, precursor to the AI chatbot ChatGPT, to translate MRI imagery into text in an effort <a href="https://www.nature.com/articles/s41593-023-01304-%209.epdf">to understand what someone is thinking</a>.</p>
<p>This recent breakthrough allowed researchers at the University of Texas at Austin to “read” someone’s thoughts as a continuous flow of text, based on what they were listening to, imagining or watching.</p>
<p>It raises significant concerns for privacy, freedom of thought, and even the freedom to dream without interference. Our laws are not equipped to deal with the widespread commercial use of mind-reading technology – freedom of speech law does not extend to the protection of our thoughts.</p>
<p>Participants in the Texas study were asked to listen to audiobooks for 16 hours while inside an MRI scanner. At the same time, a computer “learnt” how to associate their brain activity from the MRI with what they were listening to. Once trained, the decoder could generate text from someone’s thoughts while they listened to a new story, or imagined a story of their own.</p>
<p>According to the researchers, the process was labour intensive and the computer only managed to get the gist of what someone was thinking. However, the findings still represent a significant breakthrough in the field of brain-machine interfaces that, up to now, have relied on invasive medical implants. Previous non-invasive devices could only decipher a handful of words or images.</p>
<p>Here’s an example of what one of the subjects was listening to (from an audiobook):</p>
<blockquote>
<p>I got up from the air mattress and pressed my face against the glass of the bedroom window, expecting to see eyes staring back at me but instead finding only darkness.</p>
</blockquote>
<p>And here’s what the computer “read” from the subject’s brain activity:</p>
<blockquote>
<p>I just continued to walk up to the window and open the glass I stood on my toes and peered out I didn’t see anything and looked up again I saw nothing.</p>
</blockquote>
<p>The study participants had to cooperate to both train and apply the decoder, so that the privacy of their thoughts was maintained. However, the researchers warn that “future developments might enable decoders to bypass these requirements”. In other words, mind-reading technology could one day be applied to people against their will.</p>
<p>Future research may also speed up the training and decoding process. While it took 16 hours to train the machine to read what someone was thinking in the current version, this time is likely to decrease significantly in future updates. And as we have seen with other AI applications, the decoder is also likely to get more accurate over time.</p>
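<p>For readers curious about the mechanics, the train-then-decode pipeline described above can be sketched in a few lines of Python. This is a toy illustration only – not the Texas team’s actual code – in which random vectors stand in for fMRI scans, small “embedding” vectors stand in for text features, a ridge regression learns the brain-to-text mapping, and decoding simply picks the candidate sentence whose features best match the prediction (a crude stand-in for the study’s language-model-guided search over word sequences). All names, dimensions and candidate sentences below are invented for illustration.</p>
<pre><code class="language-python">
# Toy sketch of encode-then-decode "mind reading" -- synthetic data only.
import numpy as np

rng = np.random.default_rng(0)

N_TRAIN = 500     # paired (scan, text-feature) training examples
N_VOXELS = 200    # simulated fMRI voxels per scan
N_FEATURES = 16   # dimensionality of the toy text embedding

# 1. Training: learn a linear map from brain activity to text features.
#    (In the real study, the text features come from a language model.)
true_map = rng.normal(size=(N_VOXELS, N_FEATURES))  # hidden brain/text relation
train_feats = rng.normal(size=(N_TRAIN, N_FEATURES))
train_scans = train_feats @ true_map.T + 0.5 * rng.normal(size=(N_TRAIN, N_VOXELS))

lam = 10.0  # ridge penalty: real fMRI data has far more voxels than examples
W = np.linalg.solve(train_scans.T @ train_scans + lam * np.eye(N_VOXELS),
                    train_scans.T @ train_feats)

# 2. Decoding: predict text features from a new scan, then choose the
#    candidate sentence whose (toy) embedding is most similar.
candidates = {
    "I walked up to the window and peered out": rng.normal(size=N_FEATURES),
    "I stayed on the mattress and closed my eyes": rng.normal(size=N_FEATURES),
}
target = "I walked up to the window and peered out"
new_scan = candidates[target] @ true_map.T + 0.5 * rng.normal(size=N_VOXELS)
predicted = new_scan @ W

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

best = max(candidates, key=lambda s: cosine(candidates[s], predicted))
print("Decoded guess:", best)  # recovers the target sentence
</code></pre>
<p>Even this toy version shows why the approach is data-hungry and participant-specific: the mapping is fitted to one person’s brain responses, which is why each participant in the study needed many hours of their own training data.</p>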
<p>There’s another reason this represents a step-change. Researchers have been working for decades on brain-machine interfaces in a race to create mind-reading technologies that can perceive someone’s thoughts and turn them into text or images. But typically, this research has focused on medical implants, with the aim of helping disabled people speak their thoughts. </p>
<p>Neuralink, the neurotechnology company founded by Elon Musk, is <a href="https://neuralink.com/approach/">developing a medical implant</a> that can “let you control a computer or mobile device anywhere you go”. But the need to undergo brain surgery to have a device implanted in you is likely to remain a barrier to the use of such technology. </p>
<p>The improvements in accuracy of this new non-invasive technology could make it a gamechanger, however. For the first time, mind-reading technology looks viable by combining two technologies that are readily available – albeit with a hefty price tag. MRI machines currently cost anywhere between US$150,000 and US$1 million (£120,000 and £800,000).</p>
<h2>Legal and ethical ramifications</h2>
<p>Data privacy law currently does not consider thought as a form of data. We need new laws that prevent the emergence of thought crime, thought data breaches, and even one day, perhaps, the implantation or manipulation of thought. Going from reading thought to implanting it may take a long time yet, but both require pre-emptive regulation and oversight.</p>
<figure class="align-center ">
<img alt="Open-plan office" src="https://images.theconversation.com/files/526564/original/file-20230516-21-f36ehs.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/526564/original/file-20230516-21-f36ehs.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/526564/original/file-20230516-21-f36ehs.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/526564/original/file-20230516-21-f36ehs.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/526564/original/file-20230516-21-f36ehs.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/526564/original/file-20230516-21-f36ehs.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/526564/original/file-20230516-21-f36ehs.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">Misuse of the technology could allow employers to exert new levels of control over workers.</span>
<span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/interior-busy-modern-open-plan-office-633468953">Monkey Business Images / Shutterstock</a></span>
</figcaption>
</figure>
<p>Researchers from the University of Oxford <a href="https://link.springer.com/chapter/10.1007/978-3-030-69277-3_8">are arguing for</a> a legal right to mental integrity, which they describe as:</p>
<blockquote>
<p>A right against significant, non-consensual interference with one’s mind. </p>
</blockquote>
<p>Others are beginning to defend a new human right to <a href="https://academic.oup.com/hrlr/article/22/4/ngac028/6809071">freedom of thought</a>. This would extend beyond traditional definitions of free speech, to protect our ability to ponder, wonder and dream.</p>
<p>A world without regulation could become dystopian very quickly. Imagine a boss, teacher or state official being able to invade your private thoughts – or worse, being able to change and manipulate them. </p>
<p>We are already seeing <a href="https://link.springer.com/article/10.1007/s10648-020-09565-7">eye-scanning technologies being deployed</a> in classrooms to track students’ eye movements during lessons, to tell if they’re paying attention. What happens when mind-reading technologies are next? </p>
<p>Similarly, what happens in the workplace when employees are no longer allowed to think about dinner, or anything outside of work? The level of abusive control of workers could exceed anything previously imagined.</p>
<p>George Orwell wrote convincingly of the dangers of “<a href="https://en.wikipedia.org/wiki/Thoughtcrime">Thoughtcrime</a>”, where the state makes it a crime to merely think rebellious thoughts about an authoritarian regime. The plot of Nineteen Eighty-Four, however, was based on state officials reading body language, diaries or other external indications of what someone was thinking.</p>
<p>With new mind-reading technology, Orwell’s novel would become very short indeed – perhaps even as short as a single sentence:</p>
<blockquote>
<p>Winston Smith thought to himself: “Down with Big Brother” – following which, he was arrested and executed.</p>
</blockquote><img src="https://counter.theconversation.com/content/205503/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Joshua Krook receives funding from the UKRI Trustworthy Autonomous Systems Hub. </span></em></p>Brain scans have been used to interpret thoughts, but how far can this technology go?Joshua Krook, Research Fellow in Responsible Artificial Intelligence, University of SouthamptonLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/2021762023-05-17T22:51:24Z2023-05-17T22:51:24ZIt’s time for us to talk about creating AI-free spaces<figure><img src="https://images.theconversation.com/files/518176/original/file-20230329-787-qn3mqc.jpg?ixlib=rb-1.1.0&rect=12%2C0%2C2143%2C2357&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">
</span> <span class="attribution"><a class="source" href="https://unsplash.com/fr/@mo_design_3d">Mo design</a>, <a class="license" href="http://creativecommons.org/licenses/by/4.0/">CC BY</a></span></figcaption></figure><p>In Dan Simmons’ 1989 sci-fi classic <a href="https://en.wikipedia.org/wiki/Hyperion_(Simmons_novel)"><em>Hyperion</em></a>, the novel’s protagonists are permanently connected to an artificial intelligence network known as the “Datasphere” that instantly feeds information directly to their brains. While knowledge is available immediately, the ability to think by oneself is lost.</p>
<p>More than 30 years after Simmons’ novel was published, the rising impact of AI on our intellectual abilities might be thought of in similar terms. To mitigate these risks, I offer a solution that can reconcile AI’s progress with the need to respect and preserve our cognitive capacities.</p>
<p>The benefits of AI for human well-being are wide-ranging and well publicised. Among them is the technology’s potential to advance <a href="https://www.alandix.com/academic/talks/AI-Summit-NY-2021-AISJ/">social justice</a>, combat <a href="https://idss.mit.edu/news/how-ai-can-help-combat-systemic-racism/">systemic racism</a>, improve <a href="https://theconversation.com/breast-cancer-diagnosis-by-ai-now-as-good-as-human-experts-115487">cancer detection</a>, mitigate the <a href="https://theconversation.com/how-machine-learning-is-helping-us-fine-tune-climate-models-to-reach-unprecedented-detail-165818">environmental crisis</a> and boost <a href="https://knowledge4policy.ec.europa.eu/sites/default/files/s2_2_vertesy.pdf">productivity</a>.</p>
<p>However, the darker aspects of AI are also coming into focus, including <a href="https://theconversation.com/criminal-justice-algorithms-being-race-neutral-doesnt-mean-race-blind-177120">racial bias</a>, its capacity to deepen <a href="https://theconversation.com/artificial-intelligence-can-deepen-social-inequality-here-are-5-ways-to-help-prevent-this-152226">socio-economic disparities</a> and <a href="https://theconversation.com/ai-can-now-learn-to-manipulate-human-behaviour-155031">manipulate</a> our emotions and behaviour.</p>
<h2>The West’s first AI rulebook?</h2>
<p>In spite of the growing risks, there are still <a href="https://www.brookings.edu/research/the-eu-and-us-diverge-on-ai-regulation-a-transatlantic-comparison-and-steps-to-alignment/">no binding national or international rules</a> regulating AI. That is why the European Commission’s <a href="https://www.europarl.europa.eu/meetdocs/2014_2019/plmrep/COMMITTEES/CJ40/DV/2023/05-11/ConsolidatedCA_IMCOLIBE_AI_ACT_EN.pdf">proposal</a> for a regulation on artificial intelligence is so relevant.</p>
<p>The EC’s proposed AI Act, of which the <a href="https://www.europarl.europa.eu/meetdocs/2014_2019/plmrep/COMMITTEES/CJ40/DV/2023/05-11/ConsolidatedCA_IMCOLIBE_AI_ACT_EN.pdf">latest draft</a> was green-lit by the European Parliament’s two committees last week, examines the potential risks inherent in the technology’s use, and classifies them according to three categories: “unacceptable”, “high” and “other”. In the first category, AI practices that would be forbidden are those that:</p>
<ul>
<li><p>Manipulate a person’s behaviour in a manner that causes or is likely to cause that person or another person physical or psychological harm. </p></li>
<li><p>Exploit the vulnerabilities of a specific group of persons (e.g., age, disabilities) so that AI distorts the behaviour of these persons and is likely to produce harm.</p></li>
<li><p>Evaluate and classify people (e.g., social scoring).</p></li>
<li><p>Employ real-time facial recognition in public spaces for the purpose of law enforcement, except in specific cases (e.g., terrorist attacks).</p></li>
</ul>
<p>In the AI Act, the notions of “unacceptable” risks and harms are closely related. These are important steps, and they reveal the need to protect specific activities and physical spaces from the interference of AI. My colleague Caitlin Mulholland and I have shown the need for <a href="https://osf.io/preprints/socarxiv/6rshg">stronger AI and facial recognition regulation</a> to protect basic human rights such as privacy.</p>
<p>This is particularly true of recent developments in AI that involve <a href="https://theconversation.com/predicting-justice-what-if-algorithms-entered-the-courthouse-91692">automated decision-making</a> in the judicial field, and of AI’s use for <a href="https://www.accessnow.org/press-release/eu-ai-act-migration-status/">migration management</a>. Debates around ChatGPT and OpenAI also raise concerns over their impact on our <a href="https://theconversation.com/generative-ai-like-chatgpt-reveal-deep-seated-systemic-issues-beyond-the-tech-industry-198579">intellectual capacities</a>.</p>
<h2>AI-free sanctuaries</h2>
<p>These cases show concern over deploying AI in sectors where human rights, privacy and cognitive abilities are at stake. They also point to the need for spaces where AI activities should be strongly regulated. </p>
<p>I argue these areas can be defined through the ancient concept of sanctuaries. In an article on <a href="https://theconversation.com/explainer-what-is-surveillance-capitalism-and-how-does-it-shape-our-economy-119158">“surveillance capitalism”</a>, Shoshana Zuboff presciently refers to the right of sanctuary as an antidote to power, taking us on a tour of sacred sites, churches and monasteries where oppressed communities once found refuge. Against the pervasiveness of digital surveillance, Zuboff insists on the right of sanctuary through the creation of <a href="https://news.harvard.edu/gazette/story/2019/03/harvard-professor-says-surveillance-capitalism-is-undermining-democracy/">robust digital regulation</a> so that we can enjoy a “space of inviolable refuge”.</p>
<p>The idea of “AI-free sanctuaries” does not imply the prohibition of AI systems, but rather stronger regulation of how these technologies are applied. In the case of the EU’s AI Act, it implies a more precise definition of the idea of harm. However, there is no clear definition of harm in the EU’s proposed legislation, nor at the level of member states. As <a href="https://policyreview.info/articles/news/identifying-harm-manipulative-artificial-intelligence-practices/1608">Suzanne Vergnolle</a> argues, a possible solution would be to find shared criteria among European member states that better describe the types of harm resulting from manipulative AI practices. Collective harms based on race and socio-economic background should also be considered.</p>
<p>To implement AI-free sanctuaries, regulations protecting us from <a href="https://www.law.kuleuven.be/citip/blog/my-brain-hurts-can-the-ai-act-adequately-protect-cognitive-and-or-mental-harm-by-ai-software-part-2/">cognitive and mental harm</a> should be enforced. A starting point would be to enforce a new generation of rights – “neurorights” – that would protect our cognitive liberty amid the rapid progress of neurotechnologies. <a href="https://lsspjournal.biomedcentral.com/articles/10.1186/s40504-017-0050-1">Roberto Andorno and Marcello Ienca</a> hold that the <em>right to mental integrity</em> – already protected by the European Court of Human Rights – should go beyond cases of mental illness and address unauthorised intrusions, including by AI systems.</p>
<h2>AI-free sanctuaries: a manifesto</h2>
<p>To that end, I would like to suggest a right to “AI-free sanctuaries”, which encapsulates the following (provisional) articles:</p>
<ul>
<li><p>The right to opt out. All individuals have the right to opt out of AI support in sensitive areas of their choosing, for whatever period of time they decide. This may entail either complete non-interference from AI devices or only moderate interference.</p></li>
<li><p>No sanctions. Opting out from AI support will never entail any economic or social drawbacks.</p></li>
<li><p>The right to human determination. All individuals have the right to a final determination <a href="https://thepublicvoice.org/ai-universal-guidelines/">made by a human person</a>.</p></li>
<li><p>Sensitive areas and people. In collaboration with civil society and private actors, public authorities will define areas that are particularly sensitive (e.g., education, health), as well as social groups, such as children, that should not be exposed – or only moderately exposed – to intrusive AI.</p></li>
</ul>
<h2>AI-free sanctuaries in the physical world</h2>
<p>Until now, “AI-free spaces” have been applied unevenly, from a strictly spatial point of view. Some US and European schools have chosen to keep screens out of classrooms – the so-called <a href="https://www.theguardian.com/education/2015/sep/29/the-no-tech-school-where-screens-are-off-limits-even-at-home">“low-tech/no-tech education” movement</a>. Many digital-education programs rely on designs that can encourage addiction, while public and underfunded schools tend to rely increasingly on screens and digital tools, deepening a <a href="https://www.turninglifeon.org/execs-on-tech">social divide</a>. </p>
<p>Even outside of controlled settings such as classrooms, AI’s reach is expanding. To push back, between 2019 and 2021, a dozen US cities passed laws restricting or prohibiting the use of facial recognition for law-enforcement purposes. Since 2022, however, many cities have been <a href="https://www.reuters.com/world/us/us-cities-are-backing-off-banning-facial-recognition-crime-rises-2022-05-12/">backing off</a> in response to a perception of rising crime. And despite the EC’s proposed legislation, in France, AI video surveillance <a href="https://www.france24.com/en/europe/20230323-french-mps-battle-over-ai-video-surveillance-cameras-at-paris-olympics">cameras</a> will monitor the Paris Olympics in 2024.</p>
<p>Despite its potential to reinforce inequalities, <a href="https://theconversation.com/facial-analysis-ai-is-being-used-in-job-interviews-it-will-probably-reinforce-inequality-124790">facial-analysis AI is being used in some job interviews</a>. Fed with the data of candidates who were successful in the past, AI tends to select candidates from privileged backgrounds and exclude those from diverse ones. Such practices should be prohibited. </p>
<p>AI-powered Internet search engines should also be prohibited, as the technology is not ready to be used at this level. Indeed, as Melissa Heikkilä points out in a 2023 <a href="https://www.technologyreview.com/2023/02/14/1068498/why-you-shouldnt-trust-ai-search-engines/"><em>MIT Technology Review</em> article</a>, “AI-generated text looks authoritative and cites sources, which could ironically make users even less likely to double-check the information they’re seeing”. There’s also a measure of exploitation, as “the users are now doing the work of testing this technology for free.”</p>
<h2>Permitting progress, preserving rights</h2>
<p>The right to AI-free sanctuaries would allow the technical progress of AI to continue while simultaneously protecting the cognitive and emotional capacities of all individuals. Being able to opt out of the use of AI is essential if we want to preserve our ability to acquire knowledge, to experience the world in our own ways, and to maintain our moral judgement.</p>
<p>In Dan Simmons’ novel, a reborn “cybrid” of the poet John Keats is disconnected from the Datasphere and is able to resist the takeover of the AIs. This point is instructive, since it also reveals the relevance of the <a href="https://theconversation.com/when-the-line-between-machine-and-artist-becomes-blurred-103149">debates</a> on AI’s interference in <a href="https://theconversation.com/the-price-of-ai-art-has-the-bubble-burst-128698">arts</a>, <a href="https://theconversation.com/ais-first-pop-album-ushers-in-a-new-musical-era-100876">music</a>, literature and culture. Indeed, along with <a href="https://theconversation.com/no-the-lensa-ai-app-technically-isnt-stealing-artists-work-but-it-will-majorly-shake-up-the-art-world-196480">copyright issues</a>, these human activities are closely tied to our imagination and creativity, and those capacities are the cornerstone of our ability to resist and think for ourselves.</p>
<p class="fine-print"><em><span>Antonio Pele a reçu des financements de la Commission Européenne, Projet Horizon 2020, Marie Sklodowska-Curie Action . Making Humans: Human Dignity in Nineteenth-Century France HuDig19:
<a href="https://cordis.europa.eu/project/id/101027394/fr">https://cordis.europa.eu/project/id/101027394/fr</a>
Host & Partner institutions: IRIS/EHESS-Paris & The Columbia Center for Contemporary Critical Thought CT, New-York </span></em></p>Setting up AI-free ‘sanctuaries’ could allow us to reap the technology’s benefits while offering vital safeguards to our cognitive capacities and privacy.Antonio Pele, Associate professor, Law School at PUC-Rio University; Marie Curie Fellow at IRIS/EHESS Paris; MSCA Fellow at the Columbia Center for Contemporary Critical Thought (CCCCT), Pontifícia Universidade Católica do Rio de Janeiro (PUC-Rio)Licensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/2044442023-04-27T12:30:48Z2023-04-27T12:30:48ZAI is exciting – and an ethical minefield: 4 essential reads on the risks and concerns about this technology<figure><img src="https://images.theconversation.com/files/522863/original/file-20230425-1294-jxaicn.jpg?ixlib=rb-1.1.0&rect=0%2C5%2C1900%2C1563&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Who's in control?</span> <span class="attribution"><a class="source" href="https://www.gettyimages.com/detail/photo/technology-risk-and-security-issues-royalty-free-image/923872400?phrase=%22artificial%20intelligence%22%20danger&adppopup=true">John Lund/Stone via Getty Images</a></span></figcaption></figure><p>If you’re like me, you’ve spent a lot of time over the past few months trying to figure out what this AI thing is all about. Large-language models, generative AI, algorithmic bias – it’s a lot for the less tech-savvy of us to sort out, trying to make sense of the myriad headlines about artificial intelligence swirling about.</p>
<p>But understanding how AI works is just part of the dilemma. As a society, we’re also confronting concerns about its social, psychological and ethical effects. Here we spotlight articles about the deeper questions the AI revolution raises about bias and inequality, the learning process, its <a href="https://theconversation.com/ai-and-the-future-of-work-5-experts-on-what-chatgpt-dall-e-and-other-ai-tools-mean-for-artists-and-knowledge-workers-196783">impact on jobs</a>, and even the artistic process.</p>
<h2>1. Ethical debt</h2>
<p>When a company rushes software to market, it often accrues “technical debt”: the cost of having to fix bugs after a program is released, instead of ironing them out beforehand. </p>
<p>There are examples of this in AI as companies race ahead to compete with each other. More alarming, though, is “<a href="https://theconversation.com/ai-has-social-consequences-but-who-pays-the-price-tech-companies-problem-with-ethical-debt-203375">ethical debt</a>,” when development teams haven’t considered possible social or ethical harms – how AI could replace human jobs, for example, or when <a href="https://theconversation.com/criminal-justice-algorithms-being-race-neutral-doesnt-mean-race-blind-177120">algorithms end up reinforcing biases</a>.</p>
<p><a href="https://www.colorado.edu/cmci/people/information-science/casey-fiesler">Casey Fiesler</a>, a technology ethics expert at the University of Colorado Boulder, wrote that she’s “a technology optimist who thinks and prepares like a pessimist”: someone who puts in time speculating about what might go wrong. </p>
<p>That kind of speculation is an especially useful skill for technologists trying to envision consequences that might not impact them, Fiesler explained, but that could hurt “marginalized groups that are underrepresented” in tech fields. When it comes to ethical debt, she noted, “the people who incur it are rarely the people who pay for it in the end.”</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/ai-has-social-consequences-but-who-pays-the-price-tech-companies-problem-with-ethical-debt-203375">AI has social consequences, but who pays the price? Tech companies' problem with 'ethical debt'</a>
</strong>
</em>
</p>
<hr>
<h2>2. Is anybody there?</h2>
<p>AI programs’ abilities can give the impression that they are sentient, but they’re not, explained <a href="https://www.umb.edu/faculty_staff/bio/nir_eisikovits">Nir Eisikovits</a>, director of the Applied Ethics Center at the University of Massachusetts Boston. “ChatGPT and similar technologies are sophisticated sentence completion applications – nothing more, nothing less,” he wrote. </p>
<p>But saying <a href="https://theconversation.com/ai-isnt-close-to-becoming-sentient-the-real-danger-lies-in-how-easily-were-prone-to-anthropomorphize-it-200525">AI isn’t conscious</a> doesn’t mean it’s harmless. </p>
<p>“To me,” Eisikovits explained, “the pressing question is not whether machines are sentient but why it is so easy for us to imagine that they are.” Humans easily project human features onto just about anything, including technology. That tendency to anthropomorphize “points to real risks of psychological entanglement with technology,” according to Eisikovits, who studies AI’s impact on how people understand themselves.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/522717/original/file-20230424-18-s88z43.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="A human hand against a dark background reaches out to touch a hologram-like hand." src="https://images.theconversation.com/files/522717/original/file-20230424-18-s88z43.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/522717/original/file-20230424-18-s88z43.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/522717/original/file-20230424-18-s88z43.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/522717/original/file-20230424-18-s88z43.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/522717/original/file-20230424-18-s88z43.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/522717/original/file-20230424-18-s88z43.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/522717/original/file-20230424-18-s88z43.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">People give names to boats and cars – and can get attached to AI, too.</span>
<span class="attribution"><a class="source" href="https://www.gettyimages.com/detail/photo/artificial-intelligence-robot-finger-touching-to-royalty-free-image/1182764552?phrase=artificial%20intelligence%20finger&adppopup=true">Yuichiro Chino/Moment via Getty Images</a></span>
</figcaption>
</figure>
<p>Considering how many people talk to their pets and cars, it shouldn’t be a surprise that chatbots can come to mean so much to people who engage with them. The next steps, though, are “strong guardrails” to prevent programs from taking advantage of that emotional connection.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/ai-isnt-close-to-becoming-sentient-the-real-danger-lies-in-how-easily-were-prone-to-anthropomorphize-it-200525">AI isn't close to becoming sentient – the real danger lies in how easily we're prone to anthropomorphize it</a>
</strong>
</em>
</p>
<hr>
<h2>3. Putting pen to paper</h2>
<p>From the start, ChatGPT fueled parents’ and teachers’ fears about cheating. How could educators – or college admissions officers, for that matter – figure out if an essay was written by a human or a chatbot?</p>
<p>But AI sparks more fundamental questions about writing, according to <a href="https://www.american.edu/cas/faculty/nbaron.cfm">Naomi Baron</a>, an American University linguist who studies technology’s effects on language. AI’s potential threat to writing isn’t just about honesty, but about <a href="https://theconversation.com/how-chatgpt-robs-students-of-motivation-to-write-and-think-for-themselves-197875">the ability to think itself</a>.</p>
<figure class="align-right zoomable">
<a href="https://images.theconversation.com/files/522716/original/file-20230424-28-k614lx.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="A woman with short hair, a necklace, and a short-sleeve dress smiles guardedly in a black and white photograph." src="https://images.theconversation.com/files/522716/original/file-20230424-28-k614lx.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=237&fit=clip" srcset="https://images.theconversation.com/files/522716/original/file-20230424-28-k614lx.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=805&fit=crop&dpr=1 600w, https://images.theconversation.com/files/522716/original/file-20230424-28-k614lx.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=805&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/522716/original/file-20230424-28-k614lx.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=805&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/522716/original/file-20230424-28-k614lx.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=1012&fit=crop&dpr=1 754w, https://images.theconversation.com/files/522716/original/file-20230424-28-k614lx.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=1012&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/522716/original/file-20230424-28-k614lx.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=1012&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">American writer Flannery O'Connor sits with a copy of her novel ‘Wise Blood,’ published in 1952.</span>
<span class="attribution"><a class="source" href="https://www.gettyimages.com/detail/news-photo/american-writer-flannery-oconnor-with-her-book-wise-blood-news-photo/95002520?adppopup=true">Apic/Hulton Archive via Getty Images</a></span>
</figcaption>
</figure>
<p>Baron pointed to novelist Flannery O'Connor’s remark that “I write because I don’t know what I think until I read what I say.” In other words, writing isn’t just a way to put your thoughts on paper; it’s a process to help sort out your thoughts in the first place.</p>
<p>AI text generation can be a handy tool, Baron wrote, but “there’s a slippery slope between collaboration and encroachment.” As we wade into a world of more and more AI, it’s key to remember that “crafting written work should be a journey, not just a destination.”</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/how-chatgpt-robs-students-of-motivation-to-write-and-think-for-themselves-197875">How ChatGPT robs students of motivation to write and think for themselves</a>
</strong>
</em>
</p>
<hr>
<h2>4. The value of art</h2>
<p>Generative AI programs don’t just produce text, but also complex images – which have even captured <a href="https://www.smithsonianmag.com/smart-news/artificial-intelligence-art-wins-colorado-state-fair-180980703/">a prize or two</a>. In theory, allowing AI to do nitty-gritty execution might free up human artists’ big-picture creativity.</p>
<p>Not so fast, said Eisikovits and <a href="https://scholar.google.com/citations?user=m_VO5XcAAAAJ&hl=en&oi=ao">Alec Stubbs</a>, who is also a philosopher at the University of Massachusetts Boston. The finished object viewers appreciate is <a href="https://theconversation.com/chatgpt-dall-e-2-and-the-collapse-of-the-creative-process-196461">just part of the process we call “art</a>.” For creator and appreciator alike, what makes art valuable is “the work of making something real and working through its details”: the struggle to turn ideas into something we can see.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/chatgpt-dall-e-2-and-the-collapse-of-the-creative-process-196461">ChatGPT, DALL-E 2 and the collapse of the creative process</a>
</strong>
</em>
</p>
<hr>
<p><em>Editor’s note: This story is a roundup of articles from The Conversation’s archives.</em></p>
AI is poised to reshape parts of US culture and society. Have tech developments raced ahead of our ability to understand the consequences?Molly Jackson, Religion and Ethics EditorLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/2030502023-04-05T00:48:28Z2023-04-05T00:48:28ZCalls to regulate AI are growing louder. But how exactly do you regulate a technology like this?<figure><img src="https://images.theconversation.com/files/519438/original/file-20230404-16-jfu33i.jpeg?ixlib=rb-1.1.0&rect=26%2C26%2C2969%2C1967&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">
</span> <span class="attribution"><span class="source">Shutterstock</span></span></figcaption></figure><p>Last week, artificial intelligence pioneers and experts urged major AI labs to immediately pause the training of AI systems more powerful than GPT-4 for at least six months. </p>
<p>An <a href="https://futureoflife.org/open-letter/pause-giant-ai-experiments/">open letter</a> penned by the <a href="https://www.theguardian.com/technology/commentisfree/2022/dec/04/longtermism-rich-effective-altruism-tech-dangerous">Future of Life Institute</a> cautioned that AI systems with “human-competitive intelligence” could become a major threat to humanity. Among the risks, the possibility of AI outsmarting humans, rendering us obsolete, and <a href="https://time.com/6266923/ai-eliezer-yudkowsky-open-letter-not-enough/">taking control of civilisation</a>.</p>
<p>The letter emphasises the need to develop a comprehensive set of protocols to govern the development and deployment of AI. It states:</p>
<blockquote>
<p>These protocols should ensure that systems adhering to them are safe beyond a reasonable doubt. This does not mean a pause on AI development in general, merely a stepping back from the dangerous race to ever-larger unpredictable black-box models with emergent capabilities.</p>
</blockquote>
<p>Typically, the battle for regulation has pitted governments and large technology companies against one another. But the recent open letter – so far signed by more than 5,000 signatories including Twitter and Tesla CEO Elon Musk, Apple co-founder Steve Wozniak and OpenAI scientist Yonas Kassa – seems to suggest more parties are finally converging on one side. </p>
<p>Could we really implement a streamlined, global framework for AI regulation? And if so, what would this look like?</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/i-used-to-work-at-google-and-now-im-an-ai-researcher-heres-why-slowing-down-ai-development-is-wise-202944">I used to work at Google and now I'm an AI researcher. Here's why slowing down AI development is wise</a>
</strong>
</em>
</p>
<hr>
<h2>What regulation already exists?</h2>
<p>In Australia, the government has established the <a href="https://www.csiro.au/en/work-with-us/industries/technology/national-ai-centre">National AI Centre</a> to help develop the nation’s <a href="https://www.industry.gov.au/science-technology-and-innovation/technology/artificial-intelligence">AI and digital ecosystem</a>. Under this umbrella is the <a href="https://www.csiro.au/en/work-with-us/industries/technology/National-AI-Centre/Responsible-AI-Network">Responsible AI Network</a>, which aims to drive responsible practice and provide leadership on laws and standards. </p>
<p>However, there is currently no specific regulation on AI and algorithmic decision-making in place. The government has taken a light-touch approach that widely embraces the concept of responsible AI, but stops short of setting parameters that will ensure it is achieved.</p>
<p>Similarly, the US has adopted a <a href="https://dataconomy.com/2022/10/artificial-intelligence-laws-and-regulations/">hands-off strategy</a>. Lawmakers have not shown any <a href="https://www.nytimes.com/2023/03/03/business/dealbook/lawmakers-ai-regulations.html">urgency</a> in attempts to regulate AI, and have relied on existing laws to regulate its use. The <a href="https://www.uschamber.com/assets/documents/CTEC_AICommission2023_Exec-Summary.pdf">US Chamber of Commerce</a> recently called for AI regulation, to ensure it doesn’t hurt growth or become a national security risk, but no action has been taken yet.</p>
<p>Leading the way in AI regulation is the European Union, which is racing to create an <a href="https://artificialintelligenceact.eu/">Artificial Intelligence Act</a>. This proposed law will assign three risk categories relating to AI:</p>
<ul>
<li>applications and systems that create “unacceptable risk” will be banned, such as government-run social scoring used in China</li>
<li>applications considered “high-risk”, such as CV-scanning tools that rank job applicants, will be subject to specific legal requirements, and</li>
<li>all other applications will be largely unregulated.</li>
</ul>
<p>Although some groups argue the EU’s approach will <a href="https://carnegieendowment.org/2023/02/14/lessons-from-world-s-two-experiments-in-ai-governance-pub-89035">stifle innovation</a>, it’s one Australia should closely monitor, because it balances offering predictability with keeping pace with the development of AI. </p>
<p>China’s approach to AI has focused on targeting specific algorithm applications and writing regulations that address their deployment in certain contexts – for instance, algorithms that generate harmful information. While this approach offers specificity, it risks producing rules that quickly fall behind rapidly <a href="https://carnegieendowment.org/2023/02/14/lessons-from-world-s-two-experiments-in-ai-governance-pub-89035">evolving technology</a>.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/ai-chatbots-with-chinese-characteristics-why-baidus-chatgpt-rival-may-never-measure-up-202109">AI chatbots with Chinese characteristics: why Baidu's ChatGPT rival may never measure up</a>
</strong>
</em>
</p>
<hr>
<h2>The pros and cons</h2>
<p>There are several arguments both for and against allowing caution to drive the control of AI.</p>
<p>On one hand, AI is celebrated for being able to generate all forms of content, handle mundane tasks and detect cancers, among other things. On the other hand, it can deceive, perpetuate bias and plagiarise – and, of course, it has some experts worried about humanity’s collective future. Even OpenAI’s CTO, <a href="https://time.com/6252404/mira-murati-chatgpt-openai-interview/">Mira Murati</a>, has suggested there should be movement toward regulating AI.</p>
<p>Some scholars have argued excessive regulation may hinder AI’s full potential and interfere with <a href="https://www.sciencedirect.com/science/article/pii/S0267364916300814?casa_token=f7xPY8ocOt4AAAAA:V6gTZa4OSBsJ-DOL-5gSSwV-KKATNIxWTg7YZUenSoHY8JrZILH2ei6GdFX017upMIvspIDcAuND">“creative destruction”</a> – a theory which suggests long-standing norms and practices must be pulled apart in order for innovation to thrive.</p>
<p>Likewise, over the years <a href="https://www.businessroundtable.org/policy-perspectives/technology/ai">business groups</a> have pushed for regulation that is flexible and limited to targeted applications, so that it doesn’t hamper competition. And <a href="https://www.bitkom.org/sites/main/files/2020-06/03_bitkom_position-on-whitepaper-on-ai_all.pdf">industry associations</a> have called for ethical “guidance” rather than regulation – arguing that AI development is too fast-moving and open-ended to adequately regulate. </p>
<p>But citizens seem to advocate for more oversight. According to reports by Bristows and KPMG, about two-thirds of <a href="https://www.abc.net.au/news/2023-03-29/australians-say-not-enough-done-to-regulate-ai/102158318">Australian</a> and <a href="https://www.bristows.com/app/uploads/2019/06/Artificial-Intelligence-Public-Perception-Attitude-and-Trust.pdf">British</a> people believe the AI industry should be regulated and held accountable.</p>
<h2>What’s next?</h2>
<p>A six-month pause on the development of advanced AI systems could offer welcome respite from an AI arms race that just doesn’t seem to be letting up. However, to date there has been no effective global effort to meaningfully regulate AI. Efforts the world over have been fractured, delayed and generally lax.</p>
<p>A global moratorium would be difficult to enforce, but not impossible. The open letter raises questions around the role of governments, which have largely been silent regarding the potential harms of extremely capable AI tools. </p>
<p>If anything is to change, governments and national and supra-national regulatory bodies will need to take the lead in ensuring accountability and safety. As the letter argues, decisions concerning AI at a societal level should not be in the hands of “unelected tech leaders”.</p>
<p>Governments should therefore engage with industry to co-develop a global framework that lays out comprehensive rules governing AI development. This is the best way to protect against harmful impacts and avoid a race to the bottom. It also avoids the undesirable situation where governments and tech giants struggle for dominance over the future of AI. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/the-ai-arms-race-highlights-the-urgent-need-for-responsible-innovation-200218">The AI arms race highlights the urgent need for responsible innovation</a>
</strong>
</em>
</p>
<hr>
<img src="https://counter.theconversation.com/content/203050/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Stan Karanasios is a disinguished member of the Association for Information Systems.</span></em></p><p class="fine-print"><em><span>Olga Kokshagina is an appointed member of the French Digital Council (Conseil national du numérique)</span></em></p><p class="fine-print"><em><span>Pauline C. Reinecke receives funding from the Horizon 2020 Program of the European Union within the OpenInnoTrain project under grant agreement n° 823971.</span></em></p>Governments around the world have so far taken a light-touch approach. It’s not enough if we want to address the various potential harms of AI.Stan Karanasios, Associate professor, The University of QueenslandOlga Kokshagina, Associate Professor - Innovation & Entrepreneurship, EDHEC Business SchoolPauline C. Reinecke, Assistant researcher, University of HamburgLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/2009552023-03-27T20:40:17Z2023-03-27T20:40:17ZChatGPT: Student insights are necessary to help universities plan for the future<figure><img src="https://images.theconversation.com/files/517716/original/file-20230327-22-vnwa4n.jpg?ixlib=rb-1.1.0&rect=0%2C83%2C6962%2C4575&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">What does student feedback about technology reveal about the changing nature of post-secondary education and equitably supporting student development? </span> <span class="attribution"><span class="source"> (AP Photo/Seth Wenig, File)</span></span></figcaption></figure><p>With the launch of ChatGPT to the public, post-secondary institutions are aware of the seismic impact this could have on both the business and art of education. </p>
<p>Educators’ <a href="https://theconversation.com/will-chatgpt-be-the-disrupter-academia-needs-200215">emotions have ranged</a> from intrigue and excitement <a href="https://doi.org/10.14201/eks.31279">to panic about massive disruption</a>.</p>
<p>Public access to this <a href="https://www.forbes.com/sites/garydrenik/2023/01/11/large-language-models-will-define-artificial-intelligence/?sh=77b788d4b60f">large language model (LLM)</a> raises <a href="https://www.insidehighered.com/news/2023/01/12/academic-experts-offer-advice-chatgpt">important questions about teaching and learning</a>, including the design of meaningful assessments, the appropriate use of technology, maintaining academic integrity and quality control over education. </p>
<p>There are also broader existential, ethical and equity concerns, such as those <a href="https://dl.acm.org/doi/10.1145/3442188.3445922">raised by AI ethics researcher Timnit Gebru, computational linguist Emily M. Bender and others</a>.</p>
<p>In response to these legitimate concerns, there has been a <a href="https://www.nytimes.com/2023/01/16/technology/chatgpt-artificial-intelligence-universities.html">frenzy of activity from within ivory towers around the world</a>, including faculty meetings, committee formations and policy developments. </p>
<p>Institutions are struggling to keep up with the dizzying speed of AI advancements over the last several months, and what this means for the traditional writing process. Recently, Microsoft announced Microsoft 365 Copilot, an AI writing assistant, which integrates LLM capabilities into products such as Microsoft Word — promising consumers that they’ll <a href="https://blogs.microsoft.com/blog/2023/03/16/introducing-microsoft-365-copilot-your-copilot-for-work/">“never start with a blank slate again</a>.”</p>
<p>In the race to get ahead of new technologies, are we forgetting about the perspectives of the most important stakeholders within our post-secondary institutions: the students? </p>
<p>Leaving students out of early discussions and decision-making processes is almost always a recipe for ill-fitting, ineffective and/or damaging approaches. The mantra “nothing about us without us” comes to mind here. </p>
<h2>Student responses</h2>
<p>Let’s remember that young people are far more than passive consumers of educational content and experiences. </p>
<p>They are <a href="https://doi.org/10.1111/j.1469-5812.2010.00684.x">creative and savvy participants</a> who are eager for high-value educational experiences. Students have sophisticated ideas educators should be attentive to, and are already deeply <a href="https://www.businessinsider.com/gen-z-tech-savvy-tech-shame-survey-2022-12?op=1">embedded into the techno-social world</a>. </p>
<figure class="align-center ">
<img alt="Students seen walking on a campus" src="https://images.theconversation.com/files/517725/original/file-20230327-16-q2sdjw.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/517725/original/file-20230327-16-q2sdjw.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=439&fit=crop&dpr=1 600w, https://images.theconversation.com/files/517725/original/file-20230327-16-q2sdjw.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=439&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/517725/original/file-20230327-16-q2sdjw.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=439&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/517725/original/file-20230327-16-q2sdjw.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=552&fit=crop&dpr=1 754w, https://images.theconversation.com/files/517725/original/file-20230327-16-q2sdjw.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=552&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/517725/original/file-20230327-16-q2sdjw.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=552&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">Students are eager for high-value educational experiences.</span>
<span class="attribution"><span class="source">(AP Photo/Lynne Sladky)</span></span>
</figcaption>
</figure>
<p>Our combined experience over the last 15 years includes work in education within kindergarten to Grade 12, post-secondary and non-profit sectors, designing teaching and learning strategies, student engagement policies and programs and curricula. </p>
<p>This work reminds us that post-secondary institutions must resist being swayed by a sense of urgency and giving in to paternalistic impulses in the face of rapid change.</p>
<p>Educators and administrators need to engage students in conversations and decisions regarding AI with a genuine curiosity and openness to their desires, insights, concerns and recommendations. </p>
<h2>Accountability and strategic imperative</h2>
<p>This is a matter of accountability, but also a strategic imperative for post-secondary institutions interested in staying responsive to the changing educational and post-graduate landscape. </p>
<p>The inconvenient truth is that ChatGPT is rubbing salt into pre-existing wounds in higher education. With the rising costs <a href="https://www.theglobeandmail.com/featured-reports/article-the-high-price-of-higher-learning/">of post-secondary education</a>, global economic insecurity and technology-enabled access to information, students have already been asking tough questions about the value of their degrees. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/low-funding-for-universities-puts-students-at-risk-for-cycles-of-poverty-especially-in-the-wake-of-covid-19-131363">Low funding for universities puts students at risk for cycles of poverty, especially in the wake of COVID-19</a>
</strong>
</em>
</p>
<hr>
<p>This is the second time in three years that higher education has faced a herculean existential challenge, the first being the abrupt transition to online learning during the pandemic. Still, we can afford to slow down enough to ask what these crises and disruptions reveal about higher education. In fact, we can’t afford not to slow down. </p>
<figure class="align-center ">
<img alt="Students seen walking through a campus with face masks on." src="https://images.theconversation.com/files/517715/original/file-20230327-16-ckgafo.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/517715/original/file-20230327-16-ckgafo.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/517715/original/file-20230327-16-ckgafo.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/517715/original/file-20230327-16-ckgafo.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/517715/original/file-20230327-16-ckgafo.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=502&fit=crop&dpr=1 754w, https://images.theconversation.com/files/517715/original/file-20230327-16-ckgafo.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=502&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/517715/original/file-20230327-16-ckgafo.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=502&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">Faced with economic insecurity, students are asking tough questions about the value of their degrees.</span>
<span class="attribution"><span class="source">(AP Photo/Michael Conroy)</span></span>
</figcaption>
</figure>
<h2>Shortcomings, possibilities of higher education</h2>
<p>For meaningful answers, we should ask students what the ongoing advancement of AI tells us about both the shortcomings and possibilities of higher education. </p>
<p>What students say might reveal more than simply how to use AI tools in classrooms. They might offer insights into more meaningful, enduring approaches that transform post-secondary institutions and educational practices. </p>
<p>A lot of immediate responses to ChatGPT seem to stem from the assumption that students will jump at the opportunity to use the <a href="https://www.cbc.ca/news/business/chatgpt-academic-cheating-1.6732115">technology to cheat</a>.</p>
<p>There are undoubtedly risks around plagiarism, and anecdotal accounts suggest some of this has already begun. However, this should provoke curiosity and actions that extend beyond tighter academic integrity policies and smarter plagiarism detection technologies. </p>
<h2>New models for education?</h2>
<p>We might take this as an opportunity to invite students to discuss their motivations regarding ChatGPT, including why and how they use the tool. We should consider what their motivations reveal about the changing nature of learning and opportunities for new models of post-secondary education. </p>
<p>We are reminded of <a href="https://www.britannica.com/biography/Paulo-Freire">Brazilian educator Paulo Freire’s</a> seminal work, <em><a href="https://www.bloomsbury.com/ca/pedagogy-of-the-oppressed-9781501314162/#">Pedagogy of the Oppressed</a></em>, that stressed how treating students as passive recipients of knowledge deprives both students and educators of the promise of education. </p>
<p>As Freire suggests, the promise, hope and possibilities inherent in education demand a meaningful dialogue between educators and students.</p>
<h2>Equity lens needed</h2>
<p>In this dialogue, post-secondary educators and administrators must also pay attention to how technology always has the potential to <a href="https://www.teachermagazine.com/au_en/articles/chatgpt-education-assessment-equity-and-policy">close or widen equity gaps in education</a>. </p>
<p>Having an equity lens from the outset means post-secondary institutions are paying attention to who technologies are accessible to, who is being served and who is not being served. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/university-anti-racism-policies-use-shared-decision-making-to-hear-bipoc-student-insights-185090">University anti-racism policies: Use shared decision-making to hear BIPOC student insights</a>
</strong>
</em>
</p>
<hr>
<p>We need to engage students in ways that make these dynamics visible to us in real time, so that we can also course correct in real time. </p>
<h2>Student-led research, strategic planning</h2>
<p>Engaging students is not enough; we need to engage them effectively. This means moving beyond the “token-student-representative-on-a-university-committee” model. </p>
<p>Some promising approaches might include investments in student-led research about the changing nature of teaching and learning, mechanisms for spontaneous and ongoing discussions with students and student-centred institutional strategic planning processes. </p>
<p>The stakes are high, but so are the transformative possibilities. </p>
<p>Higher education should be a time and place where students are called to pay attention to, draw upon and activate their knowledge. They need to be invited to participate in projects dedicated to building a common future — the future of our institutions as well as our broader communities.</p>
<p class="fine-print"><em><span>The authors do not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.</span></em></p>Post-secondary student input about ChatGPT and other AI matters not only for accountability, but also as a savvy way to strategize about the future of higher education.Alpha Abebe, Assistant Professor, Faculty of Humanities, McMaster UniversityFenella Amarasinghe, PhD Candidate, Faculty of Education, York University, CanadaLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1985792023-03-05T17:20:09Z2023-03-05T17:20:09ZGenerative AI like ChatGPT reveal deep-seated systemic issues beyond the tech industry<figure><img src="https://images.theconversation.com/files/512548/original/file-20230227-2379-ojc054.jpg?ixlib=rb-1.1.0&rect=0%2C0%2C4126%2C2536&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Some critics have claimed that artificial intelligence chatbot ChatGPT has "killed the essay," while DALL-E, an AI image generator, has been portrayed as a threat to artistic integrity.</span> <span class="attribution"><span class="source">(Shutterstock)</span></span></figcaption></figure><iframe style="width: 100%; height: 100px; border: none; position: relative; z-index: 1;" allowtransparency="" allow="clipboard-read; clipboard-write" src="https://narrations.ad-auris.com/widget/the-conversation-canada/generative-ai-like-chatgpt-reveal-deep-seated-systemic-issues-beyond-the-tech-industry" width="100%" height="400"></iframe>
<p>ChatGPT has cast long shadows over the media as the latest form of <a href="https://www.investopedia.com/terms/d/disruptive-technology.asp">disruptive technology</a>. For some, ChatGPT is a harbinger of the end of <a href="https://www.theguardian.com/technology/2022/dec/04/ai-bot-chatgpt-stuns-academics-with-essay-writing-skills-and-usability">academic</a> and <a href="https://doi.org/10.1038/d41586-023-00107-z">scientific integrity</a>, and <a href="https://www.theatlantic.com/ideas/archive/2023/01/chatgpt-ai-economy-automation-jobs/672767/">a threat to white collar jobs</a> and our <a href="https://www.hks.harvard.edu/centers/mrcbg/programs/growthpolicy/how-chatgpt-hijacks-democracy">democratic institutions</a>. </p>
<p>How concerned should we be about generative artificial intelligence (AI)? The developers of ChatGPT describe it as <a href="https://openai.com/blog/chatgpt/">“a model… which interacts in a conversational way”</a> while also calling it a <a href="https://www.businessinsider.com/openai-sam-altman-chatgpt-cool-but-horrible-product-2023-2">“horrible product”</a> for its inconsistent results.</p>
<p>It can write emails, summarize documents, review code and provide comments, translate documents, create content, play games, and, of course, chat. This is hardly the stuff of a dystopian future. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/unlike-with-academics-and-reporters-you-cant-check-when-chatgpts-telling-the-truth-198463">Unlike with academics and reporters, you can't check when ChatGPT's telling the truth</a>
</strong>
</em>
</p>
<hr>
<p>We should not fear the introduction of technologies, but neither should we assume they serve our interests. Societies are in a constant process of cultural evolution defined by inertia from the past, temporary consensus and disruptive technologies that introduce new ideas and approaches. </p>
<p>We must understand and embrace the co-evolution of humans and technology by considering what a technology is designed to do, how it relates to us and how our lives will change from it.</p>
<h2>Are ChatGPT and DALL-E really creators?</h2>
<p>Along with intelligence, creativity is often considered a uniquely human ability. But creativity is not exclusive to humans — it is a property that has emerged across species as a product of <a href="https://www.sciencedirect.com/topics/biochemistry-genetics-and-molecular-biology/convergent-evolution">convergent evolution</a>.</p>
<p>Species as diverse as <a href="https://doi.org/10.1371/journal.pone.0020231">crows</a>, <a href="https://www.sciencedirect.com/science/article/abs/pii/B978012415823800023X">octopuses</a>, <a href="https://doi.org/10.1073/pnas.0500232102">dolphins</a> and <a href="https://doi.org/10.1371/journal.pone.0010544">chimpanzees</a> can improvise and use tools. </p>
<p>Despite the liberal use of the term, creativity is <a href="https://doi.org/10.4324/9780429501234">notoriously hard to capture</a>. Its features include <a href="https://doi.org/10.1037/10227-011">the quantity of output</a>, <a href="https://doi.org/10.3389/fpsyg.2020.573432">identifying connections between seemingly unrelated things (remote associations)</a> and providing atypical solutions to problems. </p>
<p><a href="https://doi.org/10.1146/annurev.psych.093008.100416">Creativity does not simply reside in the individual; our social networks and values are also important</a>. As the presence of cultural variants increases, we have a larger pool of ideas, products and processes to draw from. </p>
<figure class="align-center ">
<img alt="A group of people sit on the floor looking at a huge ceiling-high screen displaying an abstract artwork with an orange background and swatches of red, black, and ochre across it." src="https://images.theconversation.com/files/512531/original/file-20230227-691-3swwte.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/512531/original/file-20230227-691-3swwte.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/512531/original/file-20230227-691-3swwte.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/512531/original/file-20230227-691-3swwte.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/512531/original/file-20230227-691-3swwte.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/512531/original/file-20230227-691-3swwte.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/512531/original/file-20230227-691-3swwte.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">Visitors view artist Refik Anadol’s <em>Unsupervised</em> exhibit at the Museum of Modern Art in January 2023 in New York. The art installation is AI-generated and meant to be a thought-provoking interpretation of the New York City museum’s prestigious collection.</span>
<span class="attribution"><span class="source">(AP Photo/John Minchillo)</span></span>
</figcaption>
</figure>
<p>Our cultural experiences are resources for creativity. The more diverse ideas we are exposed to, the more novel connections we can make. <a href="https://doi.org/10.1177/0022022110361707">Studies have suggested</a> that <a href="https://doi.org/10.1037/0003-066X.63.3.169">multicultural experience is positively associated with creativity</a>. The greater the distance between cultures, <a href="https://doi.org/10.1177/1948550612462413">the more creative products we can observe</a>.</p>
<p>Creativity can also lead to convergence. Different individuals can create similar ideas independent of one another, a process referred to as <a href="https://www.jstor.org/stable/985546">scientific co-discovery</a>. The invention of calculus and the theory of natural selection are the most prominent examples of this. </p>
<p>Artificial intelligence is defined by its ability to learn, identify patterns and use decision-making rules. </p>
<p>If linguistic and artistic products are patterns, then AI — especially those like ChatGPT and DALL-E — should be capable of creativity by assimilating and combining divergent patterns from different artists. <a href="https://www.pcmag.com/news/creative-tone-loosens-the-reins-on-ai-powered-microsoft-bing">Microsoft’s Bing chatbot</a> claims that as one of its core values.</p>
<h2>AI needs people</h2>
<p>There is a fundamental problem with such programs: art is now data. By scooping up these products through a process of analysis and synthesis, these systems can ignore the contributions and cultural traditions of human creators. Without citing and crediting those sources, the systems can be seen as <a href="https://www.youtube.com/watch?v=IgxzcOugvEI">high-tech plagiarism</a>, appropriating artistic products that have taken generations to accumulate. Concerns of <a href="https://doi.org/10.1037/pspi0000327">cultural appropriation</a> must also be applicable to AI.</p>
<p>AI might someday evolve in unpredictable ways, but for the moment, <a href="https://doi.org/10.1007/s12559-018-9619-0">it still relies on humans</a> for its data, design and operations, and for the social and ethical challenges it presents. </p>
<p>Humans are still needed for quality control. These efforts often reside within the impenetrable <a href="https://doi.org/10.1038/d41586-022-00858-1">black box of AI</a>, with the work frequently outsourced to markets where <a href="https://time.com/6247678/openai-chatgpt-kenya-workers/">labour is cheaper</a>. </p>
<p>The recent high-profile story of CNET’s <a href="https://www.washingtonpost.com/media/2023/01/17/cnet-ai-articles-journalism-corrections/">“AI journalist”</a> presents another example of why skilled human interventions are needed.</p>
<p><a href="https://futurism.com/the-byte/cnet-publishing-articles-by-ai">CNET started discretely using an AI bot to write articles</a> in November 2020. After significant errors were pointed out by other news sites, the website ended up publishing lengthy corrections for the AI-written content and <a href="https://www.cnet.com/tech/cnet-is-testing-an-ai-engine-heres-what-weve-learned-mistakes-and-all/">did a full audit of the tool</a>.</p>
<figure class="align-center ">
<img alt="A robotic hand and a human hand touch their index fingers together, emulating the famous 'The Creation of Adam' painting by Michelangelo" src="https://images.theconversation.com/files/512547/original/file-20230227-1917-vvz0wl.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/512547/original/file-20230227-1917-vvz0wl.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=369&fit=crop&dpr=1 600w, https://images.theconversation.com/files/512547/original/file-20230227-1917-vvz0wl.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=369&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/512547/original/file-20230227-1917-vvz0wl.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=369&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/512547/original/file-20230227-1917-vvz0wl.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=464&fit=crop&dpr=1 754w, https://images.theconversation.com/files/512547/original/file-20230227-1917-vvz0wl.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=464&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/512547/original/file-20230227-1917-vvz0wl.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=464&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">AI might someday evolve in unpredictable ways, but for the moment, it still relies on humans.</span>
<span class="attribution"><span class="source">(Shutterstock)</span></span>
</figcaption>
</figure>
<p>At present, there are no rules to determine whether AI products are creative, coherent or meaningful. These are decisions that must be made by people.</p>
<p>As industries adopt AI, old roles occupied by humans will be lost. Research tells us these losses will be felt the most by those in <a href="https://www.ippr.org/research/publications/women-automation-and-equality">already vulnerable positions</a>. This pattern follows a general trend of <a href="https://www.routledge.com/Ethical-Artificial-Intelligence-from-Popular-to-Cognitive-Science-Trust/Schoenherr/p/book/9780367697983">adopting technologies before we understand — or care about — their social and ethical implications</a>.</p>
<p>Industries rarely consider how a displaced workforce will be re-trained, leaving those individuals and their communities to address these disruptions.</p>
<h2>Systemic issues go beyond AI</h2>
<p>DALL-E has been portrayed as a <a href="https://www.theguardian.com/technology/2022/nov/12/when-ai-can-make-art-what-does-it-mean-for-creativity-dall-e-midjourney">threat to artistic integrity</a> because of its ability to automatically generate images of people, exotic worlds and fantastical imagery. Others claim ChatGPT has <a href="https://www.theatlantic.com/technology/archive/2022/12/chatgpt-ai-writing-college-student-essays/672371/">killed the essay</a>.</p>
<p>Rather than seeing AI as the cause of new problems, we might better understand AI ethics as bringing attention to old ones. <a href="https://eric.ed.gov/?id=EJ771037">Academic misconduct</a> is a common problem caused by underlying issues including peer influence, <a href="https://doi.org/10.1080/00221546.2006.11778956">perceived consensus</a> and <a href="https://doi.org/10.1023/A:1024954224675">perception of penalties</a>.</p>
<p>Programs like ChatGPT and DALL-E will merely facilitate such behaviour. Institutions need to acknowledge these vulnerabilities and develop new policies, procedures and <a href="https://www.researchgate.net/publication/366673569_Building_Trust_with_the_Ethical_Affordances_of_Education_Technologies_A_Sociotechnical_Systems_Perspective">ethical norms</a> to address these issues.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/chatgpt-students-could-use-ai-to-cheat-but-its-a-chance-to-rethink-assessment-altogether-198019">ChatGPT: students could use AI to cheat, but it's a chance to rethink assessment altogether</a>
</strong>
</em>
</p>
<hr>
<p>Questionable research practices <a href="https://doi.org/10.1007/s11948-021-00314-9">are also not uncommon</a>. <a href="https://www.theverge.com/2023/1/26/23570967/chatgpt-author-scientific-papers-springer-nature-ban">Concerns over AI-authored research papers</a> are simply an extension of inappropriate authorship practices, such as <a href="https://doi.org/10.1136/bmj.d6128">ghost and gift authorship in the biomedical sciences</a>. They hinge on <a href="https://doi.org/10.3389/fpsyg.2015.00877">discipline conventions, outdated academic reward systems and a lack of personal integrity</a>. </p>
<p>As publishers reckon with <a href="https://www.theverge.com/2023/1/26/23570967/chatgpt-author-scientific-papers-springer-nature-ban">questions of AI authorship</a>, they must confront deeper issues, like why the mass production of academic papers <a href="https://doi.org/10.1002/asi.22636">continues to be incentivized</a>.</p>
<h2>New solutions to new problems</h2>
<p>Before we shift responsibility to institutions, we need to consider whether we are providing them with sufficient resources to meet these challenges. <a href="https://doi.org/10.1080/00220671.1991.9941813">Teachers are already burned out</a> and <a href="https://sunypress.edu/Books/P/Peerless-Science2">the peer review system is overtaxed</a>.</p>
<p>One solution is to fight AI with AI using <a href="https://www.plagiarismtoday.com/2023/01/05/3-approaches-to-detect-ai-writing/">plagiarism detection tools</a>. Other <a href="https://www.pcgamer.com/put-a-name-to-the-real-artwork-behind-ai-art-with-this-algorithmically-smart-tool/">tools can be developed</a> to attribute art work to its creators, or <a href="https://www.buzzfeednews.com/article/katienotopoulos/ai-writing-detection-tool-homework-students">detect the use of AI in written papers</a>.</p>
<p>The solutions to AI are hardly simple, but they can be stated simply: the fault is not in our AI, but in ourselves. To paraphrase Nietzsche, if you stare into the AI abyss, it will stare back at you.</p>
<p class="fine-print"><em><span>Jordan Richard Schoenherr does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>Rather than seeing artificial intelligence as the cause of new problems, we might better understand AI ethics as bringing attention to old ones.Jordan Richard Schoenherr, Assistant Professor, Psychology, Concordia UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1998822023-02-28T02:26:13Z2023-02-28T02:26:13ZIs there a way to pay content creators whose work is used to train AI? Yes, but it’s not foolproof<figure><img src="https://images.theconversation.com/files/512148/original/file-20230224-22-j2ktnx.jpeg?ixlib=rb-1.1.0&rect=26%2C53%2C4466%2C2937&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">
</span> <span class="attribution"><span class="source">Shutterstock</span></span></figcaption></figure><p>Is imitation the sincerest form of flattery, or theft? Perhaps it comes down to the imitator.</p>
<p>Text-to-image artificial intelligence systems such as DALL-E 2, Midjourney and Stable Diffusion are trained on huge amounts of image data from the web. As a result, they often generate outputs that resemble real artists’ work and style.</p>
<p>It’s safe to say artists <a href="https://www.theguardian.com/australia-news/2022/dec/12/australian-artists-accuse-popular-ai-imaging-app-of-stealing-content-call-for-stricter-copyright-laws">aren’t impressed</a>. To further complicate things, although intellectual property law guards against the misappropriation of individual works of art, this doesn’t extend to emulating a person’s style. </p>
<p>It’s becoming difficult for artists to promote their work online without contributing infinitesimally to the creative capacity of generative AI. Many are now asking if it’s possible to compensate creatives whose art is used in this way. </p>
<p>One approach from photo licensing service Shutterstock goes some way towards addressing the issue.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/no-the-lensa-ai-app-technically-isnt-stealing-artists-work-but-it-will-majorly-shake-up-the-art-world-196480">No, the Lensa AI app technically isn’t stealing artists' work – but it will majorly shake up the art world</a>
</strong>
</em>
</p>
<hr>
<h2>Old contributor model, meet computer vision</h2>
<p>Media content licensing services such as Shutterstock take contributions from photographers and artists and make them available for third parties to license. </p>
<p>In these cases, the commercial interests of licenser, licensee and creative are straightforward. Customers pay to license an image, and a portion of this payment (in Shutterstock’s <a href="https://support.submit.shutterstock.com/s/article/How-much-will-I-be-paid-as-a-contributor-to-Shutterstock?language=en_US">case</a> 15-40%) goes to the creative who provided the intellectual property. </p>
<p>Issues of intellectual property are cut and dried: if somebody uses a Shutterstock image without a licence, or for a purpose outside its terms, it’s a clear breach of the photographer’s or artist’s rights. </p>
<p>However, Shutterstock’s terms of service also allow it to pursue a new way to generate income from intellectual property. Its current contributors’ site has a large focus on <a href="https://support.submit.shutterstock.com/s/article/Shutterstock-ai-and-Computer-Vision-Contributor-FAQ?language=en_US">computer vision</a>, which it defines as:</p>
<blockquote>
<p>a scientific discipline that seeks to develop techniques to help computers ‘see’ and understand the content of digital images such as photographs and videos.</p>
</blockquote>
<p>Computer vision isn’t new. Have you ever told a website you’re not a robot and identified some warped text or pictures of bicycles? If so, you have been <a href="https://apnews.com/article/technology-technology-issues-digitization-spamming-artificial-intelligence-9e2aec49792c3a1e31c1f94f1a5e7ede">actively</a> <a href="https://www.google.com/recaptcha/intro/?hl=es/index.html#:%7E:text=reCAPTCHA%20makes%20positive%20use%20of,and%20solve%20hard%20AI%20problems.">training AI-run</a> computer vision algorithms. </p>
<p>Now, computer vision is allowing Shutterstock to <a href="https://www.shutterstock.com/generate">create</a> what it calls an “ethically sourced, totally clean, and extremely inclusive” <a href="https://www.shutterstock.com/generate?kw=shutterstock">AI image generator</a>.</p>
<h2>What makes Shutterstock’s approach ‘ethical’?</h2>
<p>An immense amount of work goes into classifying millions of images to train the large language models used by AI image generators. But services such as Shutterstock are uniquely positioned to do this. </p>
<p>Shutterstock has access to high-quality images from some <a href="https://investor.shutterstock.com/news-releases/news-release-details/shutterstock-reports-fourth-quarter-and-full-year-2021-financial">two million contributors</a>, all of which are described in some level of detail. It’s the perfect recipe for training a large language model. </p>
<p>These models are essentially vast multidimensional neural networks. The network is fed training data, which it uses to create data points (often called embeddings) that combine visual and conceptual information. The more information there is, the more data points the network can create and link up.</p>
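<p>To make the “data points” idea concrete, the following is a toy sketch, in Python, of how a trained network could place an image and its caption near one another in a shared embedding space. The vectors and their dimensions are invented stand-ins for illustration; this is not the architecture of any specific model.</p>
<pre><code>
# Toy illustration of joint image-text "data points" (embeddings).
# Every vector below is made up; a real model learns such points
# from millions of captioned images.
import numpy as np

def cosine_similarity(a, b):
    # Returns 1.0 when two points sit in exactly the same direction.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

image_point     = np.array([0.9, 0.1, 0.0, 0.2])  # an encoded photo
caption_point   = np.array([0.8, 0.2, 0.1, 0.1])  # its encoded caption
unrelated_point = np.array([0.0, 0.9, 0.1, 0.7])  # an unrelated caption

print(cosine_similarity(image_point, caption_point))    # high: concepts linked up
print(cosine_similarity(image_point, unrelated_point))  # low: no learned link
</code></pre>
<p>Once millions of such points are linked, a generator can navigate between them, which is exactly why no single training image can be traced through to an output.</p>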
<p>This distinction between a collection of images and a constellation of abstract data points lies at the heart of the issue of compensating creatives whose work is used to train generative AI. </p>
<p>Even in the case where a system has learnt to associate a very specific image <a href="https://arxiv.org/pdf/2301.13188.pdf">with a label</a>, there’s no meaningful way to trace a clear line from that training image to the outputs. We can’t really see what the systems measure or how they “understand” the concepts they learn.</p>
<p>Shutterstock’s solution is to compensate every contributor whose work is <a href="https://www.shutterstock.com/developers/computer-vision-at-shutterstock">made available</a> to a commercial partner for computer vision training. It describes the approach on its site:</p>
<blockquote>
<p>We have established a Shutterstock Contributor Fund, which will directly compensate Shutterstock contributors if their IP was used in the development of AI-generative models, like the OpenAI model, through licensing of data from Shutterstock’s library. Additionally, Shutterstock will continue to compensate contributors for the future licensing of AI-generated content through the Shutterstock AI content generation tool.</p>
</blockquote>
<h2>Problem solved?</h2>
<p>The amount that goes into the Shutterstock Contributor Fund will be proportional to the value of the dataset deal Shutterstock makes. But, of course, the fund will be split among a large proportion of Shutterstock’s <a href="https://investor.shutterstock.com/news-releases/news-release-details/shutterstock-reports-fourth-quarter-and-full-year-2021-financial#:%7E:text=ABOUT%20SHUTTERSTOCK&text=Working%20with%20its%20growing%20community,24%20million%20video%20clips%20available.">contributors</a>.</p>
<p>Whatever equation Shutterstock develops to determine the fund’s size, it’s worth remembering that any compensation isn’t the same as <em>fair</em> compensation. Shutterstock’s model sets the stage for new debates about value and fairness. </p>
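<p>A hypothetical back-of-envelope calculation shows why any compensation isn’t necessarily fair compensation. Every figure in the sketch below is an assumption for illustration only; Shutterstock has not published the formula behind its Contributor Fund.</p>
<pre><code>
# Hypothetical pro-rata split of a contributor fund.
# All numbers are invented for illustration.
deal_value = 1_000_000    # assumed value of one dataset licensing deal, in dollars
fund_fraction = 0.20      # assumed share of the deal paid into the fund
contributors = 2_000_000  # order of magnitude Shutterstock itself reports

fund = deal_value * fund_fraction
per_contributor = fund / contributors
print(f"${per_contributor:.2f} per contributor")  # $0.10 under these assumptions
</code></pre>
<p>Under these made-up numbers, an equal split pays each contributor cents. Any real formula would weight contributions somehow, and that weighting is precisely what is contested.</p>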
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/512149/original/file-20230224-22-seebuv.jpeg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/512149/original/file-20230224-22-seebuv.jpeg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/512149/original/file-20230224-22-seebuv.jpeg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=600&fit=crop&dpr=1 600w, https://images.theconversation.com/files/512149/original/file-20230224-22-seebuv.jpeg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=600&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/512149/original/file-20230224-22-seebuv.jpeg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=600&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/512149/original/file-20230224-22-seebuv.jpeg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=754&fit=crop&dpr=1 754w, https://images.theconversation.com/files/512149/original/file-20230224-22-seebuv.jpeg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=754&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/512149/original/file-20230224-22-seebuv.jpeg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=754&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">The LLM process is a bit like an impartial art student learning about techniques and genres by wandering through a gallery of millions of captioned paintings. Can we say any individual painting added more to their generalised knowledge? Probably not.</span>
<span class="attribution"><span class="source">Shutterstock AI</span></span>
</figcaption>
</figure>
<p>Arguably the most important debates will focus on how much specific individuals’ works contribute to the “knowledge” gleaned by a trained neural network. But there isn’t (and may never be) a way to accurately measure this. </p>
<h2>No picture-perfect solution</h2>
<p>There are, of course, many other user-contributed media libraries on the internet. For now, Shutterstock is the most open about its dealings with computer vision projects, and its terms of use are the most direct in addressing the ethical issues.</p>
<p>Another big AI player, Stable Diffusion, uses an open source image database called <a href="https://laion.ai/blog/laion-5b/">LAION-5B</a> for training. Content creators can use a service called <a href="https://haveibeentrained.com/">Have I Been Trained?</a> to check if their work was included in the dataset, and opt out of it (but this will only be reflected in future versions of Stable Diffusion).</p>
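<p>For those who prefer to check programmatically, LAION publishes its dataset metadata openly, and the general pattern looks like the sketch below. The file name and column names here are placeholders, not LAION’s actual schema, so treat this as an outline of the idea rather than a working recipe.</p>
<pre><code>
# Illustrative sketch only: scanning a local shard of published dataset
# metadata for your own image URLs. The file path and column names are
# assumptions made for this example.
import pandas as pd

my_urls = {"https://example.com/my-photo.jpg"}  # URLs of your own work
metadata = pd.read_parquet("dataset-metadata-shard.parquet")  # placeholder path
matches = metadata[metadata["url"].isin(my_urls)]
print(matches[["url", "caption"]])
</code></pre>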
<p>One of my popular CC-licensed photographs of a young girl reading shows up in the database several times. But I don’t mind, so I’ve chosen not to opt out.</p>
<figure class="align-center ">
<img alt="" src="https://images.theconversation.com/files/511894/original/file-20230223-349-twcyqj.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/511894/original/file-20230223-349-twcyqj.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=275&fit=crop&dpr=1 600w, https://images.theconversation.com/files/511894/original/file-20230223-349-twcyqj.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=275&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/511894/original/file-20230223-349-twcyqj.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=275&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/511894/original/file-20230223-349-twcyqj.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=346&fit=crop&dpr=1 754w, https://images.theconversation.com/files/511894/original/file-20230223-349-twcyqj.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=346&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/511894/original/file-20230223-349-twcyqj.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=346&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">The Have I Been Trained? results turn up a CC-licensed photo I uploaded to Flickr about a decade ago.</span>
<span class="attribution"><span class="source">Author provided</span></span>
</figcaption>
</figure>
<p>Shutterstock <a href="https://support.submit.shutterstock.com/s/article/Shutterstock-ai-and-Computer-Vision-Contributor-FAQ?language=en_US">has promised</a> to give contributors a choice to opt out of future dataset deals. </p>
<p>Its terms make it the first business of its type to address the ethics of providing contributors’ works for training generative AI (<a href="https://support.submit.shutterstock.com/s/article/Shutterstock-ai-and-Computer-Vision-Contributor-FAQ?language=en_US">and other</a> computer-vision-related uses). It offers what’s perhaps the simplest solution yet to a highly fraught dilemma. </p>
<p>Time will tell if contributors themselves consider this approach fair. Intellectual property law may also evolve to help establish contributors’ rights, so it could be that Shutterstock is trying to get ahead of the curve. </p>
<p>Either way, we can expect more give and take before everyone is happy. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/how-to-perfect-your-prompt-writing-for-chatgpt-midjourney-and-other-ai-generators-198776">How to perfect your prompt writing for ChatGPT, Midjourney and other AI generators</a>
</strong>
</em>
</p>
<hr>
<img src="https://counter.theconversation.com/content/199882/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Brendan Paul Murphy does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>Artists and photographers have strongly opposed their distinct styles being replicated by AI image generators. And the law has yet to catch up with this issue.Brendan Paul Murphy, Lecturer in Digital Media, CQUniversity AustraliaLicensed as Creative Commons – attribution, no derivatives.