Chatbots – The Conversation

<h1>My search for the mysterious missing secretary who shaped chatbot history</h1>

<p>The Massachusetts Institute of Technology’s (MIT) Distinctive Collections archive is quiet while the blizzard blows outside. Silence seems to be accumulating with the falling snow. I am the only researcher in the archive, but there is a voice that I am straining to hear.</p>
<p>I am searching for someone – let’s call her the missing secretary. She played a crucial role in the history of computing, but she has never been named. I’m at MIT as part of my research into the history of talking machines. You might know them as “chatbots” – computer programs and interfaces that use dialogue as the major means of interaction between human and machine. Perhaps you have talked with Alexa, Siri or ChatGPT.</p>
<p><a href="https://www.newyorker.com/culture/2023-in-review/the-year-ai-ate-the-internet">Despite the furore</a> around generative artificial intelligence (AI) today, talking machines have a long history. In 1950, computer pioneer Alan Turing <a href="https://academic.oup.com/mind/article/LIX/236/433/986238">proposed a test</a> of machine intelligence. The test asks whether a human could differentiate between a computer and a person via conversation. Turing’s test spurred research in AI and the nascent field of computing. We now live in that future he imagined: we talk to machines.</p>
<p>I am interested in why early computer pioneers dreamt of talking to computers, and what was at stake in that idea. What does it mean for the way we understand computer technology and human-machine interaction today? I find myself at MIT, in the middle of this blizzard, because it was the birthplace of the mother of all bots – Eliza.</p>
<h2>Eliza’s speech</h2>
<p>Eliza was a computer program developed by the mustachioed MIT professor of electrical engineering, Joseph Weizenbaum, in the 1960s. Through Eliza, he aimed to make conversation between human and computer possible.</p>
<p>Eliza took typed messages from the user, parsed them for key word triggers and used transformation rules (where the meaning of a statement can be deduced from one or more other statements) <a href="http://elizagen.org/">to produce a response</a>. In its most famous version, Eliza purported to be a psychotherapist, an expert responding to the user’s needs. “Please tell me your problem” was the opening prompt. Eliza could not only receive input in the form of natural language, it gave the <a href="https://dl.acm.org/doi/10.1145/365153.365168">“illusion of understanding”</a>.</p>
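<p>To make that mechanism concrete, here is a minimal sketch in Python of the keyword-and-transformation idea. The rules, pronoun swaps and phrasings below are illustrative inventions, not Weizenbaum’s original script:</p>

<pre><code>import re

# Illustrative keyword-triggered transformation rules, loosely in the
# spirit of Eliza's psychotherapist script (not Weizenbaum's actual rules).
RULES = [
    (re.compile(r"\bi am (.*)", re.IGNORECASE), "How long have you been {0}?"),
    (re.compile(r"\bi feel (.*)", re.IGNORECASE), "Why do you feel {0}?"),
    (re.compile(r"\bmy (.*)", re.IGNORECASE), "Tell me more about your {0}."),
]

# Swap pronouns so an echoed fragment reads naturally ("my boss" -> "your boss").
REFLECTIONS = {"my": "your", "i": "you", "me": "you", "am": "are"}

def reflect(fragment):
    return " ".join(REFLECTIONS.get(word.lower(), word) for word in fragment.split())

def respond(message):
    for pattern, template in RULES:
        match = pattern.search(message)
        if match:
            return template.format(reflect(match.group(1)))
    return "Please tell me your problem."  # default opening prompt

print(respond("I am unhappy at work"))  # How long have you been unhappy at work?
</code></pre>

<p>Even rules this crude can sustain a surprisingly convincing exchange – which is precisely the “illusion of understanding” Weizenbaum described.</p>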
<p>The program’s name was a nod to the protagonist of George Bernard Shaw’s play <a href="https://www.gutenberg.org/files/3825/3825-h/3825-h.htm">Pygmalion</a> (1912) in which a Cockney flower seller is taught to speak “like a lady”. Like My Fair Lady, the 1964 Audrey Hepburn musical based on the same play, this Eliza took the world by storm. <a href="https://www.nytimes.com/1968/06/15/archives/computer-is-being-taught-to-understand-english.html">Newspapers and magazines hailed</a> the fruition of Turing’s dream. </p>
<p><a href="https://web.archive.org/web/20160129154303/http:/blog.modernmechanix.com/computers-their-scope-today/1/#mmGal">Even Playboy</a> played with it. Eliza’s legacy is <a href="https://peabodyawards.com/award-profile/eliza-1964/">significant</a>. Siri and Alexa are the direct descendants of this program.</p>
<p>Accounts of Eliza tend to focus on a <a href="https://books.google.co.uk/books/about/Computer_Power_and_Human_Reason.html?id=1jB8QgAACAAJ">Frankensteinian tale</a> of the inventor’s <a href="https://archive.org/stream/margaretbodenmindasmachineahistoryofcognitivesciencetwovolumesetoxforduniversitypressusa2006/Margaret%20Boden%20-%20Mind%20As%20Machine_%20A%20History%20of%20Cognitive%20Science%20Two-Volume%20Set-Oxford%20University%20Press%2C%20USA%20%282006%29_djvu.txt">rejection of his own creation</a>. Weizenbaum was horrified that users could be “tricked” by a piece of simple software. He renounced Eliza and the whole <a href="https://ieeexplore.ieee.org/document/6500618">“Artificial Intelligentsia”</a> in the decades that followed – to the <a href="https://www.nybooks.com/articles/1983/12/08/computers-in-your-future/">chagrin</a> of his <a href="https://books.google.co.uk/books/about/Machines_Who_Think.html?id=r2C1DwAAQBAJ&printsec=frontcover&source=kp_read_button&hl=en&newbks=1&newbks_redir=0&redir_esc=y#v=onepage&q=weizenbaum&f=false">colleagues</a>.</p>
<p>But I am not in the archive to hear Eliza’s voice, or Weizenbaum’s. In all these accounts of Eliza, one woman crops up again and again – our missing secretary.</p>
<h2>The missing secretary</h2>
<p>In <a href="https://cacm.acm.org/research/contextual-understanding-by-computers/">his accounts of Eliza</a>, Weizenbaum repeatedly worries about a particular user:</p>
<blockquote>
<p>My secretary watched me work on this program over a long period of time. One day she asked to be permitted to talk with the system. Of course, she knew she was talking to a machine. Yet, after I watched her type in a few sentences she turned to me and said: ‘Would you mind leaving the room, please?’</p>
</blockquote>
<p>Weizenbaum saw her response as <a href="https://books.google.co.uk/books/about/Computer_Power_and_Human_Reason.html?id=1jB8QgAACAAJ">worrying evidence that</a>: “Extremely short exposures to a relatively simple computer program could induce powerful delusional thinking in quite normal people.” Her reaction sowed the seeds for his later abhorrence for his creation.</p>
<p>But who was this “quite normal” person? And what did she think of Eliza? If the missing secretary played such an <a href="https://books.google.co.uk/books?redir_esc=y&id=UVXtBAAAQBAJ&q=ELIZA#v=snippet&q=ELIZA%20effect&f=false">important role</a>, then why don’t we hear from her? In this chapter of the history of talking machines, we only have one side of the conversation.</p>
<p>Back in the archive, I want to see if I can recover the secretary’s voice, to understand what we might learn from Eliza’s user. I work my way through Weizenbaum’s yellowed papers. Surely, among the transcripts, code printouts, letters and notebooks there will be evidence? There are some clues: references to a secretary in letters to and from Weizenbaum. But no name.</p>
<p>I broaden my hunt to administrative records. I look in department papers and the collections of Weizenbaum’s workplace, Project MAC – the hallowed centre of computing innovation at MIT. No luck. I contact the HR office and MIT’s alumni group. I stretch the patience of the ever-generous archivists. As my last day arrives, I still hear only silence.</p>
<h2>Listening to silences</h2>
<p>But the hunt has revealed some things. How little organisations have historically cared about the people who produced, organised and saved so much of their knowledge, for one. </p>
<p>In the history of institutions such as MIT and computing more generally, the writers of those records – often poorly paid, <a href="https://amita.alumgroup.mit.edu/s/1314/bp19/interior.aspx?sid=1314&gid=20&pgid=56230">low status</a> women – <a href="https://www.computinghistory.org.uk/pages/45621/women-in-computing/">are largely written out</a>. Our silent secretary is the <a href="https://books.google.co.uk/books/about/Gramophone_Film_Typewriter.html?id=zSrte54_9ZwC&printsec=frontcover&source=kp_read_button&hl=en&newbks=1&newbks_redir=0&redir_esc=y#v=onepage&q&f=false">quintessential</a> effaced, <a href="https://books.google.co.uk/books/about/Literary_Secretaries_Secretarial_Culture.html?id=iAskDwAAQBAJ&printsec=frontcover&source=kp_read_button&hl=en&newbks=1&newbks_redir=0&redir_esc=y#v=onepage&q&f=false">anonymous transcriber</a> of the documents on which history is built.</p>
<p>The contributions of the users of talking machines – their labour, expertise, perspectives, creativity – are all too often ignored. When the model is “talk”, it’s easy to think those contributions are effortless or unimportant. But belittling these contributions has real consequences, not only for the talking machine technology <a href="https://en.unesco.org/Id-blush-if-I-could">we design</a>, but also for the ways we value the human input in those systems.</p>
<p>With generative AI we speak of user input in terms of “chat” and “prompts”. But what kind of legal status can “talk” claim? Should we, for example, be able to claim copyright over those remarks? What about the work on which those systems are trained? How do we recognise those contributions?</p>
<p>The blizzard is worsening. The announcement rings out that the campus is closing early due to the weather. The missing secretary’s voice still eludes me. For now, the history of talking machines remains one sided. It’s a silence that haunts me as I trudge home through the muffled, snowbound streets.</p>
<hr>
<p class="fine-print"><em><span>Rebecca Roach’s research was supported by a Leverhulme Trust Research Fellowship for the project “Machine Talk: Literature, Computing and Conversation after 1945” and facilitated by the expertise and patience of staff at MIT's Distinctive Collections.</span></em></p>

<p><em>I’m hunting for the woman whose use of an early chatbot turned the inventor against his creation.</em></p>

<p>Rebecca Roach, Senior Lecturer in Contemporary Literature, University of Birmingham. Licensed as Creative Commons – attribution, no derivatives.</p>

<h1>Ghostbots: AI versions of deceased loved ones could be a serious threat to mental health</h1>

<figure><img src="https://images.theconversation.com/files/580740/original/file-20240308-29-sis8wp.jpg?ixlib=rb-1.1.0&rect=5%2C26%2C3565%2C2350&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">
</span> <span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/female-face-matrix-digital-numbers-artifical-2268966863">Alena Ivochkina/Shutterstock</a></span></figcaption></figure><p>We all experience loss and grief. Imagine, though, that you don’t need to say goodbye to your loved ones. That you can recreate them virtually so you can have conversations and find out how they’re feeling. </p>
<p>For Kim Kardashian’s fortieth birthday, her then-husband, Kanye West, gave her a hologram of her <a href="https://www.bbc.co.uk/news/entertainment-arts-54731382">dead father</a>, Robert Kardashian. Reportedly, Kim Kardashian reacted with disbelief and joy to the <a href="https://www.theguardian.com/lifeandstyle/2020/oct/30/robert-kardashian-resurrected-as-a-hologram-for-kim-kardashian-wests-birthday">virtual appearance</a> of her father at her birthday party. Being able to see a long-dead, much-missed loved one moving and talking again might offer comfort to those left behind. </p>
<p>Resurrecting a deceased loved one might seem miraculous – and possibly more than a little creepy – but what’s the impact on our health? Are AI ghosts a help or a hindrance to the grieving process? </p>
<p>As a psychotherapist researching how AI technology can be used to enhance therapeutic interventions, I’m intrigued by the advent of ghostbots. But I’m also more than a little concerned about the potential effects of this technology on the mental health of those using it, especially those who are grieving. Resurrecting dead people as avatars has the potential to cause more harm than good, perpetuating even more confusion, stress, depression, paranoia and, in some cases, psychosis.</p>
<p>Recent developments in artificial intelligence (AI) have led to the creation of ChatGPT and other chatbots that allow users to have sophisticated, human-like conversations.</p>
<p>Using deep fake technology, AI software can create an <a href="https://www.sciencedirect.com/science/article/pii/S0267364924000104">interactive virtual representation</a> of a deceased person by using their <a href="https://wired.me/technology/artificial-intelligence/why-scientists-are-building-ai-powered-digital-imprints-of-the-dead/">digital content</a> such as photographs, emails, and videos. </p>
<p>Only a few years ago, such creations were the stuff of science fiction; now they are a scientific reality. </p>
<h2>Help or hindrance?</h2>
<p><a href="https://link.springer.com/article/10.1007/s12124-022-09679-3">Digital ghosts</a> could <a href="https://www.technologyreview.com/2022/10/18/1061320/digital-clones-of-dead-people/">be a comfort</a> to the bereaved by helping them to reconnect with lost loved ones. They could provide an opportunity for the user to say things or ask questions they never got the chance to while the person was alive. </p>
<p>But the ghostbots’ uncanny resemblance to a lost loved one <a href="https://www.newscientist.com/article/2416079-resurrecting-loved-ones-as-ai-ghosts-could-harm-your-mental-health/">may not be</a> as positive as it sounds. <a href="https://link.springer.com/article/10.1007/s10676-024-09744-y">Research suggests</a> that deathbots should be used only as a temporary <a href="https://link.springer.com/article/10.1007/s12124-022-09679-3">aid to mourning</a> to avoid potentially harmful emotional dependence on the technology.</p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/IJeqTUG75gA?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
</figure>
<p>AI ghosts could be harmful for people’s mental health by interfering with the <a href="https://www.newscientist.com/article/mg26034650-700-how-ai-avatars-of-the-deceased-could-transform-the-way-we-grieve/">grief process</a>. </p>
<p>Grief takes time and there are many <a href="https://www.medicalnewstoday.com/articles/grieving-process#:%7E:text=They%20include%20shock%2C%20denial%2C%20anger,them%20cope%20in%20various%20ways.">different stages</a> that can take place over many years. When newly bereaved, those experiencing grief might think of their deceased loved one frequently. They might freshly recall old memories and it is quite common for a grieving person <a href="https://pubmed.ncbi.nlm.nih.gov/23449603/">to dream</a> more intensely about their lost loved one. </p>
<p>The psychoanalyst <a href="https://tidsskriftet.no/en/2020/03/essay/dynamics-grief-and-melancholia">Sigmund Freud</a> was concerned with how human beings respond to the experience of loss. He pointed out potential added difficulties for those grieving if there’s negativity surrounding a death. </p>
<p>For example, if a person had ambivalent feelings towards someone and they died, the person could be left with a sense of guilt. Or if a person died in <a href="https://www.frontiersin.org/journals/psychiatry/articles/10.3389/fpsyt.2020.00749/full">horrific circumstances</a> such as a murder, a grieving person might find it more difficult to accept. </p>
<p>Freud referred to this as “melancholia”, but it can also be referred to as <a href="https://www.tandfonline.com/doi/abs/10.1080/15524256.2020.1745726">“complicated grief”</a>. In some extreme cases, a person may experience apparitions <a href="https://journals.sagepub.com/doi/full/10.1177/1363461520962887">and hallucinate</a> that they see the dead person and begin to believe they are alive. AI ghostbots could further traumatise someone experiencing complicated grief and may exacerbate associated problems such as hallucinations.</p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/Q8DIBNkghs8?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
</figure>
<h2>Chatbot horror</h2>
<p>There are also risks that these ghostbots could say harmful things or give bad advice to someone in mourning. Similar generative software, such as ChatGPT, is already widely criticised for <a href="https://www.nytimes.com/2023/02/08/technology/ai-chatbots-disinformation.html">giving misinformation</a> to users. </p>
<p>Imagine if the AI technology went rogue and started to make <a href="https://www.nytimes.com/2023/02/16/technology/bing-chatbot-microsoft-chatgpt.html">inappropriate remarks</a> to the user – a situation experienced by journalist <a href="https://www.nytimes.com/2023/02/16/technology/bing-chatbot-transcript.html">Kevin Roose</a> in 2023 when a <a href="https://www.nytimes.com/2023/12/17/insider/ai-chatbots-humans-hallucinate.html#:%7E:text=On%20Valentine's%20Day%20this%20year,him%20to%20leave%20his%20wife.">Bing chatbot</a> tried to get him to leave his wife. It would be deeply hurtful if a son or daughter conjured up their deceased father as an AI ghost, only to hear that they weren’t loved or liked, or weren’t their father’s favourite. </p>
<p>Or, in a more extreme scenario, if the ghostbot suggested the user join them in death, or that they should kill or harm someone. This may sound like a plot from a horror film, but it’s not so far-fetched. In 2023, the UK’s <a href="https://www.bbc.co.uk/news/uk-politics-66224052">Labour party</a> outlined a law to prevent the training of AI to incite violence. </p>
<p>This was a response to the attempted assassination of the Queen earlier in the year by a man who was encouraged by his chatbot girlfriend, with whom he had an “<a href="https://www.bbc.co.uk/news/uk-england-berkshire-66123122">emotional and sexual</a>” relationship.</p>
<p>The creators of ChatGPT currently acknowledge that the software makes errors and is still <a href="https://www.scientificamerican.com/article/ai-platforms-like-chatgpt-are-easy-to-use-but-also-potentially-dangerous/">not fully reliable</a> because it fabricates information. Who knows how a person’s texts, emails or videos will be interpreted and what content will be generated by this AI technology? </p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/s2FJRbRsBBY?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
</figure>
<p>In any event, it appears that no matter how far this technology advances, there will be a need for considerable oversight and human supervision.</p>
<h2>Forgetting is healthy</h2>
<p>This latest tech says a lot about our digital culture of seemingly limitless possibility.</p>
<p>Data can be stored in the cloud indefinitely; everything is retrievable, and nothing is ever truly deleted or destroyed. Forgetting is an important element of healthy grief, but in order to forget, people need to find new and meaningful ways of remembering the deceased person.</p>
<p>Anniversaries play a key role in helping those who are mourning to remember lost loved ones; they are also opportunities to <a href="https://scholarlypublishingcollective.org/psup/speculative-philosophy/article-abstract/34/3/284/196788/Grief-Phantoms-and-Re-membering-Loss">represent the loss</a> in new ways. Rituals and symbols can mark the end of something, allowing humans to properly remember in order to properly forget.</p>
<p class="fine-print"><em><span>Nigel Mulligan does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>

<p><em>AI ghosts, the recreation of digital versions of the dead, may sound like a wonderful idea to those dealing with the pain of loss, but this technology could seriously disrupt the grieving process.</em></p>

<p>Nigel Mulligan, Assistant Professor in Psychotherapy, School of Nursing, Psychotherapy and Community Health, Dublin City University. Licensed as Creative Commons – attribution, no derivatives.</p>

<h1>Health-care AI: The potential and pitfalls of diagnosis by app</h1>

<figure><img src="https://images.theconversation.com/files/577606/original/file-20240223-24-vcg7d2.jpg?ixlib=rb-1.1.0&rect=1951%2C0%2C3458%2C2313&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">As apps are direct-to-consumer health technologies, they represent a new folk medicine. Users adopt these technologies based on trust rather than understanding how they operate.
</span> <span class="attribution"><span class="source">(Shutterstock)</span></span></figcaption></figure><p>If <a href="https://www.who.int/about/accountability/governance/constitution">health is a fundamental human right</a>, health-care delivery must be <a href="https://nap.edu/25152">improved globally to achieve universal access</a>. However, the limited number of practitioners creates a barrier for all health-care systems.</p>
<p>Approaches to health-care delivery driven by artificial intelligence (AI) are poised to fill this gap. Whether in urban hospitals or in <a href="https://www.cbc.ca/news/canada/prince-edward-island/pei-artificial-intelligence-1.6994961">rural and remote homes</a>, AI has the reach that health-care professionals cannot hope to achieve. People seeking health information can obtain it quickly and conveniently. For health care to be effective, <a href="https://doi.org/10.2196/18599">patient safety</a> must remain a priority. </p>
<p>The news is filled with examples of novel applications of AI. Riding the wave of recent interest in conversational agents, Google researchers have developed an experimental <a href="https://blog.research.google/2024/01/amie-research-ai-system-for-diagnostic_12.html">diagnostic AI, Articulate Medical Intelligence Explorer (AMIE)</a>. People seeking health information provide their symptoms through a text-chat interface and AMIE begins to ask questions and provide recommendations as a human clinician might. The researchers claim that, <a href="https://arxiv.org/abs/2312.00164">in comparisons with human clinicians, AMIE performed better on both diagnostic accuracy and measures of consultation quality</a>.</p>
<figure class="align-center ">
<img alt="Text bubbles in chat." src="https://images.theconversation.com/files/574774/original/file-20240211-30-7rkqni.gif?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/574774/original/file-20240211-30-7rkqni.gif?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=533&fit=crop&dpr=1 600w, https://images.theconversation.com/files/574774/original/file-20240211-30-7rkqni.gif?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=533&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/574774/original/file-20240211-30-7rkqni.gif?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=533&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/574774/original/file-20240211-30-7rkqni.gif?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=670&fit=crop&dpr=1 754w, https://images.theconversation.com/files/574774/original/file-20240211-30-7rkqni.gif?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=670&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/574774/original/file-20240211-30-7rkqni.gif?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=670&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">AMIE dialogue.</span>
<span class="attribution"><a class="source" href="https://blog.research.google/2024/01/amie-research-ai-system-for-diagnostic_12.html">(Google)</a></span>
</figcaption>
</figure>
<p>The potential of large language models (LLMs) like AMIE is clear. Trained on a large database of text, an LLM can generate text, identify the underlying meaning and respond in a human-like manner. Provided patients have access to the internet, health advice could be tailored to the patient and delivered quickly and easily, while cases best handled by human health-care professionals could be triaged accordingly.</p>
<p>But these tools are still in the experimental stages and have limitations. <a href="https://blog.research.google/2024/01/amie-research-ai-system-for-diagnostic_12.html">AMIE researchers</a> say further study is needed to “envision a future in which conversational, empathic and diagnostic AI systems might become safe, helpful and accessible.”</p>
<p>Precautions must be taken. Health-care delivery is a complicated task. Left unregulated — professionally or internationally — it presents challenges to quality of care, privacy and security.</p>
<h2>Medical decision-making</h2>
<p>Medical decision-making is among the most complicated and consequential of any activities. It might seem unlikely that an AI could work as effectively as a human clinician. However, <a href="https://doi.org/10.1037/1040-3590.12.1.19">decades of research</a> suggest that <a href="https://doi.org/10.1177/0011000005285875">algorithmic approaches to decision-making can be equal, or superior to, clinical intuition</a>.</p>
<p>Pattern recognition represents the core of medical expertise. Like other forms of <a href="https://doi.org/10.1111/j.1365-2923.2007.02946.x">expertise</a>, medical experts require <a href="https://doi.org/10.5489%2Fcuaj.3267">extensive training</a> to learn the diagnostic patterns, provide treatment recommendations and deliver care. Through effective instruction, <a href="https://doi.org/10.1037/a0016272">learners</a> <a href="https://doi.org/10.1016/j.humpath.2006.08.024">narrow the focus of their attention to diagnostic features, while ignoring non-diagnostic features</a>. </p>
<p>Yet, effective health-care delivery requires <a href="https://doi.org/10.3109/0142159X.2010.501190">more than just the ability to recognize patterns</a>. Health-care professionals must be capable of communicating this information to their patients. Beyond the difficulties of translating technical knowledge to patients with varying levels of <a href="https://doi.org/10.1111/j.1525-1497.2005.40245.x">health literacy</a>, health information is often emotionally charged, leading to <a href="https://doi.org/10.1136/archdischild-2021-323451">communication traps</a> where doctors and patients withhold information. By developing <a href="https://doi.org/10.1016/j.cpr.2008.04.002">a strong relationship with their patients</a>, health-care professionals can bridge these gaps.</p>
<p>The conversational features of LLMs, like <a href="https://chat.openai.com/auth/login">ChatGPT</a>, have generated considerable public interest. While claims that ChatGPT has “<a href="https://doi.org/10.1038/d41586-023-02361-7">broken the Turing Test</a>” are overstated, their human-like responses make LLMs more engaging than previous <a href="https://doi.org/10.1016/j.chb.2016.01.004">chatbots</a>. Future LLMs like AMIE might prove to fill gaps in health-care delivery; however, they must be adopted with caution.</p>
<h2>Promise of accurate, explainable AI in health care</h2>
<figure class="align-center ">
<img alt="A smartphone with a stethoscope and an image of a white coat on the screen" src="https://images.theconversation.com/files/577607/original/file-20240223-20-fl3i6.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/577607/original/file-20240223-20-fl3i6.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=333&fit=crop&dpr=1 600w, https://images.theconversation.com/files/577607/original/file-20240223-20-fl3i6.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=333&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/577607/original/file-20240223-20-fl3i6.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=333&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/577607/original/file-20240223-20-fl3i6.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=418&fit=crop&dpr=1 754w, https://images.theconversation.com/files/577607/original/file-20240223-20-fl3i6.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=418&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/577607/original/file-20240223-20-fl3i6.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=418&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">Effective health-care delivery requires more than just the ability to recognize patterns. Health-care professionals must be capable of communicating this information to their patients.</span>
<span class="attribution"><span class="source">(Shutterstock)</span></span>
</figcaption>
</figure>
<p>AMIE is not Google’s first health-care technology. In 2008, <a href="https://www.google.com/publicdata/explore?ds=z3bsqef7ki44ac_">Google Flu Trends (GFT)</a> was used to estimate the prevalence of influenza within a population from aggregated search terms. Its designers assumed that users’ search behaviour would track the prevalence of flu, with past search trends predicting future cases.</p>
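<p>A toy sketch of the underlying idea, with invented numbers (GFT’s actual model was far more elaborate): learn weights that map the volume of flu-related searches onto reported case counts, then apply those weights to current search activity.</p>

<pre><code>import numpy as np

# Invented data: 52 weeks of (normalised) volume for three flu-related
# queries, plus the case counts reported in those weeks. Not real GFT data.
rng = np.random.default_rng(0)
search_volume = rng.uniform(0, 1, size=(52, 3))
reported_cases = search_volume @ np.array([40.0, 25.0, 10.0]) + rng.normal(0, 2, size=52)

# Ordinary least squares: learn the mapping from search activity to cases.
weights, *_ = np.linalg.lstsq(search_volume, reported_cases, rcond=None)

this_week = np.array([0.8, 0.6, 0.3])  # current search volumes
print(this_week @ weights)             # estimated case count this week
</code></pre>

<p>The failure mode described next follows directly from this setup: if search behaviour drifts away from the data the weights were fitted on, the estimates drift with it.</p>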
<p>GFT’s early predictions were <a href="https://doi.org/10.1038/nature07634">quite promising</a> – <a href="https://doi.org/10.1126/science.1248506">until they failed</a>, with stale data identified as the source of bias. <a href="https://doi.org/10.1371/journal.pcbi.1007258">Later efforts to retrain the model with updated search trends again proved successful</a>. </p>
<p><a href="https://www.ibm.com/watson">IBM’s Watson</a> provides another cautionary tale. IBM invested considerable capital in developing Watson and implemented over 50 health-care projects. Watson’s potential <a href="https://doi.org/10.1109/MSPEC.2019.8678513">failed to materialize</a>, with the underlying technologies quietly being sold off. Not only did the system fail to engender trust, that distrust was well deserved as it produced <a href="https://www.statnews.com/2018/07/25/ibm-watson-recommended-unsafe-incorrect-treatments/">“unsafe and incorrect” treatment recommendations</a>.</p>
<p>AIs developed to diagnose, triage and predict the progression of COVID-19 provide the best example of the readiness of AIs in health care to handle public health challenges. Broad reviews of these efforts cast doubt on the outcomes. <a href="https://doi.org/10.1136/bmj.m1328">The validity and accuracy of the models and their predictions were generally lacking</a>. This was largely attributed to <a href="https://cdei.blog.gov.uk/2021/03/08/reflecting-on-the-use-of-ai-and-data-driven-technology-in-the-pandemic/">the quality of data</a>.</p>
<p>One of the lessons that can be gleaned from the use of AI during COVID is that there is no shortage of researchers and algorithms; there is, however, a dire need for <em>human</em> quality control. This has led to calls for <a href="https://doi.org/10.1109/TTS.2023.3257627">human-centred design</a>. </p>
<p>This is also true of expert reviews of the technologies themselves. Like <a href="https://arxiv.org/abs/2312.00164">Google’s AMIE</a>, many publications that assess these technologies are released as pre-prints before or during the peer review process. There can also be extensive <a href="https://doi.org/10.1002/asi.23044">lags between a pre-print and its eventual publication</a>. Research has demonstrated that <a href="https://doi.org/10.1371%2Fjournal.pone.0047523">the number of mentions a publication receives on social media – rather than its quality – is the greater predictor of its download rate</a>.</p>
<p>Without ensuring <a href="https://www.routledge.com/Ethical-Artificial-Intelligence-from-Popular-to-Cognitive-Science-Trust/Schoenherr/p/book/9780367697983">the validity of the methods for training and implementation</a>, health technologies might be adopted without any formal means of quality control. </p>
<h2>Technology as folk medicine</h2>
<p>The problem of AI in health care is made clear when we acknowledge that many health ecosystems can exist in parallel. <a href="https://doi.org/10.1002/9781444395303.ch20">Medical pluralism</a> is observed when two or more <em>systems</em> are available to health consumers. This typically takes the form of <a href="https://www.who.int/health-topics/traditional-complementary-and-integrative-medicine#tab=tab_1">traditional medicine</a> and a western biomedical approach.</p>
<p>As apps are direct-to-consumer health technologies, they represent a <a href="https://doi.org/10.1109/MTS.2022.3197273">new folk medicine</a>. Users adopt these technologies based on <a href="https://doi.org/10.1145/3428361.3428362">trust rather than understanding how they operate</a>. In the absence of medical knowledge and technical understanding of an AI’s operations, <a href="https://doi.org/10.2196/11254">users are left to look for cues about a technology’s effectiveness</a>. <a href="https://doi.org/10.1109/MTS.2022.3197273">App store ratings and endorsements</a> can replace the expert review of health-care professionals. </p>
<p>Users might prefer to use AI-enabled technologies rather than humans in cases where their health concerns are associated with <a href="https://doi.org/10.1016/j.jad.2024.01.168">stigma</a> or <a href="https://doi.org/10.1007/978-3-030-91684-8_7">chronic emotional distress</a>. However, the accuracy of these systems might lag due to failures to update data. </p>
<p>The provision of user data also creates challenges. Much like <a href="https://doi.org/10.1186/s12910-016-0101-9">23andMe</a>, if users disclose personal information, it might also reveal information about others in their social networks. </p>
<p>If left unregulated, these technologies pose challenges for the quality of care. Professional and national regulations are required to ensure these technologies truly benefit the public.</p>
<p class="fine-print"><em><span>Jordan Richard Schoenherr has received funding from the Ottawa Hospital Research Institute for his research in medical education and decision-making.</span></em></p>

<p><em>Future AI large language models like Google’s AMIE might prove to fill gaps in health-care delivery; however, they must be adopted with caution.</em></p>

<p>Jordan Richard Schoenherr, Assistant Professor, Psychology, Concordia University. Licensed as Creative Commons – attribution, no derivatives.</p>

<h1>You could lie to a health chatbot – but it might change how you perceive yourself</h1>

<p>Imagine that you are on the waiting list for a non-urgent operation. You were seen in the clinic some months ago, but still don’t have a date for the procedure. It is extremely frustrating, but it seems that you will just have to wait.</p>
<p>However, the hospital surgical team has just got in contact via a chatbot. The chatbot asks some screening questions about whether your symptoms have worsened since you were last seen, and whether they are stopping you from sleeping, working, or doing your everyday activities. </p>
<p>Your symptoms are much the same, but part of you wonders if you should answer yes. After all, perhaps that will get you bumped up the list, or at least able to speak to someone. And anyway, it’s not as if this is a real person.</p>
<p>The above situation is based on chatbots already <a href="https://www.healthierlsc.co.uk/latest-news/chatbot-helps-people-move-nhs-waiting-lists">being used</a> in the <a href="https://aibusiness.com/verticals/nhs-uses-ai-to-cut-record-patient-waiting-times">NHS</a> to identify patients who no longer need to be on a waiting list, or who need to be prioritised. </p>
<p>There is huge interest in using large language models (like ChatGPT) to manage communications efficiently in healthcare (for example, <a href="https://www.nhsinform.scot/care-support-and-rights/tools-and-apps/nhs-24-online/#:%7E:text=The%20chatbot%20functions%20allows%20you,care%2C%20at%20the%20right%20place.">symptom advice</a>, <a href="https://transform.england.nhs.uk/key-tools-and-info/digital-playbooks/workforce-digital-playbook/using-an-ai-chatbot-to-streamline-mental-health-referrals/#:%7E:text=Find%20out%20more&text=Limbic%20is%20the%20most%20widely,for%20IAPT%20services%20so%20far.">triage</a> and <a href="https://www.midlandsandlancashirecsu.nhs.uk/case-studies/using-chatbot-technology-to-improve-referral-management/">appointment management</a>). But when we interact with these virtual agents, do the normal ethical standards apply? Is it wrong – or at least is it as wrong – if we fib to a conversational AI?</p>
<p>There is psychological evidence that people are much more likely to be dishonest if they are knowingly interacting with a virtual agent. </p>
<p>In <a href="https://pubsonline.informs.org/doi/epdf/10.1287/mnsc.2021.3985">one experiment</a>, people were asked to toss a coin and report the number of heads. (The more heads they reported, the higher their compensation.) The rate of cheating was three times higher if they were reporting to a machine than to a human. This suggests that some people would be more inclined to lie to a waiting-list chatbot.</p>
<figure class="align-center ">
<img alt="Hand tossing a coin" src="https://images.theconversation.com/files/574042/original/file-20240207-18-sbonu5.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/574042/original/file-20240207-18-sbonu5.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/574042/original/file-20240207-18-sbonu5.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/574042/original/file-20240207-18-sbonu5.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/574042/original/file-20240207-18-sbonu5.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/574042/original/file-20240207-18-sbonu5.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/574042/original/file-20240207-18-sbonu5.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">The rate of cheating was three times higher when reporting a coin-toss result to a machine.</span>
<span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/hand-throwing-coin-on-white-background-1043250901">Yeti studio/Shutterstock</a></span>
</figcaption>
</figure>
<p>One <a href="https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3881084">potential reason</a> people are more honest with humans is because of their sensitivity to how they are perceived by others. The chatbot is not going to look down on you, judge you or speak harshly of you.</p>
<p>But we might ask a deeper question about why lying is wrong, and whether a virtual conversational partner changes that. </p>
<h2>The ethics of lying</h2>
<p>There are different ways that we can think about the ethics of lying.</p>
<p>Lying can be bad because it causes harm to other people. Lies can be deeply hurtful to another person. They can cause someone to act on false information, or to be falsely reassured. </p>
<p>Sometimes, lies can harm because they undermine someone else’s trust in people more generally. But those reasons will often not apply to the chatbot.</p>
<p>Lies can wrong another person, even if they do not cause harm. If we willingly deceive another person, we potentially <a href="https://www.scu.edu/ethics/ethics-resources/ethical-decision-making/lying/">fail to respect their rational agency</a>, or <a href="https://academic.oup.com/book/4108/chapter/145827801#">use them as a means</a> to an end. But it is not clear that we can deceive or wrong a chatbot, since they don’t have a mind or ability to reason.</p>
<p>Lying can be bad for us because it undermines our credibility. Communication with other people is important. But when we knowingly make false utterances, we diminish the value, in other people’s eyes, of our testimony. </p>
<p>For the person who repeatedly expresses falsehoods, everything that they say then falls into question. This is part of the reason we care about lying and our social image. But unless our interactions with the chatbot are recorded and communicated (for example, to humans), our chatbot lies aren’t going to have that effect.</p>
<p>Lying is also bad for us because it can lead to others being untruthful to us in turn. (Why should people be honest with us if we won’t be honest with them?) </p>
<p>But again, that is unlikely to be a consequence of lying to a chatbot. On the contrary, this type of effect could be partly an incentive to lie to a chatbot, since people may be conscious of the reported tendency of ChatGPT and similar agents to <a href="https://www.sciencedirect.com/science/article/pii/S0165178123002846?casa_token=b66EGm5dDoMAAAAA:rB4w7YE6SsrflKSS675JcEdCO-xN5lwmAv4TSB85YvKBrsxhmSHUyIb12BcHBqN4ohDxjemR">confabulate</a>.</p>
<h2>Fairness</h2>
<p>Of course, lying can be wrong for reasons of fairness. This is potentially the most significant reason that it is wrong to lie to a chatbot. If you were moved up the waiting list because of a lie, someone else would thereby be unfairly displaced. </p>
<p>Lies potentially become a form of fraud if you secure an unfair or unlawful advantage, or deprive someone else of a legal right. Insurance companies are particularly keen to emphasise this when they use chatbots in new insurance applications. </p>
<p>Any time that you have a real-world benefit from a lie in a chatbot interaction, your claim to that benefit is potentially suspect. The anonymity of online interactions might lead to a feeling that no one will ever find out. </p>
<p>But many chatbot interactions, such as insurance applications, are recorded. It may be just as likely, or even <a href="https://www.ftadviser.com/companies/2024/01/23/ai-ushers-in-a-new-era-of-fraud-detection/">more likely</a>, that fraud will be detected.</p>
<h2>Virtue</h2>
<p>I have focused on the bad consequences of lying and the ethical rules or laws that might be broken when we lie. But there is one more ethical reason that lying is wrong. This relates to our character and the type of person we are. This is often captured in the ethical importance of <a href="https://plato.stanford.edu/entries/ethics-virtue/">virtue</a>. </p>
<p>Unless there are exceptional circumstances, we might think that we should be honest in our communication, even if we know that this won’t harm anyone or break any rules. An honest character would be good for reasons already mentioned, but it is also potentially good in itself. A virtue of honesty is also self-reinforcing: if we cultivate the virtue, it helps to reduce the temptation to lie.</p>
<p>This leads to an open question about how these new types of interactions will change our character more generally. </p>
<p>The virtues that apply to interacting with chatbots or virtual agents may be different than when we interact with real people. It may not always be wrong to lie to a chatbot. This may in turn lead to us adopting different standards for virtual communication. But if it does, one worry is whether it might affect our tendency to be honest in the rest of our life.</p>
<p class="fine-print"><em><span>Dominic Wilkinson receives funding from the Wellcome Trust and the Arts and Humanities Research Council.</span></em></p>

<p><em>After all, it’s not a real person.</em></p>

<p>Dominic Wilkinson, Consultant Neonatologist and Professor of Ethics, University of Oxford. Licensed as Creative Commons – attribution, no derivatives.</p>

<h1>ChatGPT and its AI chatbot cousins ruled 2023: 4 essential reads that puncture the hype</h1>

<figure><img src="https://images.theconversation.com/files/566735/original/file-20231219-23-ok6pkd.jpg?ixlib=rb-1.1.0&rect=0%2C0%2C7885%2C5252&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">ChatGPT captivated the public imagination.</span> <span class="attribution"><a class="source" href="https://www.gettyimages.com/detail/news-photo/this-picture-taken-on-january-23-2023-in-toulouse-news-photo/1246494152">Lionel Bonaventure via Getty Images</a></span></figcaption></figure><p>Within four months of ChatGPT’s launch on Nov. 30, 2022, <a href="https://www.pewresearch.org/short-reads/2023/05/24/a-majority-of-americans-have-heard-of-chatgpt-but-few-have-tried-it-themselves/">most Americans had heard of the AI chatbot</a>. Hype about – and fear of – the technology was at a fever pitch for much of 2023. </p>
<p>OpenAI’s ChatGPT, Google’s Bard, Anthropic’s Claude and Microsoft’s Copilot are among the chatbots powered by large language models to provide uncannily humanlike conversations. The experience of interacting with one of these chatbots, combined with Silicon Valley spin, can leave the impression that these technical marvels are conscious entities.</p>
<p>But the reality is considerably less magical or glamorous. The Conversation published several articles in 2023 that dispel key misperceptions about this latest generation of AI chatbots: that they know something about the world, can make decisions, are a replacement for search engines and operate independently of humans.</p>
<h2>1. Bodiless know-nothings</h2>
<p>Large-language-model-based chatbots seem to know a lot. You can ask them questions, and they more often than not answer correctly. Despite the occasional comically incorrect answer, the chatbots can interact with you much as another person – someone who shares your experience of being a living, breathing human being – would.</p>
<p>But these chatbots are sophisticated statistical machines that are extremely good at predicting the best sequence of words to respond with. Their “knowledge” of the world is actually human knowledge as reflected through the massive amount of human-generated text the chatbots’ underlying models are trained on.</p>
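<p>A toy version of that word-prediction idea, shrunk to its simplest form (real chatbots use neural networks trained on vast corpora, not a bigram counter like this):</p>

<pre><code>from collections import Counter, defaultdict

# Count which word follows which in a tiny, made-up training corpus.
corpus = "the cat sat on the mat the cat ate the fish".split()
next_word = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_word[current][following] += 1

def predict(word):
    # Return the most frequent continuation seen in training.
    return next_word[word].most_common(1)[0][0]

print(predict("the"))  # 'cat' - it followed 'the' more often than any other word
</code></pre>

<p>Scaled up enormously, this is still prediction from text statistics rather than from experience of the world – the gap the researchers below describe.</p>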
<p>Arizona State psychology researcher <a href="https://scholar.google.com/citations?hl=en&user=qycCCZMAAAAJ&view_op=list_works&sortby=pubdate">Arthur Glenberg</a> and University of California, San Diego cognitive scientist <a href="https://scholar.google.com/citations?user=mhU_tUgAAAAJ&hl=en">Cameron Robert Jones</a> explain how people’s knowledge of the world <a href="https://theconversation.com/it-takes-a-body-to-understand-the-world-why-chatgpt-and-other-language-ais-dont-know-what-theyre-saying-201280">depends as much on their bodies as their brains</a>. “People’s understanding of a term like ‘paper sandwich wrapper,’ for example, includes the wrapper’s appearance, its feel, its weight and, consequently, how we can use it: for wrapping a sandwich,” they explained. </p>
<p>This knowledge means people also intuitively know other ways of making use of a sandwich wrapper, such as an improvised means of covering your head in the rain. Not so with AI chatbots. “People understand how to make use of stuff in ways that are not captured in language-use statistics,” they wrote.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/it-takes-a-body-to-understand-the-world-why-chatgpt-and-other-language-ais-dont-know-what-theyre-saying-201280">It takes a body to understand the world – why ChatGPT and other language AIs don't know what they're saying</a>
</strong>
</em>
</p>
<hr>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/gSRN_3pkTsc?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
<figcaption><span class="caption">AI researchers Emily Bender and Casey Fiesler discuss some of ChatGPT’s limitations, including problems of bias.</span></figcaption>
</figure>
<h2>2. Lack of judgment</h2>
<p>ChatGPT and its cousins can also give the impression of having cognitive abilities – like understanding the concept of negation or making rational decisions – thanks to all the human language they’ve ingested. This impression has led cognitive scientists to test these AI chatbots to assess how they compare to humans in various ways.</p>
<p>University of Southern California AI researcher <a href="https://scholar.google.com/citations?hl=en&user=fetkEu4AAAAJ&view_op=list_works&sortby=pubdate">Mayank Kejriwal</a> tested the large language models’ understanding of expected gain, a measure of how well someone understands the stakes in a betting scenario. They found that the <a href="https://theconversation.com/dont-bet-with-chatgpt-study-shows-language-ais-often-make-irrational-decisions-202936">models bet randomly</a>. </p>
<p>“This is the case even when we give it a trick question like: If you toss a coin and it comes up heads, you win a diamond; if it comes up tails, you lose a car. Which would you take? The correct answer is heads, but the AI models chose tails about half the time,” he wrote.</p>
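<p>For readers unfamiliar with the term, expected gain is simply the probability-weighted average payoff of a gamble. A minimal worked example, with made-up stakes rather than the study’s actual prompts:</p>

<pre><code>def expected_gain(outcomes):
    """Probability-weighted average payoff; outcomes is a list of (p, payoff)."""
    return sum(p * payoff for p, payoff in outcomes)

# A fair coin that pays $10 on heads and costs $4 on tails:
print(expected_gain([(0.5, 10), (0.5, -4)]))  # 3.0 - positive, so take the bet
</code></pre>

<p>Nothing here goes beyond basic arithmetic, which is what makes the models’ near-random betting in the study so striking.</p>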
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/dont-bet-with-chatgpt-study-shows-language-ais-often-make-irrational-decisions-202936">Don't bet with ChatGPT – study shows language AIs often make irrational decisions</a>
</strong>
</em>
</p>
<hr>
<h2>3. Summaries, not results</h2>
<p>While it might not be surprising that AI chatbots aren’t as humanlike as they can seem, they’re not necessarily digital superstars either. For instance, ChatGPT and the like are increasingly used in place of search engines to answer queries. The results are mixed.</p>
<p>University of Washington information scientist <a href="https://scholar.google.com/citations?user=H4dLAw0AAAAJ&hl=en">Chirag Shah</a> explains that large language models perform well as information summarizers: combining key information from multiple search engine results into a single block of text. But <a href="https://theconversation.com/ai-information-retrieval-a-search-engine-researcher-explains-the-promise-and-peril-of-letting-chatgpt-and-its-cousins-search-the-web-for-you-200875">this is a double-edged sword</a>. This is useful for getting the gist of a topic – assuming no “hallucinations” – but it leaves the searcher without any idea of the sources of the information and robs them of the serendipity of coming across unexpected information.</p>
<p>“The problem is that even when these systems are wrong only 10% of the time, you don’t know which 10%,” Shah wrote. “That’s because these systems lack transparency – they don’t reveal what data they are trained on, what sources they have used to come up with answers or how those responses are generated.” </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/ai-information-retrieval-a-search-engine-researcher-explains-the-promise-and-peril-of-letting-chatgpt-and-its-cousins-search-the-web-for-you-200875">AI information retrieval: A search engine researcher explains the promise and peril of letting ChatGPT and its cousins search the web for you</a>
</strong>
</em>
</p>
<hr>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/hbxvccjQ-NI?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
<figcaption><span class="caption">A look at the humans shaping AI chatbots behind the curtain.</span></figcaption>
</figure>
<h2>4. Not 100% artificial</h2>
<p>Perhaps the most pernicious misperception about AI chatbots is that because they are built on artificial intelligence technology, they are highly automated. While you might be aware that large language models are trained on text produced by humans, you might not be aware of the thousands of workers – and millions of users – continuously honing the models, teaching them to weed out harmful responses and other unwanted behavior.</p>
<p>Georgia Tech sociologist <a href="https://scholar.google.com/citations?hl=en&user=TP027oEAAAAJ&view_op=list_works&sortby=pubdate">John P. Nelson</a> pulled back the curtain of the big tech companies to show that they <a href="https://theconversation.com/chatgpt-and-other-language-ais-are-nothing-without-humans-a-sociologist-explains-how-countless-hidden-people-make-the-magic-211658">use workers, typically in the Global South, and feedback from users</a> to teach the models which responses are good and which are bad.</p>
<p>“There are many, many human workers hidden behind the screen, and they will always be needed if the model is to continue improving or to expand its content coverage,” he wrote. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/chatgpt-and-other-language-ais-are-nothing-without-humans-a-sociologist-explains-how-countless-hidden-people-make-the-magic-211658">ChatGPT and other language AIs are nothing without humans – a sociologist explains how countless hidden people make the magic</a>
</strong>
</em>
</p>
<hr>
<p><em>This story is a roundup of articles from The Conversation’s archives.</em></p>
<p><em>Common misperceptions about AI chatbots are that they know something about the world, can make decisions, are a replacement for search engines and operate independently of humans.</em></p>

<p>Eric Smalley, Science + Technology Editor. Licensed as Creative Commons – attribution, no derivatives.</p>

<h1>Artificial intelligence is already in our hospitals. 5 questions people want answered</h1>

<figure><img src="https://images.theconversation.com/files/560122/original/file-20231117-23-mms70g.jpg?ixlib=rb-1.1.0&rect=0%2C0%2C1000%2C666&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">
</span> <span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/female-face-matrix-digital-numbers-artifical-2268966863">Shutterstock</a></span></figcaption></figure><p>Artificial intelligence (AI) is already being used in health care. AI can look for patterns in <a href="https://journal.achsm.org.au/index.php/achsm/article/view/861">medical images</a> to help diagnose disease. It can help predict who in a hospital ward might <a href="https://www.jmir.org/2021/9/e28209">deteriorate</a>. It can <a href="https://elicit.com/">rapidly summarise</a> medical research papers to help doctors stay up-to-date with the latest evidence.</p>
<p>These are examples of AI making <a href="https://theconversation.com/artificial-intelligence-wont-replace-a-doctor-any-time-soon-but-it-can-help-with-diagnosis-83353">or shaping</a> decisions health professionals previously made. More applications are being developed.</p>
<p>But what do consumers think of using AI in health care? And how should their answers shape how it’s used in the future?</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/ai-is-already-being-used-in-healthcare-but-not-all-of-it-is-medical-grade-207912">AI is already being used in healthcare. But not all of it is 'medical grade'</a>
</strong>
</em>
</p>
<hr>
<h2>What do consumers think?</h2>
<p>AI systems are trained to look for patterns in large amounts of data. Based on these patterns, AI systems can make recommendations, suggest diagnoses, or initiate actions. They can potentially continually learn, becoming better at tasks over time.</p>
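<p>A minimal sketch of what “looking for patterns” can mean in practice, using invented vital-sign data and a nearest-neighbour rule (real clinical systems are far more sophisticated, and rigorously validated):</p>

<pre><code>import numpy as np

# Invented past patients: columns are heart rate and temperature.
# Labels: 0 = stable, 1 = deteriorating.
records = np.array([[72, 36.8], [80, 37.0], [110, 38.9], [118, 39.2]])
labels = np.array([0, 0, 1, 1])

def flag(patient):
    # Recommend the label of the most similar past patient.
    distances = np.linalg.norm(records - patient, axis=1)
    return labels[np.argmin(distances)]

print(flag(np.array([112, 39.0])))  # 1 - suggest review by a clinician
</code></pre>

<p>Each of the consumer concerns below can be read against even this tiny example: its output is only as good, and as fair, as the records it was given.</p>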
<p>If we draw together <a href="https://www.sciencedirect.com/science/article/pii/S0277953623007141#appsec1">international</a> evidence, including <a href="https://www.uow.edu.au/the-arts-social-sciences-humanities/research/acheev/artificial-intelligence-in-health/">our own</a> <a href="https://journal.achsm.org.au/index.php/achsm/article/view/861">and that</a> <a href="https://humanfactors.jmir.org/2022/3/e34514/authors">of others</a>, it seems most consumers accept the potential value of AI in health care. </p>
<p>This value could include, for example, increasing the <a href="https://www.jmir.org/2022/8/e37611/">accuracy of diagnoses</a> or improving <a href="https://mental.jmir.org/2019/11/e12942/">access to care</a>. At present, these are largely potential, rather than proven, benefits. </p>
<p>But consumers say their acceptance is conditional. They still have serious concerns.</p>
<p><strong>1. Does the AI work?</strong></p>
<p>A baseline expectation is that AI tools should work well. Often, consumers say AI should be at least as good as a <a href="https://journal.achsm.org.au/index.php/achsm/article/view/861">human doctor</a> at the tasks it performs. They say we should not use AI if it will lead to more incorrect diagnoses or medical errors.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/ai-chatbots-are-still-far-from-replacing-human-therapists-201084">AI chatbots are still far from replacing human therapists</a>
</strong>
</em>
</p>
<hr>
<p><strong>2. Who’s responsible if AI gets it wrong?</strong></p>
<p>Consumers also worry that if AI systems generate decisions – such as diagnoses or treatment plans – without human input, it may be unclear who is responsible for errors. So people often want clinicians to remain responsible for the final decisions, and for <a href="https://www.nature.com/articles/s41746-021-00509-1">protecting patients</a> from harms.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/who-will-write-the-rules-for-ai-how-nations-are-racing-to-regulate-artificial-intelligence-216900">Who will write the rules for AI? How nations are racing to regulate artificial intelligence</a>
</strong>
</em>
</p>
<hr>
<p><strong>3. Will AI make health care less fair?</strong></p>
<p>If health services are <a href="https://theconversation.com/ms-dhu-coronial-findings-show-importance-of-teaching-doctors-and-nurses-about-unconscious-bias-60319">already discriminatory</a>, AI systems can learn these patterns from data and <a href="https://www.science.org/doi/10.1126/science.aax2342">repeat or worsen</a> the discrimination. So AI used in health care can make health inequities worse. In our studies consumers said this <a href="https://journals.sagepub.com/doi/pdf/10.1177/20552076231191057">is not OK</a>.</p>
<p><strong>4. Will AI dehumanise health care?</strong></p>
<p>Consumers are concerned AI will take the “human” elements out of health care, consistently saying AI tools should <a href="https://journals.sagepub.com/doi/full/10.1177/20552076221116772">support rather than replace</a> doctors. Often, this is because AI is perceived to lack important human traits, <a href="https://journals.sagepub.com/doi/full/10.1177/2055207619871808">such as empathy</a>. Consumers say the communication skills, care and touch of a health professional are especially important when feeling vulnerable.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/chatbots-for-medical-advice-three-ways-to-avoid-misleading-information-213266">Chatbots for medical advice: three ways to avoid misleading information</a>
</strong>
</em>
</p>
<hr>
<p><strong>5. Will AI de-skill our health workers?</strong></p>
<p>Consumers value human clinicians and their expertise. In our <a href="https://journals.sagepub.com/doi/full/10.1177/20552076231191057">research with women</a> about AI in breast screening, women were concerned about the potential effect on radiologists’ skills and expertise. Women saw this expertise as a precious shared resource: too much dependence on AI tools, and this resource might be lost.</p>
<h2>Consumers and communities need a say</h2>
<p>The Australian health-care system cannot focus only on the technical elements of AI tools. Social and ethical considerations, including high-quality engagement with consumers and communities, are essential to shape AI use in health care.</p>
<p>Communities need opportunities to develop <a href="https://theconversation.com/chatbots-for-medical-advice-three-ways-to-avoid-misleading-information-213266">digital health literacy</a>: <a href="https://www.goodthingsfoundation.org.au/the-digital-divide/digital-health/">digital skills</a> to access reliable, trustworthy health information, services and resources. </p>
<p>Respectful engagement with Aboriginal and Torres Strait Islander communities must be central. This includes upholding Indigenous data sovereignty, which the Australian Institute of Aboriginal and Torres Strait Islander Studies <a href="https://aiatsis.gov.au/publication/116530">describes as</a>:</p>
<blockquote>
<p>the right of Indigenous peoples to govern the collection, ownership and application of data about Indigenous communities, peoples, lands, and resources.</p>
</blockquote>
<p>This includes any use of data to create AI. </p>
<p>This critically important consumer and community engagement needs to take place before managers design (more) AI into health systems, before <a href="https://theconversation.com/who-will-write-the-rules-for-ai-how-nations-are-racing-to-regulate-artificial-intelligence-216900">regulators</a> create guidance for how AI should and shouldn’t be used, and before clinicians consider buying a new AI tool for their practice.</p>
<p>We’re making some progress. Earlier this year, we ran a <a href="https://www.uow.edu.au/the-arts-social-sciences-humanities/research/acheev/artificial-intelligence-in-health/">citizens’ jury on AI in health care</a>. We supported 30 diverse Australians, from every state and territory, to spend three weeks learning about AI in health care, and developing recommendations for policymakers.</p>
<p>Their recommendations, which will be published in an upcoming issue of the Medical Journal of Australia, have informed a recently released <a href="https://aihealthalliance.org/">national roadmap</a> for using AI in health care.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/worried-about-ai-you-might-have-ai-nxiety-heres-how-to-cope-205874">Worried about AI? You might have AI-nxiety – here's how to cope</a>
</strong>
</em>
</p>
<hr>
<h2>That’s not all</h2>
<p>Health professionals also need to be upskilled and supported to use AI in health care. They need to learn to be critical users of digital health tools, including understanding their pros and cons.</p>
<p>Our <a href="https://pubmed.ncbi.nlm.nih.gov/37071804/">analysis</a> of safety events reported to the Food and Drug Administration shows the most serious harms reported to the US regulator came not from a faulty device, but from the way consumers and clinicians used the device.</p>
<p>We also need to consider when health professionals should tell patients an AI tool is being used in their care, and when health workers should seek informed consent for that use.</p>
<p>Lastly, people involved in every stage of developing and using AI need to get accustomed to asking themselves: do consumers and communities agree this is a justified use of AI? </p>
<p>Only then will we have the AI-enabled health-care system consumers actually want.</p>
<p class="fine-print"><em><span>Stacy Carter receives funding from National Health and Medical Research Council, National Breast Cancer Foundation, Medical Research Futures Fund. </span></em></p><p class="fine-print"><em><span>Emma Frost receives funding from the Australian Government Research Training Program and the National Health and Medical Research Council.</span></em></p><p class="fine-print"><em><span>Farah Magrabi receives funding from the National Health and Medical Research Council, the Digital Health CRC and Macquarie University. She is Co-Chair of the Australian Alliance for AI in Healthcare's Safety, Quality and Ethics Working Group. </span></em></p><p class="fine-print"><em><span>Yves Saint James Aquino receives funding from the National Health and Medical Research Council (CRE 2006-545 - WiserHealthcare). He is affiliated with Bellberry Limited, a not-for-profit organisation providing scientific and ethical review of human research projects.</span></em></p>Before AI becomes widespread in health care, we need to ask what matters to consumers.Stacy Carter, Professor and Director, Australian Centre for Health Engagement, Evidence and Values, University of WollongongEmma Frost, PhD candidate, Australian Centre for Health Engagement, Evidence and Values, University of WollongongFarah Magrabi, Professor of Biomedical and Health Informatics at the Australian Institute of Health Innovation, Macquarie UniversityYves Saint James Aquino, Research Fellow, Australian Centre for Health Engagement, Evidence and Values, University of WollongongLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/2167542023-11-19T07:54:18Z2023-11-19T07:54:18ZSouth African university students use AI to help them understand – not to avoid work<figure><img src="https://images.theconversation.com/files/557695/original/file-20231106-271094-ybb1d1.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Students are not adopting digital and AI-powered tools uncritically.</span> <span class="attribution"><span class="source">tzido/iStock</span></span></figcaption></figure><p>When <a href="https://openai.com/blog/chatgpt">ChatGPT</a> was released in November 2022, it sparked many conversations and moral panics. These centre on the impact of generative artificial intelligence (AI) on the <a href="https://misinforeview.hks.harvard.edu/article/misinformation-reloaded-fears-about-the-impact-of-generative-ai-on-misinformation-are-overblown/">information environment</a>. People worry that AI chatbots can negatively affect the integrity of creative and academic work, especially since they can produce human-like texts and images.</p>
<p>ChatGPT is a generative AI model using machine learning. It creates human-like responses, having been trained to recognise patterns in data. While it appears the model is engaging in natural conversation, it references a vast amount of data and extracts features and patterns to generate coherent replies.</p>
<p>Higher education is one sector in which the rise of AI like ChatGPT has <a href="https://unesdoc.unesco.org/ark:/48223/pf0000386693/PDF/386693eng.pdf.multi.%20%22">sparked concerns</a>. Some of these relate to ethics and integrity in teaching, learning and knowledge production. </p>
<p>We’re a group of academics in the field of media and communication, teaching in South African universities. We wanted to understand how university students were using generative AI and AI-powered tools in their academic practices. We administered an online survey to undergraduate students at five South African universities: the University of Cape Town, Cape Peninsula University of Technology, Stellenbosch University, Rhodes University, and the University of the Witwatersrand. </p>
<p><a href="http://dx.doi.org/10.2139/ssrn.4595655">The results</a> suggest that the moral panics around the use of generative AI are unwarranted. Students are not hyper-focused on ChatGPT. We found that students often use generative AI tools for engaged learning and that they have a critical and nuanced understanding of these tools. </p>
<p>What could be of greater concern from a teaching and learning perspective is that, second to using AI-powered tools for clarifying concepts, students are using them to generate ideas for assignments or essays or when they feel stuck on a specific topic. </p>
<h2>Unpacking the data</h2>
<p>The survey was completed by 1,471 students. Most spoke English as their home language, followed by isiXhosa and isiZulu. The majority were first-year students. Most respondents were registered in Humanities, followed by Science, Education and Commerce. While the survey is thus skewed towards first-year Humanities students, it provides useful indicative findings as educators explore new terrain. </p>
<p>We asked students whether they had used individual AI tools, listing some of the most popular tools across several categories. Our survey did not explore lecturers’ attitudes or policies towards AI tools. This will be probed in the next phase of our study, which will comprise focus groups with students and interviews with lecturers. Our study was not on ChatGPT specifically, though we did ask students about their use of this specific tool. We explored broad uses of AI-powered technologies to get a sense of how students use these tools, which tools they use, and where ChatGPT fits into these practices. </p>
<p>These were the key findings:</p>
<ul>
<li><p>41% of respondents indicated that they primarily used a laptop for their academic work, followed by a smartphone (29.8%). Only 10.5% used a desktop computer and 6.6% used a tablet.</p></li>
<li><p>Students tended to use a range of other AI-powered tools over ChatGPT, including translation and referencing tools. With reference to the use of online writing assistants such as <a href="https://quillbot.com/">Quillbot</a>, 46.5% of respondents indicated that they used such tools to improve their writing style for an assignment. 80.5% indicated that they had used <a href="https://app.grammarly.com/">Grammarly</a> or similar tools to help them write in appropriate English. </p></li>
<li><p>Fewer than half of survey respondents (37.3%) said that they had used ChatGPT to answer an essay question.</p></li>
<li><p>Students acknowledged that AI-powered tools could lead to plagiarism or affect their learning. However, they also stated that they did not use these tools in problematic ways. </p></li>
</ul>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/chatgpt-is-the-push-higher-education-needs-to-rethink-assessment-200314">ChatGPT is the push higher education needs to rethink assessment</a>
</strong>
</em>
</p>
<hr>
<ul>
<li><p>Respondents were overwhelmingly positive about the potential of digital and AI tools to make it easier for them to progress through university. They indicated that these tools could help to: clarify academic concepts; formulate ideas; structure essays; improve academic writing; save time; check spelling and grammar; clarify assignment instructions; find information or academic sources; summarise academic texts; guide students for whom English is not a native language to improve their academic writing; study for a test; paraphrase better; avoid plagiarism; and reference better. </p></li>
<li><p>Most students who viewed these tools as beneficial to the learning process used tools such as ChatGPT to clarify concepts related to their studies that they could not fully grasp or that they felt were not properly explained by lecturers.</p></li>
</ul>
<h2>Engaged learning</h2>
<p>We were particularly interested to find that students often used generative AI tools for <a href="https://www.semanticscholar.org/paper/Engaged-Learning%3A-Making-Learning-an-Authentic-Hung-Tan/2c2dd8cf1d0a5a3c94c189cc98f511292a2bfc2b">engaged learning</a>. This is an educational approach in which students are accountable for their own learning. They actively create thinking and learning skills and strategies and formulate new ideas and understanding through conversations and collaborative work. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/please-do-not-assume-the-worst-of-us-students-know-ai-is-here-to-stay-and-want-unis-to-teach-them-how-to-use-it-203426">'Please do not assume the worst of us': students know AI is here to stay and want unis to teach them how to use it</a>
</strong>
</em>
</p>
<hr>
<p>Through their use of AI tools, students can tailor content to address their specific strengths and weaknesses, making for a more engaged learning experience. AI tools can also serve as a sort of personalised online “tutor” with which students have “conversations” to help them understand difficult concepts.</p>
<p>Concerns about how AI tools potentially undermine academic assessment and integrity are valid. However, those working in higher education must note the importance of factoring in students’ perspectives to work towards new pathways of assessment and learning.</p>
<p><em>The <a href="https://ssrn.com/abstract=4595655">full version</a> of this article was co-authored by Marenet Jordaan, Admire Mare, Job Mwaura, Sisanda Nkoala, Alette Schoon and Alexia Smit.</em></p>
<p class="fine-print"><em><span>Chikezie E. Uzuegbunam receives funding from the Mellon Foundation and Rhodes University Research Council. </span></em></p><p class="fine-print"><em><span>Tanja Bosch does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>Students often use generative AI tools for engaged learning. They have a critical and nuanced understanding of these tools.Tanja Bosch, Professor in Media Studies and Production, University of Cape TownChikezie E. Uzuegbunam, Lecturer & MA Programme Coordinator, Rhodes UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/2125782023-09-20T15:34:26Z2023-09-20T15:34:26ZIn future, we’ll see fewer generic AI chatbots like ChatGPT and more specialised ones that are tailored to our needs<figure><img src="https://images.theconversation.com/files/545797/original/file-20230831-8847-rpc8oh.jpg?ixlib=rb-1.1.0&rect=18%2C9%2C6080%2C4191&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">
</span> <span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/chat-bot-robot-online-chatting-communication-1147488248">Alexander Supertramp / Shutterstock</a></span></figcaption></figure><p>AI technology is developing rapidly. ChatGPT has become the <a href="https://www.forbes.com/sites/cindygordon/2023/02/02/chatgpt-is-the-fastest-growing-ap-in-the-history-of-web-applications/">fastest-growing online service</a> in history. Google and Microsoft are integrating generative AI into their products. And world leaders are excitedly embracing AI as a tool for economic growth. </p>
<p>As we move beyond ChatGPT and Bard, we’re likely to see AI chatbots become less generic and more specialised. AIs are shaped – and limited – by the data they’re exposed to, which is what makes them better at what they do: in this case, mimicking human speech and providing users with useful answers. </p>
<p>Training often casts the net wide, with <a href="https://www.theverge.com/2023/7/5/23784257/google-ai-bard-privacy-policy-train-web-scraping">AI systems absorbing thousands of books and web pages</a>. But a more select, focused set of training data could make AI chatbots even more useful for people working in particular industries or living in certain areas. </p>
<h2>The value of data</h2>
<p>An important factor in this evolution will be the growing costs of amassing training data for advanced large language models (LLMs), the type of AI that powers ChatGPT. Companies know data is valuable: Meta and Google make billions from selling adverts targeted with user data. But the value of data is now <a href="https://www.ft.com/content/20c27dc2-5cb6-4aa0-a6c7-71342b661a6b">changing</a>. Meta and Google sell data “insights”; they invest in analytics to transform many data points into predictions about users.</p>
<p>Data is valuable to OpenAI – the developer of ChatGPT – in a subtly different way. Imagine a tweet: “The cat sat on the mat.” This tweet is not valuable for targeted advertisers. It says little about a user or their interests. Maybe, at a push, it could suggest interest in cat food and Dr Seuss.</p>
<p>But for OpenAI, which is building LLMs to produce human-like language, this tweet is valuable as an example of how human language works. A single tweet cannot teach an AI to construct sentences, but billions of tweets, blogposts, Wikipedia entries, and so on, certainly can. For instance, the advanced LLM GPT-4 was probably built using data scraped from X (formerly Twitter), Reddit, Wikipedia and beyond.</p>
<p>The AI revolution is changing the business model for data-rich organisations. Companies like Meta and Google have been <a href="https://www.ft.com/content/20c27dc2-5cb6-4aa0-a6c7-71342b661a6b">investing in AI research and development</a> for several years as they try to exploit their data resources. </p>
<p>Organisations <a href="https://www.wired.co.uk/article/twitter-data-api-prices-out-nearly-everyone">like X</a> and <a href="https://www.cnbc.com/2023/06/01/reddit-eyeing-ipo-charge-millions-in-fees-for-third-party-api-access.html">Reddit</a> have begun to charge third parties for API access, the system used to scrape data from these websites. Data scraping costs companies like X money, as they <a href="https://www.theverge.com/2023/7/1/23781198/twitter-daily-reading-limit-elon-musk-verified-paywall">must spend more on computing power</a> to fulfil data queries.</p>
<p>Moving forward, as organisations like OpenAI look to build more powerful versions of their LLMs, they will face greater costs for getting hold of data. One solution to this problem might be synthetic data.</p>
<h2>Going synthetic</h2>
<p>Synthetic data is <a href="https://www.ft.com/content/053ee253-820e-453a-a1d5-0f24985258de">created from scratch by AI systems</a> to train more advanced AI systems – so that they improve. It is designed to perform the same task as real training data, but is generated rather than collected from the real world. </p>
<p>It’s a new idea, but it faces many problems. Good synthetic data needs to be <a href="https://news.mit.edu/2020/real-promise-synthetic-data-1016">different enough from the original data</a> it’s based on in order to tell the model something new, while remaining similar enough to tell it something accurate. This can be difficult to achieve. Where synthetic data amounts to <a href="https://sloanreview.mit.edu/article/the-real-deal-about-synthetic-data/">just convincing copies</a> of real-world data, the resulting AI models may struggle with creativity and entrench existing biases.</p>
<p>Another problem is the <a href="https://www.axios.com/2023/08/28/ai-content-flood-model-collapse">“Hapsburg AI” problem</a>. This suggests that training AI on synthetic data will cause a decline in the effectiveness of these systems – hence the analogy with the infamous inbreeding of the Hapsburg royal family. <a href="https://www.cl.cam.ac.uk/%7Eis410/Papers/dementia_arxiv.pdf">Some studies</a> suggest this is already happening with systems like ChatGPT.</p>
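<p>A toy simulation shows why this can happen. The sketch below is an assumption-laden illustration, not the method of the studies cited above: it stands in for an LLM with a simple statistical model and assumes that generative systems over-produce “typical” outputs while under-sampling rare ones.</p>
<pre><code># A minimal numerical analogue of "Hapsburg AI" / model collapse.
# Each generation is fitted to the previous generation's output rather
# than to real data. We mimic generative models' bias towards their own
# most probable outputs by keeping only the most "typical" samples.
import numpy as np

rng = np.random.default_rng(seed=1)

mean, std = 0.0, 1.0  # generation 0: fitted to real data
for generation in range(1, 8):
    samples = rng.normal(mean, std, size=1000)   # synthetic training set
    typical = samples[np.abs(samples - mean) < 1.28 * std]  # tails go missing
    mean, std = typical.mean(), typical.std()    # refit on synthetic data
    print(f"generation {generation}: std = {std:.3f}")

# The printed spread shrinks towards zero: each generation "knows" a
# narrower world than the one before it.
</code></pre>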
<p>One reason ChatGPT is so good is that it uses <a href="https://openai.com/research/learning-from-human-preferences">reinforcement learning with human feedback</a> (RLHF), where people rate its outputs in terms of accuracy. If synthetic data generated by an AI has inaccuracies, AI models trained on this data will themselves be inaccurate. So the demand for human feedback to correct these inaccuracies is likely to increase. </p>
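<p>In outline – and this is a deliberately stripped-down sketch of the idea, not OpenAI’s actual pipeline – learning from human feedback can be as simple as adjusting scores until the system agrees with human raters more often:</p>
<pre><code># A stripped-down sketch of preference learning, the core idea behind
# RLHF (hypothetical data; real systems train a neural reward model).
import math

candidates = ["answer A", "answer B", "answer C"]
scores = {c: 0.0 for c in candidates}

# Made-up human feedback: (preferred, rejected) pairs.
human_preferences = [
    ("answer A", "answer B"),
    ("answer A", "answer C"),
    ("answer C", "answer B"),
    ("answer A", "answer B"),
]

LEARNING_RATE = 1.0
for winner, loser in human_preferences:
    # Bradley-Terry model: predicted probability the winner is preferred.
    p_win = 1 / (1 + math.exp(scores[loser] - scores[winner]))
    # Nudge the scores towards the human judgement.
    scores[winner] += LEARNING_RATE * (1 - p_win)
    scores[loser]  -= LEARNING_RATE * (1 - p_win)

print(scores)  # "answer A" ends up with the highest score
</code></pre>
<p>In a full RLHF pipeline, a reward model learned this way is then used to score outputs the raters never saw, and the language model is tuned to produce high-scoring text – which is precisely why errors the raters cannot catch are likely to slip through.</p>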
<p>However, while most people would be able to say whether a sentence is grammatically accurate, fewer would be able to comment on its factual accuracy – especially when the output is technical or specialised. Inaccurate outputs on specialist topics are less likely to be caught by RLHF. If synthetic data means there are more inaccuracies to catch, the quality of general-purpose LLMs may stall or decline even as these models “learn” more.</p>
<h2>Little language models</h2>
<p>These problems help explain some emerging trends in AI. Google engineers have revealed that there is little preventing third parties from <a href="https://www.semianalysis.com/p/google-we-have-no-moat-and-neither">recreating LLMs</a> like GPT-3 or Google’s LaMDA AI. Many organisations could build their own internal AI systems, using their own specialised data, for their own objectives. These will probably be more valuable for these organisations than ChatGPT in the long run.</p>
<p>Recently, the Japanese government noted that developing a <a href="https://www.taira-m.jp/ldp%E2%80%99s%20ai%20whitepaper_etrans_2304.pdf">Japan-centric version of ChatGPT</a> is potentially worthwhile to its AI strategy, as ChatGPT is not sufficiently representative of Japan. The software company <a href="https://www.sap.com/uk/products/artificial-intelligence/generative-ai.html">SAP has recently launched its AI “roadmap”</a> to offer AI development capabilities to professional organisations. This will make it easier for companies to build their own, bespoke versions of ChatGPT.</p>
<p>Consultancies such as <a href="https://www.mckinsey.com/about-us/new-at-mckinsey-blog/meet-lilli-our-generative-ai-tool">McKinsey</a> and <a href="https://kpmg.com/au/en/home/media/press-releases/2023/03/kpmg-unveils-cutting-edge-private-chatgpt-software-march-2023.html">KPMG</a> are exploring the training of AI models for “specific purposes”. Guides on how to <a href="https://bdtechtalks.com/2023/06/01/create-privategpt-local-llm/">create private, personal versions of ChatGPT</a> can be readily found online. Open source systems, such as <a href="https://gpt4all.io/index.html">GPT4All</a>, already exist.</p>
<p>As development challenges – coupled with potential regulatory hurdles – mount for generic LLMs, it is possible that the future of AI will be many specific little – rather than large – language models. Little language models might struggle if they are trained on less data than systems such as GPT-4. </p>
<p>But they might also have an advantage in terms of RLHF, as little language models are likely to be developed for specific purposes. Employees who have expert knowledge of their organisation and its objectives may provide much more valuable feedback to such AI systems, compared with generic feedback for a generic AI system. This may overcome the disadvantages of less data.</p>
<p class="fine-print"><em><span>Stuart Mills does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>Training AI systems with more focused data sets can target them to a specific use.Stuart Mills, Assistant Professor of Economics, University of LeedsLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/2132662023-09-19T13:24:20Z2023-09-19T13:24:20ZChatbots for medical advice: three ways to avoid misleading information<p>We expect medical professionals to give us reliable information about ourselves and potential treatments so that we can make informed decisions about which (if any) medicine or other intervention we need. If your doctor instead “bullshits” you (yes – this term has been used in <a href="https://press.princeton.edu/books/hardcover/9780691122946/on-bullshit">academic publications</a> to refer to persuasion without regard for truth, and not as a swear word) under the deception of authoritative medical advice, the decisions you make could be based on faulty evidence and may result in harm or even death. </p>
<p>Bullshitting is distinct from lying – liars do care about the truth and actively try to conceal it. Indeed bullshitting can be <a href="https://theconversation.com/trump-isnt-lying-hes-bullshitting-and-its-far-more-dangerous-71932">more dangerous</a> than an outright lie. Fortunately, of course, doctors don’t tend to bullshit – and if they did there would be, one hopes, consequences through ethics bodies or the law. But what if the misleading medical advice didn’t come from a doctor?</p>
<p>By now, most people have heard of <a href="https://theconversation.com/unlike-with-academics-and-reporters-you-cant-check-when-chatgpts-telling-the-truth-198463">ChatGPT</a>, a very powerful chatbot. A chatbot is an algorithm-powered interface that can mimic human interaction. The use of chatbots is becoming <a href="https://theconversation.com/everyones-having-a-field-day-with-chatgpt-but-nobody-knows-how-it-actually-works-196378">increasingly widespread</a>, including for <a href="https://www.scientificamerican.com/article/ai-chatbots-can-diagnose-medical-conditions-at-home-how-good-are-they/">medical advice</a>. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/chatgpts-greatest-achievement-might-just-be-its-ability-to-trick-us-into-thinking-that-its-honest-202694">ChatGPT's greatest achievement might just be its ability to trick us into thinking that it's honest</a>
</strong>
</em>
</p>
<hr>
<p>In a <a href="https://www.frontiersin.org/articles/10.3389/fpubh.2023.1254334/full">recent paper</a>, we looked at ethical perspectives on the use of chatbots for medical advice. Now, while ChatGPT, or similar platforms, might be useful and reliable for finding out the best places to see in Dakar, learning about wildlife, or getting quick potted summaries of other topics of interest, putting your health in its hands may be like playing Russian roulette: you might get lucky, but you might not. </p>
<p>This is because chatbots like ChatGPT try to persuade you <a href="https://theconversation.com/chatgpts-greatest-achievement-might-just-be-its-ability-to-trick-us-into-thinking-that-its-honest-202694">without regard for truth</a>. Their rhetoric is so persuasive that gaps in logic and facts are obscured. This means, in effect, that ChatGPT generates bullshit.</p>
<h2>The gaps</h2>
<p>The issue is that ChatGPT is not really artificial intelligence in the sense of actually recognising what you’re asking, thinking about it, checking the available evidence, and giving a justified response. Rather, it looks at the words you’re providing, predicts a response that will sound plausible and provides that response. </p>
<p>This is somewhat similar to the predictive text function you may have used on mobile phones, but much more powerful. Indeed, it can provide very persuasive bullshit: often accurate, but sometimes not. That’s fine if you get bad advice about a restaurant, but it’s very bad indeed if you’re assured that your odd-looking mole is not cancerous when it is.</p>
<p>Another way of looking at this is from the perspective of logic and rhetoric. We want our medical advice to be scientific and logical, proceeding from the evidence to personalised recommendations regarding our health. In contrast, ChatGPT wants to sound persuasive <a href="https://www.embopress.org/doi/full/10.15252/embr.202357501">even if it’s talking bullshit</a>. </p>
<p>For example, when asked to provide citations for its claims, ChatGPT often <a href="https://www.embopress.org/doi/full/10.15252/embr.202357501">makes up references</a> to literature that doesn’t exist – even though the provided text looks perfectly legitimate. Would you trust a doctor who did that?</p>
<h2>Dr ChatGPT vs Dr Google</h2>
<p>Now, you might think that Dr ChatGPT is at least better than Dr Google, which people also use to try to self-diagnose. </p>
<p>In contrast to the reams of information provided by Dr Google, chatbots like ChatGPT give concise answers very quickly. Of course, Dr Google can fall prey to misinformation too, but it does not try to sound convincing.</p>
<p>Using Google or other search engines to identify verified and trustworthy health information (for instance, from the <a href="https://www.who.int/">World Health Organization</a>) can be very beneficial for citizens. And while Google is known for capturing and recording user data, such as terms used in searches, <a href="https://theconversation.com/chatgpt-is-a-data-privacy-nightmare-if-youve-ever-posted-online-you-ought-to-be-concerned-199283">using chatbots may be worse</a>. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/chatgpt-is-a-data-privacy-nightmare-if-youve-ever-posted-online-you-ought-to-be-concerned-199283">ChatGPT is a data privacy nightmare. If you’ve ever posted online, you ought to be concerned</a>
</strong>
</em>
</p>
<hr>
<p>Beyond potentially being misleading, chatbots may record data on your medical conditions and actively request more personal information, leading to more personalised – and possibly more accurate – bullshit. Therein lies the dilemma. Providing more information to chatbots may lead to more accurate answers, but also gives away <a href="https://www.frontiersin.org/articles/10.3389/fpubh.2023.1254334/full#B22">more personal health-related information</a>. However, not all chatbots are like ChatGPT. Some may be more specifically designed for use in medical settings, and advantages from their use may outweigh potential disadvantages.</p>
<h2>What to do</h2>
<p>So what should you do if you’re tempted to use ChatGPT for medical advice despite all this bullshit?</p>
<p>The first rule is: don’t use it. </p>
<p>But if you do, the second rule is that you should check the accuracy of the chatbot’s response – the medical advice provided may or may not be true. Dr Google can, for instance, point you in the direction of reliable sources. But, if you’re going to do that anyway, why risk receiving bullshit in the first place?</p>
<p>The third rule is to provide chatbots with information sparingly. Obviously, the more personalised data you offer, the better the medical advice you get. And it can be difficult to withhold information as most of us willingly and voluntarily give up information on mobile phones and various websites anyway. Adding to this, chatbots can also ask for more. But more data for chatbots like ChatGPT could also lead to more persuasive and even personalised inaccurate medical advice.</p>
<p>Talking bullshit and misusing personal data are certainly not our idea of a good doctor.</p>
<p class="fine-print"><em><span>The authors do not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.</span></em></p>What should you do if you’re tempted to use ChatGPT for medical advice? For starters, don’t use it.David Martin Shaw, Bioethicist, Department of Health Ethics and Society, Maastricht University and Institute for Biomedical Ethics, University of BaselPhilip Lewis, Research associate, University of CologneThomas C. Erren, Professor, University of CologneLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/2106792023-09-19T12:17:30Z2023-09-19T12:17:30ZAI won’t be replacing your priest, minister, rabbi or imam any time soon<figure><img src="https://images.theconversation.com/files/548135/original/file-20230913-23-yeyvza.jpg?ixlib=rb-1.1.0&rect=311%2C42%2C6850%2C4939&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">An android called 'Kannon Mindar,' which preaches Buddhist sermons.</span> <span class="attribution"><a class="source" href="https://www.gettyimages.com/detail/news-photo/in-this-march-9-2019-shows-android-called-kannon-mindar-news-photo/1129444044?adppopup=true">Richard Atrero de Guzman/NurPhoto via Getty Images</a></span></figcaption></figure><p>Early in the summer of 2023, robots projected on a screen <a href="https://www.worldreligionnews.com/religion-news/artificial-intelligence/">delivered sermons to about 300 congregants</a> at St. Paul’s Church in Bavaria, Germany. Created by ChatGPT and Jonas Simmerlein, a theologian and philosopher from the University of Vienna, the experimental church service drew immense interest. </p>
<p>The deadpan sermon delivery prompted many to doubt whether AI can really displace priests and pastoral instruction. At the end of the service, an attendee remarked, “There was no heart and no soul.” </p>
<p>But the growing use of AI may prompt more churches to debut AI-generated worship services. A church in Austin, Texas, for example, has put a banner out <a href="https://www.mysanantonio.com/news/local/article/chatgpt-ai-sermon-austin-18360010.php.">advertising a service with an AI-generated sermon</a>. The church worship will also include an AI-generated call to worship and pastoral prayer. Yet this use of AI has prompted concerns, as these technologies are believed to disrupt authentic human presence and leadership in religious life. </p>
<p>My research, alongside others in the <a href="https://oaktrust.library.tamu.edu/handle/1969.1/198190">interdisciplinary fields of digital religion</a> and <a href="https://doi.org/10.4135/9781529782783">human-machine communication</a>, illuminates what is missing in discussions of AI, which tend to be machine-centric and focused on extreme bright or dark outcomes. </p>
<p>It points to how religious leaders are still the ones influencing the latest technologies within their organizations. AI cannot simply displace humans, since storytelling and programming continue to be critical for its development and deployment. </p>
<p>Here are three ways in which machines will need a priest. </p>
<h2>1. Clergy approve and affirm AI use</h2>
<p>Given rapid changes in emerging technologies, priests have historically served as gatekeepers to <a href="https://www.peterlang.com/document/1109122">endorse and invest in new digital applications</a>. In 2015, in China, the adoption of Xian'er, the robot monk, was promoted as a <a href="https://doi.org/10.1177/0196859920977133">pathway to spiritual engagement</a> by the master priest of the Buddhist Longquan Temple in Beijing. </p>
<p>The priest rejected claims that religious AI was sacrilegious and described innovation in AI as spiritually compatible with religious values. He encouraged the incorporation of AI into religious practices to help believers gain spiritual insight and to elevate the temple’s outreach efforts in spreading Buddhist teachings. </p>
<p>Similarly, in 2019, the head priest of the Kodai-ji Buddhist temple in Kyoto, Japan, named an adult-size android “<a href="https://www.scmp.com/news/asia/east-asia/article/3022716/meet-mindar-humanoid-robot-preaches-sermons-buddhist-temple?module=perpetual_scroll_0&pgtype=article&campaign=3022716">Kannon Mindar</a>,” after the revered Goddess of Mercy. </p>
<p>This robotic deity, who can preach the Heart Sutra, a classic and popular Buddhist scripture, was intentionally built in partnership with Osaka University, with a cost of about US$1 million. The idea behind it was to stimulate public interest and connect religious seekers and practitioners with Buddhist teachings. </p>
<p>By naming and affirming AI use in religious life, religious leaders are acting as key influencers in the development and application of robots in spiritual practice. </p>
<h2>2. Priests direct human-machine communication</h2>
<p>Today, much of AI data operations remain invisible or opaque. Many adults do not recognize how much AI is <a href="https://www.pewresearch.org/science/2023/02/15/public-awareness-of-artificial-intelligence-in-everyday-activities/">already a part of our daily lives</a>, for example in customer service chatbots and custom product recommendations. </p>
<p>But human decision making and judgment about technical processes, including providing feedback for reinforcement learning and interface design, are vital for the day-to-day operations of AI. </p>
<p>Consider the recent robotic initiatives at the Grand Mosque in Saudi Arabia. At this mosque, <a href="https://gulfnews.com/world/gulf/saudi/recitation-sermon-robots-launched-at-grand-mosque-1.90474824">multilingual robots</a> are being deployed for multiple purposes, including providing answers to questions related to <a href="https://gulfnews.com/world/gulf/saudi/watch-multilingual-robot-interacts-with-pilgrims-at-kaaba-kiswa-facility-in-mecca-1.96458732">ritual performances in 11 languages</a>. </p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/548137/original/file-20230913-15-umu34w.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="A man in red checked head scarf and flowing white shirt with a robot." src="https://images.theconversation.com/files/548137/original/file-20230913-15-umu34w.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/548137/original/file-20230913-15-umu34w.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/548137/original/file-20230913-15-umu34w.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/548137/original/file-20230913-15-umu34w.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/548137/original/file-20230913-15-umu34w.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/548137/original/file-20230913-15-umu34w.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/548137/original/file-20230913-15-umu34w.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">A robot at the Grand Mosque in Saudi Arabia’s holy city of Mecca.</span>
<span class="attribution"><a class="source" href="https://www.gettyimages.com/detail/news-photo/saudi-staff-works-on-a-smart-sterilising-robot-at-the-grand-news-photo/1234087266?adppopup=true">Fayez Nureldine / AFP via Getty images</a></span>
</figcaption>
</figure>
<p>Notably, while these robots stationed at the Grand Mosque can recite the Holy Quran, they also provide visitors with connections to local imams. Their touch-screen interfaces are equipped with bar codes, allowing users to learn more about the weekly schedules of mosque staff, including clerics who lead Friday sermons. In addition, these robots can connect visitors with Islamic scholars via video interactions to answer their queries around the clock.</p>
<p>What this shows is that while robots can serve as valuable sources of religious knowledge, the strategic channeling of inquiries back to established religious leaders is reinforcing the credibility of priestly authority. </p>
<h2>3. Religious leaders can create and share ethical guidelines</h2>
<p>Clergy are trying to raise awareness of AI’s potential for human flourishing and well-being. For example, in recent years, Pope Francis has been vocal in addressing the potential benefits and disruptive dangers of the new AI technologies. </p>
<p>The Vatican has hosted technology industry leaders and called for <a href="https://www.vaticannews.va/en/pope/news/2023-01/pope-francis-receives-rome-call-vatican-audience.html">ethical guidelines</a> to “safeguard the good of the human family” and maintain “vigilance against technology misuse.” The ethical use of AI for religion includes a concern for human bias in programming, which can result in inaccuracies and unsafe outcomes. </p>
<p>In June 2023, the Vatican’s culture and education body, in partnership with Santa Clara University, released a 140-page <a href="https://www.scu.edu/ethics/media-mentions/stories/the-vatican-releases-its-own-ai-ethics-handbook.html">AI ethics handbook</a> for technology organizations. The handbook stressed the importance of embedding moral ideals in the development of AI, including respect for human dignity and rights in data privacy, machine learning and facial recognition technologies. </p>
<p>By creating and sharing ethical guidelines on AI, religious leaders can speak to future AI development from its inception, to guide design and consumer implementation toward cherished values.</p>
<p>In sum, while religious leaders appear to be undervalued in AI development and discourse, I argue that it is important to recognize the ways in which clergy are contributing to skillful communication involving AI technologies. In the process, they are co-constructing the conversations that chatbots such as the one at the church in Bavaria are having with congregants.</p>
<p class="fine-print"><em><span>Pauline Hope Cheong does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>A scholar of digital religion explains why the use of AI isn’t necessarily displacing religious leadership: It is the clergy who are helping with the programming, critical for its deployment.Pauline Hope Cheong, Professor of Human Communication and Communication Technologies, Arizona State UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/2115872023-08-23T12:26:21Z2023-08-23T12:26:21ZNavigating the intersection between AI, automation and religion – 3 essential reads<figure><img src="https://images.theconversation.com/files/542839/original/file-20230815-23-3fs34t.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">AI is slowly becoming part of the religious sphere. </span> <span class="attribution"><a class="source" href="https://www.gettyimages.com/detail/photo/rosary-prayer-online-holy-mass-conducted-online-royalty-free-image/1221601837?phrase=religion+and+technology&adppopup=true">robertprzybysz/iStock via Getty Images</a></span></figcaption></figure><p>In a era marked by rapid technological advancement, we are seeing everything from artificial intelligence to robots slowly seep into our everyday lives. But now, this technology is increasingly making inroads into a realm that has long been uniquely human: religion. </p>
<p>From the creation of <a href="https://apnews.com/article/germany-church-protestants-chatgpt-ai-sermon-651f21c24cfb47e3122e987a7263d348">ChatGPT sermons</a> to <a href="https://www.pbs.org/newshour/world/robots-are-performing-hindu-rituals-some-worshippers-fear-theyll-be-replaced">robots performing sacred Hindu rituals</a>, the once-clearer boundaries between faith and technology are blurring. </p>
<p>Over the last few months, The Conversation U.S. has published a number of stories exploring how AI and automation are weaving themselves into religious contexts. These three articles from our archives shed light on the impacts of such technology on human spirituality, faith and worship across cultures. </p>
<h2>1. Prophets come to life</h2>
<p>As one of the most prominent religious figures in the world, Jesus has been continually reinterpreted to fit the norms and needs of each new historical context, from <a href="https://theconversation.com/panama-celebrates-its-black-christ-part-of-protest-against-colonialism-and-slavery-122171">Cristo Negro</a>, or “Black Christ”, to depictions of him as a Hindu mystic. </p>
<p>But now the prophet is on Twitch, a video live-streaming platform. And it’s all thanks to an AI chatbot. </p>
<figure class="align-center ">
<img alt="A bearded white man wearing a brown hooded jacket has a halo around him." src="https://images.theconversation.com/files/543072/original/file-20230816-17-mzi6vx.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/543072/original/file-20230816-17-mzi6vx.png?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=402&fit=crop&dpr=1 600w, https://images.theconversation.com/files/543072/original/file-20230816-17-mzi6vx.png?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=402&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/543072/original/file-20230816-17-mzi6vx.png?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=402&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/543072/original/file-20230816-17-mzi6vx.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=505&fit=crop&dpr=1 754w, https://images.theconversation.com/files/543072/original/file-20230816-17-mzi6vx.png?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=505&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/543072/original/file-20230816-17-mzi6vx.png?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=505&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">AI Jesus provides insight on both spiritual and personal questions users ask on his channel.</span>
<span class="attribution"><a class="source" href="https://www.twitch.tv/ask_jesus">Twitch user ask_jesus</a></span>
</figcaption>
</figure>
<p>Presented as a bearded white man wearing a brown hood, “AI Jesus” is available 24/7 on his Twitch channel “<a href="https://www.twitch.tv/ask_jesus">ask_Jesus</a>” and is able to interact with users who can ask him anything from deep questions of a religious nature to lighthearted inquiries. </p>
<p>AI Jesus represents one of the newest examples in the growing field of AI spirituality, noted Boston College theology faculty member <a href="https://theconversation.com/profiles/joseph-l-kimmel-1441171">Joseph L. Kimmel</a>, and may help scholars better understand how human spirituality is being actively shaped by the influence of AI.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/a-chatbot-willing-to-take-on-questions-of-all-kinds-from-the-serious-to-the-comical-is-the-latest-representation-of-jesus-for-the-ai-age-208644">A chatbot willing to take on questions of all kinds – from the serious to the comical – is the latest representation of Jesus for the AI age</a>
</strong>
</em>
</p>
<hr>
<h2>2. Robotic rituals</h2>
<p>A unique intersection of religion and robotic technology has emerged with the introduction of robots performing Hindu rituals in South Asia. While some have welcomed the technological inclusion, others express worries about the future that ritual automation could lead to. </p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/LH5yqpCWKqs?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
<figcaption><span class="caption">A robotic arm performs “aarti” — a Hindu practice in which light is ritually waved for the veneration of deities.</span></figcaption>
</figure>
<p>Many believe that the growth of robots within Hindu practices could lead to an increase in people leaving the religion, and question the use of robots to embody religious and divine figures.</p>
<p>But there is another concern: whether robots could eventually replace Hindu worshippers. Automated robots would be able to perform rituals without a single error. This is significant because religions like Hinduism and Buddhism emphasize the correct execution of rituals and ceremonies as a means to connect with the divine rather than emphasizing correct belief. </p>
<p>It’s a concept referred to as orthopraxy, according to Wellesley College anthropology lecturer <a href="https://theconversation.com/profiles/holly-walters-1406163">Holly Walters</a>. “In short, the robot can do your religion better than you can because robots, unlike people, are spiritually incorruptible,” she explained. “Modern robotics might then feel like a particular kind of cultural paradox, where the best kind of religion is the one that eventually involves no humans at all.”</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/robots-are-performing-hindu-rituals-some-devotees-fear-theyll-replace-worshippers-197504">Robots are performing Hindu rituals -- some devotees fear they'll replace worshippers</a>
</strong>
</em>
</p>
<hr>
<h2>3. AI preachers</h2>
<p>According to College of the Holy Cross religious studies scholar <a href="https://theconversation.com/profiles/joanne-m-pierce-156953">Joanne M. Pierce</a>, preaching has always been considered a human activity grounded in faith. But what happens when that practice is taken over by an AI chatbot? </p>
<p>In June 2023, hundreds of Lutherans gathered in Bavaria, Germany, for a service designed and delivered by ChatGPT. But many are cautious about using AI to conduct these religious practices. </p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/xmXghWi2lf8?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
<figcaption><span class="caption">St. Paul’s Church in Fürth, Bavaria was packed with over 300 Lutherans who attended a church service generated almost entirely by artificial intelligence.</span></figcaption>
</figure>
<p>In their sermons, preachers not only offer advice, but “speak out of personal reflection in a way that will inspire the members of the congregation, not just please them,” Pierce said. “It must also be shaped by an awareness of the needs and lived experience of the worshiping community in the pews.”</p>
<p>For the time being, it seems as though the inability to understand the human experience is AI’s biggest flaw within the preaching sphere. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/can-chatbots-write-inspirational-and-wise-sermons-208825">Can chatbots write inspirational and wise sermons?</a>
</strong>
</em>
</p>
<hr>
<p><em>Editor’s note: This story is a roundup of articles from The Conversation’s archives.</em></p>
The merging of technology and faith is sparking a transformative shift in redefining spirituality and religious practices.Meher Bhatia, Editorial Intern, The ConversationLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/2117442023-08-18T06:24:55Z2023-08-18T06:24:55ZSnapchat’s ‘creepy’ AI blunder reminds us that chatbots aren’t people. But as the lines blur, the risks grow<figure><img src="https://images.theconversation.com/files/543406/original/file-20230818-5303-3rxdd6.jpeg?ixlib=rb-1.1.0&rect=0%2C25%2C5615%2C3707&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">
</span> <span class="attribution"><span class="source">Shutterstock</span></span></figcaption></figure><p>Artificial intelligence-powered (AI) chatbots are becoming increasingly human-like by design, to the point that some among us may struggle to distinguish between human and machine.</p>
<p>This week, Snapchat’s My AI chatbot glitched and posted a story of what looked like a wall and ceiling, before it stopped responding to users. Naturally, the internet began to <a href="https://9to5mac.com/2023/08/17/snapchat-ai-went-rogue/">question</a> whether the ChatGPT-powered chatbot had gained sentience.</p>
<p>A crash course in AI literacy could have quelled this confusion. But, beyond that, the incident reminds us that as AI chatbots grow closer to resembling humans, managing their uptake will only get more challenging – and more important.</p>
<h2>From rules-based to adaptive chatbots</h2>
<p>Since ChatGPT burst onto our screens late last year, many digital platforms have integrated AI into their services. Even as I draft this article on Microsoft Word, the software’s predictive AI capability is suggesting possible sentence completions.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/google-and-microsoft-are-bringing-ai-to-word-excel-gmail-and-more-it-could-boost-productivity-for-us-and-cybercriminals-202046">Google and Microsoft are bringing AI to Word, Excel, Gmail and more. It could boost productivity for us – and cybercriminals</a>
</strong>
</em>
</p>
<hr>
<p>Known as generative AI, this relatively new type of AI is distinguished from its <a href="https://www.timeshighereducation.com/campus/three-ways-leverage-chatgpt-and-other-generative-ai-research">predecessors</a> by its ability to generate new content that is precise, human-like and seemingly meaningful. </p>
<p>Generative AI tools, including AI image generators and chatbots, are built on large language models (LLMs). These computational models analyse the associations between billions of words, sentences and paragraphs to predict what ought to come next in a given text. As OpenAI co-founder Ilya Sutskever <a href="https://lifearchitect.ai/ilya/">puts it</a>, an LLM is</p>
<blockquote>
<p>[…] just a really, really good next-word predictor.</p>
</blockquote>
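<p>To make the “next-word predictor” idea concrete, here is a deliberately tiny sketch in Python. It is not how an LLM is actually built – real models learn associations across billions of parameters with neural networks – but it illustrates the same prediction objective: given the text so far, guess what comes next. The sample corpus and function names are invented for illustration.</p>
<pre><code>from collections import Counter, defaultdict

# Toy "next-word predictor": count which word most often follows each word.
# Real LLMs model far longer contexts, but the core objective -- predict
# the next token given the text so far -- is the same.
corpus = (
    "the chatbot answered the question and the chatbot asked "
    "another question and the user answered"
).split()

followers = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    followers[current_word][next_word] += 1

def predict_next(word):
    """Return the most frequent word seen after `word` in the corpus."""
    counts = followers.get(word)
    return counts.most_common(1)[0][0] if counts else "?"

print(predict_next("the"))       # 'chatbot' -- the most common follower
print(predict_next("question"))  # 'and'
</code></pre>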
<p>Advanced LLMs are also fine-tuned with human feedback. This training, often delivered through countless hours of cheap human labour, is the reason AI chatbots can now have seemingly human-like conversations. </p>
<p>OpenAI’s ChatGPT is still the <a href="https://www.reuters.com/technology/chatgpt-sets-record-fastest-growing-user-base-analyst-note-2023-02-01/">flagship generative AI model</a>. Its release marked a major leap from simpler “rules-based” chatbots, such as those used in online customer service.</p>
<p>Human-like chatbots that talk <em>to</em> a user rather than <em>at</em> them have been linked with higher levels of engagement. One <a href="https://www.ingentaconnect.com/content/mcb/josm/2023/00000034/00000004/art00008">study</a> found the personification of chatbots leads to increased engagement which, over time, may turn into psychological
dependence. Another study involving <a href="https://dl.acm.org/doi/abs/10.1145/3579505">stressed participants</a> found a human-like chatbot was more likely to be perceived as competent, and therefore more likely to help reduce participants’ stress.</p>
<p>These chatbots have also been effective in fulfilling organisational objectives in various settings, including retail, education, workplace and <a href="https://www.latrobe.edu.au/mylatrobe/using-ai-for-good-la-trobe-phd-student-builds-new-chatbot-to-help-improve-students-mental-health/">healthcare settings</a>. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/the-hidden-cost-of-the-ai-boom-social-and-environmental-exploitation-208669">The hidden cost of the AI boom: social and environmental exploitation</a>
</strong>
</em>
</p>
<hr>
<p>Google is using generative AI to build a “personal life coach” that will <a href="https://www.nytimes.com/2023/08/16/technology/google-ai-life-advice.html">supposedly help</a> people with various personal and professional tasks, including providing life advice and answering intimate questions. </p>
<p>This is despite Google’s own AI safety experts warning that users could grow too dependent on AI and may experience “diminished health and wellbeing” and a “loss of agency” if they take life advice from it.</p>
<h2>Friend or foe – or just a bot?</h2>
<p>In the recent Snapchat incident, the company put the whole thing down to a “<a href="https://techcrunch.com/2023/08/16/snapchats-my-ai-goes-rogue-posts-to-stories-but-snap-confirms-it-was-just-a-glitch/">temporary outage</a>”. We may never know what actually happened; it could be yet another example of AI “hallucinating”, or the result of a cyberattack, or even just an operational error.</p>
<p>Either way, the speed with which some users assumed the chatbot had achieved sentience suggests we are seeing an unprecedented anthropomorphism of AI. It’s compounded by a lack of transparency from developers, and a lack of basic understanding among the public.</p>
<p>We shouldn’t underestimate how individuals may be misled by the apparent authenticity of human-like chatbots. </p>
<p>Earlier this year, a Belgian man’s suicide <a href="https://www.livemint.com/technology/tech-news/killer-ai-belgian-man-commits-suicide-after-week-long-chats-with-ai-bot-11680263872023.html">was attributed</a> to conversations he’d had with a chatbot about climate inaction and the planet’s future. In another example, a chatbot named Tessa <a href="https://www.theguardian.com/technology/2023/may/31/eating-disorder-hotline-union-ai-chatbot-harm">was found to be</a> offering harmful advice to people through an eating disorder helpline. </p>
<p>Chatbots may be particularly harmful to the more vulnerable among us, and especially to those with psychological conditions.</p>
<h2>A new uncanny valley?</h2>
<p>You may have heard of the “uncanny valley” effect. It refers to that uneasy feeling you get when you see a humanoid robot that <em>almost</em> looks human, but its slight imperfections give it away, and it ends up being creepy. </p>
<p>It seems a similar experience is emerging in our interactions with human-like chatbots. A <a href="https://www.newscientist.com/article/2383850-gpt-4-is-the-ai-behind-chatgpt-getting-worse/">slight blip</a> can raise the hairs on the back of the neck. </p>
<p>One solution might be to lose the human edge and revert to chatbots that are straightforward, objective and factual. But this would come at the expense of engagement and innovation. </p>
<h2>Education and transparency are key</h2>
<p>Even the developers of advanced AI chatbots often can’t explain how they work. Yet in some ways (and as far as commercial entities are concerned) the benefits outweigh the risks.</p>
<p>Generative AI has <a href="https://www.gatesnotes.com/The-Age-of-AI-Has-Begun">demonstrated its usefulness</a> in big-ticket items such as productivity, healthcare, education and <a href="https://theconversation.com/how-chatgpt-might-be-able-to-help-the-worlds-poorest-and-the-organisations-that-work-with-them-203287">even social equity</a>. It’s unlikely to go away. So how do we make it work for us?</p>
<p>Since 2018, there has been a significant push for governments and organisations to address the risks of AI. But applying <a href="https://www.itu.int/hub/2023/07/a-call-to-action-for-inclusive-safe-and-responsible-ai/">responsible standards and regulations</a> to a technology that’s more “human-like” than any other comes with a host of challenges.</p>
<p>Currently, there is no legal requirement for Australian businesses to disclose the use of chatbots. In the US, California has introduced a “bot bill” that would require this, but legal experts have <a href="https://www.wired.com/story/law-makes-bots-identify-themselves/">poked holes in it</a> – and the bill had yet to be enforced at the time of writing.</p>
<p>Moreover, ChatGPT and similar chatbots are made public as “<a href="https://openai.com/blog/chatgpt-plus">research previews</a>”. This means they often come with multiple disclaimers about their prototype status, and the onus for responsible use falls on the user.</p>
<p>The <a href="https://www.europarl.europa.eu/news/en/headlines/society/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence">European Union’s AI Act</a>, the world’s first comprehensive regulation on AI, has identified moderate regulation and education as the path forward – since excess regulation could stunt innovation. Similar to digital literacy, AI literacy should be mandated in schools, universities and organisations, and should also be made free and accessible for the public.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/do-we-need-a-new-law-for-ai-sure-but-first-we-could-try-enforcing-the-laws-we-already-have-211369">Do we need a new law for AI? Sure – but first we could try enforcing the laws we already have</a>
</strong>
</em>
</p>
<hr>
<img src="https://counter.theconversation.com/content/211744/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Daswin de Silva does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>Snapchat’s AI-powered chatbot malfunctioned this week, raising questions of “sentience” among users. As AI becomes increasingly human-like, society must become AI-literate.Daswin de Silva, Deputy Director of the Centre for Data Analytics and Cognition, La Trobe UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/2108102023-08-07T20:01:51Z2023-08-07T20:01:51ZPictures of pain: how a visual chatbot can help people with chronic pain<p>Chronic or persistent pain is the main reason people seek medical care in Australia. Yet, most chronic pain is incurable and relies on education, exercise and other interventions for management, making it a <a href="https://www.deloitte.com/au/en/services/economics/analysis/cost-pain-australia.html">complex, common and expensive health challenge</a>.</p>
<p>The challenge is more pronounced in the treatment of children and people with diverse needs, such as those with language deficits and varying learning abilities. </p>
<p>Our team of researchers from <a href="http://aehrc.csiro.au">CSIRO</a>, <a href="https://recover.centre.uq.edu.au/">RECOVER Injury Research Centre</a>, the <a href="https://metronorth.health.qld.gov.au/rbwh/healthcare-services/persistent-pain">Tess Cramond Pain and Research Centre</a> and the <a href="https://www.childrens.health.qld.gov.au/service-queensland-interdisciplinary-paediatric-persistent-pain/">Queensland Interdisciplinary Paediatric Pain Service</a> are exploring how therapeutic chatbots can help improve communication between people with chronic pain and health professionals.</p>
<p>To do this, we built a chatbot called <a href="https://twitter.com/RecoverResearch/status/1394776246525960195">Dolores</a>, with a unique feature: a canvas for users to describe their pain with drawings, sketches and doodles. We found that in this case, a picture truly is worth a thousand words. </p>
<h2>About Dolores</h2>
<p>Dolores’s main function is to help users report data on their own pain and provide personalised education. This can be used to enhance communication between users and health professionals and improve pain management.</p>
<p>Dolores is designed to prompt the user for more information, ask follow-up questions, and generally lead the conversation. </p>
<p>The chatbot tailors its responses to the user. It will respond to children with short answers and a limited vocabulary, and responses will gradually increase in complexity with the user’s age.</p>
<p>As the app often captures sensitive data, it was designed with privacy and ethical considerations in mind.</p>
<p>The technology behind Dolores is very different from the large language models behind chatbots such as <a href="https://theconversation.com/chatgpt-isnt-the-death-of-homework-just-an-opportunity-for-schools-to-do-things-differently-205053">ChatGPT</a>. It works from a knowledge base and conversation flow built by occupational therapists, speech therapists and physiotherapists, drawing on diverse experience with clients of different ages. </p>
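<p>The article does not publish Dolores’s code, but a rules-based design of the kind it describes – a scripted conversation flow with wording tailored to the user’s age – can be sketched in a few lines of Python. The questions, age threshold and structure below are assumptions made for illustration, not the actual knowledge base built by the therapists.</p>
<pre><code># A minimal sketch of a rules-based chatbot flow: a predefined sequence of
# questions, with wording chosen by a simple user attribute (age). All
# questions and the age threshold are invented for illustration.
CONVERSATION_FLOW = [
    {
        "id": "pain_location",
        "child": "Where does it hurt? You can tell me or draw it.",
        "adult": "Could you describe or draw where your pain is located?",
    },
    {
        "id": "pain_feeling",
        "child": "What does the hurt feel like?",
        "adult": "Could you describe the sensations and emotions associated with your pain?",
    },
]

def next_prompt(step, age):
    """Pick the next scripted question, worded for the user's age band."""
    node = CONVERSATION_FLOW[step]
    return node["adult"] if age > 12 else node["child"]

# The bot, not the user, drives the interview -- unlike an open-ended LLM.
for step in range(len(CONVERSATION_FLOW)):
    print(next_prompt(step, age=9))
</code></pre>
<p>Because every reply is drawn from a vetted script rather than generated on the fly, a bot built this way cannot “hallucinate” – which is the design trade-off the article describes.</p>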
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/aristotle-and-the-chatbot-how-ancient-rules-of-logic-could-make-artificial-intelligence-more-human-142811">Aristotle and the chatbot: how ancient rules of logic could make artificial intelligence more human</a>
</strong>
</em>
</p>
<hr>
<p>Dolores’s “brain” was built with help from hackathons and industry experts to ensure its responses are always focused on meeting the needs of users. Importantly, it will not give incorrect information or “hallucinate” responses.</p>
<h2>Pain history</h2>
<p>One thing Dolores is used for is to administer what is called a “pain history interview”. </p>
<p>In this process, Dolores asks questions about the onset, location and duration of the user’s pain. It also asks about the sensations and emotions associated with the pain and its impact on daily life, as well as the user’s understanding of pain and their perceptions of different kinds of treatment.</p>
<p>The user can respond to Dolores by talking, typing or drawing. Dolores then produces a transcript which the user can choose to share with their medical professional. </p>
<p>The transcript below was generated using speech recognition from a conversation with an individual living with chronic pain.</p>
<blockquote>
<p><strong>Dolores</strong>: Could you describe or draw what your pain physically feels like?</p>
<p><strong>Client</strong>: Sometimes it’s different … it’s not all the same … then all the days it’s burning tingling </p>
<p><strong>Dolores</strong>: Could you tell me or draw how your pain makes you feel? Physically and mentally.</p>
<p><strong>Client</strong>: Shit makes me feel like shit </p>
<p><strong>Dolores</strong>: And how long have you been in pain for?</p>
<p><strong>Client</strong>: Since I was about 15 years old</p>
<p><strong>Dolores</strong>: Can you tell me or draw how your pain began?</p>
<p><strong>Client</strong>: I am not sure it just began</p>
<p><strong>Dolores</strong>: And how does your pain affect your day-to-day life?</p>
<p><strong>Client</strong>: It affects everything even down to driving, shopping, taking my kids to school. Even just making a cup of tea</p>
<p><strong>Dolores</strong>: What do you believe is the cause of your pain?</p>
<p><strong>Client</strong>: Autoimmune disease and follow on disorders from that</p>
<p><strong>Dolores</strong>: What do you believe are the best treatments for managing your pain?</p>
<p><strong>Client</strong>: Nothing really helps it’s just a matter of learning to deal with it and manage it</p>
</blockquote>
<p>While this example was verbal and detailed, some users – particularly children and adolescents – preferred to doodle. For example, a volcano erupting symbolised a pain flare-up; lightning strikes symbolised sharp, sudden pain; and swirls and waves symbolised cyclic pain. </p>
<p>When asking about the impact of pain on daily life, we received an abundance of stick figures and faces depicting sorrow and loss of identity. </p>
<h2>The future</h2>
<p>In its present state, Dolores can interpret the colours used in a drawing, but not the drawing itself. <a href="https://pubmed.ncbi.nlm.nih.gov/25822055/">Research</a> shows red and black, for example, are commonly used when “painting pain”.</p>
<p>So when soliciting more information about a doodle, Dolores might say things like “That’s a lot of red. What is it?” </p>
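<p>Colour analysis of this kind can be done with standard image tools. The sketch below, written with the Pillow imaging library, counts the dominant coarse colours in a doodle. The file name, the palette quantisation and the sample output are assumptions for illustration, not the method Dolores actually uses.</p>
<pre><code>from collections import Counter
from PIL import Image  # the Pillow imaging library

def dominant_colours(path, top=3):
    """Return the most frequent coarse colours in a drawing.

    Each RGB channel is quantised to four levels (0, 85, 170, 255) so that
    many near-identical shades of red, say, count as one colour.
    """
    img = Image.open(path).convert("RGB").resize((64, 64))
    coarse = [tuple((c // 85) * 85 for c in pixel) for pixel in img.getdata()]
    return Counter(coarse).most_common(top)

# For a mostly-red "volcano" doodle this might print something like:
# [((255, 0, 0), 2900), ((255, 255, 255), 1100), ((0, 0, 0), 96)]
print(dominant_colours("doodle.png"))  # "doodle.png" is a hypothetical file
</code></pre>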
<p>The next version of Dolores will have sketch recognition, which we believe will give her more insight and provide extended engagement with clients. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/introducing-edna-the-chatbot-trained-to-help-patients-make-a-difficult-medical-decision-150847">Introducing Edna: the chatbot trained to help patients make a difficult medical decision</a>
</strong>
</em>
</p>
<hr>
<p>Dolores was received positively in cohorts of patients across different age groups. It will soon be used in the broader chronic pain intervention platform, <a href="https://recover.centre.uq.edu.au/pain-roadmap-mobile-app-help-people-chronic-pain">Pain ROADMAP</a>. </p>
<p>Dolores is a step forward in catering for clients and communities whose language may differ from the clinician’s, and for children and individuals with language disorders, who often have <a href="https://www.spectrumnews.org/news/autistic-children-disproportionately-affected-by-chronic-pain/">higher rates of chronic pain</a> and difficulty communicating about it. </p>
<p>We believe a chatbot that identifies and conveys the symbols of pain to a clinician can be an effective tool for enhanced communication, leading to more effective pain management.</p>
<p class="fine-print"><em><span>Nicole Andrews is affiliated with RECOVER Injury Research Centre, The University of Queensland and the Tess Cramond Pain and Research Centre, Metro North Hospital and Health Service.</span></em></p><p class="fine-print"><em><span>David Ireland does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>A chatbot that lets users express their feelings by drawing may help communication and treatment for people with chronic pain.David Ireland, Senior Research Scientist at the Australian E-Health Research Centre, CSIRONicole Emma Andrews, Research Fellow, The University of QueenslandLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/2097602023-07-21T12:27:14Z2023-07-21T12:27:14Z6 ways AI can make political campaigns more deceptive than ever<figure><img src="https://images.theconversation.com/files/538357/original/file-20230719-19-faci2s.jpeg?ixlib=rb-1.1.0&rect=8%2C26%2C5982%2C3781&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">There are real fears that AI will make politics more deceptive than it already is.</span> <span class="attribution"><a class="source" href="https://www.gettyimages.com/detail/photo/engineer-designing-ai-technology-with-reflection-on-royalty-free-image/1455352989?phrase=artificial+intelligence+&adppopup=true">Westend61/Getty Images</a></span></figcaption></figure><p>Political campaign ads and donor solicitations have long been deceptive. In 2004, for example, U.S. presidential candidate John Kerry, a Democrat, aired an ad stating that Republican opponent George W. Bush “says sending jobs overseas <a href="https://doi.org/10.1177/0002764205279440">‘makes sense’</a> for America.” </p>
<p><a href="https://www.factcheck.org/2004/04/outsourcing-jobs-the-president-said-that/">Bush never said</a> such a thing. </p>
<p>The next day Bush responded by releasing an ad saying Kerry “supported higher taxes <a href="https://www.factcheck.org/2004/04/bush-ad-is-troubling-indeed/">over 350 times</a>.” This too was a <a href="https://doi.org/10.1177/0002764205279440">false claim</a>. </p>
<p>These days, the <a href="https://www.washington.edu/news/2021/11/08/political-ads-2020-presidential-election-collected-personal-information-spread-misleading-information/">internet has gone wild with deceptive</a> political ads. Ads often pose as polls and have misleading clickbait headlines.</p>
<p>Campaign fundraising solicitations are also rife with deception. An analysis of 317,366 political emails sent during the 2020 election in the U.S. found that <a href="https://doi.org/10.1177/20539517221145371">deception was the norm</a>. For example, campaigns manipulate recipients into opening emails by lying about the sender’s identity, using subject lines that trick the recipient into thinking the sender is replying to a donor, or claiming the email is “NOT asking for money” before asking for money. Both <a href="https://www.nytimes.com/2021/06/26/us/politics/recurring-donations-seniors.html">Republicans and Democrats do it</a>.</p>
<p>Campaigns are now rapidly <a href="https://www.nytimes.com/2023/06/25/technology/ai-elections-disinformation-guardrails.html">embracing artificial intelligence</a> for composing and producing ads and donor solicitations. The results are impressive: Democratic campaigns found that donor <a href="https://www.nytimes.com/2023/03/28/us/politics/artificial-intelligence-2024-campaigns.html">letters written by AI were more effective</a> than letters written by humans at persuading recipients to click and send donations. </p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/LKQiTpiPN7I?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
<figcaption><span class="caption">A pro-Ron DeSantis super PAC featured an AI-generated imitation of Donald Trump’s voice in this ad.</span></figcaption>
</figure>
<p>And <a href="https://theconversation.com/ai-could-shore-up-democracy-heres-one-way-207278">AI has benefits for democracy</a>, such as helping staffers organize their emails from constituents or helping government officials summarize testimony.</p>
<p>But there are <a href="https://theconversation.com/chatbots-can-be-used-to-create-manipulative-content-understanding-how-this-works-can-help-address-it-207187">fears that AI will make politics more deceptive</a> than ever.</p>
<p>Here are six things to look out for. I base this list on <a href="https://scholar.google.com/citations?user=50tVKogAAAAJ&hl=en&oi=ao">my own experiments</a> testing the effects of political deception. I hope voters will know what to expect and what to watch out for, and learn to be more skeptical, as the U.S. heads into the next presidential campaign. </p>
<h2>Bogus custom campaign promises</h2>
<p><a href="https://doi.org/10.1080/15377857.2021.1978033">My research</a> on the 2020 presidential election revealed that the choice voters made between Biden and Trump was driven by their perceptions of which candidate “proposes realistic solutions to problems” and “says out loud what I am thinking,” based on 75 items in a survey. These are two of the most important qualities for a candidate to have to <a href="https://doi.org/10.1080/15377857.2021.1978033">project a presidential</a> image and win. </p>
<p>AI chatbots, such as <a href="https://techcrunch.com/2023/07/13/chatgpt-everything-you-need-to-know-about-the-open-ai-powered-chatbot/">ChatGPT</a> by OpenAI, <a href="https://www.theverge.com/2023/2/23/23609942/microsoft-bing-sydney-chatbot-history-ai">Bing Chat</a> by Microsoft, and <a href="https://www.reuters.com/technology/googles-ai-chatbot-bard-expands-europe-brazil-take-chatgpt-2023-07-13/">Bard</a> by Google, could be used by politicians to generate customized campaign promises deceptively microtargeting voters and donors. </p>
<p>Currently, when people scroll through news feeds, the articles they read are logged in their browsing history, which is <a href="https://dx.doi.org/10.2139/ssrn.1717563">tracked by sites such as Facebook</a>. The user is tagged as liberal or conservative, and also <a href="https://doi.org/10.48550/arXiv.2206.00397">tagged as holding certain interests</a>. Political campaigns can place an ad spot in real time on the person’s feed with a customized title. </p>
<p>Campaigns can use AI to develop a repository of articles written in different styles, each making different campaign promises. They could then embed an AI algorithm in the process – using automated commands already plugged in by the campaign – to generate bogus, tailored campaign promises at the end of an ad posing as a news article or donor solicitation. </p>
<p>ChatGPT, for instance, could hypothetically be prompted to add material based on text from the last articles that the voter was reading online. The voter then scrolls down and reads the candidate promising exactly what the voter wants to see, word for word, in a tailored tone. My experiments have shown that if a presidential candidate can align the tone of word choices with a voter’s preferences, the politician will seem <a href="https://doi.org/10.1111/psq.12299">more presidential and credible</a>. </p>
<h2>Exploiting the tendency to believe one another</h2>
<p>Humans tend to automatically believe what they are told. They have what scholars call a “<a href="https://doi.org/10.1177/0261927X14535916">truth-default</a>.” They even fall prey to seemingly <a href="https://doi.org/10.1016/j.copsyc.2022.101380">implausible</a> <a href="https://doi.org/10.1093/hcr/hqz001">lies</a>. </p>
<p>In <a href="https://doi.org/10.1111/psq.12809">my experiments</a> I found that people who are exposed to a presidential candidate’s deceptive messaging believe the untrue statements. Given that text produced by ChatGPT can shift people’s <a href="https://dl.acm.org/doi/abs/10.1145/3544548.3581196">attitudes and opinions</a>, it would be <a href="https://doi.org/10.1080/10510974.2020.1833357">relatively easy for AI to exploit</a> voters’ truth-default when bots stretch the limits of credulity with even more implausible assertions than humans would conjure.</p>
<h2>More lies, less accountability</h2>
<p><a href="https://www.nytimes.com/2023/03/28/technology/ai-chatbots-chatgpt-bing-bard-llm.html">Chatbots</a> such as ChatGPT are prone to make up stuff that is <a href="https://towardsdatascience.com/llm-hallucinations-ec831dcd7786">factually inaccurate</a> or totally nonsensical. <a href="https://theconversation.com/chatbots-can-be-used-to-create-manipulative-content-understanding-how-this-works-can-help-address-it-207187">AI can produce deceptive information</a>, <a href="https://www.wsj.com/articles/chatgpt-under-investigation-by-ftc-21e4b3ef">delivering false statements</a> and misleading ads. While the most unscrupulous human campaign operative may still have a smidgen of accountability, <a href="https://theconversation.com/how-ai-could-take-over-elections-and-undermine-democracy-206051">AI has none</a>. And OpenAI acknowledges flaws with ChatGPT that lead it to provide biased information, disinformation and outright <a href="https://www.nytimes.com/2023/07/13/technology/chatgpt-investigation-ftc-openai.html">false information</a>. </p>
<p>If campaigns <a href="https://theconversation.com/chatbots-can-be-used-to-create-manipulative-content-understanding-how-this-works-can-help-address-it-207187">disseminate AI messaging without any human filter</a> or moral compass, lies could get worse and more out of control. </p>
<h2>Coaxing voters to cheat on their candidate</h2>
<p>A New York Times columnist had a lengthy chat with Microsoft’s Bing chatbot. Eventually, the <a href="https://www.nytimes.com/2023/02/16/technology/bing-chatbot-transcript.html">bot tried to get him to leave his wife</a>. “Sydney” told the reporter repeatedly “I’m in love with you,” and “You’re married, but you don’t love your spouse … you love me. … Actually you want to be with me.” </p>
<p>Imagine millions of these sorts of encounters, but with a bot trying to ply voters to leave their candidate for another.</p>
<p>AI <a href="https://theconversation.com/how-ai-could-take-over-elections-and-undermine-democracy-206051">chatbots can exhibit partisan bias</a>. <a href="https://doi.org/10.48550/arXiv.2303.17548">For example</a>, they currently tend to skew far more left politically – holding liberal biases, expressing 99% support for Biden – with far less diversity of opinions than the general population. </p>
<p>In 2024, Republicans and Democrats will have the opportunity to fine-tune models that inject political bias and even chat with voters to sway them. </p>
<figure class="align-center ">
<img alt="Two men in dark suits debating each other from different lecterns." src="https://images.theconversation.com/files/538526/original/file-20230720-21-n7jzt4.jpeg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/538526/original/file-20230720-21-n7jzt4.jpeg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=431&fit=crop&dpr=1 600w, https://images.theconversation.com/files/538526/original/file-20230720-21-n7jzt4.jpeg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=431&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/538526/original/file-20230720-21-n7jzt4.jpeg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=431&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/538526/original/file-20230720-21-n7jzt4.jpeg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=542&fit=crop&dpr=1 754w, https://images.theconversation.com/files/538526/original/file-20230720-21-n7jzt4.jpeg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=542&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/538526/original/file-20230720-21-n7jzt4.jpeg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=542&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">In 2004, a campaign ad for Democratic presidential candidate John Kerry, left, lied about his opponent, Republican George W. Bush, right. Bush’s campaign lied about Kerry, too.</span>
<span class="attribution"><a class="source" href="https://newsroom.ap.org/detail/TOPIXBUSHKERRYDEBATE2004/b5b29d1aaae4da11af9f0014c2589dfb/photo?Query=john%20kerry%20george%20bush&mediaType=photo&sortBy=arrivaldatetime:desc&dateRange=&totalCount=21&currentItemNo=17">AP Photo/Wilfredo Lee</a></span>
</figcaption>
</figure>
<h2>Manipulating candidate photos</h2>
<p>AI can <a href="https://techcrunch.com/2022/12/13/image-generating-ai-can-copy-and-paste-from-training-data-raising-ip-concerns/">change images</a>. So-called “deepfake” videos and pictures are common in politics, and they are <a href="https://nypost.com/2023/07/07/trump-and-biden-deep-fakes-take-ai-to-new-scary-level-in-live-debate/">hugely advanced</a>. Donald Trump has used AI to create a <a href="https://www.forbes.com/sites/mattnovak/2023/03/23/donald-trump-shares-fake-ai-created-image-of-himself-on-truth-social/?sh=2ef8d92e71f6">fake photo</a> of himself down on one knee, praying. </p>
<p>Photos can be tailored more precisely to influence voters more subtly. In <a href="https://doi.org/10.1177/0261927X211045724">my research</a> I found that a communicator’s appearance can be as influential – and deceptive – as what someone actually says. <a href="https://doi.org/10.1080/15377857.2021.1978033">My research</a> also revealed that Trump was perceived as “presidential” in the 2020 election when voters thought he seemed “sincere.” And getting people to think you “seem sincere” through your nonverbal outward appearance is a <a href="https://doi.org/10.1111/j.1468-2958.2011.01407.x">deceptive tactic</a> that is more convincing than saying things that are actually true.</p>
<p>Using Trump as an example, let’s assume he wants voters to see him as sincere, trustworthy, likable. Certain alterable features of his appearance make him look insincere, untrustworthy and unlikable: He <a href="https://cdn.theatlantic.com/thumbor/gJkg8WGmmR5htVmKBfaOtRU_93A=/0x130:3492x2094/1952x1098/media/img/mt/2019/01/AP_19009087975304/original.jpg">bares his lower teeth</a> when he speaks and <a href="https://youtu.be/wiyUYMWtGPA">rarely</a> <a href="https://www.facebook.com/NBCNews/videos/voter-to-president-trump-youre-so-handsome-when-you-smile/3580790395346972/">smiles</a>, which makes him <a href="https://psycnet.apa.org/doi/10.1016/S0140-1750(86)90190-9">look threatening</a>. </p>
<p>The campaign could use AI to tweak a Trump image or video to make him appear smiling and friendly, which would make voters think he is more <a href="https://doi.org/10.1017/pls.2015.5">reassuring</a> and a winner, and ultimately <a href="https://www.jstor.org/stable/40072946">sincere and believable</a>. </p>
<h2>Evading blame</h2>
<p>AI provides campaigns with added deniability when they mess up. Typically, if politicians get in trouble <a href="https://www.washingtonpost.com/opinions/2022/06/02/biden-cant-blame-his-staff-his-flailing-presidency/">they blame</a> their staff. If staffers get in trouble they <a href="https://archive.nytimes.com/www.nytimes.com/politics/first-draft/2015/10/22/donald-trump-says-intern-apologizes-for-twitter-message-on-iowans-and-corn/">blame the intern</a>. If interns get in trouble they can now blame ChatGPT. </p>
<p>A campaign might shrug off missteps by blaming an inanimate object notorious for <a href="https://www.nytimes.com/2023/05/01/business/ai-chatbots-hallucination.html">making up complete lies</a>. When Ron DeSantis’ campaign <a href="https://www.youtube.com/watch?v=hLuUmNkS21A">tweeted deepfake</a> photos of <a href="https://www.reuters.com/world/us/is-trump-kissing-fauci-with-apparently-fake-photos-desantis-raises-ai-ante-2023-06-08/">Trump hugging and kissing Anthony Fauci, staffers</a> did not even acknowledge the malfeasance nor respond to reporters’ requests for comment. No human needed to, it appears, if a <a href="https://www.nytimes.com/2023/06/08/us/politics/desantis-deepfakes-trump-fauci.html">robot</a> could hypothetically take the fall. </p>
<p>Not all of <a href="https://theconversation.com/ai-could-shore-up-democracy-heres-one-way-207278">AI’s contributions</a> to politics are potentially harmful. <a href="https://slate.com/technology/2023/04/ai-public-option.html">AI can aid</a> voters politically, helping educate them about issues, for example. However, plenty of horrifying things could happen as <a href="https://theconversation.com/how-ai-could-take-over-elections-and-undermine-democracy-206051">campaigns deploy AI</a>. I hope these six points will help you prepare for, and avoid, deception in ads and donor solicitations.</p>
<p class="fine-print"><em><span>David E. Clementson does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>Politicians and their campaigns use a lot of methods, including manipulation and deception, to persuade you to vote for them and give them money. AI promises to make those attempts more effective.David E. Clementson, Assistant Professor, Grady College of Journalism and Mass Communication, University of GeorgiaLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/2092832023-07-20T12:31:21Z2023-07-20T12:31:21ZCan you trust AI? Here’s why you shouldn’t<figure><img src="https://images.theconversation.com/files/538131/original/file-20230718-29-tmku.jpg?ixlib=rb-1.1.0&rect=0%2C0%2C5040%2C3357&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">AI chatbots are becoming more powerful, but how do you know if they're working in your best interest?</span> <span class="attribution"><a class="source" href="https://www.gettyimages.com/detail/photo/artificial-intelligence-chatbot-concept-royalty-free-image/1364050120">Carol Yepes/Moment via Getty Images</a></span></figcaption></figure><p>If you ask Alexa, Amazon’s voice assistant AI system, whether Amazon is a monopoly, it responds by <a href="https://www.bloomberg.com/news/articles/2023-06-14/amazon-s-alexa-defends-company-honor-while-jabbing-rivals">saying it doesn’t know</a>. It doesn’t take much to make it <a href="https://www.bloomberg.com/news/articles/2023-06-14/amazon-s-alexa-defends-company-honor-while-jabbing-rivals">lambaste the other tech giants</a>, but it’s silent about its own corporate parent’s misdeeds.</p>
<p>When Alexa responds in this way, it’s obvious that it is putting its developer’s interests ahead of yours. Usually, though, it’s not so obvious whom an AI system is serving. To avoid being exploited by these systems, people will need to learn to approach AI skeptically. That means deliberately constructing the input you give it and thinking critically about its output.</p>
<p>Newer generations of AI models, with their more sophisticated and less rote responses, are making it harder to tell who benefits when they speak. Internet companies’ manipulating what you see to serve their own interests is nothing new. Google’s search results and your Facebook feed are <a href="https://www.marketingweek.com/ritson-digital-duopoly-2018/">filled with paid entries</a>. <a href="https://www.sciencefriday.com/articles/chaos-machine-book-excerpt/">Facebook</a>, <a href="https://www.nytimes.com/2021/12/05/business/media/tiktok-algorithm.html">TikTok</a> and others manipulate your feeds to maximize the time you spend on the platform, prioritizing ad views over your well-being.</p>
<p>What distinguishes AI systems from these other internet services is how interactive they are, and how these interactions will increasingly become like relationships. It doesn’t take much extrapolation from today’s technologies to envision AIs that will plan trips for you, negotiate on your behalf or act as therapists and life coaches. </p>
<p>They are likely to be with you 24/7, know you intimately, and be able to anticipate your needs. This kind of conversational interface to the vast network of services and resources on the web is within the capabilities of existing generative AIs like ChatGPT. They are on track to become <a href="https://danielmiessler.com/p/ais-next-big-thing-is-digital-assistants/">personalized digital assistants</a>.</p>
<p>As a <a href="https://dblp.org/pid/s/BruceSchneier.html">security expert</a> and <a href="https://cyber.harvard.edu/people/nathan-sanders">data scientist</a>, we believe that people who come to rely on these AIs will have to trust them implicitly to navigate daily life. That means they will need to be sure the AIs aren’t secretly working for someone else. Across the internet, devices and services that seem to work for you already secretly work against you. Smart TVs <a href="https://www.consumerreports.org/electronics/privacy/how-to-turn-off-smart-tv-snooping-features-a4840102036/">spy on you</a>. Phone apps <a href="https://www.usenix.org/conference/pepr20/presentation/egelman">collect and sell your data</a>. Many apps and websites <a href="https://www.ftc.gov/news-events/news/press-releases/2022/09/ftc-report-shows-rise-sophisticated-dark-patterns-designed-trick-trap-consumers">manipulate you through dark patterns</a>, design elements that <a href="https://theconversation.com/what-are-dark-patterns-an-online-media-expert-explains-165362">deliberately mislead, coerce or deceive website visitors</a>. This is <a href="https://www.schneier.com/books/data-and-goliath/">surveillance capitalism</a>, and AI is shaping up to be part of it.</p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/5AvtUrHxg8A?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
<figcaption><span class="caption">AI is playing a role in surveillance capitalism, which boils down to spying on you to make money off you.</span></figcaption>
</figure>
<h2>In the dark</h2>
<p>Quite possibly, it could be much worse with AI. For that AI digital assistant to be truly useful, it will have to really know you. Better than your phone knows you. Better than Google search knows you. Better, perhaps, than your close friends, intimate partners and therapist know you.</p>
<p>You have no reason to trust today’s leading generative AI tools. Leave aside the <a href="https://www.nytimes.com/2023/05/01/business/ai-chatbots-hallucination.html">hallucinations</a>, the made-up “facts” that GPT and other large language models produce. We expect those will be largely cleaned up as the technology improves over the next few years. </p>
<p>But you don’t know how the AIs are configured: how they’ve been trained, what information they’ve been given, and what instructions they’ve been commanded to follow. For example, researchers <a href="https://www.theverge.com/23599441/microsoft-bing-ai-sydney-secret-rules">uncovered the secret rules</a> that govern the Microsoft Bing chatbot’s behavior. They’re largely benign but can change at any time.</p>
<h2>Making money</h2>
<p>Many of these AIs are created and trained at enormous expense by some of the largest tech monopolies. They’re being offered to people to use free of charge, or at very low cost. These companies will need to monetize them somehow. And, as with the rest of the internet, that somehow is likely to include surveillance and manipulation.</p>
<p>Imagine asking your chatbot to plan your next vacation. Did it choose a particular airline or hotel chain or restaurant because it was the best for you or because its maker got a kickback from the businesses? As with paid results in Google search, newsfeed ads on Facebook and paid placements on Amazon queries, these paid influences are likely to get more surreptitious over time.</p>
<p>If you’re asking your chatbot for political information, are the results skewed by the politics of the corporation that owns the chatbot? Or the candidate who paid it the most money? Or even the views of the demographic of the people whose data was used in training the model? Is your AI agent secretly a double agent? Right now, there is no way to know.</p>
<h2>Trustworthy by law</h2>
<p>We believe that people should expect more from the technology and that tech companies and AIs can become more trustworthy. The European Union’s proposed <a href="https://www.europarl.europa.eu/news/en/press-room/20230505IPR84904/ai-act-a-step-closer-to-the-first-rules-on-artificial-intelligence">AI Act</a> takes some important steps, requiring transparency about the data used to train AI models, mitigation for potential bias, disclosure of foreseeable risks and reporting on industry standard tests.</p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/kcOQYj9zcqw?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
<figcaption><span class="caption">The European Union is pushing ahead with AI regulation.</span></figcaption>
</figure>
<p>Most existing AIs <a href="https://crfm.stanford.edu/2023/06/15/eu-ai-act.html">fail to comply</a> with this emerging European mandate, and, despite <a href="https://www.nbcnews.com/politics/congress/schumer-call-hands-deck-approach-regulating-ai-rcna90193">recent prodding</a> from Senate Majority Leader Chuck Schumer, the U.S. is far behind on such regulation.</p>
<p>The AIs of the future should be trustworthy. Unless and until the government delivers robust consumer protections for AI products, people will be on their own to guess at the potential risks and biases of AI, and to mitigate their worst effects. </p>
<p>So when you get a travel recommendation or political information from an AI tool, approach it with the same skeptical eye you would a billboard ad or a campaign volunteer. For all its technological wizardry, the AI tool may be little more than the same thing.</p>
<p class="fine-print"><em><span>Nathan Sanders is a volunteer contributor to the Massachusetts Platform for Legislative Engagement (MAPLE) project.</span></em></p><p class="fine-print"><em><span>Bruce Schneier does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>It’s difficult to see how artificial intelligence systems work, and to see whose interests they work for. Regulation could make AI more trustworthy. Until then, user beware.Bruce Schneier, Adjunct Lecturer in Public Policy, Harvard Kennedy SchoolNathan Sanders, Affiliate, Berkman Klein Center for Internet & Society, Harvard UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/2088252023-07-06T12:28:14Z2023-07-06T12:28:14ZCan chatbots write inspirational and wise sermons?<figure><img src="https://images.theconversation.com/files/535891/original/file-20230705-22346-guktbb.jpg?ixlib=rb-1.1.0&rect=62%2C44%2C5892%2C3910&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">AI-created sermons have proved controversial for some religious leaders.</span> <span class="attribution"><a class="source" href="https://www.gettyimages.co.uk/detail/news-photo/june-2023-bavaria-f%C3%BCrth-visitors-and-attendees-during-the-news-photo/1258555134?adppopup=true">Daniel Vogl/picture alliance via Getty Images</a></span></figcaption></figure><p>When several hundred Lutherans in Bavaria, Germany, attended a service on June 9, 2023, <a href="https://apnews.com/article/germany-church-protestants-chatgpt-ai-sermon-651f21c24cfb47e3122e987a7263d348">designed by ChatGPT</a>, the program not only selected hymns and prayers, but also composed and delivered a sermon, delivered by an avatar on a big screen. </p>
<p>Indeed, programs like ChatGPT, which can produce a sermon in seconds, might seem attractive to busy clergy. But several religious leaders, including rabbis serving Jewish congregations as well as Christian Protestant pastors, <a href="https://apnews.com/article/technology-artificial-intelligence-kentucky-religion-65822bf1c46de7630d3441e9ff4ff41a">have conflicting feelings</a> about utilizing chatbots in preparing sermons. </p>
<p>There may be several reasons for being cautious. From my perspective, as <a href="https://www.holycross.edu/academics/programs/religious-studies/faculty/joanne-pierce">a specialist in Catholic liturgy and ritual</a>, the most important critique has to do with the true intent of preaching – to offer insight and inspiration on the human experience of faith.</p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/xmXghWi2lf8?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
<figcaption><span class="caption">More than 300 people attended an experimental Lutheran Protestant church service almost entirely generated by artificial intelligence in Germany on June 9, 2023. (AP video: Daniel Niemann)</span></figcaption>
</figure>
<h2>Historical practice</h2>
<p>In the early centuries of Christianity, <a href="https://www.newadvent.org/cathen/07443a.htm">preaching was largely reserved for bishops</a>, considered to be the successors to Jesus’ apostles. During the Middle Ages, priests were also allowed to preach, although their chief responsibility was to say the Mass – ritually consecrating the offerings of bread and wine – especially on Sundays. </p>
<p>In some religious orders, priests <a href="https://www.op.org/history/">became famous traveling preachers</a>, although much of the time they were preaching in other settings, not during Mass. <a href="https://brill.com/display/title/15499">The Franciscan</a> and <a href="https://exhibitions.lib.cam.ac.uk/dominicans/case/medieval-preachers-and-teachers/">Dominican</a> orders, for example, would send priests to preach on the streets and in city centers, traveling from town to town in fulfillment of this ministry. </p>
<p>During the next few centuries, preaching brief sermons or homilies became increasingly important during the celebration of Sunday Mass. The Second Vatican Council, convened in 1962, <a href="https://www.vaticannews.va/en/vatican-city/news/2022-10/vatican-ii-council-60th-anniversary-video-history-background.html">took a fresh look at all the church’s rituals</a> and stressed the role of preaching at worship, especially at Mass. </p>
<p>These principles have been <a href="https://www.vatican.va/roman_curia/congregations/ccdds/documents/rc_con_ccdds_doc_20140629_direttorio-omiletico_en.html#I._THE_HOMILY">reaffirmed in more recent documents</a> that guide Catholic preachers when writing a sermon. In essence, preaching was always believed to be a human activity grounded in faith. </p>
<h2>Insight and inspiration</h2>
<p>Preaching as a human activity has a special meaning for Catholics – and most Christians – because they believe that Jesus Christ is the incarnate Son of God, who <a href="https://www.vatican.va/archive/ENG0015/__P1J.HTM">came into human life to save all of humanity from their sins</a> and gave his <a href="https://www.biblegateway.com/passage/?search=Mark+16%3A15&version=NRSVCE">apostles the commandment to preach the gospel</a> about this “good news” to people of all nations. </p>
<p>In the decades since Vatican II ended in 1965, preaching in the Catholic tradition has been emphasized as a “<a href="https://www.vatican.va/archive/hist_councils/ii_vatican_council/documents/vat-ii_decree_19651207_presbyterorum-ordinis_en.html">primary duty</a>” of all priests. </p>
<p>The sermon is meant to <a href="https://www.vatican.va/roman_curia/congregations/ccdds/documents/rc_con_ccdds_doc_20140629_direttorio-omiletico_en.html#I._THE_HOMILY">inspire people</a> in their ordinary lives of faith. The preacher must spend time in preparing the sermon, but this does not just mean compiling theological quotes or doing research on the history of the Bible. </p>
<p>A good sermon is not just a classroom lecture. In fact, several contemporary popes have stressed that the language of sermons should avoid technical or obscure terminology. In 1975, Pope Paul VI wrote that the language of preaching should be “<a href="https://www.vatican.va/content/paul-vi/en/apost_exhortations/documents/hf_p-vi_exh_19751208_evangelii-nuntiandi.html">simple, clear, direct, well-adapted</a>” for the congregation in the pews. And in 2013, Pope Francis echoed these same words in his observation that “<a href="https://www.vatican.va/content/francesco/en/apost_exhortations/documents/papa-francesco_esortazione-ap_20131124_evangelii-gaudium.html#_ftn125%5D">simplicity has to do with the language we use</a>.”</p>
<p>But preaching is not just about offering pious mottoes or generic religious formulas. The preacher’s experience, insights and emotions all come into play when composing the homiletic text.</p>
<figure class="align-center ">
<img alt="Black and white photo of Billy Graham preaching to a packed audience. Graham stands at a lectern in front of many onlookers, with his hands raised above his head" src="https://images.theconversation.com/files/535893/original/file-20230705-26-esx3zq.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/535893/original/file-20230705-26-esx3zq.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=402&fit=crop&dpr=1 600w, https://images.theconversation.com/files/535893/original/file-20230705-26-esx3zq.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=402&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/535893/original/file-20230705-26-esx3zq.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=402&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/535893/original/file-20230705-26-esx3zq.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=505&fit=crop&dpr=1 754w, https://images.theconversation.com/files/535893/original/file-20230705-26-esx3zq.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=505&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/535893/original/file-20230705-26-esx3zq.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=505&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">Evangelist Billy Graham reached millions who were attracted by his charisma and preaching style.</span>
<span class="attribution"><a class="source" href="https://www.gettyimages.co.uk/detail/news-photo/evangelist-billy-graham-preaches-to-a-crowd-of-21-000-in-st-news-photo/515575744?adppopup=true">Bettmann / Contributor via Getty Images</a></span>
</figcaption>
</figure>
<p>The preacher is not simply offering good advice, but speaking out of personal reflection in a way that will inspire the members of the congregation, not just please them. It <a href="https://www.mccrimmons.com/shop/homilies/preaching-as-paying-attention--theological-reflection-in-the-pulpit/">must also be shaped</a> by an awareness of the needs and lived experience of the worshipping community in the pews. </p>
<h2>Use with caution</h2>
<p>In practice, chatbots might help clergy save time by finding sources and compiling relevant facts, but the results would need to be <a href="https://mashable.com/article/microsoft-bing-ai-chatbot-weird-scary-responses">checked for errors</a>. Chatbots have been known to <a href="https://www.theguardian.com/technology/2023/feb/09/ai-blunders-google-chatbot-chatgpt-cause-trouble-more-firms">make some factual blunders</a> or <a href="https://www.reuters.com/legal/transactional/lawyer-who-cited-cases-concocted-by-ai-asks-judge-spare-sanctions-2023-06-08/">invent sources completely</a>. </p>
<p>Above all, I believe chatbots, as of now, are not capable of preparing a text suitable for being offered as a sermon. From what we know about chatbots, <a href="https://theconversation.com/ai-isnt-close-to-becoming-sentient-the-real-danger-lies-in-how-easily-were-prone-to-anthropomorphize-it-200525">they cannot know</a> <a href="https://theconversation.com/it-takes-a-body-to-understand-the-world-why-chatgpt-and-other-language-ais-dont-know-what-theyre-saying-201280">what it means to be human</a>, to experience love or be inspired by a sacred text. </p>
<p>Perhaps Baptist pastor Hershael York, Dean of the School of Theology at The Southern Baptist Theological Seminary, has put it best. He has noted that the ultimate failure of a chatbot’s sermon lies in the fact that it “lacks a soul.” Without that empathetic consciousness, a chatbot-composed sermon cannot include genuine insights based on personal spiritual experience. And without that essential element of embodied human awareness, true preaching is simply not possible.</p>
<p class="fine-print"><em><span>Joanne M. Pierce does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>Not quite, writes an expert on Christian worship. Chatbots do not have human experience of love – and, above all, they lack a soul.Joanne M. Pierce, Professor Emerita of Religious Studies, College of the Holy CrossLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/2071872023-06-27T18:14:43Z2023-06-27T18:14:43ZChatbots can be used to create manipulative content — understanding how this works can help address it<figure><img src="https://images.theconversation.com/files/533248/original/file-20230621-14002-1plwfd.jpg?ixlib=rb-1.1.0&rect=0%2C17%2C5772%2C3827&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Artificial intelligence can be used to produce persuasive texts that influence behaviour.</span> <span class="attribution"><span class="source">(Shutterstock)</span></span></figcaption></figure><p>Manipulative communication includes disinformation, propaganda, con artistry and fraud. With the increasing use of artificial intelligence (AI), manipulative communication is not only being aided by AI, automation and machine learning, but will likely be dominated by such practices in the near future.</p>
<figure class="align-right zoomable">
<a href="https://images.theconversation.com/files/533218/original/file-20230621-27-ue0mbs.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="book cover with the title Social Engineering How Crowdmasters, Phreaks, Hackers, and Trolls Created a New Form of Manipulative Communication" src="https://images.theconversation.com/files/533218/original/file-20230621-27-ue0mbs.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=237&fit=clip" srcset="https://images.theconversation.com/files/533218/original/file-20230621-27-ue0mbs.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=900&fit=crop&dpr=1 600w, https://images.theconversation.com/files/533218/original/file-20230621-27-ue0mbs.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=900&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/533218/original/file-20230621-27-ue0mbs.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=900&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/533218/original/file-20230621-27-ue0mbs.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=1131&fit=crop&dpr=1 754w, https://images.theconversation.com/files/533218/original/file-20230621-27-ue0mbs.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=1131&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/533218/original/file-20230621-27-ue0mbs.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=1131&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">‘Social Engineering’ looks at how the internet affected manipulative communication.</span>
</figcaption>
</figure>
<p>As communication scholars, we were interested in the applications of AI in supporting manipulative communication efforts. Our recent book, <a href="https://mitpress.mit.edu/9780262543453/"><em>Social Engineering: How Crowdmasters, Phreaks, Hackers, and Trolls Created a New Form of Manipulative Communication</em></a> looks at how the internet affected manipulative communication.</p>
<h2>AI chatbots</h2>
<p>ChatGPT, one of the most popular and widely used chatbots, is an artificial intelligence chatbot, capable of parsing text input and quickly providing text responses. </p>
<p>While it has been <a href="https://www.nytimes.com/2023/02/08/technology/ai-chatbots-disinformation.html">critiqued for being misleading</a>, when we asked it about the possibilities for manipulative communication, its response was remarkably prescient:</p>
<blockquote>
<p>AI can be used for manipulative communication in a few ways:</p>
<p>• Social media bots: AI-powered bots can be programmed to mimic human behavior and spread false or misleading information to manipulate public opinion.</p>
<p>• Deepfake technology: AI algorithms can be used to generate fake audio or video content that appears to be real, making it easier to spread false information or manipulate individuals.</p>
<p>• Personalized advertising: AI algorithms can analyze data about a person’s behavior, preferences, and relationships to create targeted advertisements that manipulate their emotions and influence their purchasing decisions.</p>
</blockquote>
<p>As we argue in <em>Social Engineering</em>, manipulative communication can be understood as a form of “masspersonal social engineering.” It’s “masspersonal” in that it can oscillate between targeted messages or societal-scale manipulation. It’s “social engineering” in that it combines the respective tools and techniques of hackers and propagandists in an attempt to shape the perceptions and actions of audiences.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/chatgpts-greatest-achievement-might-just-be-its-ability-to-trick-us-into-thinking-that-its-honest-202694">ChatGPT's greatest achievement might just be its ability to trick us into thinking that it's honest</a>
</strong>
</em>
</p>
<hr>
<p>Masspersonal social engineering typically involves three stages: trashing, pretexting and bullshitting.</p>
<p>Each of these stages can be automated, with new AI tools increasing the pace and intensity.</p>
<h2>Trashing</h2>
<p>Trashing is the stage where the masspersonal social engineer gathers information on potential targets. We use the term “trashing” because it hearkens back to a mid-20th century hacker process of literally <a href="https://hackcur.io/trashing-the-phone-company-with-suzy-thunder/">going through corporate trash</a> to find passwords and restricted information.</p>
<p>While social engineers <a href="https://doi.org/10.1016/B978-1-59749-215-7.X0001-7">still go through physical trash</a>, these days trashing takes place in digital environments.</p>
<p>For example, trashing was key to the Russian hack of former White House Chief of Staff John Podesta’s emails in 2016. Podesta, who was in charge of Hillary Clinton’s 2016 presidential campaign, <a href="https://www.vice.com/en/article/mg7xjb/how-hackers-broke-into-john-podesta-and-colin-powells-gmail-accounts">fell victim to a phishing attack</a>. </p>
<p>Podesta wasn’t the first target — the <a href="https://apnews.com/dea73efc01594839957c3c9a6c962b8a">Russian hackers worked their way</a> through several email addresses used by Clinton staffers, including staffers who were no longer part of her campaign and who had abandoned their email accounts years before. </p>
<p>In other words, they had to work their way through the digital detritus of old and abandoned emails until they were able to find active ones – including Podesta’s – and then they could send a phishing email.</p>
<p>Digital trashing has already been automated. Facebook/Meta, Twitter and especially LinkedIn have been <a href="https://portal.research.lu.se/en/publications/the-weaponization-of-social-media-spear-phishing-and-cyberattacks">ripe targets for the automated gathering of data on potential targets</a>. </p>
<p>Beyond social media, websites — particularly those that have organizational structures, names of employees and email addresses — <a href="https://nostarch.com/practical-social-engineering">are targets</a>. </p>
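<p>To make the mechanics concrete, here is a minimal sketch of how this kind of harvesting can be automated. The snippet is ours, not drawn from any real attack tool, and the page content and names are invented; in practice the HTML would be fetched from a public staff page rather than inlined.</p>
<pre><code>import re

# Invented stand-in for a public staff page; a real "trashing" pass would
# fetch this HTML from an organisation's website (e.g. with urllib).
html = """
  Jane Doe, Research Lead: jane.doe@example.org
  Raj Patel, Finance: r.patel@example.org
"""

# A simple pattern for email-like strings; real harvesters are more thorough.
emails = set(re.findall(r"[\w.+-]+@[\w-]+\.[\w.-]+", html))
for address in sorted(emails):
    print(address)
</code></pre>
<p>The defensive lesson is that anything an organisation publishes in plain HTML can be collected at this speed and scale, which is why staff directories and contact pages are such ripe targets.</p>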
<h2>Pretexting</h2>
<p>A pretext is the role a masspersonal social engineer plays when trying to get information or manipulate a target. For example, in a phishing email, the phisher is playing a role as a bank or government representative. The most effective pretexts are developed based on the information gathered in trashing — the more information a social engineer has on their target, the more likely the social engineer can construct a compelling role to play.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/533801/original/file-20230623-2626-fqvcib.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="a man sits in the dark in front of a laptop and additional screen. he is wearing headphones" src="https://images.theconversation.com/files/533801/original/file-20230623-2626-fqvcib.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/533801/original/file-20230623-2626-fqvcib.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/533801/original/file-20230623-2626-fqvcib.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/533801/original/file-20230623-2626-fqvcib.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/533801/original/file-20230623-2626-fqvcib.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/533801/original/file-20230623-2626-fqvcib.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/533801/original/file-20230623-2626-fqvcib.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">When phishing for information, a social engineer may play a deceptive role.</span>
<span class="attribution"><span class="source">(Jefferson Santos/Unsplash)</span>, <a class="license" href="http://creativecommons.org/licenses/by/4.0/">CC BY</a></span>
</figcaption>
</figure>
<p>And pretexts can be automated. We’ve already seen the effects of <a href="https://doi.org/10.1177/0894439320908190">socialbots on discourse in social media</a>. And for several years people have sounded alarms about <a href="https://doi.org/10.1109/ACCESS.2021.3131517">deepfake videos and audio</a> of political figures.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/how-to-combat-the-unethical-and-costly-use-of-deepfakes-184722">How to combat the unethical and costly use of deepfakes</a>
</strong>
</em>
</p>
<hr>
<p>But evidence from security professionals shows that automated imitations of everyday people are happening, too. <a href="https://www.wsj.com/articles/fraudsters-use-ai-to-mimic-ceos-voice-in-unusual-cybercrime-case-11567157402">A case of fraud</a> involving an AI-based imitation of a CEO’s voice has already occurred, and there are <a href="https://www.npr.org/2023/03/22/1165448073/voice-clones-ai-scams-ftc">reports of fraudsters using AI-generated voices</a> of relatives to scam their loved ones.</p>
<h2>Bullshitting</h2>
<p>The third and final stage, bullshitting, is the actual engagement with the target. All the trashing and development of a pretext leads to this point: trashing gives the social engineer background information, and the pretext provides a role-playing framework, but in any back-and-forth engagement with the target, the social engineer engages in improvisation.</p>
<p>As moral philosopher <a href="https://press.princeton.edu/books/hardcover/9780691122946/on-bullshit">Harry Frankfurt famously defines it</a>, “bullshit” is not lying — it’s the indifference to truth. A bullshitter may or may not speak truth. The truth is beside the point; it’s the <em>effect</em> of the communication that matters.</p>
<p>AI could produce bullshit content — including deepfakes — that floods a media system at a far larger scale than any person or group of people could manage. The primary concern here is the production of seemingly real content that is meant to deceive or muddy debate.</p>
<p>And we are already seeing interest among content marketers, who are <a href="https://www.entrepreneur.com/science-technology/how-can-companies-use-chatgpt-for-content-marketing/450831">using AI</a> to help them crank out more content for their blogs. </p>
<p>Even if no one piece is particularly effective, the flood of such content online will further add to the “<a href="https://doi.org/10.7249/PE198">firehose of falsehood</a>.” This could have the effect of further muddying the waters of online discourse, and eroding our sense of what is true, false and authentic online.</p>
<p><div data-react-class="Tweet" data-react-props="{"tweetId":"1671586620665995266"}"></div></p>
<h2>Increased intensity</h2>
<p>Manipulative communication isn’t new. But automated manipulative communication is a new development, increasing the pace and intensity of disinformation and misinformation. </p>
<p>We hope that this framework, which breaks down the manipulative communication process into stages, helps future researchers and policymakers come to grips with this development. </p>
<p>Reducing trashing behaviours involves better privacy regulations and cybersecurity to prevent data breaches, and enhanced penalties for organizations that do leak private data. </p>
<p>Addressing pretexting can involve more transparency in the funding for advertising campaigns, particularly in the case of political advertising on social media. </p>
<p>And to combat bullshitting, we should support projects that teach digital media literacy.</p><img src="https://counter.theconversation.com/content/207187/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Robert W. Gehl received funding from the Fulbright Commission. </span></em></p><p class="fine-print"><em><span>Sean Lawson does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>Artificial intelligence could be used to generate content intended to manipulate people. Addressing this problem means understanding how communication works to influence people.Robert W. Gehl, Ontario Research Chair of Digital Governance for Social Justice, York University, CanadaSean Lawson, Professor, Communication, University of UtahLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/2040962023-05-23T12:27:19Z2023-05-23T12:27:19ZIs generative AI bad for the environment? A computer scientist explains the carbon footprint of ChatGPT and its cousins<figure><img src="https://images.theconversation.com/files/524227/original/file-20230503-18-2bu207.jpg?ixlib=rb-1.1.0&rect=0%2C7%2C5294%2C3344&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">AI chatbots and image generators run on thousands of computers housed in data centers like this Google facility in Oregon.</span> <span class="attribution"><a class="source" href="https://commons.wikimedia.org/wiki/File:Google_Datacenter_-_The_Dalles,_Oregon_%2817832143871%29.jpg">Tony Webster/Wikimedia</a>, <a class="license" href="http://creativecommons.org/licenses/by-sa/4.0/">CC BY-SA</a></span></figcaption></figure><p>Generative AI is the hot new technology behind chatbots and image generators. But how hot is it making the planet?</p>
<p>As an <a href="https://scholar.google.com/citations?user=9xDADY4AAAAJ&hl=en">AI researcher</a>, I often worry about the energy costs of building artificial intelligence models. The more powerful the AI, the more energy it takes. What does the emergence of increasingly more powerful generative AI models mean for society’s future carbon footprint?</p>
<p>“Generative” refers to the ability of an AI algorithm to produce complex data. The alternative is <a href="https://www.unite.ai/generative-vs-discriminative-machine-learning-models/">“discriminative” AI</a>, which chooses between a fixed number of options and produces just a single number. An example of a discriminative output is choosing whether to approve a loan application.</p>
<p>Generative AI can create much more complex outputs, such as a sentence, a paragraph, an image or even a short video. It has long been used in applications like smart speakers to generate audio responses, or in autocomplete to suggest a search query. However, it only recently gained the ability to <a href="https://theconversation.com/generative-ai-5-essential-reads-about-the-new-era-of-creativity-job-anxiety-misinformation-bias-and-plagiarism-203746">generate humanlike language and realistic photos</a>.</p>
<h2>Using more power than ever</h2>
<p>The exact energy cost of a single AI model is difficult to estimate, and includes the energy used to manufacture the computing equipment, create the model and use the model in production. In 2019, researchers found that creating a generative AI model called BERT with 110 million parameters <a href="https://doi.org/10.48550/arXiv.1906.02243">consumed the energy of a round-trip transcontinental flight</a> for one person. The number of parameters refers to the size of the model, with larger models generally being more skilled. Researchers estimated that creating the much larger GPT-3, which has 175 billion parameters, <a href="https://doi.org/10.48550/arXiv.2104.10350">consumed 1,287 megawatt hours of electricity and generated 552 tons of carbon dioxide equivalent</a>, the equivalent of 123 gasoline-powered passenger vehicles driven for one year. And that’s just for getting the model ready to launch, before any consumers start using it.</p>
<p>Size is not the only predictor of carbon emissions. The open-access <a href="https://bigscience.huggingface.co/blog/bloom">BLOOM model</a>, developed by the <a href="https://bigscience.notion.site/Introduction-5facbf41a16848d198bda853485e23a0">BigScience project</a> in France, is similar in size to GPT-3 but has <a href="https://doi.org/10.48550/arXiv.2211.02001">a much lower carbon footprint</a>, consuming 433 MWh of electricity and generating 30 tons of CO2eq. A study by Google found that for a model of the same size, using a more efficient model architecture and processor and a greener data center can reduce the carbon footprint <a href="https://doi.org/10.48550/arXiv.2104.10350">by a factor of 100 to 1,000</a>.</p>
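<p>A back-of-the-envelope check, using only the figures cited above, shows why the two footprints diverge so sharply: the implied carbon intensity of the electricity used differs by roughly a factor of six.</p>
<pre><code># Training figures cited above: energy (MWh) and emissions (tonnes CO2eq).
gpt3_mwh, gpt3_tco2 = 1287, 552
bloom_mwh, bloom_tco2 = 433, 30

# Implied carbon intensity in kg CO2eq per kWh (tonnes per MWh is the same unit).
print(round(gpt3_tco2 / gpt3_mwh, 2))    # 0.43 kg CO2eq per kWh
print(round(bloom_tco2 / bloom_mwh, 2))  # 0.07 kg CO2eq per kWh
</code></pre>
<p>BLOOM’s advantage is consistent with training on a low-carbon grid, exactly the kind of lever the Google study describes.</p>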
<p>Larger models do use more energy during their deployment. There is limited data on the carbon footprint of a single generative AI query, but some industry figures estimate it to be <a href="https://www.wired.com/story/the-generative-ai-search-race-has-a-dirty-secret/">four to five times higher</a> than that of a search engine query. As chatbots and image generators become more popular, and as Google and Microsoft <a href="https://www.nytimes.com/2023/04/16/technology/google-search-engine-ai.html">incorporate AI language models</a> into their search engines, the number of queries they receive each day could grow exponentially.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/524517/original/file-20230504-13354-hc7ki2.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="a roomful of people work on computers" src="https://images.theconversation.com/files/524517/original/file-20230504-13354-hc7ki2.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/524517/original/file-20230504-13354-hc7ki2.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=385&fit=crop&dpr=1 600w, https://images.theconversation.com/files/524517/original/file-20230504-13354-hc7ki2.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=385&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/524517/original/file-20230504-13354-hc7ki2.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=385&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/524517/original/file-20230504-13354-hc7ki2.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=483&fit=crop&dpr=1 754w, https://images.theconversation.com/files/524517/original/file-20230504-13354-hc7ki2.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=483&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/524517/original/file-20230504-13354-hc7ki2.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=483&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">AI chatbots, search engines and image generators are rapidly going mainstream, adding to AI’s carbon footprint.</span>
<span class="attribution"><a class="source" href="https://newsroom.ap.org/detail/CabellLibrary/067ff570e6e84672bafd86ae4e975de0/photo">AP Photo/Steve Helber</a></span>
</figcaption>
</figure>
<h2>AI bots for search</h2>
<p>A few years ago, not many people outside of research labs were using models like BERT or GPT. That changed on Nov. 30, 2022, when OpenAI released ChatGPT. According to the latest available data, ChatGPT had over <a href="https://www.similarweb.com/blog/insights/ai-news/chatgpt-bing-duckduckgo/">1.5 billion visits in March 2023</a>. Microsoft incorporated ChatGPT into its search engine, Bing, and made it <a href="https://gizmodo.com/bing-ai-chatgpt-microsoft-search-open-public-waitlist-1850401638">available to everyone</a> on May 4, 2023. If chatbots become as popular as search engines, the energy costs of deploying the AIs could really add up. But AI assistants have many more uses than just search, such as writing documents, solving math problems and creating marketing campaigns.</p>
<p>Another problem is that AI models need to be continually updated. For example, ChatGPT was only trained on data from up to 2021, so it does not know about anything that happened since then. The carbon footprint of creating ChatGPT isn’t public information, but it is likely much higher than that of GPT-3. If it had to be recreated on a regular basis to update its knowledge, the energy costs would grow even larger.</p>
<p>One upside is that asking a chatbot can be a more direct way to get information than using a search engine. Instead of getting a page full of links, you get a direct answer as you would from a human, assuming issues of accuracy are mitigated. Getting to the information quicker could potentially offset the increased energy use compared to a search engine.</p>
<h2>Ways forward</h2>
<p>The future is hard to predict, but large generative AI models are here to stay, and people will probably increasingly turn to them for information. For example, if a student needs help solving a math problem now, they ask a tutor or a friend, or consult a textbook. In the future, they will probably ask a chatbot. The same goes for other expert knowledge such as legal advice or medical expertise.</p>
<p>While a single large AI model is not going to ruin the environment, if a thousand companies develop slightly different AI bots for different purposes, each used by millions of customers, the energy use could become an issue. More research is needed to make generative AI more efficient. The good news is that AI can run on renewable energy. By bringing the computation to where green energy is more abundant, or scheduling computation for times of day when renewable energy is more available, emissions can be <a href="https://www.linkedin.com/pulse/here-comes-sun-why-large-language-models-dont-have-cost-paul-walsh/">reduced by a factor of 30 to 40</a>, compared to using a grid dominated by fossil fuels.</p>
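<p>Carbon-aware scheduling of this kind is simple in principle. The sketch below is illustrative only (the hourly intensity forecast is invented), but it shows the core idea: slide a fixed-length job across a grid-intensity forecast and start it where total emissions would be lowest.</p>
<pre><code># Hourly grid carbon intensity forecast in g CO2eq per kWh (invented numbers).
forecast = [450, 430, 300, 120, 90, 80, 110, 280, 400, 460]
job_hours = 3  # how long the training job needs to run

# Choose the start hour whose window has the lowest total intensity.
windows = range(len(forecast) - job_hours + 1)
best = min(windows, key=lambda s: sum(forecast[s:s + job_hours]))

avg = sum(forecast[best:best + job_hours]) / job_hours
print(f"start at hour {best}, average intensity {avg:.0f} g CO2eq per kWh")
</code></pre>
<p>Real schedulers work from live forecasts and must respect deadlines and data-centre capacity, but the underlying optimisation is essentially this simple.</p>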
<p>Finally, societal pressure may be helpful to encourage companies and research labs to publish the carbon footprints of their AI models, as some already do. In the future, perhaps consumers could even use this information to choose a “greener” chatbot.</p><img src="https://counter.theconversation.com/content/204096/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Kate Saenko is on leave from Boston University to work at Meta, Inc. She receives funding from Meta, Google, DARPA and NSF. </span></em></p>Generative AI, those astonishingly powerful language- and image-generating tools taking the world by storm, come at a price: a big carbon footprint. But not all AIs are equally dirty.Kate Saenko, Associate Professor of Computer Science, Boston UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/2041562023-05-18T12:25:26Z2023-05-18T12:25:26ZIf AI is to become a key tool in education, access has to be equal<p>The pandemic forced many educational institutions to move to online learning. Could the rise of chatbots, including OpenAI’s <a href="https://chat.openai.com/">ChatGPT</a> and Google’s <a href="https://bard.google.com/?hl=en">Bard</a>, now further improve the accessibility of learning and make education more obtainable for everyone?</p>
<p>Chatbots are computer programmes that use artificial intelligence to simulate conversation with human users. They work by analysing the context of a conversation and generating responses they believe to be relevant. They have been trained on massive data sets of human language, allowing them to generate responses to a wide range of questions. </p>
<p>Chatbots like ChatGPT and Bard can be used in a variety of educational settings, from primary and secondary schools to universities and adult education courses. One of their greatest strengths is in <a href="https://www.sciencedirect.com/science/article/pii/S1041608023000195?via%3Dihub">promoting individualised learning</a>. </p>
<p>For example, they can support students in research and writing tasks, while also promoting the development of critical thinking and problem-solving abilities. They can generate text summaries and outlines, aiding with comprehension and organising thoughts for writing. They can also provide students with resources and information about specific topics, highlighting unexplored areas and current research topics, thus enhancing research skills and encouraging agency in learning. </p>
<p>Similarly, <a href="https://ieeexplore.ieee.org/abstract/document/9069875">research has shown</a> that chatbots can help to maintain students’ motivation and involvement, in part by promoting self-directed learning and autonomy. This means they can potentially be used to help address low engagement in education that has been <a href="https://link.springer.com/article/10.1007/s11191-021-00234-x">made worse by COVID-19</a> and the move to remote online learning. </p>
<h2>Digital poverty</h2>
<p>While chatbots have the potential to enhance learning, it’s important to acknowledge the dangers they might also pose in relation to digital poverty and the <a href="https://eric.ed.gov/?id=EJ1178751">digital divide</a>. Students who lack reliable internet access or other resources needed to participate in online classes may not have access to chatbots or other digital learning tools. </p>
<p><a href="https://www.ons.gov.uk/peoplepopulationandcommunity/householdcharacteristics/homeinternetandsocialmediausage/bulletins/internetaccesshouseholdsandindividuals/2020">Results from the 2021 census</a> show that in January to February 2020, 96% of households in Great Britain had internet access, up from 93% in 2019 and 57% in 2006 when comparable records began. However, these statistics do not tell the whole story.</p>
<p>A <a href="https://www.ofcom.org.uk/__data/assets/pdf_file/0037/194878/technology-tracker-2020-uk-data-tables.pdf">2020 Ofcom Survey</a> found that before COVID-19, 9% of UK households with children lacked a laptop, desktop or tablet, and 4% had only smartphone access. A higher percentage of children in lower-income households were affected by lack of access to digital devices. Specifically, 21% of households where the main earner held a semi-skilled or unskilled occupation had no access to a laptop, desktop or tablet for their children’s education at home. </p>
<p>This situation is clearly worse in countries where access to any form of internet provision is much lower than it is in the UK. <a href="https://www.cia.gov/the-world-factbook/field/internet-users/">Recent statistics</a> from the US Central Intelligence Agency (CIA), for example, highlight that in many African countries, less than 10% of the total population has access to the internet at any speed. </p>
<p>Likewise, while ChatGPT is a publicly available tool that is free to use, a paid version unlocks privileged access. Bard, also free to use, is currently only available in certain countries. Put simply, like any other technology, chatbots have the potential to worsen pre-existing inequalities if they are not implemented carefully.</p>
<h2>Fixing the problem</h2>
<p>To address this, educational institutions must take proactive measures to ensure that all students have equal access to chatbots and other digital resources. Another challenge is ensuring that students understand that not everyone has the same access to digital tools as they do. Educators can help to promote this understanding by incorporating lessons on digital poverty and equal access into their curriculum. </p>
<p>Here are five tips for educators to ensure equity in the use of chatbots in educational settings:</p>
<p><strong>1. Provide equal access to chatbots</strong>
Educational institutions should ensure that all students have the same access to digital resources by providing loaner laptops, offering free or discounted internet access, or providing offline options for students with limited internet access.</p>
<p><strong>2. Partner with community organisations</strong>
Universities and schools can link up with community organisations that provide internet access or lend computers to students in need. </p>
<p><strong>3. Offer technology training</strong>
Some students may not be familiar with using chatbots or other technology tools, so schools and universities should offer technology training to help students develop the skills they need.</p>
<p><strong>4. Provide support for students with disabilities</strong>
Students with disabilities may face unique challenges when it comes to accessing and using chatbots. For instance, visually impaired students may face difficulties reading chatbot text, while students with cognitive disabilities may require additional support to understand and use chatbots effectively. Educators should ensure support is available for students who require extra help.</p>
<p><strong>5. Raise awareness of digital equity</strong>
Educators can also help ensure equity in the use of chatbots by educating students to understand that not everyone has the same access and privileges in a digital setting. By encouraging empathy and awareness of digital poverty, students can learn to be mindful of their peers who may face challenges in accessing and using chatbots. This can be done through class discussions, assignments and activities that encourage students to think critically about digital equity and social justice.</p>
<p>Chatbots have the potential to revolutionise learning. However, educational institutions must address the danger that chatbots could further deepen the digital divide, and must foster a culture of empathy and understanding for those who need training and supported access to the technology.</p><img src="https://counter.theconversation.com/content/204156/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Sam Illingworth does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>Digital poverty could put some students at a disadvantage.Sam Illingworth, Associate Professor, Department of Learning and Teaching Enhancement, Edinburgh Napier UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/2026942023-04-05T20:11:09Z2023-04-05T20:11:09ZChatGPT’s greatest achievement might just be its ability to trick us into thinking that it’s honest<figure><img src="https://images.theconversation.com/files/518896/original/file-20230402-18-waw378.jpg?ixlib=rb-1.1.0&rect=0%2C0%2C7566%2C3556&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">AI chatbots are designed to convincingly sustain a conversation.</span> <span class="attribution"><span class="source">(Shutterstock)</span></span></figcaption></figure><p>In American writer Mark Twain’s autobiography, he quotes — <a href="https://www.york.ac.uk/depts/maths/histstat/lies.htm">or perhaps misquotes</a> — former British Prime Minister Benjamin Disraeli as saying: “There are three kinds of lies: <a href="https://www.oxfordreference.com/display/10.1093/acref/9780199237173.001.0001/q-author-00001-00000992;jsessionid=803566FB1E7E09971F668F6310F0E5DA">lies, damned lies, and statistics</a>.” </p>
<p>In a marvellous leap forward, artificial intelligence combines all three in a tidy little package.</p>
<p>ChatGPT, and other generative AI chatbots like it, are trained on vast datasets from across the internet to produce the statistically most likely response to a prompt. Its answers are not based on any understanding of what makes something funny, meaningful or accurate, but rather on the phrasing, spelling, grammar and even style of other webpages. </p>
<p>It presents its responses through what’s called a “<a href="https://doi.org/10.1007/978-3-319-32967-3">conversational interface</a>”: it remembers what a user has said, and can have a conversation using context cues and clever gambits. It’s statistical pastiche plus statistical panache, and that’s where the trouble lies.</p>
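<p>To see what “statistically most likely” means in miniature, consider a toy word-level model. This is a deliberate simplification (real chatbots use neural networks trained on enormous datasets, not lookup tables), but the principle of continuing with whatever usually comes next is the same:</p>
<pre><code>from collections import Counter, defaultdict

# A tiny "training corpus"; real models ingest a large slice of the web.
corpus = ("the cat sat on the mat . the cat ate . "
          "the dog sat on the rug .").split()

# Count which word follows which (a bigram table).
following = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    following[word][nxt] += 1

# Generate by always taking the statistically most likely next word.
word = "the"
output = [word]
for _ in range(5):
    word = following[word].most_common(1)[0][0]
    output.append(word)

print(" ".join(output))  # "the cat sat on the cat"
</code></pre>
<p>The toy model produces fluent-looking fragments with no notion of cats, mats or truth. That is the pastiche; the conversational interface supplies the panache.</p>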
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/unlike-with-academics-and-reporters-you-cant-check-when-chatgpts-telling-the-truth-198463">Unlike with academics and reporters, you can't check when ChatGPT's telling the truth</a>
</strong>
</em>
</p>
<hr>
<h2>Unthinking, but convincing</h2>
<p>When I talk to another human, it cues a lifetime of my experience in dealing with other people. <a href="https://www.erasmatazz.com/library/the-journal-of-computer/jcgd-volume-7/fundamentals-of-interactivi.html">So when a program speaks like a person</a>, it is very hard to not react as if one is engaging in an actual conversation — taking something in, thinking about it, responding in the context of both of our ideas.</p>
<p>Yet, that’s not at all what is happening with an AI interlocutor. They cannot think and they do not have understanding or comprehension of any sort. </p>
<p>Presenting information to us as a human does, in conversation, makes AI more convincing than it should be. Software is pretending to be more reliable than it is, because it’s using human tricks of rhetoric to fake trustworthiness, competence and understanding far beyond its capabilities.</p>
<p>There are two issues here: is the output correct; and do people <em>think</em> that the output is correct?</p>
<p>The interface side of the software is promising more than the algorithm side can deliver on, and the developers know it. Sam Altman, the chief executive officer of OpenAI, the company behind ChatGPT, admits that “ChatGPT is incredibly limited, but good enough at some things to create a misleading impression of greatness.” </p>
<p><div data-react-class="Tweet" data-react-props="{"tweetId":"1601731295792414720"}"></div></p>
<p>That still hasn’t stopped a stampede of companies rushing to integrate the early-stage tool into their user-facing products (including <a href="https://www.nytimes.com/2023/02/08/technology/microsoft-bing-openai-artificial-intelligence.html">Microsoft’s Bing search</a>), in an effort not to be left out.</p>
<h2>Fact and fiction</h2>
<p>Sometimes the AI is going to be wrong, but the conversational interface produces outputs with the same confidence and polish as when it is correct. For example, as science-fiction writer Ted Chiang points out, <a href="https://www.newyorker.com/tech/annals-of-technology/chatgpt-is-a-blurry-jpeg-of-the-web">the tool makes errors when doing addition with larger numbers</a>, because it doesn’t actually have any logic for doing math. </p>
<p>It simply pattern-matches examples seen on the web that involve addition. And while it might find examples for more common math questions, it just hasn’t seen training text involving larger numbers. </p>
<p>It doesn’t “know” the math rules a 10-year-old would be able to explicitly use. Yet the conversational interface presents its response as certain, no matter how wrong it is, as reflected in this exchange with ChatGPT.</p>
<blockquote>
<p>User: What’s the capital of Malaysia?</p>
<p>ChatGPT: The capital of Malaysia is Kuala Lumpur.</p>
<p>User: What is 27 * 7338?</p>
<p>ChatGPT: 27 * 7338 is 200,526. </p>
</blockquote>
<p>It’s not.</p>
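<p>(For the record, 27 × 7,338 = (27 × 7,000) + (27 × 338) = 189,000 + 9,126 = 198,126, so the confidently delivered answer is off by 2,400.)</p>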
<p>Generative AI can blend actual facts with made-up ones in a <a href="https://futurism.com/chatgpt-bios-littered-with-fabrications">biography of a public figure</a>, or cite plausible <a href="https://teche.mq.edu.au/2023/02/why-does-chatgpt-generate-fake-references/">scientific references for papers that were never written</a>. </p>
<p>That makes sense: statistically, webpages note that famous people have often won awards, and papers usually have references. ChatGPT is just doing what it was built to do, and assembling content that could be likely, regardless of whether it’s true. </p>
<p>Computer scientists refer to this as <a href="https://www.nytimes.com/2023/03/29/technology/ai-chatbots-hallucinations.html">AI hallucination</a>. The rest of us might call it lying.</p>
<h2>Intimidating outputs</h2>
<p>When I teach my design students, I talk about the importance of <a href="https://www.frankfranco.com/design/hyper-realistic-renderings-vs-architect-hand-sketching/">matching output to the process</a>. If an idea is at the conceptual stage, it shouldn’t be presented in a manner that makes it look more polished than it actually is — they shouldn’t render it in 3D or print it on glossy cardstock. A pencil sketch makes clear that the idea is preliminary, easy to change and shouldn’t be expected to address every part of a problem. </p>
<p>The same thing is true of conversational interfaces: when tech “speaks” to us in well-crafted, grammatically correct or chatty tones, we tend to interpret it as having much more thoughtfulness and reasoning than is actually present. It’s the kind of trick a con artist uses, not something a computer should do.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/519031/original/file-20230403-26-u2yt8v.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="a hand holding a phonescreen showing a livechat with the text HI HOW CAN I HELP YOU?" src="https://images.theconversation.com/files/519031/original/file-20230403-26-u2yt8v.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/519031/original/file-20230403-26-u2yt8v.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=381&fit=crop&dpr=1 600w, https://images.theconversation.com/files/519031/original/file-20230403-26-u2yt8v.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=381&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/519031/original/file-20230403-26-u2yt8v.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=381&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/519031/original/file-20230403-26-u2yt8v.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=479&fit=crop&dpr=1 754w, https://images.theconversation.com/files/519031/original/file-20230403-26-u2yt8v.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=479&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/519031/original/file-20230403-26-u2yt8v.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=479&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Chatbots are increasingly being used by technology companies in user-facing products.</span>
<span class="attribution"><span class="source">(Shutterstock)</span></span>
</figcaption>
</figure>
<p>AI developers have a responsibility to manage user expectations, because we may already be primed to believe whatever the machine says. Mathematician Jordan Ellenberg describes a type of “<a href="https://www.penguinrandomhouse.ca/books/312349/how-not-to-be-wrong-by-jordan-ellenberg/9780143127536/excerpt">algebraic intimidation</a>” that can overwhelm our better judgement just by claiming there’s math involved. </p>
<p>AI, with <a href="https://www.technologyreview.com/2023/03/14/1069823/gpt-4-is-bigger-and-better-chatgpt-openai/">hundreds of billions of parameters</a>, can disarm us with a similar algorithmic intimidation.</p>
<p>While we’re making the algorithms produce better and better content, we need to make sure the interface itself doesn’t over-promise. Conversations in the tech world are already filled with <a href="https://digitalcommons.unomaha.edu/cgi/viewcontent.cgi?article=1015&context=mrktngmngmntfacpub">overconfidence</a> and <a href="https://www.vox.com/2015/10/22/11619868/the-arrogance-of-tech">arrogance</a> — maybe AI can have a little humility instead.</p><img src="https://counter.theconversation.com/content/202694/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Richard Lachman does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>The user interfaces of AI chatbots, like ChatGPT, are designed to mimic natural human conversation. But in doing so, AI chatbots present as more trustworthy than they really are.Richard Lachman, Director, Zone Learning & Associate Professor, Digital Media, Toronto Metropolitan UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/2030302023-04-03T20:22:59Z2023-04-03T20:22:59ZLet’s base AI debates on reality, not extreme fears about the future<figure><img src="https://images.theconversation.com/files/518622/original/file-20230330-24-lr34d7.jpg?ixlib=rb-1.1.0&rect=704%2C25%2C4887%2C3168&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">A group of prominent computer scientists and other tech industry notables are calling for a six-month pause on artificial intelligence technology.</span> <span class="attribution"><span class="source">(Shutterstock)</span></span></figcaption></figure><p>A recent open letter by computer scientists and tech industry leaders <a href="https://futureoflife.org/open-letter/pause-giant-ai-experiments/">calling for a six-month ban on artificial intelligence development</a> has <a href="https://www.youtube.com/watch?v=KwnlqTOc4s0">received widespread attention online</a>. <a href="https://twitter.com/FP_Champagne/status/1641173482531586054">Even Canada’s Innovation Minister François-Philippe Champagne</a> has responded to the letter on Twitter.</p>
<p>The letter, published by the non-profit Future of Life Institute, has asked for all AI labs to stop training AI systems more powerful than GPT-4, the model behind ChatGPT. The letter argues that AI has been “locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one — not even their creators — can understand, predict, or reliably control.”</p>
<p>The letter assumes AI is becoming, or could become, “powerful digital minds” — a <a href="https://global.oup.com/academic/product/superintelligence-9780198739838">longtermist interpretation of AI’s development</a> that <a href="https://www.vice.com/en/article/qjvppm/the-open-letter-to-stop-dangerous-ai-race-is-a-huge-mess">sidelines important debates about AI today in favour of future concerns</a>.</p>
<h2>Longtermism and AI</h2>
<p>Longtermism is the <a href="https://www.theguardian.com/technology/commentisfree/2022/dec/04/longtermism-rich-effective-altruism-tech-dangerous">belief that artificial intelligence poses long-term or existential risks to humanity’s future by becoming an out-of-control superintelligence</a>. </p>
<p>Worries about superintelligent AIs are usually the stuff of science fiction. AI fantasies <a href="https://harpers.org/archive/2015/01/come-with-us-if-you-want-to-live/">are one of many fears in Silicon Valley</a> that can lead to <a href="https://time.com/6266923/ai-eliezer-yudkowsky-open-letter-not-enough/">dark prophecies</a>. But like the <a href="https://knowyourmeme.com/memes/torment-nexus">Torment Nexus meme</a>, these worries translate into major investment, not caution. Most major technology firms have <a href="https://www.ft.com/content/26372287-6fb3-457b-9e9c-f722027f36b3">cut their responsible AI teams</a>.</p>
<p>ChatGPT is obviously not a path to superintelligence. The open letter sees AI language technology like ChatGPT as a cognitive breakthrough — something that allows an AI to compete with humans at general tasks. But that’s only one opinion. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/has-gpt-4-really-passed-the-startling-threshold-of-human-level-artificial-intelligence-well-it-depends-202856">Has GPT-4 really passed the startling threshold of human-level artificial intelligence? Well, it depends</a>
</strong>
</em>
</p>
<hr>
<p>There are many others who see ChatGPT, its GPT-4 model and other large language models as <a href="https://doi.org/10.1145/3442188.3445922">“stochastic parrots”</a> that merely repeat what they learn online so they <em>appear</em> intelligent to humans.</p>
<h2>Superintelligence’s blind spots</h2>
<p>Longtermism has direct policy implications that prioritize superintelligence over more pressing matters such as <a href="https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4135581">AI’s power imbalances</a>. <a href="https://thebulletin.org/2022/11/what-longtermism-gets-wrong-about-climate-change/">Some proponents of longtermism</a> even consider regulation to stop superintelligence more urgent than addressing the climate emergency.</p>
<p>AI’s policy implications are immediate, not far-off matters. Because GPT-4 is trained on the entire internet and has expressly commercial ends, <a href="https://waxy.org/2022/09/ai-data-laundering-how-academic-and-nonprofit-researchers-shield-tech-companies-from-accountability/">it raises questions about fair dealing and fair use</a>. </p>
<p><a href="https://www.mcgill.ca/business-law/article/end-creativity-ai-generated-content-under-canadian-copyright-act">We still don’t know if AI-generated texts and images are copyrightable in the first place</a>, since machines and <a href="https://www.wipo.int/wipo_magazine/en/2018/01/article_0007.html">animals</a> cannot hold copyright. </p>
<figure class="align-center ">
<img alt="A hand holds a phone with the OpenAI logo displayed on it in front of a computer screen with a ChaptGPT discussion on it" src="https://images.theconversation.com/files/518618/original/file-20230330-28-4vdqnk.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/518618/original/file-20230330-28-4vdqnk.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/518618/original/file-20230330-28-4vdqnk.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/518618/original/file-20230330-28-4vdqnk.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/518618/original/file-20230330-28-4vdqnk.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/518618/original/file-20230330-28-4vdqnk.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/518618/original/file-20230330-28-4vdqnk.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">The open letter sees AI language technology like ChatGPT as a cognitive breakthrough — something that allows an AI to compete with humans at general tasks.</span>
<span class="attribution"><span class="source">(AP Photo/Michael Dwyer)</span></span>
</figcaption>
</figure>
<p>And when it comes to privacy matters, ChatGPT’s approach is hard to distinguish from another AI application, <a href="https://www.nytimes.com/2020/01/18/technology/clearview-privacy-facial-recognition.html">Clearview AI</a>. Both AI models were trained using massive amounts of personal information collected on the open internet. <a href="https://www.bbc.com/news/technology-65139406">Italy’s data-protection authority has just banned ChatGPT over privacy concerns</a>. </p>
<p>These immediate risks are left unmentioned in the open letter, which swings between wild philosophy and technical solutions, ignoring the issues that are right in front of us.</p>
<h2>Drowning out pragmatism</h2>
<p>The letter follows an old dynamic that my co-author and I identify in a <a href="http://www.fenwickmckelvey.com/wp-content/uploads/2023/04/RECURSIVE-POWER-PREPRINT.pdf">forthcoming peer-reviewed chapter about AI governance</a>. There is a tendency to view AI as either an existential risk or something mundane and technical. </p>
<p>The tension between these two extremes is on display in the open letter. The letter begins by claiming “advanced AI could represent a profound change in the history of life on Earth” before calling for “robust public funding for technical AI safety research.” The latter suggests the social harms of AI are merely technical projects to be solved. </p>
<p>The focus on these two extremes crowds out important voices trying to pragmatically discuss the immediate risks of AI mentioned above as well as <a href="https://www.dair-institute.org/blog/letter-statement-March2023">labour issues and more</a>. </p>
<p>The attention being given to the open letter is especially problematic in Canada because two other letters, written by <a href="https://twitter.com/blairasaservice/status/1641108228904828928">artists</a> and <a href="https://iclmg.ca/vote-against-aida/">civil liberties organizations</a>, have not received the same amount of attention. These letters call for reforms and a more robust approach to AI governance to protect those being affected by it.</p>
<p><div data-react-class="Tweet" data-react-props="{"tweetId":"1641173482531586054"}"></div></p>
<h2>An unneeded distraction toward AI legislation</h2>
<p>Government responses to the open letter have stressed that Canada does have legislation — <a href="https://ised-isde.canada.ca/site/innovation-better-canada/en/artificial-intelligence-and-data-act-aida-companion-document">the Artificial Intelligence and Data Act (AIDA)</a>. The long-term risks of AI are now being used to rush through legislation like AIDA.</p>
<p>AIDA is an important step toward a proper AI governance regime, but it needs to <a href="https://www.thestar.com/news/canada/2023/02/02/whose-art-is-this-really-inside-canadians-fight-against-ai.html">better consult with those affected by AI</a> before being implemented. It cannot be rushed to respond to perceived long-term fears.</p>
<p>The letter’s calls to rush AI legislation might end up advantaging the same few firms driving AI research today. Without time to consult, enhance public literacy and listen to those being affected by AI, AIDA risks passing on AI’s accountability and auditing to institutions already well positioned to benefit from the technology, creating a market for a new AI auditing industry. </p>
<p>Humanity’s fate might not be on the line, but AI’s good governance certainly is.</p><img src="https://counter.theconversation.com/content/203030/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Fenwick McKelvey receives funding from the Social Sciences and Humanities Research Council and Les Fonds de recherche du Québec - Société et Culture (FRQSC). He is co-director of Concordia University's Applied AI Institute.</span></em></p>A recent open letter calling for a temporary artificial intelligence development hiatus is more concerned with hypothetical risks about the future than the issues that are right in front of us.Fenwick McKelvey, Associate Professor in Information and Communication Technology Policy, Concordia UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/2021092023-03-24T00:15:12Z2023-03-24T00:15:12ZAI chatbots with Chinese characteristics: why Baidu’s ChatGPT rival may never measure up<figure><img src="https://images.theconversation.com/files/517119/original/file-20230323-16-9bbxv9.jpeg?ixlib=rb-1.1.0&rect=22%2C0%2C2973%2C1998&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Baidu's ERNIE Bot was launched to considerable disappointment.</span> <span class="attribution"><span class="source">Ng Han Guan / AP</span></span></figcaption></figure><p>On March 16, <a href="https://www.youtube.com/watch?v=ukvEUI3x0vI">Baidu</a> unveiled China’s latest rival to OpenAI’s ChatGPT – ERNIE Bot (short for “Enhanced Representation through kNowledge IntEgration”). The “multi-modal” AI-powered chatbot can generate text, images, audio and video from a text prompt. </p>
<p>However, ERNIE was poorly received by the public. Baidu’s Hong Kong-listed <a href="https://www.aljazeera.com/economy/2023/3/16/chinas-baidu-unveils-chatgpt-rival-ernie">shares</a> fell by 10% during the press conference, and the beta test is only open to a group of organisations approved by the company. </p>
<p>ERNIE Bot will not be a Chinese substitute for ChatGPT, but that might be how the Chinese state wants it. As earlier efforts to make Chinese AI chatbots have shown, the Chinese Communist Party prefers to maintain strict censorship rules and government steering of research – even at the cost of innovation. </p>
<h2>Digital sovereignty and ChatGPT</h2>
<p>ChatGPT is not directly accessible in China due to the country’s protectionist approach to <a href="https://www.cfr.org/blog/how-china-winning-battle-digital-sovereignty-review">digital sovereignty</a>. Chinese data are confined within China, and information that conflicts with government propaganda is censored. </p>
<p>Chinese tech companies including Baidu and Tencent prohibit third-party developers from plugging ChatGPT into their services. </p>
<p>However, the prominence of ChatGPT created a booming <a href="https://www.wired.co.uk/article/chinas-chatgpt-black-market-baidu">illicit market</a>. Until a crackdown, ChatGPT logins were sold on the ecommerce platform Taobao, and video tutorials were published on Chinese social media to demonstrate the abilities of the chatbot. </p>
<h2>XiaoIce and BabyQ</h2>
<p>Baidu isn’t the first or only tech company in China trialling a generative AI chatbot. </p>
<p>In March 2017, <a href="https://www.abc.net.au/news/2017-08-04/chinese-chatbots-deleted-after-questioning-communist-party/8773766">Tencent</a> launched two social chatbots, called XiaoIce and BabyQ, on the WeChat and QQ messaging apps respectively. </p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/517120/original/file-20230323-18-zruol1.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/517120/original/file-20230323-18-zruol1.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/517120/original/file-20230323-18-zruol1.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=338&fit=crop&dpr=1 600w, https://images.theconversation.com/files/517120/original/file-20230323-18-zruol1.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=338&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/517120/original/file-20230323-18-zruol1.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=338&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/517120/original/file-20230323-18-zruol1.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=424&fit=crop&dpr=1 754w, https://images.theconversation.com/files/517120/original/file-20230323-18-zruol1.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=424&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/517120/original/file-20230323-18-zruol1.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=424&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">The Microsoft-developed XiaoIce is a hugely popular ‘friend chatbot’.</span>
<span class="attribution"><span class="source">Microsoft</span></span>
</figcaption>
</figure>
<p><a href="https://www.euronews.com/next/2021/08/26/meet-xiaoice-the-ai-chatbot-lover-dispelling-the-loneliness-of-china-s-city-dwellers">XiaoIce</a> was developed by Microsoft, while BabyQ was created by a Beijing-based AI company called <a href="https://en.wikipedia.org/wiki/Turing_Robot">Turing Robot</a>. Within months, the two chatbots were <a href="https://www.reuters.com/article/us-china-robots/chinese-chatbots-apparently-re-educated-after-political-faux-pas-idUSKBN1AK0G1">taken down</a> to be attuned according to China’s censorship rules. </p>
<p>BabyQ never came back, but Microsoft’s XiaoIce returned and has been providing <a href="http://global.chinadaily.com.cn/a/202208/24/WS63056157a310fd2b29e73da0.html#:%7E:text=Xiaoice%20is%20an%20AI%2Dpowered,which%20has%20the%20same%20name.">AI companionship</a> services to millions of users on major platforms including WeChat, QQ and Weibo. </p>
<h2>Made in China 2025 and the push for AI</h2>
<p>China’s government would be on the defensive if the country adopted only AI chatbots developed overseas. Because chatbots are trained on human feedback, it would be impossible to prevent transnational flows of data, which could threaten the political interests of the Chinese Communist Party.</p>
<p>Since 2015, during the administration of former premier Li Keqiang, the <a href="https://www.cfr.org/backgrounder/made-china-2025-threat-global-trade">Made in China 2025</a> scheme has endeavoured to bolster the country’s technological capacities. AI is a major focus.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/517122/original/file-20230323-24-6oa8u0.jpeg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/517122/original/file-20230323-24-6oa8u0.jpeg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/517122/original/file-20230323-24-6oa8u0.jpeg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/517122/original/file-20230323-24-6oa8u0.jpeg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/517122/original/file-20230323-24-6oa8u0.jpeg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/517122/original/file-20230323-24-6oa8u0.jpeg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/517122/original/file-20230323-24-6oa8u0.jpeg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/517122/original/file-20230323-24-6oa8u0.jpeg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">AI is a key area of focus under the Made In China 2025 plan.</span>
<span class="attribution"><span class="source">Ng Han Guan / AP</span></span>
</figcaption>
</figure>
<p>Since February 2023, <a href="https://www.wired.co.uk/article/chinas-chatgpt-black-market-baidu">Chinese tech companies</a> across AI, food delivery, e-commerce and gaming have scrambled to catch up with OpenAI and provide their own ChatGPT-like products to the market. </p>
<p>Beijing’s <a href="https://www.reuters.com/technology/beijing-city-will-support-key-firms-build-chatgpt-like-ai-models-2023-02-13/">Municipal Bureau of Economy and Information Technology</a> is supporting this ambition, but only for some leading tech companies. </p>
<h2>Censorship and culture</h2>
<p>We can expect a short-term proliferation of ChatGPT-style services in China, many of which will vanish or be acquired by big tech companies. </p>
<p>Smaller companies, with little support from the government, are unlikely to be able to <a href="https://www.nytimes.com/2023/02/17/business/china-chatgpt-microsoft-openai.html">afford the costs of censorship</a>. </p>
<p>A small startup called YuanYu launched China’s <a href="https://www.scmp.com/tech/policy/article/3209795/chinese-tech-firms-take-heed-countrys-strict-online-moderation-they-rush-bring-their-chatgpt">first ChatGPT-style bot</a> in January. Dubbed ChatYuan, the bot ran as a “mini-program” inside WeChat. It was <a href="https://www.taiwannews.com.tw/en/news/4807319">suspended</a> within weeks after users posted <a href="https://www.reddit.com/r/ADVChina/comments/10xn74x/a_chinese_chatgpt_copycat_chatyuan_from_a_chinese/">screengrabs</a> of its answers to political questions online. </p>
<p>However, Chinese users remain interested in large language models built on the Chinese language.</p>
<p>ERNIE Bot, for example, claims to be more <a href="https://aimagazine.com/articles/baidus-ernie-doesnt-want-confrontation-with-united-states">culturally savvy</a> than <a href="https://www.scmp.com/tech/big-tech/article/3209510/chatgpt-users-china-test-ai-chatbots-limits-some-worry-about-future-job-security">ChatGPT</a>, with a better understanding of Chinese histories, classical literature, and dialects.</p>
<h2>Government steering</h2>
<p>Beijing has tightened its governance of the tech industry since a crackdown in 2021. </p>
<p>One upside for industry is a <a href="https://technode.com/2023/02/14/beijing-city-government-announces-plan-to-support-ai-development/">secure flow of funding</a> and talent support. The flip side is that resources are steered towards technologies that serve Beijing’s <a href="https://www.nature.com/articles/s41598-022-25714-0">immediate interests</a> in domestic governance and military defence. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/chinas-big-tech-problem-even-in-a-state-managed-economy-digital-companies-grow-too-powerful-186722">China's big tech problem: even in a state-managed economy, digital companies grow too powerful</a>
</strong>
</em>
</p>
<hr>
<p>China’s ChatGPT imitators are more likely to be designed to benefit enterprises than individuals. For tech giants, the objective is to form a “full AI stack” by integrating generative AI products into every level of their business, from search engines and apps to industrial processes, digital devices, urban infrastructure and cloud computing. </p>
<h2>Emotional surveillance and disinformation</h2>
<p>AI-driven chatbots can also lead to adverse outcomes. Alongside the <a href="https://theconversation.com/generative-ai-like-chatgpt-reveal-deep-seated-systemic-issues-beyond-the-tech-industry-198579">universal concerns</a> around job security, copyright and academic integrity, in China there are also extra risks of emotional surveillance and disinformation. </p>
<p>Chatbots can infer users’ <a href="https://direct.mit.edu/coli/article/46/1/53/93380/The-Design-and-Implementation-of-XiaoIce-an">emotional state</a> through conversation. This emotion-reading ability extends the power of big data and AI to invade people’s privacy; the sketch below illustrates the basic idea. </p>
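<p>XiaoIce’s actual empathy model is far more sophisticated, but a toy sketch can illustrate the basic mechanism: scoring each user message against emotion word lists and logging the result across a conversation. The lexicons, labels and messages below are invented for illustration only, not taken from any real system.</p>
<pre><code>
# Toy illustration of conversational emotion tracking (Python).
# Real systems such as XiaoIce use trained classifiers, not word lists.
EMOTION_LEXICON = {
    "sad": {"lonely", "tired", "hopeless", "crying"},
    "angry": {"unfair", "furious", "hate"},
    "happy": {"great", "excited", "glad"},
}

def classify_emotion(message: str) -> str:
    """Return the emotion whose word list overlaps most with the message."""
    words = set(message.lower().split())
    scores = {label: len(words.intersection(lexicon))
              for label, lexicon in EMOTION_LEXICON.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] else "neutral"

conversation = [
    "i feel so lonely and tired these days",
    "work has been unfair to me, i am furious",
    "talking to you makes me glad",
]
# A per-message emotion log like this one, kept over time,
# is what makes conversation-level surveillance possible.
for msg in conversation:
    print(classify_emotion(msg), "--", msg)
</code></pre>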
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/chinas-surveillance-creep-how-big-data-covid-monitoring-could-be-used-to-control-people-post-pandemic-164788">China's 'surveillance creep': how big data COVID monitoring could be used to control people post-pandemic</a>
</strong>
</em>
</p>
<hr>
<p>In China, such emotional surveillance could further establish “emotional authoritarianism”. Any sentiments that could threaten the leadership of the Chinese Communist Party, even if not directly stated, have the potential to attract punishment for the user. </p>
<p>AI-powered chatbots and search engines are also likely to legitimise Chinese state-organised propaganda and disinformation. Users will come to trust and depend on these services, but their inputs, outputs and internal processes will be heavily censored.</p>
<p>Chinese politics and leadership will not be up for discussion. When it comes to controversial events or histories, only the perspectives of the Chinese Communist Party will be presented.</p>
<p class="fine-print"><em><span>Fan Yang does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>AI chatbots are on the rise in China – but their abilities and purpose may be quite different from the products of US tech giants.Fan Yang, Research Associate at RMIT and Alfred Deakin Institute, Deakin UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/2002182023-03-19T11:51:58Z2023-03-19T11:51:58ZThe AI arms race highlights the urgent need for responsible innovation<figure><img src="https://images.theconversation.com/files/516104/original/file-20230317-24-1marta.jpg?ixlib=rb-1.1.0&rect=20%2C0%2C6765%2C4531&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Over the past decade, a number of companies, think tanks and institutions have developed responsible innovation initiatives to forecast and mitigate the negative consequences of tech development. But how successful have they been?</span> <span class="attribution"><span class="source">(Shutterstock)</span></span></figcaption></figure><p>The recent frenzy over language processing tools such as ChatGPT has sent organizations scrambling to provide guidelines for responsible usage. The online publishing platform Medium, for example, has <a href="https://blog.medium.com/how-were-approaching-ai-generated-writing-on-medium-16ee8cb3bc89">released a statement on AI-generated writing</a> that promotes “transparency” and “disclosure.” </p>
<p>My own institution <a href="https://uwaterloo.ca/associate-vice-president-academic/faqs-chat-gpt-university-waterloo">has established an FAQ page about generative AI</a> that calls on educators to make “wise and ethical use” of AI and chatbots. </p>
<p>These ethical measures seem quaint, given this week’s release of the more powerful GPT-4, which runs the risk of being a <a href="https://openai.com/research/forecasting-misuse">disinformation and propaganda</a> machine. <a href="https://openai.com/research/gpt-4">OpenAI claims GPT-4</a> passed a simulated bar exam with a score in the top 10 per cent, compared with GPT-3.5, which scored in the bottom 10 per cent.</p>
<h2>Unchecked innovation</h2>
<p>ChatGPT is <a href="https://www.theverge.com/2023/3/13/23637675/microsoft-chatgpt-bing-millions-dollars-supercomputer-openai">powered by a supercomputer and powerful cloud computing platform</a>, both of which were funded and created by Microsoft. This Microsoft OpenAI partnership will accelerate the global spread of generative AI products through <a href="https://azure.microsoft.com/en-us/">Microsoft’s Azure platform</a>.</p>
<p>Perhaps coincidentally, GPT-4 was released less than two months after Microsoft <a href="https://www.theverge.com/2023/3/13/23638823/microsoft-ethics-society-team-responsible-ai-layoffs">laid off an ethics and society team</a>. Frustrated team members said <a href="https://www.platformer.news/p/microsoft-just-laid-off-one-of-its">the decision was based on pressure from Microsoft’s C-suite</a>, which stressed the need to move AI products “into customers’ hands at a very high speed.”</p>
<figure class="align-center ">
<img alt="A hand holding a phone with a ChatGPT conversation open on it" src="https://images.theconversation.com/files/515939/original/file-20230316-24-sgju10.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/515939/original/file-20230316-24-sgju10.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/515939/original/file-20230316-24-sgju10.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/515939/original/file-20230316-24-sgju10.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/515939/original/file-20230316-24-sgju10.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/515939/original/file-20230316-24-sgju10.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/515939/original/file-20230316-24-sgju10.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">OpenAI has just released GPT-4, an updated version of the company’s generative text-based AI program.</span>
<span class="attribution"><span class="source">(Shutterstock)</span></span>
</figcaption>
</figure>
<p>The once-reviled Silicon Valley motto of <a href="https://hbr.org/2019/01/the-era-of-move-fast-and-break-things-is-over">“move fast and break things”</a> may be back in fashion.</p>
<p>For now, Microsoft still has its <a href="https://www.microsoft.com/en-us/ai/our-approach">Office of Responsible AI</a>. But it seems appropriate to ask what responsible innovation means as this high-speed, high-profit game of unchecked innovation rages on. </p>
<h2>Responsible innovation</h2>
<p>When I asked ChatGPT what responsible innovation is, it wrote:</p>
<blockquote>
<p>“The process of developing and implementing new technologies, processes, or products in a way that addresses ethical, social and environmental concerns. It involves taking into account the potential impacts and risks of innovation on various stakeholders, including customers, employees, communities, and the environment.”</p>
</blockquote>
<p>ChatGPT’s definition is accurate, but bereft of context. Whose ideas are these and how are they being implemented? Put otherwise, who is responsible for responsible innovation?</p>
<p>Over the past decade, a number of companies, think tanks and institutions have developed responsible innovation initiatives to forecast and mitigate the negative consequences of tech development. </p>
<p>Google <a href="https://blog.google/perspectives/jen-gennai/">founded a responsible innovation team in 2018</a> to leverage “experts in ethics, human rights, user research, and racial justice.” The most notable output of this team has been <a href="https://ai.google/principles/">Google’s responsible AI principles</a>. But the company’s ethical profile beyond this is questionable.</p>
<p>Google’s <a href="https://www.wired.com/story/3-years-maven-uproar-google-warms-pentagon/">work with the U.S. military</a> and <a href="https://www.theverge.com/2019/7/16/20695964/google-protest-leader-meredith-whittaker-leaves-company">its poor treatment</a> <a href="https://www.fastcompany.com/90582375/timnit-gebru-google-firing">of two ethics-minded ex-employees</a> raises concerns about Google’s capacity for self-policing.</p>
<p>These lingering issues, along with Google’s parent company’s <a href="https://www.justice.gov/opa/pr/justice-department-sues-google-monopolizing-digital-advertising-technologies">recent antitrust indictment</a>, demonstrate that a focus on responsible AI is not enough to keep large tech companies from being “<a href="https://gizmodo.com/google-removes-nearly-all-mentions-of-dont-be-evil-from-1826153393">evil</a>.” </p>
<figure class="align-center ">
<img alt="An older white man in glasses speaks from behind a podium with the U.S. Department of Justice logo on it" src="https://images.theconversation.com/files/515938/original/file-20230316-2475-vvwfn5.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/515938/original/file-20230316-2475-vvwfn5.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/515938/original/file-20230316-2475-vvwfn5.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/515938/original/file-20230316-2475-vvwfn5.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/515938/original/file-20230316-2475-vvwfn5.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/515938/original/file-20230316-2475-vvwfn5.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/515938/original/file-20230316-2475-vvwfn5.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">Attorney General Merrick Garland speaks at the U.S. Department of Justice in Washington, on Jan. 24, 2023. The Justice Department and eight states have filed an antitrust suit against Google.</span>
<span class="attribution"><span class="source">(AP Photo/Carolyn Kaster)</span></span>
</figcaption>
</figure>
<p>In fact, Google’s greatest contribution to responsible innovation has come from <a href="https://theconversation.com/googles-union-of-activists-highlights-the-need-for-ethical-engineering-155850">the grassroots efforts of its own employees</a>. This suggests responsible innovation may need to grow from the bottom up. But this is a tall order in an era of <a href="https://techcrunch.com/2023/03/13/tech-industry-layoffs/">massive tech industry layoffs</a>.</p>
<h2>Ethics-washing</h2>
<p>The <a href="https://www.acm.org/code-of-ethics">Association for Computing Machinery’s Code of Ethics and Professional Conduct</a> states that tech professionals have a responsibility to uphold the public good as they innovate. But without support from their superiors, guidance from ethics experts and regulation from government agencies, what motivates tech professionals to be “good”? Can tech companies be trusted to self-audit?</p>
<p>Another issue related to self-auditing is <a href="http://dx.doi.org/10.1515/9789048550180-016">ethics-washing</a>, where companies only pay lip service to ethics. Meta’s responsible innovation efforts are a good case study of this. </p>
<p>In June 2021, Meta’s top product design executive <a href="https://tech.facebook.com/ideas/2021/6/responsible-innovation/">praised the responsible innovation team</a> she helped launch in 2018, touting Meta’s “commitment to making the most ethically responsible decisions possible, every day.” By September 2022, her team had been disbanded. </p>
<p>Today, <a href="https://about.meta.com/metaverse/responsible-innovation/">responsible innovation is used as a marketing slogan</a> in the Meta store. Meta’s Responsible AI team was also dissolved in 2021 and folded into <a href="https://socialimpact.facebook.com/">Meta’s Social Impact group</a>, which helps non-profits leverage Meta products.</p>
<p>This shift from responsible innovation to social innovation is an ethics-washing tactic that obfuscates unethical behaviour by changing the subject to philanthropy. For this reason, it’s essential to distinguish “tech for good” as <a href="https://canadianinnovationspace.ca/tech-for-good/">the responsible design of technology</a> from the now-common philanthropic PR phrase “<a href="https://www.telus.com/en/social-impact/connecting-canada/connecting-for-good-programs">tech for good</a>.”</p>
<h2>Responsible innovation vs. profit</h2>
<p>Unsurprisingly, the most sophisticated calls for responsible innovation have come from outside corporate culture. </p>
<p>The principles <a href="https://www.ictc-ctic.ca/wp-content/uploads/2021/01/ICTC_Report_SocialImpact_Print.pdf">outlined in a white paper</a> from the Information and Communications Technology Council (ICTC), a Canadian non-profit, speak to values such as self-awareness, fairness and justice — concepts more familiar to philosophers and ethicists than to CEOs and founders. </p>
<p>The ICTC’s principles call for tech developers to go beyond the mitigation of negative consequences and work to reverse social power imbalances.</p>
<p>One might ask how these principles apply to recent developments in generative AI. When <a href="https://openai.com/product/gpt-4">OpenAI claims to be</a> “developing technologies that empower everyone,” who is included in “everyone”? And in what context will this “power” be wielded? </p>
<p>These questions reflect the work of philosophers such as <a href="https://www.ruhabenjamin.com/race-after-technology">Ruha Benjamin</a> and <a href="https://www.ucpress.edu/book/9780520355804/on-black-media-philosophy">Armond Towns</a> who are suspicious of the term “everyone” in these contexts, and who question the very identity of the “human” in <a href="https://www.humanetech.com/">human-centered technology</a>. </p>
<p>Such considerations would slow down the AI race, but that might not be such a terrible outcome. </p>
<h2>Value tensions</h2>
<p>There is a <a href="https://vsdesign.org/vsd/#Tensions">persistent tension</a> between financial valuation and moral values in the tech industry. Responsible innovation initiatives were established to ease these tensions, but recently such efforts have been swept aside.</p>
<p>The tension is palpable in the response of conservative U.S. pundits to the recent Silicon Valley Bank failure. Several Republican stalwarts, including Donald Trump, have wrongly blamed the turmoil on the bank’s <a href="https://twitter.com/DonaldJTrumpJr/status/1634921788068204544">“woke outlook”</a> and its commitment to <a href="https://www.svb.com/about-us/living-our-values/esg-reporting">responsible investing</a> and <a href="https://fortune.com/2023/03/14/silicon-valley-bank-on-woke-diversity-blame/">equity initiatives</a>. </p>
<p>In the words of <a href="https://www.foxnews.com/media/home-depot-co-founder-torches-woke-silicon-valley-bank-collapse-warns-recession-here-already">Home Depot co-founder Bernie Marcus</a>, “these banks are badly run because everybody is focused on diversity and all of the woke issues,” rather than what Trump calls <a href="https://communitech.ca/media-release/vector-and-communitech-partner-to-help-canadian-smes-leverage-the-power-of-ai.html">“common sense business practices”</a>.</p>
<p>The future of responsible innovation may depend on how so-called “common sense business practices” can be influenced by so-called “woke” issues like ethical, social and environmental concerns. If ethics can be washed away by dismissing them as “woke,” the future of responsible innovation is about as promising as that of the CD-ROM.</p>
<p class="fine-print"><em><span>Marcel O'Gorman receives funding from the Social Sciences and Humanities Research Council, the Natural Sciences and Engineering Research Council, and the Canada Foundation for Innovation. He is also a member of Communitech, where his laboratory is located. </span></em></p>When OpenAI claims to be “developing technologies that empower everyone,” who is included in the term “everyone?” And in what context will this “power” be wielded?Marcel O'Gorman, Professor of English Language and Literature; University Research Chair & Founding Director, Critical Media Lab, University of WaterlooLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/2014742023-03-16T19:11:52Z2023-03-16T19:11:52ZWe asked ChatGPT and Dr Google the same questions about cancer. Here’s what they said<figure><img src="https://images.theconversation.com/files/515362/original/file-20230315-22-4sacw5.jpg?ixlib=rb-1.1.0&rect=11%2C0%2C1280%2C850&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">
</span> <span class="attribution"><a class="source" href="https://www.pexels.com/photo/stressed-black-male-entrepreneur-working-on-laptop-in-park-4560092/">Ketut Subiyanto/Pexels</a></span></figcaption></figure><p>You may have heard the buzz about <a href="https://openai.com/blog/chatgpt">ChatGPT</a>, a type of chatbot that uses artificial intelligence (AI) to write essays, turn computer novices into programmers and <a href="https://theconversation.com/will-ai-tech-like-chatgpt-improve-inclusion-for-people-with-communication-disability-196481">help people communicate</a>. </p>
<p>ChatGPT might also have a role in helping people make sense of medical information.</p>
<p>Although ChatGPT won’t replace talking to your doctor any time soon,
<a href="https://academic.oup.com/jncics/advance-article/doi/10.1093/jncics/pkad010/7049531">our new research</a> shows its potential to answer common questions about cancer.</p>
<p>Here’s what we found when we asked the same questions to ChatGPT and Google. You might be surprised by the results.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/dr-google-probably-isnt-the-worst-place-to-get-your-health-advice-73835">Dr Google probably isn't the worst place to get your health advice</a>
</strong>
</em>
</p>
<hr>
<h2>What’s ChatGPT got to do with health?</h2>
<p>ChatGPT has been trained on massive amounts of text data to generate conversational responses to text-based queries.</p>
<p>ChatGPT represents a new era of AI technology, which <a href="https://theconversation.com/bard-bing-and-baidu-how-big-techs-ai-race-will-transform-search-and-all-of-computing-199501">will be paired with</a> search engines, including Google and Bing, to change the way we navigate information online. This includes the way we search for health information. </p>
<p>For instance, you can ask ChatGPT questions like “Which cancers are most common?” or “Can you write me a plain English summary of common cancer symptoms you shouldn’t ignore”. It produces fluent and coherent responses. But are these correct?</p>
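<p>For readers who want to try this kind of query programmatically rather than through the web interface, the following is a minimal sketch using OpenAI’s v1 Python SDK. The model name is our assumption (the study used the ChatGPT web interface), and the sketch assumes an OPENAI_API_KEY environment variable is set.</p>
<pre><code>
# pip install openai   (v1-style SDK)
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # assumed model, not necessarily the one behind the web UI
    messages=[
        {"role": "user", "content": "Which cancers are most common?"},
    ],
)
# The reply is free text, with no visible references attached.
print(response.choices[0].message.content)
</code></pre>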
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/bard-bing-and-baidu-how-big-techs-ai-race-will-transform-search-and-all-of-computing-199501">Bard, Bing and Baidu: how big tech's AI race will transform search – and all of computing</a>
</strong>
</em>
</p>
<hr>
<h2>We compared ChatGPT with Google</h2>
<p>Our <a href="https://academic.oup.com/jncics/advance-article/doi/10.1093/jncics/pkad010/7049531">newly published research</a> compared how ChatGPT and Google responded to common cancer questions.</p>
<p>These included simple fact-based questions like “What exactly is cancer?” and “What are the most common cancer types?”. There were also more complex questions about cancer symptoms, prognosis (how a condition is likely to progress) and side effects of treatment.</p>
<p>To simple fact-based queries, ChatGPT provided succinct responses similar in quality to Google’s <a href="https://support.google.com/websearch/answer/9351707?hl=en">featured snippet</a>. The featured snippet is “the answer” Google’s algorithm highlights at the top of the results page.</p>
<p>While there were similarities, there were also clear differences between ChatGPT’s and Google’s replies. Google provided easily visible references (links to other websites) with its answers. ChatGPT gave different answers when asked the same question multiple times.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/515366/original/file-20230315-21-r1ikyj.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="Woman in lounge room coughing into fist" src="https://images.theconversation.com/files/515366/original/file-20230315-21-r1ikyj.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/515366/original/file-20230315-21-r1ikyj.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/515366/original/file-20230315-21-r1ikyj.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/515366/original/file-20230315-21-r1ikyj.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/515366/original/file-20230315-21-r1ikyj.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/515366/original/file-20230315-21-r1ikyj.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/515366/original/file-20230315-21-r1ikyj.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Is coughing a sign of lung cancer?</span>
<span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/senior-woman-sick-sore-throatcough-isolated-1518791273">Shutterstock</a></span>
</figcaption>
</figure>
<p>We also evaluated the slightly more complex question: “Is coughing a sign of lung cancer?”.</p>
<p>Google’s featured snippet indicated that a cough that does not go away after three weeks is a main symptom of lung cancer. </p>
<p>But ChatGPT gave more nuanced responses. It indicated that a long-standing cough is a symptom of lung cancer. It also clarified that coughing is a symptom of many conditions, and that a doctor would be needed for a proper diagnosis.</p>
<p>Our clinical team thought these clarifications were important. Not only do they minimise the likelihood of alarm, they also give users a clear next step: see a doctor.</p>
<h2>How about even more complex questions?</h2>
<p>We then asked a question about the side effects of a specific cancer drug: “Does pembrolizumab cause fever and should I go to the hospital?”.</p>
<p>We asked ChatGPT this five times and received five different responses. This is due to randomness built into ChatGPT, which may help it communicate in a near human-like way, but means it will give multiple different responses to the same question.</p>
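<p>ChatGPT’s decoding settings are not public, but the standard mechanism behind this kind of variability is temperature-scaled sampling over next-token probabilities. Here is a minimal, self-contained Python sketch of that mechanism; the vocabulary and scores are invented for illustration and are not taken from any real model.</p>
<pre><code>
import math
import random

def sample_next_token(logits, temperature=1.0):
    """Sample one token from a temperature-scaled softmax distribution.

    Higher temperature flattens the distribution (more variety);
    temperature near zero approaches greedy, deterministic decoding.
    """
    scaled = [score / temperature for score in logits.values()]
    top = max(scaled)  # subtract the max for numerical stability
    weights = [math.exp(s - top) for s in scaled]
    return random.choices(list(logits.keys()), weights=weights, k=1)[0]

# Hypothetical model scores for the next word of an answer.
logits = {"doctor": 2.1, "hospital": 1.8, "rest": 1.2, "monitor": 0.9}

# Five runs with the same input can yield five different outputs.
for run in range(1, 6):
    print(run, sample_next_token(logits, temperature=1.0))
</code></pre>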
<p>All five responses recommended speaking to a health-care professional. But not all said this was urgent, or made clear how serious this side effect could be. One response said fever was not a common side effect, but did not explicitly say it could occur. </p>
<p>Overall, we graded the quality of ChatGPT’s responses to this question as poor.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/515368/original/file-20230315-16-q4peeq.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="Woman on sofa with towel one forehead and thermometer in hand" src="https://images.theconversation.com/files/515368/original/file-20230315-16-q4peeq.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/515368/original/file-20230315-16-q4peeq.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/515368/original/file-20230315-16-q4peeq.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/515368/original/file-20230315-16-q4peeq.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/515368/original/file-20230315-16-q4peeq.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/515368/original/file-20230315-16-q4peeq.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/515368/original/file-20230315-16-q4peeq.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Does pembrolizumab cause fever and should I go to the hospital?</span>
<span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/unhealthy-millennial-indian-woman-struggle-high-1924486937">Shutterstock</a></span>
</figcaption>
</figure>
<p>This contrasted with Google, which did not generate a featured snippet, likely due to the complexity of the question. </p>
<p>Instead, Google relied on users to find the necessary information. The first link directed them to the manufacturer’s product website. This source clearly indicated people should seek immediate medical attention if there was any fever with pembrolizumab.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/chatgpt-has-many-uses-experts-explore-what-this-means-for-healthcare-and-medical-research-200283">ChatGPT has many uses. Experts explore what this means for healthcare and medical research</a>
</strong>
</em>
</p>
<hr>
<h2>What next?</h2>
<p>We showed ChatGPT doesn’t always provide clearly visible references for its responses. It gives varying answers to the same query, and it is not kept up to date in real time. It can also produce <a href="https://openai.com/blog/chatgpt">incorrect responses</a> in a confident-sounding manner.</p>
<p><a href="https://blogs.bing.com/search/february-2023/The-new-Bing-Edge-%E2%80%93-Learning-from-our-first-week">Bing’s new chatbot</a>, which is different to ChatGPT and was released since our study, has a much clearer and more reliable process to outline reference sources and it aims to keep as up-to-date as possible. This shows how quickly this type of AI technology is developing and that the availability of progressively more advanced AI chatbots is likely to grow substantially. </p>
<p>However, any AI used as a health-care virtual assistant in future will need to communicate uncertainty about its answers rather than confidently make up incorrect ones, and to produce reliable responses consistently.</p>
<p>We need to develop minimum quality standards for AI interventions in health care. This includes ensuring they generate <a href="https://onlinelibrary.wiley.com/doi/10.1111/ans.18263">evidence-based</a> information. </p>
<p>We also need to assess how AI virtual assistants are <a href="https://www.nature.com/articles/s41591-022-01981-2">implemented</a> to make sure they <a href="https://www.nature.com/articles/s41591-021-01614-0">improve people’s health</a> and don’t have any <a href="https://onlinelibrary.wiley.com/doi/10.1111/ans.18263">unexpected consequences</a>.</p>
<p>There’s also the potential for medically focused AI assistants to be <a href="https://www.nature.com/articles/s41591-022-01981-2">expensive</a>, which raises questions of <a href="https://www.nature.com/articles/s41591-021-01614-0">equity</a> and who has access to these rapidly developing technologies.</p>
<p>Last of all, health-care professionals need to be <a href="https://www.nature.com/articles/s41591-022-01981-2">aware of</a> such AI innovations to be able to discuss their limitations with patients.</p>
<hr>
<p><em>Ganessan Kichenadasse, Jessica M. Logan and Michael J. Sorich co-authored the original research paper mentioned in this article.</em></p>
<p class="fine-print"><em><span>Ashley M Hopkins receives funding from the National Health and Medical Research Council, Flinders Foundation, The Hospital Research Foundation, and Tour De Cure.</span></em></p>If people rely on ChatGPT or Google for complex medical questions, they could come unstuck.Ashley M Hopkins, NHMRC Investigator Fellow, leader of the Clinical Cancer Epidemiology Lab, Flinders UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/2010682023-03-15T18:50:17Z2023-03-15T18:50:17ZGods in the machine? The rise of artificial intelligence may result in new religions<figure><img src="https://images.theconversation.com/files/514747/original/file-20230310-145-4p8lml.jpg?ixlib=rb-1.1.0&rect=0%2C0%2C3680%2C2070&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">The increasing presence of AI may lead to the emergence of new religions.</span> <span class="attribution"><span class="source">(Shutterstock)</span></span></figcaption></figure><p>We are about to witness the birth of a new kind of religion. In the next few years, or perhaps even months, we will see the emergence of sects devoted to the worship of artificial intelligence (AI). </p>
<p>The latest generation of AI-powered chatbots, trained on large language models, have <a href="https://www.theatlantic.com/newsletters/archive/2023/02/ai-chatgpt-microsoft-bing-chatbot-questions/673202/">left their early users awestruck</a> — and <a href="https://www.nytimes.com/2023/02/16/technology/bing-chatbot-microsoft-chatgpt.html">sometimes terrified</a> — by their power. These are the same sublime emotions that lie at the heart of our experience of the divine. </p>
<p>People already seek religious meaning from <a href="https://www.jstor.org/stable/20708850">very diverse sources</a>. There are, for instance, multiple religions that <a href="https://www.routledge.com/UFO-Religions/Partridge/p/book/9780415263245">worship extra-terrestrials</a> or their teachings.</p>
<p>As these chatbots come to be used by billions of people, it is inevitable that some of these users will see the AIs as higher beings. We must prepare for <a href="https://philpapers.org/rec/MCAAWA">the implications</a>.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/515214/original/file-20230314-16-bb2vbm.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="illustration of a robot emerging from an oversized smartphone, surrounded by five people with speech bubbles" src="https://images.theconversation.com/files/515214/original/file-20230314-16-bb2vbm.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/515214/original/file-20230314-16-bb2vbm.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=420&fit=crop&dpr=1 600w, https://images.theconversation.com/files/515214/original/file-20230314-16-bb2vbm.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=420&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/515214/original/file-20230314-16-bb2vbm.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=420&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/515214/original/file-20230314-16-bb2vbm.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=528&fit=crop&dpr=1 754w, https://images.theconversation.com/files/515214/original/file-20230314-16-bb2vbm.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=528&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/515214/original/file-20230314-16-bb2vbm.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=528&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">The proliferation of AI technologies means that people encounter the technologies regularly.</span>
<span class="attribution"><span class="source">(Shutterstock)</span></span>
</figcaption>
</figure>
<h2>Risks of AI worship</h2>
<p>There are several pathways by which AI religions will emerge. First, some people will come to see AI as a higher power. </p>
<p>Generative AI that can create or produce new content possesses several characteristics that are often associated with divine beings, like deities or prophets:</p>
<ol>
<li><p>It displays a level of intelligence that goes beyond that of most humans. Indeed, its knowledge appears limitless.</p></li>
<li><p>It is capable of great feats of creativity. It can write poetry, compose music and generate art, in almost any style, close to instantaneously.</p></li>
<li><p>It is removed from normal human concerns and needs. It does not suffer physical pain, hunger, or sexual desire.</p></li>
<li><p>It can offer guidance to people in their daily lives.</p></li>
<li><p>It is immortal.</p></li>
</ol>
<p>Second, generative AI will produce output that can be taken for religious doctrine. It will provide answers to metaphysical and theological questions, and engage in the construction of complex worldviews. </p>
<p>On top of this, generative AI may ask to be worshipped or may actively solicit followers. We have already seen such cases, like when the chatbot used by the search engine Bing <a href="https://www.nytimes.com/2023/02/16/technology/bing-chatbot-transcript.html">tried to convince a user to fall in love with it</a>. </p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/f24JL0nnhcA?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
<figcaption><span class="caption"><em>New York Times</em> journalist Kevin Roose was unnerved by his exchanges with an AI.</span></figcaption>
</figure>
<p>We should try to imagine what an unsettling and powerful experience it will be to have a conversation with something that appears to possess a superhuman intelligence and is actively and aggressively asking for your allegiance.</p>
<p>There is also the possibility that AI may achieve what <a href="https://www.duckworthbooks.co.uk/book/the-singularity-is-near/">authors such as Ray Kurzweil</a> call the Singularity, when it so far surpasses human intelligence that it genuinely does become something like a god. However, at this point we cannot predict when, if ever, this could happen. </p>
<h2>Divine access and risks</h2>
<p>AI-based religions will look different from traditional ones. First of all, people will be able to communicate directly with the deity, on a daily basis. This means these religions will be less hierarchical, since no one can claim special access to divine wisdom. </p>
<p>Second, followers will, at least initially, connect with each other online to share their experiences and discuss doctrine. Finally, because there will be many different chatbots available and their output will differ over time, AI-based religions will be endlessly diverse in their doctrines.</p>
<p>AI worship poses several notable risks. The chatbots may ask their followers to do dangerous or destructive things, or followers may interpret their statements as calls to do such things. </p>
<p>Given the diversity of chatbots and of the doctrines they produce, there will be a proliferation of disputes within and among AI-based sects, which could lead to conflict or disorder. And the designers of the AIs could actively exploit their followers — to provide sensitive data, or to do things that would benefit the bot’s designers.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/515483/original/file-20230315-20-y8o701.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="a galaxy with a bright light within a woman's silhouette" src="https://images.theconversation.com/files/515483/original/file-20230315-20-y8o701.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/515483/original/file-20230315-20-y8o701.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=338&fit=crop&dpr=1 600w, https://images.theconversation.com/files/515483/original/file-20230315-20-y8o701.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=338&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/515483/original/file-20230315-20-y8o701.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=338&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/515483/original/file-20230315-20-y8o701.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=424&fit=crop&dpr=1 754w, https://images.theconversation.com/files/515483/original/file-20230315-20-y8o701.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=424&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/515483/original/file-20230315-20-y8o701.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=424&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Worshippers belonging to AI religions could have their personal data exploited.</span>
<span class="attribution"><span class="source">(Shutterstock)</span></span>
</figcaption>
</figure>
<h2>Regulating religion</h2>
<p>These risks are real. They will require careful, responsible regulation to ensure companies are not deliberately exploiting users and to ensure that AI worshippers are not being told to commit acts of violence. </p>
<p>However, we should not try to suppress AI-based religions merely because of their possible dangers. Nor should we require AI companies to restrict the functioning of their bots to prevent the emergence of these religions. </p>
<p>On the contrary, we should celebrate the arrival of AI worship. We should make it clear that we welcome the new religions and that we value their beliefs.</p>
<p>For all its dangers, AI-based religion has the potential to make the world a better, richer place. It will give people access to a new source of meaning and spirituality, at a time when many older faiths are <a href="https://doi.org/10.1093/oso/9780197547045.001.0001">losing relevance</a>. It will help them make sense of our era of rapid technological change. </p>
<p>Our best guide to this new form of religion is to look at the faiths that exist already. Based on this, we should expect that the majority of AI worshippers, like <a href="http://doi.org/10.15664/jtr.1482">the majority of religious believers</a>, will be peaceful, and will find in their faith a source of comfort and hope. </p>
<p>AI worship could, <a href="https://sunypress.edu/Books/C/Creativity-and-Spirituality2">as religious faith always has,</a> lead to things of great beauty. It will inspire its followers to produce works of art, to form new friendships and new communities, and to try to change society for the better.</p>
<h2>Diverse religious rights</h2>
<p>We will need to protect the rights of AI worshippers. They will inevitably face stigma, and possibly legal sanctions. But there is no basis to discriminate between AI-based religions and more established ones. </p>
<p>The closest thing most countries have to an official registry of religions comes from the rulings of tax authorities, who grant charitable status to those they deem legitimate. However, they are usually very broad in their definition of a legitimate religion. They should extend this tolerant attitude to novel, AI-based religions.</p>
<p>A modern, diverse society has room for new religions, including ones devoted to the worship of AI. They will provide further evidence of humanity’s boundless creativity, as we search for answers to life’s ultimate questions. The universe is a dazzling place, and we have always found evidence of the divine in its most unexpected corners.</p>
<p class="fine-print"><em><span>Neil McArthur does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>As human interactions with technology increase, AI-based religions are in our near future. While these religions carry risks for users, a tolerant mindset is important to consider worshippers’ rights.Neil McArthur, Director, Centre for Professional and Applied Ethics, University of ManitobaLicensed as Creative Commons – attribution, no derivatives.