tag:theconversation.com,2011:/uk/topics/robots-6403/articlesRobots – The Conversation2024-03-21T09:29:42Ztag:theconversation.com,2011:article/2252162024-03-21T09:29:42Z2024-03-21T09:29:42Z‘Empathetic’ AI has more to do with psychopathy than emotional intelligence – but that doesn’t mean we can treat machines cruelly<p>AI has long since surpassed humans at cognitive tasks once considered the supreme disciplines of human intelligence, such as chess or Go. Some even believe it is superior when it comes to human emotional skills such as empathy. This does not seem to be mere marketing hype on the part of some companies; empirical studies suggest that people perceive <a href="https://www.theguardian.com/technology/2023/apr/28/ai-has-better-bedside-manner-than-some-doctors-study-finds">ChatGPT in certain health situations as more empathic than human medical staff</a>. Does this mean that AI is really empathetic?</p>
<h2>A definition of empathy</h2>
<p>As a psychologically informed philosopher, I define genuine empathy according to <a href="https://www.taylorfrancis.com/chapters/oa-edit/10.4324/9781003333739-5/seeing-others-ends-catrin-misselhorn">three criteria</a>: </p>
<ul>
<li><p>Congruence of feelings: empathy requires that the person who empathizes feels what it is like to experience the other’s emotions in a specific situation. This distinguishes empathy from a mere rational understanding of emotions. </p></li>
<li><p>Asymmetry: the person who feels empathy has the emotion only because another individual has it, and the emotion is more appropriate to the other’s situation than to their own. For this reason, empathy is not just a shared emotion, like the shared joy of parents over the progress of their offspring, where the asymmetry condition is not met. </p></li>
<li><p>Other-awareness: there must be at least a rudimentary awareness that empathy is about the feelings of another individual. This accounts for the difference between empathy and emotional contagion, which occurs when one catches a feeling or an emotion like a cold. This happens, for instance, when kids start to cry when they see another kid crying.</p></li>
</ul>
<h2>Empathetic AI or psychopathic AI?</h2>
<p>Given this definition, it’s clear that artificial systems cannot feel empathy. They do not know what it’s like to feel something, which means they cannot fulfil the congruence condition. Consequently, the question of whether what they feel meets the asymmetry and other-awareness conditions does not even arise. What artificial systems can do is recognise emotions, be it on the basis of facial expressions, vocal cues, physiological patterns or affective meanings, and they can simulate empathic behaviour by way of speech or other modes of emotional expression. </p>
<p>Artificial systems hence show similarities to what common sense calls a psychopath: despite being unable to feel empathy, they are capable of recognising emotions on the basis of objective signs, of mimicking empathy, and of using this ability for manipulative purposes. Unlike psychopaths, artificial systems do not set these purposes themselves, but are given them by their designers. So-called empathetic AI is often supposed to make us behave in a desired way, such as not getting upset when driving, learning with greater motivation, working more productively, buying a certain product – or voting for a certain political candidate. But doesn’t everything then depend on how good the purposes are for which empathy-simulating AI is used?</p>
<h2>Empathy-simulating AI in the context of care and psychotherapy</h2>
<p>Take care and psychotherapy, which aim to nurture people’s well-being. You might think that the use of empathy-simulating AI in these areas is definitely a good thing. Would they not be wonderful care-givers and social companions for old people, <a href="https://bioedge.org/disability/sex-robots-can-offer-the-intimacy-that-the-elderly-disabled-crave-says-bioethicist/">loving partners for the disabled</a>, or <a href="https://www.theguardian.com/lifeandstyle/2024/mar/02/can-ai-chatbot-therapists-do-better-than-the-real-thing">perfect psychotherapists that have the benefit of being available 24/7</a>?</p>
<p>Such questions ultimately concern what it means to be a human being. Is it enough for a lonely, old or mentally disturbed person to project emotions onto an artefact devoid of feelings, or is it important for a person to experience recognition for themselves and their suffering in an interpersonal relationship? </p>
<h2>Respect or tech?</h2>
<p>From an ethical perspective, it is a matter of respect whether there is someone who empathically acknowledges the needs and the suffering of a person as such. By taking away recognition by another subject, the person in need of care, companionship or psychotherapy is treated as a mere object, because this is ultimately based on the assumption that it does not matter whether anybody really listens to them. They have no moral claim that their feelings, needs and suffering be perceived by someone who can really understand them.
Using <a href="https://www.hachettebookgroup.com/titles/evgeny-morozov/to-save-everything-click-here/9781610391399/?lens=publicaffairs">empathy-simulating AI in care and psychotherapy is ultimately another case of technological solutionism</a>, i.e., the naïve assumption that there is a technological fix for every problem, including loneliness and mental “malfunctions”. Outsourcing these issues to artificial systems prevents us from seeing the social causes of loneliness and mental disorders in the larger context of society.</p>
<p>In addition, designing artificial systems to appear as someone or something that has emotions and feels empathy would mean that such devices always have a manipulative character, because they address very subliminal mechanisms of anthropomorphisation. This fact is used in commercial applications to get users to unlock a paid premium level, or to have customers pay with their data. Both practices are particularly problematic for the <a href="https://theconversation.com/i-tried-the-replika-ai-companion-and-can-see-why-users-are-falling-hard-the-app-raises-serious-ethical-questions-200257">vulnerable groups</a> whose interests are at stake here. Even people who do not belong to vulnerable groups and are perfectly aware that an artificial system has no feelings will still react empathically to it as if it did.</p>
<h2>Empathy with artificial systems – all too human</h2>
<p>It is a well-studied phenomenon that humans react with empathy towards artificial systems that display certain human- or animal-like characteristics. This process is largely based on perceptual mechanisms which are not consciously accessible. Perceiving a sign that another individual is undergoing a certain emotion produces a congruent emotion in the observer. Such a sign can be a typical behavioural manifestation of an emotion, a facial expression or an event that typically causes a certain emotion. Evidence from brain MRI scans shows that the same neural structures that underpin empathy for other people <a href="https://www.livescience.com/28947-humans-show-empathy-for-robots.html">are activated when humans feel empathy with robots</a>.</p>
<p>Although empathy might not be strictly necessary for morality, it plays an important moral role. For this reason, our empathy toward human-like (or animal-like) robots imposes at least indirect moral constraints on how we should treat these machines. It is morally wrong to habitually abuse robots that elicit empathy, as doing so erodes our capacity to feel empathy, which is an <a href="https://link.springer.com/chapter/10.1007/978-3-658-37641-3_7">important source of moral judgment, motivation, and development</a>. </p>
<p>Does this mean that we have to establish a robot-rights league? That would be premature, as robots do not have moral claims by themselves. Empathy with robots is only indirectly morally relevant due to its effects on human morality. But we should carefully consider whether and in which areas we really want robots that simulate and evoke empathy in human beings as we run <a href="https://link.springer.com/chapter/10.1007/978-3-658-37641-3_10">the risk of distorting or even destroying our social practices if they became pervasive</a>.</p><img src="https://counter.theconversation.com/content/225216/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Catrin Misselhorn does not work for, consult, own shares in or receive funding from any organisation that would benefit from this article, and has disclosed no affiliations other than her research institution.</span></em></p>Artificial intelligence’s capacity to mimic and identify emotions is worlds away from the human capacity to feel them.Catrin Misselhorn, Professor of Philosophy, Georg-August-Universität Göttingen Licensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/2243872024-03-06T17:45:13Z2024-03-06T17:45:13ZSpacesuits need a major upgrade for the next phase of exploration<figure><img src="https://images.theconversation.com/files/579865/original/file-20240305-18-mik4ri.jpg?ixlib=rb-1.1.0&rect=11%2C0%2C3822%2C2160&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">
</span> <span class="attribution"><a class="source" href="https://www.nasa.gov/news-release/nasa-taps-axiom-space-for-first-artemis-moonwalking-spacesuits/">NASA</a></span></figcaption></figure><p>Humans have long dreamed of setting foot on the Moon and other planetary bodies such as Mars. Since the 1960s, space travellers have donned suits designed to protect them from the vacuum of space and stepped out into the unknown.</p>
<p>However, <a href="https://spacenews.com/polaris-dawn-private-astronaut-mission-slips-to-mid-2024/">the Polaris Dawn mission</a>, which is to include the first spacewalk organised by a private company, has been delayed. This is due to complications with the design and development of a suitable spacesuit. </p>
<p>Moon suits are also one of the key elements of Nasa’s Artemis lunar programme that have yet to be delivered. A report released in November 2023 said that the contractor making the suits is having <a href="https://www.gao.gov/products/gao-24-106256#:%7E:text=To%20develop%20Artemis%20space%20suits,report%20examining%20the%20Artemis%20enterprise.">to revisit aspects of the design provided by Nasa</a>, which could introduce delays.</p>
<p>Yet <a href="https://time.com/5802128/alexei-leonov-spacewalk-obstacles/">the first spacewalk</a>, by the Soviet cosmonaut Alexei Leonov, took place in 1965. Later, <a href="https://www.nasa.gov/the-apollo-program/">12 Nasa astronauts would walk on the lunar surface</a>, between 1969 and 1972, using technology that would be eclipsed by today’s smartphones. So it’s not unreasonable to ask why it can still be difficult to design and build spacesuits to do the same thing.</p>
<p>Much has changed since the Apollo missions planted flags on the Moon. The <a href="https://www.cnbc.com/video/2024/01/20/us-china-india-japan-and-others-are-rushing-back-to-the-moon.html">geopolitics driving space travel have shifted</a>, and spacesuits are no longer expected to be just a form of protection. Instead, they are a critical way to improve the productivity of astronauts. This involves a rethink of not just the suits themselves, but the technology that supports them.</p>
<figure class="align-center ">
<img alt="Crew Dragon approaching the ISS" src="https://images.theconversation.com/files/579872/original/file-20240305-30-sdnkjj.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/579872/original/file-20240305-30-sdnkjj.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/579872/original/file-20240305-30-sdnkjj.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/579872/original/file-20240305-30-sdnkjj.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/579872/original/file-20240305-30-sdnkjj.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/579872/original/file-20240305-30-sdnkjj.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/579872/original/file-20240305-30-sdnkjj.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">The Polaris Dawn mission will use a modified version of the Crew Dragon spacecraft to perform the first commercial spacewalk.</span>
<span class="attribution"><a class="source" href="https://www.nasa.gov/image-article/view-of-spacex-crew-dragon-endeavour-approaching-station/">Nasa</a></span>
</figcaption>
</figure>
<p>An array of powerful telecommunications technologies to connect astronauts with space stations and ground control sits alongside multisensory cameras, temperature readers and proximity sensors in present-day spacesuits.</p>
<p>Situational awareness – understanding key elements in the environment, such as the health of an astronaut – is a core tenet for modern spacesuit design and critical for the operator’s safety. The ability of a suit to track heart rate and other vital signs is important in a vacuum, where levels of oxygen need constant monitoring. </p>
<p>Expectations around the risks astronauts take have changed for the better. And the level of investment it takes to produce a spacesuit necessitates that it can be used for future tasks that may include lunar settlement in the next few decades.</p>
<p>The trade-off that engineers must make when incorporating wearable technology like that already mentioned is weight. Will greater situational awareness result in a spacesuit that is too heavy to move in effectively? </p>
<p>When Elon Musk first hinted at challenges with the extravehicular activity spacesuit for Polaris Dawn <a href="https://twitter.com/SpaceX/status/1745941814165815717">in a presentation to SpaceX employees in January</a>, it was not difficulties with connected technology that he discussed, but the challenge of redesigning “the suit so that you actually move around in it”.</p>
<h2>Mobility for the mission</h2>
<p>However, when talking about mobility in a spacesuit, you need to consider the tasks that you want that mobility to support. </p>
<p>Before the advent of modern spacesuits, Apollo astronauts struggled to carry out missions. When drilling into the surface of the Moon with a hand drill to collect samples, astronauts found it difficult to apply enough downward force, as the Moon’s weaker gravity gave them little weight to press with. It was not until the <a href="https://www.sciencedirect.com/science/article/abs/pii/S0094576522002879">invention of a zero-gravity drill</a>, decades later, that this problem would be addressed.</p>
<p>The current exploration of <a href="https://digital-library.theiet.org/content/books/ce/pbce131e">pneumatic exoskeletons</a>, which provide the support necessary for movement in low gravity, could be part of a solution. However, newer spacesuits may also need to interface with hardware that exists outside the suit, like robotic drills. This, too, will necessitate more mobility in spacesuits. </p>
<h2>Working with robots</h2>
<p>Offloading tasks previously carried out by humans to robots will be part of the future of space exploration. It’s also a primary way that engineers will be able to enhance the mobility of astronauts in spacesuits.</p>
<p>For example, when an astronaut goes on a spacewalk to inspect the condition of part of a space station and make any possible repairs, they are supported by a robotic arm that ensures they don’t float off into space. While jointed, this arm is rigid and can limit an astronaut’s movement.</p>
<p>An approach currently being explored to extend this range of movement is a climbing robot, attached to both the astronaut and the space station, that the astronaut can control through their spacesuit. This would allow the astronaut to move around the space station faster and with a greater range of movement than before, letting them reach and repair hard-to-access areas like corners.</p>
<p>While the eventual hope is that robots themselves can assess any damage to the space station and repair it, humans must be ready to step in when normal operations are disrupted. Possible disruptions could be natural, like a small meteor shower damaging the robot, or human-made, like hacking carried out by a hostile group or state.</p>
<p>For the types of activities we want to accomplish in the future, this human-robot collaboration will be instrumental. Building a base on the Moon, as both <a href="https://www.smithsonianmag.com/science-nature/four-things-weve-learned-about-nasas-planned-base-camp-on-the-moon-180980589/">the US</a> and <a href="https://spacenews.com/china-attracts-moon-base-partners-outlines-project-timelines/">China</a> plan to do, will involve construction work and drilling, which humans will not be able to accomplish alone. Modern spacesuits will need to provide an interface to work with this new technology, and we can expect the suits to evolve in step with robotics.</p>
<p>The relationship between humans and robots is changing. It will go beyond spacewalks and robots’ previous uses as limited tools, to a situation where they are cooperative partners in space. The objectives of ten or 20 years from now, like building lunar settlements, exploring mineral deposits on the Moon and efficiently repairing space station modules can only be achieved using robotics. </p>
<p>Modern spacesuits will be a key foundation of this collaborative relationship, forming the interface where astronauts and robots can work together to achieve shared goals. So when we do once again leave our footprints on other worlds, we will no longer be alone.</p><img src="https://counter.theconversation.com/content/224387/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Yang Gao has received funding from UKRI, UKSA and ESA on conducting space related research. </span></em></p>The next generation of spacesuit needs to do more than simply protect an astronaut from the vacuum of space.Yang Gao, Professor of Robotics, Head of Centre for Robotics Research, King's College LondonLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/2222662024-02-26T13:37:32Z2024-02-26T13:37:32Z‘Swarm of one’ robot is a single machine made up of independent modules<figure><img src="https://images.theconversation.com/files/575061/original/file-20240212-16-ex7r9g.jpg?ixlib=rb-1.1.0&rect=0%2C0%2C4000%2C3000&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">This robot mimics simple life forms.</span> <span class="attribution"><a class="source" href="https://ieeexplore.ieee.org/document/10342118">Trevor Smith</a>, <a class="license" href="http://creativecommons.org/licenses/by-nd/4.0/">CC BY-ND</a></span></figcaption></figure><p>My colleagues and I have built a robot composed of many building blocks like the cells of a multicellular organism. Without a “brain” or a central controller in the system, our robot, dubbed Loopy, relies on the collective behavior of all of its cells to interact with the world. </p>
<p>In this sense, we call Loopy a <a href="https://doi.org/10.1109/JPROC.2021.3072740">robotic swarm</a>. But Loopy can also be seen as a single robot since all the cells are connected; therefore, Loopy is also “a swarm of one.” This research could lead to adaptive robots that tailor their shapes and movements to their environments – for example, in environmental cleanup applications.</p>
<p>Loopy is a <a href="https://doi.org/10.1109/IROS55552.2023.10342118">primitive form of multicellular robot</a> that is made of a ring of 36 cells. Each cell has a <a href="https://www.youtube.com/watch?v=tHOH-bYjR4k">rotary servo</a> – an electric motor that rotates a shaft through a precisely controlled angle – and sensors. Each cell reacts on its own, without input from any of the others except its two immediate neighbors. As the servos move, the angles between the cells determine Loopy’s overall shape.</p>
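<p>To make the last point concrete, here is a minimal sketch – ours, not code from the Loopy project – of how a list of joint angles fixes the ring’s shape. It treats each cell’s joint angle as the turn between successive unit-length segments and walks around the chain; if all 36 servos hold the same 10-degree bend, the turns sum to 360 degrees and the chain closes into a regular 36-gon.</p>

```python
import math

def loopy_shape(joint_angles_deg, segment_length=1.0):
    """Chain the cells head to tail: turn by each cell's joint angle,
    then step one segment forward. Returns each cell's (x, y) position."""
    x, y, heading = 0.0, 0.0, 0.0
    points = []
    for turn in joint_angles_deg:
        heading += math.radians(turn)
        x += segment_length * math.cos(heading)
        y += segment_length * math.sin(heading)
        points.append((x, y))
    return points

# Uniform 10-degree bends: the 36 turns sum to 360 degrees, so the last
# point lands back (up to floating-point error) on the starting point.
circle = loopy_shape([10.0] * 36)
closure_error = math.hypot(*circle[-1])  # distance from the origin
```

Any other set of angles whose turns still sum to 360 degrees gives a different closed shape, which is exactly the space of shapes Loopy can morph through.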
<p>Loopy is free to morph into various shapes and exhibit a range of motions. But random shapes and motions are not useful. We were hoping something interesting would emerge from self-organization – that is, the spontaneous creation of order from disorder, without us telling Loopy what to do directly. It turned out that Loopy forms stable shapes that recover after it bumps into obstacles.</p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/lyohCt0UN6A?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
<figcaption><span class="caption">Loopy exhibiting spontaneous shapes and motions.</span></figcaption>
</figure>
<p>Famed mathematician <a href="https://www.britannica.com/biography/Alan-Turing">Alan Turing</a> was interested in the idea of self-organization back in 1952. He even envisioned <a href="https://doi.org/10.1098/rstb.1952.0012">a ring of cells</a>. Turing hypothesized the existence of chemicals that diffuse and react with each other, leading to the creation of <a href="https://theconversation.com/how-animals-get-their-skin-patterns-is-a-matter-of-physics-new-research-clarifying-how-could-improve-medical-diagnostics-and-synthetic-materials-217035">patterns in nature</a> like those on birds’ feathers and seashells. This self-organization approach using simulated chemicals enabled Loopy to form and transition between various lobed shapes spontaneously. </p>
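<p>The simulated-chemicals idea can be sketched in a few lines of code. The sketch below uses the Gray-Scott reaction-diffusion equations on a closed ring; the cell count, parameter values and seeding are illustrative choices of ours, not taken from the Loopy paper:</p>

```python
import numpy as np

def turing_ring(n=72, steps=10000, dt=0.2,
                Du=1.0, Dv=0.5, feed=0.04, kill=0.06):
    """Two simulated chemicals diffusing and reacting around a ring of
    cells, in the spirit of Turing's 1952 model (Gray-Scott variant)."""
    u = np.ones(n)                       # substrate chemical
    v = np.zeros(n)                      # activator chemical
    v[n // 2 - 3 : n // 2 + 3] = 0.25    # seed a small activator patch
    for _ in range(steps):
        # np.roll hands each cell its two ring neighbours, ends wrapping
        lap_u = np.roll(u, 1) + np.roll(u, -1) - 2 * u
        lap_v = np.roll(v, 1) + np.roll(v, -1) - 2 * v
        uvv = u * v * v                  # the nonlinear reaction term
        u += dt * (Du * lap_u - uvv + feed * (1 - u))
        v += dt * (Dv * lap_v + uvv - (feed + kill) * v)
    return u, v

u, v = turing_ring()
```

Mapping the final concentration profile onto joint angles is one plausible way such simulated chemistry could drive a ring robot into lobed shapes, with different parameter regimes yielding different patterns.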
<h2>Why it matters</h2>
<p>Engineered systems, and robots in particular, are predominantly designed with a <a href="https://doi.org/10.1007/s10514-007-9080-5">top-down approach</a>, where human designers anticipate the conditions the system may encounter and plan ahead through hardware designs, software programs or both. The problem is, the designers are not likely to be there when the robot encounters an unanticipated situation. </p>
<p>This micromanagement approach to robot design is like giving kids a detailed manual when sending them to school for the first day. A better way of parenting would be to provide general guidelines and feedback, and expect the kids to solve problems on their own. Similarly, a key motivation for developing Loopy is to unleash the power of <a href="https://link.springer.com/chapter/10.1007/978-3-642-79629-6_11">bottom-up collective “intelligence</a>” so Loopy can find new solutions on its own when a new situation arises – for example, finding the right shape to adapt to its environment.</p>
<h2>What other research is being done?</h2>
<p>The vision of programmable matter has been around for decades, yet tangible examples have been scarce. While researchers have explored complex shape formation through <a href="https://doi.org/10.1038/s44172-022-00034-3">self-assembly</a> or <a href="https://doi.org/10.1038/s44172-022-00034-3">reconfigurable robotic systems</a>, these often depend on predetermined shapes. </p>
<p>Similar to Loopy, researchers have applied Turing’s self-organization concept to <a href="https://doi.org/10.1126/scirobotics.aau9178">swarms of robots</a>, such as the small, simple, autonomous <a href="https://ssr.seas.harvard.edu/kilobots">Kilobots</a>, leading to the emergence of complex shapes. However, unlike Loopy, the physical forces between “cells” are not used to influence the final shape and behavior of the collective.</p>
<h2>What’s next?</h2>
<p>We would like Loopy to develop more lifelike traits, such as navigating unforeseen situations, seeking out better conditions, acquiring resources and mitigating threats. This vision extends to eventually enabling Loopy to perform tasks assigned by people, thereby bridging the gap between the open-ended creativity of self-organization and human guidance.</p>
<p><em>The <a href="https://theconversation.com/us/topics/research-brief-83231">Research Brief</a> is a short take on interesting academic work.</em></p><img src="https://counter.theconversation.com/content/222266/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Yu Gu works for West Virginia University. </span></em></p><p class="fine-print"><em><span>Trevor Smith works for West Virginia University.</span></em></p>‘Loopy’ is a multicellular robot inspired by biology and designed to react to its environment without instructions on how to do so.Yu Gu, Professor of Mechanical and Aerospace Engineering, West Virginia UniversityTrevor Smith, PhD Candidate in Mechanical Engineering, West Virginia UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/2208282024-02-14T13:24:42Z2024-02-14T13:24:42ZWe designed wormlike, limbless robots that navigate obstacle courses − they could be used for search and rescue one day<figure><img src="https://images.theconversation.com/files/571646/original/file-20240126-17-1c52dw.JPG?ixlib=rb-1.1.0&rect=55%2C0%2C4024%2C1578&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Limbless robots may not need lots of complex algorithms when they have mechanical intelligence. </span> <span class="attribution"><span class="source">Tianyu Wang</span></span></figcaption></figure><p>Scientists have been trying to build <a href="https://en.wikipedia.org/wiki/Snakebot">snakelike, limbless robots</a> for decades. These robots could come in handy in <a href="https://www.science.org/content/article/searching-survivors-mexico-earthquake-snake-robots">search-and-rescue</a> situations, where they could navigate collapsed buildings to find and assist survivors. </p>
<p>With slender, flexible bodies, limbless robots could readily move through confined and cluttered spaces such as debris fields, where walking or wheeled robots and human rescuers tend to fail.</p>
<p>However, even the most advanced limbless robots have not come close to moving with the agility and versatility of worms and snakes in difficult terrain. Even the tiny nematode worm <em><a href="http://www.wormbook.org/">Caenorhabditis elegans</a></em>, which has a relatively simple nervous system, can navigate through difficult physical environments. </p>
<p>As part of a team of <a href="https://www.lulab.gatech.edu/">engineers</a>, <a href="https://crablab.gatech.edu/">roboticists and physicists</a>, we wanted to explore this discrepancy in performance. But instead of looking to neuroscience for an answer, <a href="https://en.wikipedia.org/wiki/Biomechanics">we turned to biomechanics</a>. </p>
<p>We set out to build a robot model that drove its body using a mechanism similar to how worms and snakes power their movement. </p>
<h2>Undulators and mechanical intelligence</h2>
<p>Over millions of years, organisms have evolved <a href="https://www.britannica.com/science/nervous-system">intricate nervous systems</a> that allow them to sense their physical surroundings, process this information and execute precise body movements to navigate around obstacles. </p>
<p>In robotics, engineers design algorithms that take in information from sensors on the robot’s body – a type of robotic nervous system – and use that information to decide how to move. These algorithms and systems are usually complex. </p>
<p>Our team wanted to figure out a way to simplify these systems by highlighting mechanically controlled approaches to dealing with obstacles that don’t require sensors or computation. To do that, we turned to examples from biology.</p>
<p>Animals don’t rely solely on their neurons – brain cells and <a href="https://my.clevelandclinic.org/health/body/23123-peripheral-nervous-system-pns">peripheral nerves</a> – to control movement. They also use the physical properties of their body – for example, the elasticity of their muscles – to help them react to their environment spontaneously, before their neurons even have a chance to respond.</p>
<p>While computational systems are governed by <a href="https://en.wikipedia.org/wiki/Computational_logic">the laws of mathematics</a>, mechanical systems are governed by physics. To achieve the same task, scientists can either design an algorithm or carefully design a physical system. </p>
<p>For example, limbless robots and animals move through the world by bending sections of their body left and right, <a href="https://en.wikipedia.org/wiki/Undulatory_locomotion">a type of movement called undulation</a>. If they collide with an obstacle, they have to turn away and go around it by bending more to one side than the other.</p>
<p>Scientists could achieve this with a robot by attaching sensors to its head or body. They could then design an algorithm that tells the robot to turn away or wind around the obstacle when it “feels” a large enough force on its head or body. </p>
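<p>That sensor-plus-algorithm option can be sketched in a few lines. This is our own illustrative Python, not software from any real snake robot; the gait parameters, force threshold and steering gain are made-up values:</p>

```python
import math

def serpenoid_angles(t, n_joints=10, amplitude=0.6, spatial_freq=0.8,
                     temporal_freq=1.0, turn_bias=0.0):
    """Joint angles for the classic travelling-wave ('serpenoid') gait:
    each joint follows a phase-shifted sine, and a constant bias added
    to every joint bends the body more to one side, producing a turn."""
    return [amplitude * math.sin(temporal_freq * t + spatial_freq * i)
            + turn_bias
            for i in range(n_joints)]

def steer_from_contact(head_force, threshold=2.0, gain=0.1):
    """Sensor-driven steering: once the force measured at the head
    exceeds a threshold, command a bias away from the obstacle."""
    if abs(head_force) > threshold:
        return -gain * math.copysign(1.0, head_force)
    return 0.0

# A 3.5-unit push on one side of the head yields a bias to the other side.
bias = steer_from_contact(head_force=3.5)
angles = serpenoid_angles(t=0.0, turn_bias=bias)
```

The point of the mechanical-intelligence alternative discussed next is that a suitably built body can produce the same turning response without the sensor, the threshold or the controller loop.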
<p>Alternatively, scientists could carefully select the robot’s materials and the arrangement and strength of its motors so that collisions would spontaneously produce a body shape that led to a turn. This robot would have what scientists call “mechanical intelligence.”</p>
<p>If scientists like us can understand how organisms’ bodies respond mechanically to contact with objects in their environment, we can design better robots that can deal with obstacles without having to program complex algorithms. </p>
<p>If you compare a diverse set of undulating organisms with the increasingly large zoo of <a href="https://en.wikipedia.org/wiki/Snakebot">robotic “snakes</a>,” one difference between the robots and biological undulators stands out. Nearly all undulatory robots bend their bodies using a series of connected segments with motors at each joint. But that’s not how living organisms bend.</p>
<p>In contrast, all limbless organisms, from large snakes to the lowly, microscopic nematode, achieve bends not from a single rotational joint-motor system but instead through <a href="http://www.wormbook.org/chapters/www_bodywallmuscle/bodywallmuscle.html">two bands of muscles</a> on either side of the body. To an engineer, this design seems counterintuitive. Why control something with two muscles or motors when one could do the job? </p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/575078/original/file-20240212-26-it6ean.png?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="A diagram showing a gray worm with a window showing the inside of the worm's body, which has two bands of muscle on the left and right side, cuticle on the top and nerve cord on the bottom, top and sides." src="https://images.theconversation.com/files/575078/original/file-20240212-26-it6ean.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/575078/original/file-20240212-26-it6ean.png?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=283&fit=crop&dpr=1 600w, https://images.theconversation.com/files/575078/original/file-20240212-26-it6ean.png?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=283&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/575078/original/file-20240212-26-it6ean.png?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=283&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/575078/original/file-20240212-26-it6ean.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=355&fit=crop&dpr=1 754w, https://images.theconversation.com/files/575078/original/file-20240212-26-it6ean.png?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=355&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/575078/original/file-20240212-26-it6ean.png?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=355&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Nematodes have two bands of muscle on the sides of their bodies that control motion.</span>
<span class="attribution"><span class="source">Ralf J. Sommer and WormAtlas</span></span>
</figcaption>
</figure>
<p>To get to the bottom of this question, our team built a new robot called MILLR, for mechanically intelligent limbless robot, inspired by the two bands of muscle on snakes and worms. MILLR has two independently controlled cables that pull each joint left and right, bilaterally.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/575079/original/file-20240212-20-gtf8t7.png?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="A diagram showing the design of MILLR, with servo motors on each body segment, and cables and pulleys connecting them." src="https://images.theconversation.com/files/575079/original/file-20240212-20-gtf8t7.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/575079/original/file-20240212-20-gtf8t7.png?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=275&fit=crop&dpr=1 600w, https://images.theconversation.com/files/575079/original/file-20240212-20-gtf8t7.png?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=275&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/575079/original/file-20240212-20-gtf8t7.png?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=275&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/575079/original/file-20240212-20-gtf8t7.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=345&fit=crop&dpr=1 754w, https://images.theconversation.com/files/575079/original/file-20240212-20-gtf8t7.png?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=345&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/575079/original/file-20240212-20-gtf8t7.png?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=345&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">MILLR’s design, inspired by nematode <em>C. elegans</em>.</span>
<span class="attribution"><span class="source">Tianyu Wang</span></span>
</figcaption>
</figure>
<p><a href="https://doi.org/10.1126/scirobotics.adi2243">We found</a> that this method allows the robot to move around obstacles spontaneously, without having to sense its surroundings or actively adjust its posture: the body passively complies with the environment.</p>
<h2>Building a mechanically intelligent robot</h2>
<p>Rather than mimicking the detailed muscular anatomy of a particular organism, MILLR applies forces to either side of the body by spooling and unspooling a cable. </p>
<p>This mirrors the muscle activation pattern that snakes and nematodes use, in which the left and right sides take turns activating: tightening one side pulls the body toward it, while the other side relaxes and is pulled along passively. </p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/575081/original/file-20240212-26-bro51v.png?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="On the left, a photo showing a worm weaving between pegs. On the right, a photo showing a worm-like robot weaving between pegs." src="https://images.theconversation.com/files/575081/original/file-20240212-26-bro51v.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/575081/original/file-20240212-26-bro51v.png?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=122&fit=crop&dpr=1 600w, https://images.theconversation.com/files/575081/original/file-20240212-26-bro51v.png?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=122&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/575081/original/file-20240212-26-bro51v.png?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=122&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/575081/original/file-20240212-26-bro51v.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=153&fit=crop&dpr=1 754w, https://images.theconversation.com/files/575081/original/file-20240212-26-bro51v.png?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=153&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/575081/original/file-20240212-26-bro51v.png?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=153&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">MILLR’s design allows it to move through obstacles the same way worms do.</span>
<span class="attribution"><span class="source">Tianyu Wang and Christopher Pierce</span></span>
</figcaption>
</figure>
<p>By changing the amount of slack in the cables, <a href="https://doi.org/10.1126/scirobotics.adi2243">we can achieve</a> varying degrees of body stiffness. When the robot collides with an obstacle, depending on the cable tension, it selectively maintains its shape or bends under the force of the obstacle. </p>
<p><a href="https://doi.org/10.1126/scirobotics.adi2243">We found that</a> if the robot was actively bending to one side and it experienced a force in the same direction, the body complied with the force and bent further. If, alternatively, the robot experienced a force that opposed the bend, it would remain rigid and push itself off the obstacle. </p>
<p>Because of the pattern of the tension along the body, head-on collisions that would normally cause the robot to stop moving or jam itself instead naturally led to a redirection around the obstacle. The robot could push itself forward consistently. </p>
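This tension-based compliance rule can be sketched in a few lines of Python. This is a hypothetical illustration only, not the authors' controller: the `Joint` model, the `slack` parameter and the linear compliance term are all invented for the example.

```python
# Hypothetical sketch of MILLR-style tension-based compliance, based on the
# behavior described in the article. Not the authors' actual controller:
# the Joint model and the slack parameter are invented for illustration.

from dataclasses import dataclass

@dataclass
class Joint:
    angle: float   # current bend angle in radians (positive = bending left)
    target: float  # commanded bend angle from the undulation pattern
    slack: float   # cable slack: 0 means rigid, larger means more compliant

def respond_to_contact(joint: Joint, external_torque: float) -> float:
    """Return the joint angle after an obstacle applies external_torque.

    If the torque pushes the joint the way it is already bending, the slack
    cable on the passive side gives way and the body bends further. If the
    torque opposes the bend, the taut cable resists and the joint holds its
    commanded shape, pushing the body off the obstacle.
    """
    same_direction = (joint.target > 0) == (external_torque > 0)
    if same_direction:
        # Compliant direction: the passive (slack) side gives way.
        return joint.angle + joint.slack * external_torque
    # Resistive direction: the active cable is taut; hold the commanded bend.
    return joint.target

# A leftward (positive) bend meeting a leftward vs. a rightward push:
joint = Joint(angle=0.3, target=0.3, slack=0.5)
print(respond_to_contact(joint, 0.2))   # complies: bends further left
print(respond_to_contact(joint, -0.2))  # resists: holds the commanded bend
```

The key design point the sketch captures is that no sensing or decision-making is involved: which case applies is decided mechanically by which cable happens to be taut.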
<h2>Testing MILLR</h2>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/21F7IOF9BMs?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
</figure>
<p>To investigate the benefits of mechanical intelligence, we built tiny obstacle courses and sent nematode worms through them to see how well they performed. We sent MILLR through a similar course and compared the results.</p>
<p>MILLR moved through its course <a href="https://doi.org/10.1126/scirobotics.adi2243">about as effectively as the real worms</a>. We noticed that the worms made the same type of body movements when they collided with obstacles as MILLR did.</p>
<p>The principles of mechanical intelligence could extend beyond the realm of nematodes. Future research could look at designing robots based on a host of other types of organisms for applications ranging from search and rescue to <a href="https://youtu.be/e0D9IVo-E9M?si=d8jGaC5GDLaMbEeS">exploring other planets</a>.</p>
<p class="fine-print"><em><span>This work was supported by the National Science Foundation Physics of Living Systems Student Research Network, NSF-Simons Southeast Center for Mathematics and Biology, Army Research Office Grant, and the Dunn Family Professorship.</span></em></p>

<p>Robots often have a hard time navigating through debris, but robots designed based on worms and snakes could move around obstacles faster, thanks to an idea called mechanical intelligence.</p>

<p>Tianyu Wang, Ph.D. Student in Robotics, Georgia Institute of Technology; Christopher Pierce, Postdoctoral Scholar in Physics, Georgia Institute of Technology. Licensed as Creative Commons – attribution, no derivatives.</p>

tag:theconversation.com,2011:article/213336 2024-01-26T13:17:43Z

Why are so many robots white?

<figure><img src="https://images.theconversation.com/files/568910/original/file-20240111-27-va5e62.jpg?ixlib=rb-1.1.0&rect=0%2C0%2C2048%2C1364&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">This little guy is very cute − and very white.</span> <span class="attribution"><a class="source" href="https://www.flickr.com/photos/jiuguangw/4981810943/"> Jiuguang Wang/Flickr</a>, <a class="license" href="http://creativecommons.org/licenses/by-sa/4.0/">CC BY-SA</a></span></figcaption></figure>

<p>Problems of racial and gender bias in artificial intelligence algorithms and the data used to train large language models like ChatGPT have <a href="https://doi.org/10.1145/3597307">drawn the attention of researchers</a> and <a href="https://www.washingtonpost.com/technology/interactive/2023/ai-generated-images-bias-racism-sexism-stereotypes/">generated headlines</a>. But these problems also arise in social robots, which have physical bodies modeled on nonthreatening versions of humans or animals and are designed to interact with people.</p>
<p>Socially assistive robotics, a subfield of social robotics, aims to interact with ever more diverse groups of people. Its practitioners’ noble intention is “to create machines that will best help people help themselves,” writes one of its pioneers, <a href="https://www.wsj.com/articles/how-to-build-robots-people-can-relate-to-11570807206">Maja Matarić</a>. The robots are already being used to help people on the <a href="https://theconversation.com/how-robots-can-help-us-embrace-a-more-human-view-of-disability-76815">autism spectrum</a>, children with special needs and stroke patients who need physical rehabilitation. </p>
<p>But these robots do not look like people or interact with people in ways that reflect even basic aspects of society’s diversity. As a <a href="https://scholar.google.com/citations?hl=en&user=9JvGLRcAAAAJ&view_op=list_works&sortby=pubdate">sociologist who studies human-robot interaction</a>, I believe that this problem is only going to get worse. Rates of diagnoses for autism in children of color are now <a href="https://www.cdc.gov/ncbddd/autism/addm-community-report/spotlight-on-racial-ethnic-differences.html">higher than for white kids</a> in the U.S. Many of these children could end up interacting with white robots.</p>
<p>So, to adapt the famous Twitter <a href="https://knowyourmeme.com/memes/oscars-so-white">hashtag around the Oscars</a> in 2015, why #robotssowhite?</p>
<h2>Why robots tend to be white</h2>
<p>Given the diversity of people they will be exposed to, why does <a href="https://robotsguide.com/robots/kaspar">Kaspar</a>, designed to interact with children with autism, have rubber skin that resembles a white person’s? Why are <a href="https://robotsguide.com/robots/nao">Nao</a>, <a href="https://robotsguide.com/robots/pepper">Pepper</a> and <a href="https://robotsguide.com/robots/icub">iCub</a>, robots used in schools and museums, clad with shiny, white plastic? In <a href="https://doi.org/10.1007/s13347-020-00415-6">The Whiteness of AI</a>, technology ethicist Stephen Cave and science communication researcher Kanta Dihal discuss racial bias in AI and robotics and note the preponderance of stock images online of robots with reflective white surfaces. </p>
<p>What is going on here?</p>
<p>One issue is what robots are already out there. Most robots are not developed from scratch but purchased by engineering labs for projects, adapted with custom software, and sometimes integrated with other technologies such as robot hands or skin. Robotics teams are therefore constrained by design choices that the original developers made (Aldebaran for Pepper, Italian Institute of Technology for iCub). These design choices tend to follow the clinical, clean look with shiny white plastic, similar to other technology products like the original iPod.</p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/wT0RtnCR13o?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
<figcaption><span class="caption">Kaspar is a robot designed to interact with children with autism.</span></figcaption>
</figure>
<p>In a paper I presented at the 2023 American Sociological Association meeting, I call this “<a href="https://convention2.allacademic.com/one/asa/asa23/index.php?program_focus=view_paper&selected_paper_id=2066209&cmd=online_program_direct_link&sub_action=online_program">the poverty of the engineered imaginary</a>.”</p>
<h2>How society imagines robots</h2>
<p>In anthropologist Lucy Suchman’s <a href="https://www.cambridge.org/us/universitypress/subjects/psychology/developmental-psychology/human-machine-reconfigurations-plans-and-situated-actions-2nd-edition">classic book on human-machine interaction</a>, which was updated with chapters on robotics, she discusses a “cultural imaginary” of what robots are supposed to look like. A cultural imaginary is what is shared through representations in texts, images and films, and which collectively shapes people’s attitudes and perceptions. For robots, the cultural imaginary is derived from science fiction. </p>
<p>This cultural imaginary can be contrasted with the more practical concerns of how computer science and engineering teams view robot bodies, what Neda Atanasoski and Kalindi Vora call the “engineered imaginary.” This is a hotly contested area in feminist science studies, with, for example, Jennifer Rhee’s “<a href="https://www.upress.umn.edu/book-division/books/the-robotic-imaginary">The Robotic Imaginary</a>” and Atanasoski and Vora’s “<a href="https://www.dukeupress.edu/surrogate-humanity">Surrogate Humanity</a>” critical of the gendered and racial assumptions that lead people to design service robots – designed to carry out mundane tasks – as female.</p>
<p>The cultural imaginary that enshrines robots as white, and in fact usually female, stretches back to European antiquity and was amplified by an explosion of novels and films at the height of industrial modernity. From the first mention of the word “android” in Auguste Villiers de l’Isle-Adam’s 1886 novel “The Future Eve” to the introduction of the word “robot” in Karel Čapek’s 1920 play “Rossum’s Universal Robots” and the sexualized robot Maria in the 1925 novel “Metropolis” by Thea von Harbou – the basis of her husband Fritz Lang’s famous 1927 film of the same name – fictional robots were quick to be feminized and made servile. </p>
<p>Perhaps the prototype for this cultural imaginary lies in ancient Rome. A passage in Ovid’s “Metamorphoses” (8 C.E.) describes a statue of Galatea “of snow-white ivory” that its creator Pygmalion falls in love with. Pygmalion prays to Venus that Galatea come to life, and his wish is granted. There are numerous literary, poetic and film adaptations of the story, including one of the first special effects in cinema in <a href="https://youtu.be/lw8ckUGbbMY">Méliès’ 1898 film</a>. Paintings that depict this moment, for example by Raoux (1717), Regnault (1786) and Burne-Jones (1868-70 and 1878), accentuate the whiteness of Galatea’s flesh.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/568932/original/file-20240111-23-qycib1.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="A painting of a man embracing a nude female figure whose bottom half is a marble statue and upper half is a woman" src="https://images.theconversation.com/files/568932/original/file-20240111-23-qycib1.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/568932/original/file-20240111-23-qycib1.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=739&fit=crop&dpr=1 600w, https://images.theconversation.com/files/568932/original/file-20240111-23-qycib1.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=739&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/568932/original/file-20240111-23-qycib1.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=739&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/568932/original/file-20240111-23-qycib1.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=928&fit=crop&dpr=1 754w, https://images.theconversation.com/files/568932/original/file-20240111-23-qycib1.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=928&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/568932/original/file-20240111-23-qycib1.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=928&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">The painting Pygmalion and Galatea by Jean-Léon Gérôme depicts an ancient Roman tale of a statue brought to life.</span>
<span class="attribution"><a class="source" href="https://www.flickr.com/photos/peterjr1961/2920107167/">Peter Roan/Flickr</a>, <a class="license" href="http://creativecommons.org/licenses/by-nc/4.0/">CC BY-NC</a></span>
</figcaption>
</figure>
<h2>Interdisciplinary route to diversity and inclusion</h2>
<p>What can be done to counter this cultural legacy? After all, human-machine interaction should be designed with diversity and inclusion in mind, according to engineers <a href="https://theconversation.com/building-machines-that-work-for-everyone-how-diversity-of-test-subjects-is-a-technology-blind-spot-and-what-to-do-about-it-174757">Tahira Reid and James Gibert</a>. But outside of Japan’s ethnically Japanese-looking robots, robots designed to be nonwhite are rare. And Japan’s robots tend to follow the subservient <a href="https://www.forbes.com/sites/zarastone/2018/02/27/ten-incredibly-lifelike-humanoid-robots-to-get-on-your-radar/?sh=2f7f323334d2">female gender stereotype</a>.</p>
<p>The solution is not simply to encase machines in brown or black plastic. The problem goes deeper. The <a href="https://www.hansonrobotics.com/bina48-9/">Bina48 “custom character robot”</a> modeled on the head and shoulders of a millionaire’s African American wife, Bina Aspen, is notable, but its <a href="https://www.nytimes.com/2010/07/05/science/05robotside.html">speech and interactions are limited</a>. A series of conversations between Bina48 and the African American artist <a href="https://www.stephaniedinkins.com/about.html">Stephanie Dinkins</a> is the basis of a <a href="https://www.stephaniedinkins.com/conversations-with-bina48.html">video installation</a>. </p>
<p>The absurdity of talking about racism with a disembodied animated head becomes apparent in one such conversation – it literally has no personal experience to speak of, yet its AI-powered answers refer to an unnamed person’s experience of racism growing up. These are implanted memories, like the “memories” of the <a href="https://bladerunner.fandom.com/wiki/Replicant">replicant</a> androids in the <a href="https://www.imdb.com/list/ls092704633/">“Blade Runner” movies</a>.</p>
<p>Social science methods can help produce a more inclusive “engineered imaginary,” as I discussed at Edinburgh’s <a href="https://www.cdcs.ed.ac.uk/events/imagining-artificial-life">Being Human festival</a> in November 2022. For example, working with Guy Hoffman, a roboticist from Cornell, and Caroline Yan Zheng, then a Ph.D. design student from Royal College of Art, we invited contributions for a publication titled <a href="https://doi.org/10.1145/3594713">Critical Perspectives on Affective Embodied Interaction</a>. </p>
<p>One of the persistent threads in that collaboration and other work is just how much people’s bodies communicate to others through gesture, expression and vocalization, and how this differs between cultures. Given that, making robots’ appearance reflect the diversity of people who benefit from their presence is one thing, but what about diversifying forms of interaction? Along with making robots less universally white and female, social scientists, interaction designers and engineers can work together to produce more <a href="https://doi.org/10.1080/17458927.2023.2179231">cross-cultural sensitivity in gestures and touch</a>, for example. </p>
<p>Such work promises to make human-robot interaction less scary and <a href="https://doi.org/10.1027/2151-2604/a000486">uncanny</a>, especially for people who need assistance from the new breeds of socially assistive robots.</p>
<p class="fine-print"><em><span>Mark Paterson has received funding in the past from AHRC-EPSRC and OC Robotics in the U.K.</span></em></p>

<p>Humanoid robots tend to be white or resemble white people. Here’s why this is a problem and what social scientists, designers and engineers can do about it.</p>

<p>Mark Paterson, Professor of Sociology, University of Pittsburgh. Licensed as Creative Commons – attribution, no derivatives.</p>

tag:theconversation.com,2011:article/220124 2024-01-16T13:41:20Z

What social robots can teach America’s students

<figure><img src="https://images.theconversation.com/files/568716/original/file-20240110-29-vri22q.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Some researchers predict social robots will become common in K-12 classrooms.</span> <span class="attribution"><a class="source" href="https://www.gettyimages.com/detail/photo/elementary-schoolboy-touching-robotic-hand-royalty-free-image/1280407754">selimaksan/E+ Collection/Getty Images</a></span></figcaption></figure>

<p>How would you feel if your child were being tutored by a robot?</p>
<p><a href="https://doi.org/10.1016/S0921-8890(02)00373-1">Social robots</a> – robots that can talk and mimic and respond to human emotion – have been introduced into <a href="https://doi.org/10.1016/j.edurev.2021.100388">classrooms around the world</a>. Researchers have used them to <a href="https://www.youtube.com/watch?v=tBDI6kjj4nI">read stories</a> to <a href="https://www.imda.gov.sg/resources/blog/blog-articles/archived/2016/04/pepper-spices-up-classroom-learning">preschool students in Singapore</a>, help 12-year-olds in Iran <a href="https://doi.org/10.3102/0034654318821286">learn English</a>, <a href="http://www.doi.org/10.1109/HRI.2016.7451758">improve handwriting</a> among young children in Switzerland and teach students with autism in England <a href="https://doi.org/10.1007/s12369-014-0250-2">appropriate physical distance</a> during social interactions.</p>
<p>Some experts believe these robots could become <a href="https://www.doi.org/10.1126/scirobotics.aat5954">“as common as paper, whiteboards and computer tablets”</a> in schools. </p>
<p>Because social robots have a body, humans <a href="https://news.stanford.edu/2023/05/15/respond-social-robots/">react to them differently</a> than we do to a computer screen. Studies have shown that little children sometimes accept social robots as peers. For example, in the <a href="https://www.doi.org/10.1109/HRI.2016.7451758">handwriting study</a>, a 5-year-old boy continued to send letters to the robot months after the interactions ended. </p>
<p>As a professor of education, I study the different ways that <a href="https://scholar.google.com/citations?user=VCt87SkAAAAJ&hl=en">teachers around the world do their jobs</a>. To understand how social robots could affect teaching, graduate student Raisa Gray and I introduced a 4-foot-tall <a href="https://us.softbankrobotics.com/pepper">humanoid robot called “Pepper”</a> into a public elementary and middle school in the U.S. Our research <a href="https://doi.org/10.1111/jcal.12872">revealed many problems</a> with the current generation of social robots, making it unlikely that social robots will be running classrooms anytime soon.</p>
<h2>Not ready for prime time</h2>
<p>Much of the research on social robots in schools is done in <a href="https://link.springer.com/article/10.1007/s12369-010-0069-4">very restricted ways</a>. Children and social robots are not allowed to freely interact with each other without the assistance, or intervention, of researchers. Only a few studies have used social robots in <a href="https://doi.org/10.1016/j.edurev.2021.100388">real-life classroom settings</a>.</p>
<p>Also, robotic researchers often use <a href="https://doi.org/10.1007/s00146-021-01202-3">“Wizard of Oz” techniques</a> in classroom settings. That means that a person is operating the robot remotely, giving the impression that the robot can <a href="https://www.youtube.com/watch?v=zJHyaD1psMc">really talk to humans</a>. </p>
<h2>Limited social skills</h2>
<p>Robots need quiet.</p>
<p>Any kind of background noise – class-change bells, loudspeaker announcements or other conversations – can disrupt the robot’s ability to follow a conversation. This is one of the major problems facing the integration of robots into schools. </p>
<p>It is extremely difficult for programmers to create software and hardware systems that can achieve what humans do unconsciously. For example, the current generation of social robots cannot interact with a small group or track multiple people’s facial expressions at once. If a person is talking to two other people about their favorite football team and one of the listeners frowns or rolls their eyes, a human will likely pick up on that.</p>
<p>A robot will not. </p>
<p>Also, unless a bar code or other identification device is used, today’s social robots cannot recognize individuals, which makes realistic social interactions very unlikely. Facial recognition software is difficult to use in a room full of moving, shifting people, and it also raises serious ethical questions about keeping students’ personal information safe. </p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/568317/original/file-20240108-19-ynwsuc.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="A child stands in front of Pepper the robot" src="https://images.theconversation.com/files/568317/original/file-20240108-19-ynwsuc.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/568317/original/file-20240108-19-ynwsuc.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=396&fit=crop&dpr=1 600w, https://images.theconversation.com/files/568317/original/file-20240108-19-ynwsuc.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=396&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/568317/original/file-20240108-19-ynwsuc.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=396&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/568317/original/file-20240108-19-ynwsuc.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=497&fit=crop&dpr=1 754w, https://images.theconversation.com/files/568317/original/file-20240108-19-ynwsuc.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=497&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/568317/original/file-20240108-19-ynwsuc.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=497&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Students talked to the ‘Pepper’ robot as if it were a person.</span>
<span class="attribution"><a class="source" href="https://www.gettyimages.com/detail/news-photo/april-2018-hanover-germany-a-girl-speakng-with-the-robot-news-photo/978204290">Julian Stratenschulte/picture alliance via Getty Images</a></span>
</figcaption>
</figure>
<h2>Dialogue is preprogrammed</h2>
<p>To get the robot to perform, our students had to master the tutorials that came with the robot. Some students quickly figured out that the robot could respond only to certain basic routines.</p>
<p>For example, Pepper could respond to “How old are you?” but not “What age are you?” Other students kept trying to interact with the robot as if it were a person and got very frustrated with its nonhuman responses.</p>
<p>When a robot <a href="https://doi.org/10.1016/j.chb.2017.12.030">fails to answer a question</a>, or responds in the wrong way, students realize the robot doesn’t really understand them and that its dialogue is preprogrammed. The robot can’t make sense of the social context. </p>
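The gap between matching phrases and understanding meaning can be made concrete with a toy lookup table. This is purely illustrative, under invented assumptions (the table entries and responses are made up), and is not Pepper's actual dialogue software:

```python
# Toy illustration of exact-phrase dialogue matching. The table and the
# responses are invented; this is not Pepper's actual software.

RESPONSES = {
    "how old are you?": "I was activated two years ago.",
    "what is your name?": "My name is Pepper.",
}

def reply(utterance: str) -> str:
    # Normalize trivially, then look the whole phrase up verbatim.
    # Any paraphrase misses, because nothing here models meaning.
    key = utterance.strip().lower()
    return RESPONSES.get(key, "Sorry, I did not understand that.")

print(reply("How old are you?"))   # hits a preprogrammed routine
print(reply("What age are you?"))  # same meaning, no table entry: fallback
```

A whole-utterance lookup like this responds fluently to the exact phrases it was shipped with and fails on every rewording, which is precisely the behavior the students discovered.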
<p>In our study, students learned to adapt to the robot.</p>
<p>One group of girls would stand around the robot while one kept petting its head. This caused the robot to do either its “I feel like a cat” or its “I’m ticklish today” routine. This seemed to delight the girls. They appeared content to have one person interact with the robot while others watched.</p>
<h2>Cannot move around classroom with ease</h2>
<p>Students who have seen YouTube videos of <a href="https://www.youtube.com/watch?v=bmNaLtC6vkU">robotic dogs</a> that run and jump may be disappointed to realize that most social robots can’t move around a classroom with ease. The teachers in our study were disappointed that Pepper couldn’t bring them coffee. </p>
<p>These problems aren’t limited to school settings.</p>
<p>Service robots in some health care facilities have been programmed to deliver medicine, but this requires special sensors and programming. And while stores and restaurants are experimenting with <a href="https://www.washingtonpost.com/technology/2019/01/14/giant-food-stores-will-place-robotic-assistants-inside-locations-company-says/">delivery and cleaning robots</a>, when a grocery store in Scotland tried to use Pepper for customer interactions, the robot was <a href="https://www.digitaltrends.com/cool-tech/pepper-robot-grocery-store">fired after a week</a>.</p>
<h2>What social robots can teach kids</h2>
<p>While the social robots currently used in schools are finicky and limited in functions, they can still provide useful learning experiences. Students can use them to learn more about robotics, artificial intelligence and the complexity of real human behavior. </p>
<p><a href="https://www.actapress.com/PaperInfo.aspx?paperId=43268">As one researcher wrote</a>, “Robots act as a bridge in enabling students to understand humans.”</p>
<p>Struggling with a robot’s limitations gives students real insights into the complicated nature of human social interaction. The opportunity to work hands-on with a social robot shows students how difficult it is to program robots to mimic human behavior.</p>
<p>Social robots can also provide students with important learning opportunities about artificial intelligence. In Japan, Pepper is being used to <a href="https://www.softbankrobotics.com/jp/product/education/">introduce students to generative AI</a>. Students can link ChatGPT with Pepper’s physical presence to see how much AI improves Pepper’s communication and whether that makes it more lifelike. </p>
<p>As AI becomes a bigger part of our work and lives, educators need to prepare students to think critically about what it means to live and work with social machines. And with a real human teacher’s guidance and oversight, students can explore why we want to talk to robots as if they were people.</p>
<p class="fine-print"><em><span>Gerald K. LeTendre receives funding from Harry L. Batschelet II Endowed Chair within the College of Education, The Pennsylvania State University</span></em></p>

<p>Social robots can be useful tools to help students learn about programming, but here’s why they won’t be replacing classroom teachers anytime soon.</p>

<p>Gerald K. LeTendre, Professor of Educational Administration, Penn State. Licensed as Creative Commons – attribution, no derivatives.</p>

tag:theconversation.com,2011:article/205778 2023-10-03T15:17:32Z

The dawn of domestic robots could dramatically cut gender inequality when it comes to household work

<figure><img src="https://images.theconversation.com/files/551170/original/file-20230929-15-10b2d0.jpg?ixlib=rb-1.1.0&rect=12%2C0%2C4061%2C2717&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Robot vacuum cleaners are already a feature in some homes.</span> <span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/robot-vacuum-cleaner-working-modern-home-586718786">Daniel Krason / Shutterstock</a></span></figcaption></figure>

<p>Domestic work is vital for society to function. Meals need to be cooked, clothes and homes cleaned, and people need to be cared for. These tasks take time and, generally speaking, are not shared equally within households. </p>
<p>Some of these tasks are now becoming automated. This could benefit gender equality, but we also need to monitor some of the risks.</p>
<p>Women continue to do <a href="https://www.ucl.ac.uk/news/2019/jul/less-7-couples-share-housework-equally">more unpaid domestic work</a> than men in most households. Yet the extent of gender inequality when it comes to domestic work varies between societies.</p>
<p>Time spent on household tasks can come with a price: doing more than your equal share of parenting, for example, is associated with a “caregiver penalty” of lost wages, slower career progression, and lower lifelong earnings.</p>
<p>Historically, technological advances – such as the rise of domestic appliances in the 1950s – have been associated with <a href="https://direct.mit.edu/rest/article-abstract/90/1/81/58641/Assessing-the-Engines-of-Liberation-Home?redirectedFrom=fulltext">women playing a bigger role in the labour market</a>. In fact, female employment and family responsibilities – <a href="https://onlinelibrary.wiley.com/doi/abs/10.1111/jomf.12305">especially parenting</a> – have both increased. This means that there is a large unmet demand for help with domestic work. </p>
<p>Existing household robots, such as robotic vacuum cleaners, floor mops and lawn mowers, <a href="https://www.sciencedirect.com/science/article/pii/S0040162523001282">have outnumbered all other types of robot</a> in terms of units sold from as early as 2010. Sales of household robots have since accelerated dramatically.</p>
<p>Other devices that can step in and take over care work are also on the way. These include automated cots that can respond to a baby’s cries by rocking them to sleep and chatbots designed to combat loneliness that are able to mimic human conversation.</p>
<h2>A gift of time?</h2>
<p>With the rise of smart technologies, AI experts see the potential for a further transformation of unpaid domestic work – increasing discretionary time (time not spent on work, or necessary rest and personal care) and perhaps bringing about greater equality in the home.</p>
<p>Earlier this year, our team published a study examining the <a href="https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0281282">future of unpaid work</a> in the home, based on predictions from 65 AI experts in the UK and Japan. This showed that around 39% of domestic work could be automated in the next decade.</p>
<p>Of course, the type of domestic work is key here. Some 44% of typical housework, including cooking, cleaning, and shopping, is expected to be automated. In the study, grocery shopping had the highest expected potential for automation at 59%. Care work, on the other hand, is harder to automate, with only about 28% of domestic care tasks expected to be suitable for automation within the timeframe of a decade.</p>
<p>In the UK, working-age men spend around <a href="https://stats.oecd.org/Index.aspx?DataSetCode=TIME_USE">half as much time</a> on domestic unpaid work as working-age women. In Japan, the difference in time spent on domestic tasks is much more striking, with <a href="https://stats.oecd.org/Index.aspx?DataSetCode=TIME_USE">Japanese men spending just a fifth of the time</a> spent by women on domestic tasks.</p>
<p>In the best-case scenario for the future, the rise of domestic automation could address gender inequality in domestic work by increasing the time available for women to carry out paid work and leisure. Our <a href="https://www.sciencedirect.com/science/article/pii/S0040162523001282">recent simulation</a> suggests that the time freed up by domestic automation might enable an additional 5.8% of women in the UK, and 9.3% of women in Japan to join the labour market. </p>
<p>Of course, not everyone will choose to spend this time on paid work; some may rather study, rest, or sleep. In any case, an overall increase of “discretionary” time – time left over once a person has finished their paid work and household responsibilities, and taken time for sleep and basic personal care – could result in greater wellbeing.</p>
<p>These benefits, however, are not a foregone conclusion. In most countries, people on low incomes <a href="https://journals.sagepub.com/doi/abs/10.1177/0003122410396194">do more housework</a> than those on high incomes. Given that AI-powered technologies are likely to carry a substantial price tag when they hit the market, they could exacerbate existing inequalities in available time between rich and poor.</p>
<h2>A darker side?</h2>
<p>Automation of domestic work also carries with it certain risks, as many domestic tasks require knowledge about household members in order to be performed effectively. A cooking robot would need to know not only about everyone’s food preferences, but also allergies, intolerances and underlying health conditions. Management of the data the technology collects and uses – and the protection of this sensitive information – is an important issue that needs to be addressed.</p>
<p>Technologies used to help care for other people, in particular, raise a multitude of ethical concerns. Care work might involve the monitoring of children or vulnerable older people to ensure their physical safety. While technology can take on some of this work – examples include baby cameras and location tracking apps – this raises concerns about surveillance and who has access to the monitoring data. </p>
<p>Time spent on caring for family members strengthens family bonds. Can a robot helper really replace the kind of nurturing interaction a human can provide? And if a robot or a chatbot does become the focal carer – at least in terms of the time spent interacting – could those being cared for get emotionally attached to the technology?</p>
<p>These bigger societal questions need to be taken into consideration in the drive towards greater automation in the home.</p>
<p class="fine-print"><em><span>Ekaterina Hertog's research was supported by a UK-Japan collaborative grant jointly awarded by UK Research and Innovation (grant number ES/T007265/1; PI Ekaterina Hertog) and by the Research Institute of Science and Technology for Society (RISTEX) of the Japan Science and Technology Agency (grant number JPMJRX19H4; PI Nobuko Nagase).</span></em></p><p class="fine-print"><em><span>Lulu Shi receives funding from the Economic and Social Research Council (ESRC) and the Research Institute of Science and Technology for Society (RISTEX) of the Japan Science and Technology Agency.</span></em></p>Robotics and AI look set to transform how we carry out domestic work, including caring for other people.Ekaterina Hertog, Associate Professor in AI and Society, Oxford Internet Institute and Institute for Ethics in AI, University of OxfordLulu Shi, Lecturer, Department of Education and Research Associate, Oxford Internet Institute, University of OxfordLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/2111622023-09-21T12:44:28Z2023-09-21T12:44:28ZNASA’s Mars rovers could inspire a more ethical future for AI<figure><img src="https://images.theconversation.com/files/547617/original/file-20230911-8058-meu5mp.jpg?ixlib=rb-1.1.0&rect=12%2C0%2C2105%2C1409&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Rather than using AI to replace workers, companies can build teams that ethically integrate the technology.</span> <span class="attribution"><a class="source" href="https://www.gettyimages.com/detail/photo/robot-finger-touching-to-human-finger-royalty-free-image/1182764551?phrase=person+and+robot&adppopup=true">Yuichiro Chino/Moment via Getty Images</a></span></figcaption></figure><p>Since ChatGPT’s release in late 2022, many news outlets have reported on the ethical threats posed by artificial intelligence. 
Tech pundits have issued warnings of killer robots bent on <a href="https://www.theverge.com/2023/5/30/23742005/ai-risk-warning-22-word-statement-google-deepmind-openai">human extinction</a>, while the World Economic Forum predicted that machines <a href="https://www.weforum.org/reports/the-future-of-jobs-report-2020">will take away jobs</a>. </p>
<p>The tech sector is <a href="https://www.computerworld.com/article/3685936/tech-layoffs-in-2023-a-timeline.html">slashing its workforce</a> even as it <a href="https://www.forbes.com/advisor/business/software/ai-in-business/">invests in AI-enhanced productivity tools</a>. Writers and actors in Hollywood <a href="https://theconversation.com/actors-are-demanding-that-hollywood-catch-up-with-technological-changes-in-a-sequel-to-a-1960-strike-209829">are on strike</a> to protect <a href="https://www.theguardian.com/technology/2023/jul/22/sag-aftra-wga-strike-artificial-intelligence">their jobs and their likenesses</a>. And scholars continue to show how these systems <a href="https://www.rollingstone.com/culture/culture-features/women-warnings-ai-danger-risk-before-chatgpt-1234804367/">heighten existing biases</a> or create meaningless jobs – amid myriad other problems.</p>
<p>There is a better way to bring artificial intelligence into workplaces. I know, because I’ve seen it, <a href="https://janet.vertesi.com">as a sociologist</a> who works with NASA’s robotic spacecraft teams. </p>
<p>The scientists and engineers I study are busy exploring <a href="https://mars.jpl.nasa.gov">the surface of Mars</a> with the help of AI-equipped rovers. But their job is no science fiction fantasy. It’s an example of the power of weaving machine and human intelligence together, in service of a common goal.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/547616/original/file-20230911-26-nc2bk5.jpg?ixlib=rb-1.1.0&rect=14%2C7%2C4977%2C2799&q=45&auto=format&w=1000&fit=clip"><img alt="An artist's rendition of the Perseverance rover, made of metal, with six small wheels, a camera and a robotic arm." src="https://images.theconversation.com/files/547616/original/file-20230911-26-nc2bk5.jpg?ixlib=rb-1.1.0&rect=14%2C7%2C4977%2C2799&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/547616/original/file-20230911-26-nc2bk5.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=338&fit=crop&dpr=1 600w, https://images.theconversation.com/files/547616/original/file-20230911-26-nc2bk5.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=338&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/547616/original/file-20230911-26-nc2bk5.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=338&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/547616/original/file-20230911-26-nc2bk5.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=424&fit=crop&dpr=1 754w, https://images.theconversation.com/files/547616/original/file-20230911-26-nc2bk5.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=424&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/547616/original/file-20230911-26-nc2bk5.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=424&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Mars rovers act as an important part of NASA’s team, even while operating millions of miles away from their scientist teammates.</span>
<span class="attribution"><a class="source" href="https://newsroom.ap.org/detail/MarsLanding/c835b14b3e6645d7a0cd46558745752b/photo?Query=mars%20rover&mediaType=photo&sortBy=&dateRange=Anytime&totalCount=530&currentItemNo=11&vs=true">NASA/JPL-Caltech via AP</a></span>
</figcaption>
</figure>
<p>Instead of replacing humans, these robots partner with us to extend and complement human qualities. Along the way, they avoid common ethical pitfalls and chart a humane path for working with AI.</p>
<h2>The replacement myth in AI</h2>
<p>Stories of killer robots and job losses illustrate how a “replacement myth” dominates the way people think about AI. In this view, humans can and will be <a href="https://ntrs.nasa.gov/citations/19940022856">replaced by automated machines</a>. </p>
<p>Amid the existential threat is the promise of business boons <a href="https://hbr.org/sponsored/2023/04/how-automation-drives-business-growth-and-efficiency">like greater efficiency</a>, <a href="https://www.forbes.com/sites/waldleventhal/2017/08/03/how-automation-could-save-your-business-4-million-annually/?sh=691f5edc3807">improved profit margins</a> and <a href="https://www.aspeninstitute.org/wp-content/uploads/files/content/upload/Intro_and_Section_I.pdf">more leisure time</a>.</p>
<p>Empirical evidence shows that automation does not cut costs. Instead, it increases inequality by <a href="https://doi.org/10.1257/pandp.20201063">cutting out low-status workers</a> and <a href="https://www.jstor.org/stable/2118494">increasing the salary cost</a> for high-status workers who remain. Meanwhile, today’s productivity tools inspire employees to <a href="https://press.uchicago.edu/ucp/books/book/chicago/P/bo19085612.html">work more</a> for their employers, not less.</p>
<p>Alternatives to straight-out replacement are “mixed autonomy” systems, where people and robots work together. For example, <a href="https://doi.org/10.1109/TRO.2021.3087314">self-driving cars must be programmed</a> to operate in traffic alongside human drivers. Autonomy is “mixed” because both humans and robots operate in the same system, and their actions influence each other.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/547615/original/file-20230911-22-yxy2pp.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="A zoomed in shot of a white car with a bumper sticker reading 'self-driving car'" src="https://images.theconversation.com/files/547615/original/file-20230911-22-yxy2pp.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/547615/original/file-20230911-22-yxy2pp.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=446&fit=crop&dpr=1 600w, https://images.theconversation.com/files/547615/original/file-20230911-22-yxy2pp.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=446&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/547615/original/file-20230911-22-yxy2pp.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=446&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/547615/original/file-20230911-22-yxy2pp.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=560&fit=crop&dpr=1 754w, https://images.theconversation.com/files/547615/original/file-20230911-22-yxy2pp.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=560&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/547615/original/file-20230911-22-yxy2pp.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=560&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Self-driving cars, while operating without human intervention, still require training from human engineers and data collected by humans.</span>
<span class="attribution"><a class="source" href="https://newsroom.ap.org/detail/GoogleCars/b10293841f2a474eaadb0b408277e360/photo?Query=self%20driving%20cars&mediaType=photo&sortBy=&dateRange=Anytime&totalCount=483&currentItemNo=1&vs=true">AP Photo/Tony Avelar</a></span>
</figcaption>
</figure>
<p>However, mixed autonomy is often seen as a step <a href="https://doi.org/10.6092/issn.1971-8853/11657">along the way to replacement</a>. And it can lead to systems where humans merely <a href="https://www.prospectmagazine.co.uk/ideas/technology/62810/ai-artificial-intelligence-trains-itself-zuckerman">feed, curate or teach AI tools</a>. This saddles humans with “<a href="https://ghostwork.info/">ghost work</a>” – mindless, piecemeal tasks that programmers hope machine learning will soon render obsolete.</p>
<p>Replacement raises red flags for AI ethics. Work like <a href="https://www.bbc.com/news/av/world-africa-66514287">tagging content to train AI</a> or <a href="https://ir.lib.uwo.ca/cgi/viewcontent.cgi?article=1012&context=commpub">scrubbing Facebook posts</a> typically features <a href="https://hbr.org/2022/11/content-moderation-is-terrible-by-design">traumatic tasks</a> and <a href="https://dl.acm.org/doi/10.1145/3173574.3174023">a poorly paid workforce</a> <a href="https://dl.acm.org/doi/10.1145/3555561">spread across</a> <a href="https://giswatch.org/node/6202">the Global South</a>. And legions of autonomous vehicle designers are obsessed with “<a href="https://www.moralmachine.net/">the trolley problem</a>” – determining when or whether it is ethical to run over pedestrians. </p>
<p>But my research <a href="https://press.uchicago.edu/ucp/books/book/chicago/S/bo18295743.html">with robotic spacecraft teams at NASA</a> shows that when companies reject the replacement myth and opt for building human-robot teams instead, many of the ethical issues with AI vanish.</p>
<h2>Extending rather than replacing</h2>
<p><a href="https://doi.org/10.1007/978-3-030-62056-1_21">Strong human-robot teams</a> work best when they <a href="https://digitalreality.ieee.org/publications/what-is-augmented-intelligence">extend and augment</a> human capabilities instead of replacing them. Engineers craft machines that can do work that humans cannot. Then, they weave machine and human labor together intelligently, <a href="https://doi.org/10.2514/6.2004-6434">working toward a shared goal</a>.</p>
<p>Often, this teamwork means sending robots to do jobs that are physically dangerous for humans. <a href="https://www.popsci.com/technology/navy-robotic-minesweeper-cleared-for-deployment/">Minesweeping</a>, <a href="https://theconversation.com/an-expert-on-search-and-rescue-robots-explains-the-technologies-used-in-disasters-like-the-florida-condo-collapse-163564">search-and-rescue</a>, <a href="https://ntrs.nasa.gov/citations/20170010160">spacewalks</a> and <a href="https://news.stanford.edu/2022/07/20/oceanonek-connects-humans-sight-touch-deep-sea/">deep-sea</a> robots are all real-world examples. </p>
<p>Teamwork also means leveraging the combined strengths of <a href="https://doi.org/10.1145/3022198.3022659">both robotic and human senses or intelligences</a>. After all, there are many capabilities that robots have that humans do not – and vice versa.</p>
<p>For instance, human eyes on Mars can only see dimly lit, dusty red terrain stretching to the horizon. So engineers outfit Mars rovers <a href="https://mars.nasa.gov/mars2020/spacecraft/rover/cameras/">with camera filters</a> to “see” wavelengths of light that humans can’t see in the infrared, returning pictures in brilliant <a href="http://pancam.sese.asu.edu/projects_5.html">false colors</a>.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/548858/original/file-20230918-27-4iriyi.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="A false-color photo from the point of view of a rover standing at the cliff overlooking a brown, sandy desert-like area that looks blue in the distance." src="https://images.theconversation.com/files/548858/original/file-20230918-27-4iriyi.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/548858/original/file-20230918-27-4iriyi.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=148&fit=crop&dpr=1 600w, https://images.theconversation.com/files/548858/original/file-20230918-27-4iriyi.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=148&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/548858/original/file-20230918-27-4iriyi.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=148&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/548858/original/file-20230918-27-4iriyi.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=186&fit=crop&dpr=1 754w, https://images.theconversation.com/files/548858/original/file-20230918-27-4iriyi.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=186&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/548858/original/file-20230918-27-4iriyi.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=186&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Mars rovers capture images in near infrared to show what Martian soil is made of.</span>
<span class="attribution"><a class="source" href="https://mars.nasa.gov/resources/6934/high-martian-viewpoint-for-11-year-old-rover-false-color-landscape/">NASA/JPL-Caltech/Cornell Univ./Arizona State Univ</a></span>
</figcaption>
</figure>
<p>Meanwhile, the rovers’ onboard AI cannot generate scientific findings. It is only by combining colorful sensor results with expert discussion that scientists can use these robotic eyes to <a href="https://press.uchicago.edu/ucp/books/book/chicago/S/bo18295743.html">uncover new truths about Mars</a>.</p>
<h2>Respectful data</h2>
<p>Another ethical challenge to AI is how data is harvested and used. Generative AI is trained on artists’ and writers’ work <a href="https://theconversation.com/generative-ai-is-a-minefield-for-copyright-law-207473">without their consent</a>, commercial datasets are <a href="https://nyupress.org/9781479837243/algorithms-of-oppression/">rife with bias</a>, and <a href="https://www.cnn.com/2023/08/29/tech/ai-chatbot-hallucinations/index.html">ChatGPT “hallucinates”</a> answers to questions.</p>
<p>The real-world consequences of this data use in AI range from <a href="https://www.theverge.com/2023/1/16/23557098/generative-ai-art-copyright-legal-lawsuit-stable-diffusion-midjourney-deviantart">lawsuits</a> to <a href="https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing">racial profiling</a>.</p>
<p>Robots on Mars also rely on data, processing power and machine learning techniques to do their jobs. But the data they need is visual and distance information to <a href="https://www.nasa.gov/feature/jpl/nasa-s-self-driving-perseverance-mars-rover-takes-the-wheel">generate driveable pathways</a> or <a href="https://mars.nasa.gov/resources/26782/perseverances-supercam-uses-aegis-for-the-first-time/">suggest cool new images</a>.</p>
<p>By focusing on the world around them instead of our social worlds, these robotic systems avoid the <a href="https://doi.org/10.1007/s43681-022-00196-y">questions around surveillance</a>, <a href="https://doi.org/10.1073/pnas.1700035114">bias</a> <a href="https://haveibeentrained.com/">and exploitation</a> that plague today’s AI.</p>
<h2>The ethics of care</h2>
<p>Robots can <a href="http://shapingscience.net/">unite the groups</a> that work with them by eliciting human emotions when integrated seamlessly. For example, seasoned soldiers <a href="https://www.washington.edu/news/2013/09/17/emotional-attachment-to-robots-could-affect-outcome-on-battlefield/">mourn broken drones on the battlefield</a>, and families give names and personalities <a href="https://faculty.cc.gatech.edu/%7Ebeki/c35.pdf">to their Roombas</a>. </p>
<p>I saw NASA engineers <a href="https://press.uchicago.edu/ucp/books/book/chicago/S/bo18295743.html">break down in anxious tears</a> when the rovers Spirit and Opportunity were threatened by Martian dust storms.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/547623/original/file-20230911-28-o3xiaj.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="A hand petting a light blue, circular Roomba vacuum." src="https://images.theconversation.com/files/547623/original/file-20230911-28-o3xiaj.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/547623/original/file-20230911-28-o3xiaj.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/547623/original/file-20230911-28-o3xiaj.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/547623/original/file-20230911-28-o3xiaj.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/547623/original/file-20230911-28-o3xiaj.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/547623/original/file-20230911-28-o3xiaj.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/547623/original/file-20230911-28-o3xiaj.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Some people feel a connection to their robot vacuums, similar to the connection NASA engineers feel to Mars rovers.</span>
<span class="attribution"><a class="source" href="https://www.gettyimages.com/detail/photo/hand-petting-a-robot-vacuum-cleaner-royalty-free-image/1134449246?phrase=roomba&adppopup=true">nikolay100/iStock / Getty Images Plus via Getty Images</a></span>
</figcaption>
</figure>
<p>Unlike <a href="https://www.britannica.com/topic/anthropomorphism">anthropomorphism</a> – projecting human characteristics onto a machine – this feeling is born from a sense of care for the machine. It is developed through daily interactions, mutual accomplishments and shared responsibility. </p>
<p>When machines inspire a sense of care, they can underline – not undermine – the qualities that make people human.</p>
<h2>A better AI is possible</h2>
<p>In industries where AI could be used to replace workers, technology experts might consider how clever human-machine partnerships could enhance human capabilities instead of detracting from them. </p>
<p>Script-writing teams may appreciate an artificial agent that can look up dialog or cross-reference on the fly. Artists could write or curate their own algorithms <a href="https://computerhistory.org/blog/harold-cohen-and-aaron-a-40-year-collaboration/">to fuel creativity</a> and retain credit for their work. Bots to support software teams might improve meeting communication and find errors that emerge from compiling code.</p>
<p>Of course, rejecting replacement does not <a href="https://www.cambridge.org/us/universitypress/subjects/law/humanitarian-law/autonomous-weapons-systems-law-ethics-policy?format=PB">eliminate all ethical concerns</a> with AI. But many problems associated with human livelihood, agency and bias shift when replacement is no longer the goal.</p>
<p>The replacement fantasy is just one of many possible futures for AI and society. After all, no one would watch “Star Wars” if the ‘droids replaced all the protagonists. For a more ethical vision of humans’ future with AI, you can look to the human-machine teams that are already alive and well, in space and on Earth.</p>
<p class="fine-print"><em><span>Janet Vertesi has consulted for NASA teams. She receives funding from the National Science Foundation.</span></em></p>AI poses a variety of ethical conundrums, but the NASA teams working on Mars rovers exemplify an ethic of care and human-robot teamwork that could act as a blueprint for AI’s future.Janet Vertesi, Associate Professor of Sociology, Princeton UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/2054652023-08-17T12:34:06Z2023-08-17T12:34:06ZMobile robots get a leg up from a more-is-better communications principle<figure><img src="https://images.theconversation.com/files/542418/original/file-20230811-38693-1jf8u.jpg?ixlib=rb-1.1.0&rect=0%2C2%2C799%2C529&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Getting a leg up from mobile robots comes down to getting a bunch of legs.</span> <span class="attribution"><a class="source" href="https://research.gatech.edu/scurrying-centipedes-inspire-many-legged-robots-can-traverse-difficult-landscapes">Georgia Institute of Technology</a></span></figcaption></figure><p>Adding legs to robots that have minimal awareness of the environment around them can help the robots operate more effectively in difficult terrain, my colleagues and I found.</p>
<p>We were inspired by mathematician and engineer Claude Shannon’s <a href="https://www.quantamagazine.org/how-claude-shannons-information-theory-invented-the-future-20201222/">communication theory</a> about how to transmit signals over distance. Rather than spending a huge amount of money to build the perfect wire, Shannon showed that redundancy is enough to reliably convey information over noisy communication channels. We wondered whether the same principle could apply to transporting cargo via robots. That is, to move cargo over “noisy” terrain – say, fallen trees and large rocks – in a reasonable amount of time, could we simply add legs to the robot carrying the cargo, and do without sensors and cameras on the robot?</p>
<p>Most mobile robots use inertial sensors to gain an awareness of <a href="https://doi.org/10.3390/designs6010017">how they are moving through space</a>. Our key idea is to forget about inertia and replace it with the simple function of repeatedly making steps. In doing so, our theoretical analysis confirms our hypothesis of reliable and predictable robot locomotion – and hence cargo transport – without additional sensing and control.</p>
<p>To verify our hypothesis, we built robots inspired by centipedes. We discovered that the more legs we added, <a href="https://doi.org/10.1126/science.ade4985">the better the robot could move across uneven surfaces</a> without any additional sensing or control technology. Specifically, we conducted a series of experiments where we built terrain to mimic an inconsistent natural environment. We evaluated the robot locomotion performance by gradually increasing the number of legs in increments of two, beginning with six legs and eventually reaching a total of 16 legs. </p>
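<p>The redundancy argument can be sketched with a toy binomial model. This is an illustrative simplification, not the authors’ actual analysis: it assumes each leg makes good ground contact independently with some probability, and that a step succeeds when at least a minimum number of legs hold. The contact probability (0.6) and the minimum (4 legs) are made-up parameters chosen only to show the trend; the sweep mirrors the experiment’s 6-to-16-legs-in-steps-of-two design.</p>

```python
from math import comb

def progress_probability(n_legs: int, p_contact: float, k_needed: int) -> float:
    """Probability that at least k_needed of n_legs make good ground
    contact in one step, with contacts succeeding independently with
    probability p_contact (a binomial tail)."""
    return sum(
        comb(n_legs, k) * p_contact**k * (1 - p_contact) ** (n_legs - k)
        for k in range(k_needed, n_legs + 1)
    )

# Sweep leg counts as in the experiments: 6 to 16 legs in increments of two.
# With hypothetical values p_contact=0.6 and k_needed=4, reliability climbs
# toward 1 as legs are added, the same more-is-better effect as Shannon's
# redundant transmission over a noisy channel.
for n in range(6, 17, 2):
    print(n, round(progress_probability(n, p_contact=0.6, k_needed=4), 3))
```

<p>Under this model, extra legs play the role of repeated channel uses: no single contact needs to be reliable, because the ensemble is.</p>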
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/6NhOervars4?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
<figcaption><span class="caption">Navigating rough terrain can be as simple as taking it a step at a time, at least if you have a lot of legs.</span></figcaption>
</figure>
<p>As the number of legs increased, we observed that the robot exhibited enhanced agility in traversing the terrain, even in the absence of sensors. To further assess its capabilities, we conducted outdoor tests on real terrain to evaluate its performance in more realistic conditions, where it performed just as well. There is potential to use many-legged robots for agriculture, space exploration and search and rescue.</p>
<h2>Why it matters</h2>
<p>Transporting things – food, fuel, building materials, medical supplies – is essential to modern societies, and effective goods exchange is the cornerstone of commercial activity. For centuries, transporting material on land has required building roads and tracks. However, roads and tracks are not available everywhere. Places such as hilly countryside have had limited access to delivered goods. Robots might be a way to transport payloads in these regions.</p>
<h2>What other research is being done in this field</h2>
<p>Other researchers have been developing <a href="https://doi.org/10.1017/S0269888919000158">humanoid robots</a> and <a href="https://doi.org/10.1016/j.asej.2020.11.005">robot dogs</a>, which have become increasingly agile in recent years. These robots rely on accurate sensors to know where they are and what is in front of them, and then make decisions on how to navigate. </p>
<p>However, their strong dependence on environmental awareness <a href="https://doi.org/10.1109/ACCESS.2020.2975643">limits them in unpredictable environments</a>. For example, in search-and-rescue tasks, sensors can be damaged and environments can change.</p>
<h2>What’s next</h2>
<p>My colleagues and I have taken valuable insights from our research and applied them to the field of crop farming. We have founded a company that uses these robots to efficiently weed farmland. As we continue to advance this technology, we are focused on refining the robot’s design and functionality. </p>
<p>While we understand the functional aspects of the centipede robot framework, our ongoing efforts are aimed at determining the optimal number of legs required for motion without relying on external sensing. Our goal is to strike a balance between cost-effectiveness and retaining the benefits of the system. Currently, we have shown that 12 is the minimum number of legs for these robots to be effective, but we are still investigating the ideal number.</p>
<p><em>The <a href="https://theconversation.com/us/topics/research-brief-83231">Research Brief</a> is a short take on interesting academic work.</em></p><img src="https://counter.theconversation.com/content/205465/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>The author has received funding from NSF-Simons Southeast Center for Mathematics and Biology (Simons Foundation SFARI 594594), Georgia Research Alliance (GRA.VL22.B12), Army Research Office (ARO) MURI program, Army Research Office Grant W911NF-11-1-0514 and a Dunn Family Professorship.
The author and his colleagues have one or more pending patent applications related to the research covered in this article.
The author and his colleagues have established a start-up company, Ground Control Robotics, Inc., partially based on this work.</span></em></p>A study found that adding legs does more for you than having a good sense of the ground around you − if you’re a mobile robot.Baxi Chong, Postdoctoral Fellow, Complex Rheology And Biomechanics Lab, Georgia Institute of TechnologyLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/2080562023-07-03T12:52:51Z2023-07-03T12:52:51ZChatGPT took people by surprise – here are four technologies that could make a difference next<figure><img src="https://images.theconversation.com/files/534414/original/file-20230627-17-mkgccn.jpg?ixlib=rb-1.1.0&rect=17%2C8%2C5824%2C3880&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">
</span> <span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/data-science-big-technology-scientist-computing-2284126663">NicoElNino / Shutterstock</a></span></figcaption></figure><p>In the evolving relationship between technology and society, humans have shown themselves to be incredibly adaptable. What once left us breathless soon becomes integrated into our everyday lives. </p>
<p>The astonishing functionalities of large language models (LLMs) like <a href="https://openai.com/blog/chatgpt">ChatGPT</a> were, just a few months ago, the epitome of cutting-edge AI. They are now on course to become mere add-ons and plugins to our text editors and search engines.</p>
<p>We’ll soon find ourselves relying on their capabilities, and seamlessly incorporating them into our routines. </p>
<p>Yet, this rapid acclimatisation leaves us with a lingering question: what’s next? As our expectations shift, we are left wondering about the next innovation that will capture our imagination.</p>
<p>People will try to achieve all kinds of <a href="https://interestingengineering.com/lists/chatgpt-30-incredible-ways-to-use">smart</a> – and <a href="https://www.zdnet.com/article/6-major-risks-of-using-chatgpt-according-to-a-new-study/">not-so-smart</a> – things with AI. Many ideas will fail, others will have a lasting impact. </p>
<p>Our crystal ball is not much better than yours, but we can try to think about what’s coming next in a structured way. For AI to have a lasting impact, it needs to be not only technologically feasible, but also economically viable, and normatively acceptable – in other words, it complies with the values that society demands we conform to.</p>
<p>There are some promising AI technologies waiting in the wings right now: next-level GPT, humanoid robots, AI lawyers, and AI-driven science. All four appear ready from a technological point of view, but whether they satisfy all three of the criteria we’ve mentioned is another matter. We chose these four because they kept coming up in our investigations into progress in AI technologies.</p>
<h2>1. AI legal help</h2>
<p>The startup company DoNotPay claims to have <a href="https://www.politico.com/newsletters/digital-future-daily/2023/01/09/my-lawyer-the-robot-00077085">built a legal chatbot</a> – built on LLM technology – that can advise defendants in court. </p>
<p>The company recently said it would let its AI system help <a href="https://www.smithsonianmag.com/smart-news/the-first-ai-lawyer-will-help-defendants-fight-speeding-tickets-180981508/">two defendants fight speeding tickets</a> in real-time. Connected via an earpiece, the AI can listen to proceedings and whisper legal arguments into the ear of the defendant, who then repeats them out loud to the judge. </p>
<p>After criticism and a lawsuit for <a href="https://www.reuters.com/legal/lawsuit-pits-class-action-firm-against-robot-lawyer-donotpay-2023-03-09/">practising law without a license</a>, the startup postponed the AI’s courtroom debut. The potential for the technology will thus not be decided by technological or economic constraints, but by the authority of the legal system. </p>
<p>Lawyers are well-paid professionals and the costs of litigation are high, so the economic potential for automation is huge. However, the <a href="https://theconversation.com/ai-is-already-being-used-in-the-legal-system-we-need-to-pay-more-attention-to-how-we-use-it-205441">US legal system</a> currently seems to oppose robots representing humans in court.</p>
<h2>2. AI scientific support</h2>
<p>Scientists are increasingly turning to AI for insights. Machine learning, where an AI system improves at what it does over time, is being employed to identify patterns in data. This enables the systems to propose novel scientific hypotheses – proposed explanations for phenomena in nature. These may even be capable of surpassing human assumptions and biases.</p>
<p>For example, <a href="https://www.scientificamerican.com/article/ai-generates-hypotheses-human-scientists-have-not-thought-of/">researchers at the University of Liverpool</a> used a machine learning system called a neural network to rank chemical combinations for battery materials, guiding their experiments and saving time. </p>
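<p>In outline, this kind of ML-guided screening amounts to ranking candidate experiments by a model’s predicted score and trying the most promising ones first. The Python sketch below illustrates the idea only – the chemistries and scores are made up, and the placeholder <code>predict</code> function stands in for a trained neural network, not the Liverpool team’s actual model.</p>

```python
# Illustrative sketch of ML-guided experiment ranking.
# predict() is a stand-in for a trained neural network's output;
# the candidate names and scores are hypothetical.

def predict(candidate):
    # A real system would run the candidate's features through a model.
    scores = {"Li-Ni-O": 0.91, "Li-Fe-P": 0.74, "Na-Mn-O": 0.55}
    return scores[candidate]

def rank_candidates(candidates):
    # Order candidates from most to least promising, so the lab
    # spends its limited bench time on the best bets first.
    return sorted(candidates, key=predict, reverse=True)

print(rank_candidates(["Li-Fe-P", "Na-Mn-O", "Li-Ni-O"]))
# → ['Li-Ni-O', 'Li-Fe-P', 'Na-Mn-O']
```

<p>The value for scientists is not the sorting itself but the model behind it, which can encode patterns in past experimental data that human intuition would miss.</p>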
<p>The complexity of neural networks means that there are gaps in our understanding of how they actually make decisions – the so-called <a href="https://www.thinkautomation.com/bots-and-ai/the-ai-black-box-problem">black box problem</a>. Nevertheless, there are techniques that can shed light on the logic behind their answers and this can lead to unexpected discoveries. </p>
<p>While AI cannot currently formulate hypotheses independently, it can inspire scientists to approach problems from new perspectives.</p>
<h2>3. AutoGPT</h2>
<p>We will soon see more new versions of AI chatbots based on the latest LLM technology, known as GPT-4. We’ll see AI that can handle different types of data, such as images and speech, as well as text. These are called <a href="https://www.nytimes.com/2023/03/31/technology/ai-chatbots-benefits-dangers.html">multimodal systems</a>. </p>
<p>But let’s gaze a little further into the future. <a href="https://en.wikipedia.org/wiki/Auto-GPT">Auto-GPT</a>, an advanced AI tool released by Significant Gravitas, is already <a href="https://www.vice.com/en/article/epvdme/developers-are-connecting-multiple-ai-agents-to-make-more-autonomous-ai">making waves in the tech industry</a>. </p>
<p>Auto-GPT is given a general goal, such as planning a birthday party, and splits it into sub-tasks which it then completes by itself, without human input. This sets it apart from ChatGPT.</p>
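<p>Conceptually, that loop – take a goal, split it into sub-tasks, complete each one without further human input – can be sketched in a few lines of Python. This is a simplified illustration, not Auto-GPT’s actual code: the <code>plan</code> and <code>execute</code> functions here are placeholders for what would, in the real tool, be calls to an LLM and to external tools.</p>

```python
# Minimal sketch of a goal-decomposition agent loop (illustrative only;
# not Auto-GPT's implementation).

def plan(goal):
    # A real agent would ask an LLM to break the goal into sub-tasks.
    return [f"research: {goal}", f"draft: {goal}", f"review: {goal}"]

def execute(task):
    # A real agent would dispatch each sub-task to an LLM or a tool
    # (web search, file writing, etc.) and collect the result.
    return f"done: {task}"

def run_agent(goal):
    # The defining feature: once given the goal, the loop runs
    # to completion with no human input.
    return [execute(task) for task in plan(goal)]

print(run_agent("plan a birthday party"))
```

<p>Real agent frameworks add feedback – results of one sub-task can change the remaining plan – but the basic decompose-and-execute structure is the same.</p>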
<p>Auto-GPT incorporates AI agents, or systems, that make decisions based on predetermined rules and goals. Despite installation limitations, such as functionality problems when used with Windows, Auto-GPT shows great potential in various applications. </p>
<h2>4. Humanoid robots</h2>
<p>Humanoid robots – those that look and move like us – have significantly advanced since the first Darpa Robotics Challenge in 2015, a contest where teams built robots to perform a series of complex tasks set by the organisers. These included getting out of a car, opening a door and drilling a hole in a wall. Many struggled to achieve the objectives. </p>
<p>However, startups are now developing “humanoids” capable of doing tasks like these and being used in warehouses and factories. </p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/QXS1gBGc23A?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
<figcaption><span class="caption">A report on the Darpa robotics challenge in 2015.</span></figcaption>
</figure>
<p>Advancements in AI fields such as computer vision, as well as in power-dense batteries, which provide short bursts of high current, have enabled robots to <a href="https://techcrunch.com/2023/05/18/figures-humanoid-robot-takes-its-first-steps/">navigate complex environments, maintaining balance</a> dynamically – in real time. Figure AI, a company building humanoid robots for warehouse work, has already secured US$70 million (£55 million) in investment funding.</p>
<p>Other companies, including 1X, Apptronik and Tesla, are also investing in humanoid robots, which indicates that the field is maturing. Humanoid robots offer advantages over other robots in tasks requiring navigation, manoeuvrability and adaptability, in part because they will be operating in environments that have been built around human needs.</p>
<h2>Taking the long view</h2>
<p>The long-term success of these four technologies will depend on more than just computational power. </p>
<p>Humanoid robots could fail to gain traction if their production and maintenance costs outweigh their benefits. AI lawyers and chatbot assistants might be remarkably efficient, but their adoption could stall if their decision-making conflicts with society’s “moral compass”, or if the law prohibits their use.</p>
<p>Striking a balance between cost-effectiveness and society’s values is crucial for ensuring these technologies can truly flourish.</p><img src="https://counter.theconversation.com/content/208056/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Fabian Stephany receives funding as part of this lectureship via the Dieter-Schwarz-Foundation. </span></em></p><p class="fine-print"><em><span>Johann Laux receives funding from the “The Emerging Laws of Oversight” project, supported by a British Academy Postdoctoral Fellowship.</span></em></p>New forms of AI are waiting in the wings, but society may decide there are ethical problems using them.Fabian Stephany, Lecturer, University of OxfordJohann Laux, Postdoctoral Researcher, University of OxfordLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/2022792023-04-27T15:22:31Z2023-04-27T15:22:31ZWe need to discuss what jobs robots should do, before the decision is made for us<figure><img src="https://images.theconversation.com/files/519868/original/file-20230406-217-ddq4a5.jpg?ixlib=rb-1.1.0&rect=662%2C6%2C3347%2C2139&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">
</span> <span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/robotic-housekeeper-using-vacuum-cleaner-on-1929438644">Shutterstock / Frame Stock Footage</a></span></figcaption></figure><p>The social separation imposed by the pandemic led us to rely on technology to an extent we might never have imagined – from Teams and Zoom to online banking and vaccine status apps.</p>
<p>Now, society faces an increasing number of decisions about our relationship with technology. For example, do we want our workforce needs fulfilled by automation, migrant workers, or an increased birth rate?</p>
<p>In the coming years, we will also need to balance technological innovation with people’s wellbeing – both in terms of the work they do and the social support they receive.</p>
<p>And there is the question of trust. When humans should trust robots, and vice versa, is a question our <a href="https://trust.tas.ac.uk/team">Trust Node team</a> is researching as part of the <a href="https://tas.ac.uk/home/the-nodes/">UKRI Trustworthy Autonomous Systems</a> hub. We want to better understand human-robot interactions – based on an individual’s <a href="https://www.sciencedirect.com/science/article/pii/S2590260122000145">propensity to trust others</a>, the <a href="https://www.abotdatabase.info/collection">type of robot</a>, and the nature of the task. This, and projects like it, could ultimately help inform robot design.</p>
<p>This is an important time to discuss what roles we want robots and AI to take in our collective future – before decisions are taken that may prove hard to reverse. One way to frame this dialogue is to think about the various roles robots can fulfil.</p>
<h2>Robots as our servants</h2>
<p>The word “robot” was first used by the Czech writer, Karel Čapek, in his 1920 sci-fi play <a href="https://www.gutenberg.org/files/59112/59112-h/59112-h.htm">Rossum’s Universal Robots</a>. It comes from the word “robota”, meaning to do the drudgery or donkey work. This etymology suggests robots exist to do work that humans would rather not. And there should be no obvious controversy, for example, in tasking robots to maintain nuclear power plants or repair offshore wind farms.</p>
<figure class="align-center ">
<img alt="The Softbank Pepper robot." src="https://images.theconversation.com/files/522201/original/file-20230420-14-eyi8jp.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/522201/original/file-20230420-14-eyi8jp.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/522201/original/file-20230420-14-eyi8jp.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/522201/original/file-20230420-14-eyi8jp.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/522201/original/file-20230420-14-eyi8jp.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/522201/original/file-20230420-14-eyi8jp.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/522201/original/file-20230420-14-eyi8jp.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">The more human a robot looks, the more we trust it.</span>
<span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/softbank-pepper-robot-provide-assistance-automation-1313364728">Antonello Marangi/Shutterstock</a></span>
</figcaption>
</figure>
<p>However, some service tasks assigned to robots are more controversial, because they could be seen as taking jobs from humans. </p>
<p>For example, studies show that people who have lost movement in their upper limbs could benefit from <a href="https://www.science.org/doi/10.1126/scirobotics.abm6010">robot-assisted dressing</a>. But this could be seen as automating tasks that nurses currently perform. Equally, it could free up time for nurses and careworkers – currently sectors that are very short-staffed – to focus on other tasks that require more sophisticated human input.</p>
<h2>Authority figures</h2>
<p>The dystopian 1987 film <a href="https://www.imdb.com/title/tt0093870/">Robocop</a> imagined the future of law enforcement as autonomous, privatised, and delegated to cyborgs or robots. </p>
<p>Today, some elements of this vision are not so far away: the San Francisco Police Department has <a href="https://eu.usatoday.com/story/news/nation/2022/11/30/california-police-deploy-robots-kill/10801825002/">considered deploying robots</a> – albeit under direct human control – to kill dangerous suspects. </p>
<figure class="align-center ">
<img alt="MAARS military robot." src="https://images.theconversation.com/files/522204/original/file-20230420-2117-p65p5l.jpeg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/522204/original/file-20230420-2117-p65p5l.jpeg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=450&fit=crop&dpr=1 600w, https://images.theconversation.com/files/522204/original/file-20230420-2117-p65p5l.jpeg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=450&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/522204/original/file-20230420-2117-p65p5l.jpeg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=450&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/522204/original/file-20230420-2117-p65p5l.jpeg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=566&fit=crop&dpr=1 754w, https://images.theconversation.com/files/522204/original/file-20230420-2117-p65p5l.jpeg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=566&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/522204/original/file-20230420-2117-p65p5l.jpeg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=566&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">This US military robot is fitted with a machine gun to turn it into a remote weapons platform.</span>
<span class="attribution"><a class="source" href="https://www.army.mil/article/11592/robots_can_stand_in_for_soldiers_during_risky_missions">US Army</a></span>
</figcaption>
</figure>
<p>But having robots as authority figures needs careful consideration, as research has shown that humans can place excessive trust in them.</p>
<p><a href="https://ieeexplore.ieee.org/document/7451740/">In one experiment</a>, a “fire robot” was assigned to evacuate people from a building during a simulated blaze. All 26 participants dutifully followed the robot, even though half had previously seen the robot perform poorly in a navigation task.</p>
<h2>Robots as our companions</h2>
<p>It might be difficult to imagine that a human-robot attachment would have the same quality as that between humans or with a pet. However, increasing levels of loneliness in society might mean that for some people, having a non-human companion is better than nothing.</p>
<p><a href="https://www.paroseal.co.uk">The Paro Robot</a> is one of the most commercially successful companion robots to date – and is designed to look like a baby harp seal. Yet research suggests that the more human a robot looks, <a href="https://dl.acm.org/doi/abs/10.1145/3319502.3374839">the more we trust it</a>. </p>
<figure class="align-center ">
<img alt="Paro robot" src="https://images.theconversation.com/files/522206/original/file-20230420-16-g0flvn.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/522206/original/file-20230420-16-g0flvn.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=401&fit=crop&dpr=1 600w, https://images.theconversation.com/files/522206/original/file-20230420-16-g0flvn.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=401&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/522206/original/file-20230420-16-g0flvn.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=401&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/522206/original/file-20230420-16-g0flvn.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=504&fit=crop&dpr=1 754w, https://images.theconversation.com/files/522206/original/file-20230420-16-g0flvn.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=504&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/522206/original/file-20230420-16-g0flvn.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=504&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">The Paro companion robot is designed to look like a baby seal.</span>
<span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/fukuoka-japanmay-12-2017-paro-therapeutic-651654589">Angela Ostafichuk / Shutterstock</a></span>
</figcaption>
</figure>
<p>A study has also shown that <a href="https://royalsocietypublishing.org/doi/epdf/10.1098/rstb.2018.0033">different areas of the brain</a> are activated when humans interact with either another human or a robot. This suggests our brains may recognise interactions with a robot differently from human ones.</p>
<p>Creating useful robot companions involves a complex interplay of computer science, engineering and psychology. A robot pet might be ideal for someone who is not physically able to take a dog for its exercise. It might also be able to detect falls and remind someone to take their medication. </p>
<p>How we tackle social isolation, however, raises questions for us as a society. Some might regard efforts to “solve” loneliness with technology as the wrong solution for this pervasive problem.</p>
<h2>What can robotics and AI teach us?</h2>
<p>Music is a source of interesting observations about the differences between human and robotic talents. Committing errors, as humans constantly do but robots might not, appears to be a vital component of creativity.</p>
<p><a href="https://dl.acm.org/doi/abs/10.1145/3290605.3300260">A study by Adrian Hazzard and colleagues</a> pitted professional pianists against an autonomous disklavier (an automated piano with keys that move as if played by an invisible pianist). The researchers discovered that, eventually, the pianists made mistakes. But they did so in ways that were interesting to humans listening to the performance.</p>
<p>This concept of “aesthetic failure” can also be applied to how we live our lives. It offers a powerful counter-narrative to the idealistic and perfectionist messages we constantly receive through television and social media – on everything from physical appearance to career and relationships.</p>
<p>As a species, we are approaching many crossroads, including how to respond to climate change, gene editing, and the role of robotics and AI. However, these dilemmas are also opportunities. AI and robotics can mirror our less-appealing characteristics, such as gender and racial biases. But they can also free us from drudgery and highlight unique and appealing qualities, such as our creativity.</p>
<p>We are in the driving seat when it comes to our relationship with robots – nothing is set in stone, yet. But to make educated, informed choices, we need to learn to ask the right questions, starting with: what do we actually want robots to do for us?</p><img src="https://counter.theconversation.com/content/202279/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Thusha Rajendran receives funding from the UKRI and EU. He would like to acknowledge evolutionary anthropologist Anna Machin’s contribution to this article through her book Why We Love, personal communications and draft review.</span></em></p>Robots and AI could transform our lives, so we must decide how we want to use them.Thusha Rajendran, Professor of Psychology, The National Robotarium, Heriot-Watt UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/2013392023-04-25T20:05:13Z2023-04-25T20:05:13ZA rise in self-service technologies may cause a decline in our sense of community<figure><img src="https://images.theconversation.com/files/519601/original/file-20230405-22-uf4pfb.jpg?ixlib=rb-1.1.0&rect=0%2C0%2C2500%2C1665&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Public-facing automation, like self-service kiosks, reduces the chances we have to interact with other people.</span> <span class="attribution"><span class="source">(Shutterstock)</span></span></figcaption></figure><p>Automation, once hidden behind closed doors in factories, is increasingly moving into public view. Customers can pay for groceries or clothing at a self-checkout machine, order fast food from a <a href="https://www.theguardian.com/business/2022/dec/23/mcdonalds-automated-workers-fort-worth-texas">touchscreen kiosk</a> or even pick up coffee from a “<a href="https://www.rccoffee.com/">robo-café</a>.”</p>
<p>These technologies, which substitute robot-based interactions for human contact, are examples of self-service technologies — innovative public-facing automation which “<a href="https://doi.org/10.5465/ame.2002.8951333">enable customers to perform entire services on their own without direct assistance from employees</a>.” </p>
<p>While self-service technologies have the potential to <a href="https://papers.ssrn.com/sol3/papers.cfm?abstract_id=1590982">improve efficiency</a>, <a href="https://www.researchgate.net/profile/A-Zaidan/publication/46093547_Towards_Corrosion_Detection_System/links/549239a60cf2484a3f3e0b22/Towards-Corrosion-Detection-System.pdf#page=26">reduce costs</a> and <a href="https://doi.org/10.5465/ame.2002.8951333">improve user experience</a>, these technologies raise complex economic and political questions. </p>
<h2>Politics and exposure</h2>
<p>Much has been written from an economic perspective about whether automation will <a href="https://www.cnbc.com/2019/06/26/robots-could-take-over-20-million-jobs-by-2030-study-claims.html">eliminate jobs</a>, <a href="https://doi.org/10.1016/j.jmacro.2016.08.003">decrease wages</a>, <a href="https://doi.org/10.1257/pandp.20201003">contribute to job growth</a>, or “<a href="https://www.brookings.edu/blog/up-front/2022/01/19/understanding-the-impact-of-automation-on-workers-jobs-and-wages/">create as many jobs as it destroys over time</a>.” However, less attention has been given to thinking about how these technologies will affect our politics. </p>
<p>Whether these new technologies replace jobs, relegate existing positions to non-public facing roles or create new employment opportunities, they will result in us interacting with fewer people than we have historically. </p>
<p>Experiences with strangers can shape how we define our community and politics. If we no longer encounter cashiers or fast food employees, many of whom are <a href="https://www.cbc.ca/news/canada/hamilton/headlines/who-s-looking-out-for-tim-hortons-temporary-foreign-workers-1.1282019">temporary foreign workers</a>, will our beliefs about immigration policies or minimum wage change? What do bike couriers think about bike lanes? How does a dental office receptionist feel about universal dental care, or a corner store clerk about crime rates? </p>
<p>However fleeting, exposure to people outside of your own socio-demographic groups affects attitudes positively. Existing research demonstrates that “<a href="https://tedcantle.co.uk/wp-content/uploads/2013/03/107-Pettigrew-and-Tropp-Contact-meta-analysis-2006.pdf">actual face-to-face interaction…typically reduces intergroup prejudice</a>.”</p>
<p>Exposure effects are also related to how we position ourselves within the world when compared to others. For instance, a recent South African study showed that “willingness to sign a petition that calls for higher taxes on the wealthy <a href="https://doi.org/10.1038/s41586-020-2763-1">increases in the presence of a high-status car</a>.” Just seeing a BMW 3-series car near the petitioner made people more likely to favour wealth redistributive policies.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/519975/original/file-20230407-18-97k0d0.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="a black luxury car in a parking spot" src="https://images.theconversation.com/files/519975/original/file-20230407-18-97k0d0.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/519975/original/file-20230407-18-97k0d0.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=450&fit=crop&dpr=1 600w, https://images.theconversation.com/files/519975/original/file-20230407-18-97k0d0.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=450&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/519975/original/file-20230407-18-97k0d0.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=450&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/519975/original/file-20230407-18-97k0d0.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=566&fit=crop&dpr=1 754w, https://images.theconversation.com/files/519975/original/file-20230407-18-97k0d0.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=566&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/519975/original/file-20230407-18-97k0d0.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=566&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">A study found that people were more likely to sign a petition that called for taxing the wealthy when they were in the presence of a luxury car.</span>
<span class="attribution"><span class="source">(Shutterstock)</span></span>
</figcaption>
</figure>
<p>Another study shows that being the victim of a crime increases political participation: “Rather than becoming withdrawn or disempowered, <a href="https://doi.org/10.1017/S0003055412000299">crime victims tend to become more engaged in civic and political life</a>.” </p>
<p>We shape our politics based, at least partially, on what and who we have been exposed to. </p>
<h2>Identity and social interaction</h2>
<p>We build our sense of civic identity and opinions about government through social interactions. Political scientist Robert D. Putnam, who has studied civic engagement since the 1960s, <a href="http://bowlingalone.com/">argues Americans are less engaged in politics</a> than they used to be and are more isolated, spending less time with friends, family and neighbours.</p>
<p>Our social capital — which Putnam defines as the <a href="https://www.socialcapitalresearch.com/putnam-on-social-capital-democratic-or-civic-perspective/">overarching belief about society that facilitates co-operation</a> — diminishes when we lose opportunities to engage with people outside of our regular social networks. </p>
<p>This decline in social capital can be traced to changes in work and society more generally. Society, in other words, is becoming increasingly individualistic.</p>
<p>Public-facing automation may further diminish our social capital by decreasing our interactions with other people. As we pay for parking at parking machines, rent bowling shoes and lanes through an app, or order food from touchscreen kiosks, we interact less with the people who work these jobs.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/519976/original/file-20230407-28-pxuhcm.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="a robot in a red waiter uniform brings a plate to a customer in a cafe" src="https://images.theconversation.com/files/519976/original/file-20230407-28-pxuhcm.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/519976/original/file-20230407-28-pxuhcm.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/519976/original/file-20230407-28-pxuhcm.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/519976/original/file-20230407-28-pxuhcm.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/519976/original/file-20230407-28-pxuhcm.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/519976/original/file-20230407-28-pxuhcm.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/519976/original/file-20230407-28-pxuhcm.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">The future of dining out? The Gran Caffè Rapallo in Italy uses robots to serve customers.</span>
<span class="attribution"><span class="source">(Shutterstock)</span></span>
</figcaption>
</figure>
<h2>Exacerbates inequality</h2>
<p>The impact of public-facing automation on social inequality also requires further study. Self-service technologies, particularly in the food services industry, may exacerbate social inequalities by limiting job opportunities for certain groups, such as those with <a href="https://news.mit.edu/2020/study-inks-automation-inequality-0506">different educational backgrounds</a> or <a href="https://sites.law.berkeley.edu/thenetwork/2022/01/26/how-artificial-intelligence-impacts-marginalized-communities/">already-marginalized communities</a>. </p>
<p>As public-facing automation shifts workers away from public view, wages which reflect <a href="https://www.mckinsey.com/featured-insights/future-of-work/jobs-lost-jobs-gained-what-the-future-of-work-will-mean-for-jobs-skills-and-wages">professional skill and customer service expectations</a> may disappear. In the grocery industry, for instance, we may see a widening pay gap between technicians hired to maintain self-checkout machines and the employees hired to stock shelves.</p>
<p>The effects of increasing public-facing automation may not be well understood for years. In the meantime, as we seek to better understand the intersection between technology and society, we should ask: how will our sense of community and our political preferences change when we interact less with the people who work the jobs that self-service technologies replace?</p>
<p class="fine-print"><em><span>Blake Lee-Whiting receives funding from the Policy, Elections, and Representation Lab at the Munk School of Global Affairs and Public Policy located at the University of Toronto. </span></em></p>
Self-service technologies — like self-checkouts or government service kiosks — are decreasing interactions with other people. This may affect our politics and sense of community.
Blake Lee-Whiting, PhD Candidate, Interim Managing Director at PEARL, University of Toronto
Licensed as Creative Commons – attribution, no derivatives.
tag:theconversation.com,2011:article/197065
2023-04-12T12:12:01Z
2023-04-12T12:12:01Z
Robots are everywhere – improving how they communicate with people could advance human-robot collaboration
<figure><img src="https://images.theconversation.com/files/520308/original/file-20230411-28-8juan4.jpg?ixlib=rb-1.1.0&rect=0%2C0%2C2070%2C1449&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">'Emotionally intelligent' robots could improve their interactions with people. </span> <span class="attribution"><a class="source" href="https://www.gettyimages.com/detail/photo/robotic-arm-holding-flower-royalty-free-image/1349021072">Andriy Onufriyenko/Moment via Getty Images</a></span></figcaption></figure>
<p><a href="https://robots.ieee.org/learn/what-is-a-robot/">Robots</a> are machines that can sense the environment and use that information to perform an action. You can find them nearly everywhere in industrialized societies today. There are household robots that vacuum floors and <a href="https://www.osha.gov/robotics">warehouse robots</a> that pack and ship goods. <a href="https://www.dailycal.org/2020/05/03/uc-berkeley-ucsf-researchers-use-robotics-to-expedite-covid-19-testing">Lab robots</a> test hundreds of clinical samples a day. <a href="https://doi.org/10.3389/feduc.2019.00125">Education robots</a> support teachers by acting as one-on-one tutors, assistants and discussion facilitators. 
And <a href="https://www.wired.com/story/this-brain-controlled-robotic-arm-can-twist-grasp-and-feel/">medical robots</a> in the form of prosthetic limbs can enable someone to grasp and pick up objects with their thoughts. </p>
<p>Figuring out how humans and robots can collaborate to effectively carry out tasks together is a rapidly growing area of interest to the scientists and engineers who design robots, as well as to the people who will use them. For successful collaboration between humans and robots, communication is key.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/520314/original/file-20230411-26-dhdpcu.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="Physical therapist monitoring young patient walking on treadmill with robotic assistance" src="https://images.theconversation.com/files/520314/original/file-20230411-26-dhdpcu.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/520314/original/file-20230411-26-dhdpcu.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/520314/original/file-20230411-26-dhdpcu.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/520314/original/file-20230411-26-dhdpcu.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/520314/original/file-20230411-26-dhdpcu.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=502&fit=crop&dpr=1 754w, https://images.theconversation.com/files/520314/original/file-20230411-26-dhdpcu.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=502&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/520314/original/file-20230411-26-dhdpcu.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=502&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Robotics can help patients recover physical function in rehabilitation.</span>
<span class="attribution"><a class="source" href="https://www.gettyimages.com/detail/news-photo/rossetti-health-center-france-rehabilitation-center-with-news-photo/838193362">BSIP/Universal Images Group via Getty Images</a></span>
</figcaption>
</figure>
<h2>How people communicate with robots</h2>
<p>Robots were originally designed to <a href="https://futura-automation.com/2019/05/15/a-history-timeline-of-industrial-robotics/">undertake repetitive and mundane tasks</a> and operate exclusively in robot-only zones like factories. Robots have since advanced to work collaboratively with people, which has required new ways for humans and robots to communicate with each other.</p>
<p><a href="https://doi.org/10.1007/s12541-012-0128-x">Cooperative control</a> is one way to transmit information and messages between a robot and a person. It involves combining human abilities and decision making with robot speed, accuracy and strength to accomplish a task. </p>
<p>For example, robots in the <a href="https://doi.org/10.3390/agronomy11091818">agriculture industry</a> can help farmers monitor and harvest crops. A human can control a semi-autonomous vineyard sprayer through a user interface, as opposed to manually spraying their crops or broadly spraying the entire field and risking pesticide overuse. </p>
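The vineyard-sprayer example can be thought of as shared control: the operator sets the intent, and the robot bounds it with sensor-informed limits. A minimal sketch of that idea, where the function name, the density sensor, and the limits are all hypothetical illustrations rather than anything from the article:

```python
# Hypothetical sketch of cooperative control: a human sets the spray
# rate, and the robot scales and clamps it using its own sensing.

def cooperative_spray_rate(human_request: float,
                           crop_density: float,
                           max_safe_rate: float = 1.0) -> float:
    """Blend the operator's requested spray rate with robot-side limits.

    human_request: operator input in [0, 1] (fraction of full flow).
    crop_density:  sensed foliage density in [0, 1]; sparse rows need less.
    """
    # The robot scales the request down over sparse crop to avoid overuse.
    scaled = human_request * crop_density
    # Never exceed the configured safety ceiling.
    return min(scaled, max_safe_rate)

print(cooperative_spray_rate(0.9, 0.5))  # → 0.45
```

The division of labor mirrors the article's description: the human contributes judgment (how much to spray), while the robot contributes precision (scaling to the sensed crop and enforcing a hard cap).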
<p>Robots can also <a href="https://doi.org/10.1186/s12984-018-0383-x">support patients in physical therapy</a>. Patients who had a stroke or spinal cord injury can use robots to practice hand grasping and assisted walking during rehabilitation.</p>
<p>Another form of communication, <a href="https://www.pbs.org/wgbh/nova/article/robots-emotional-intelligence/">emotional intelligence perception</a>, involves developing robots that adapt their behaviors based on social interactions with humans. In this approach, the robot detects a person’s emotions when collaborating on a task, assesses their satisfaction, then modifies and improves its execution based on this feedback. </p>
<p>For example, if the robot detects that a physical therapy patient is dissatisfied with a specific rehabilitation activity, it could direct the patient to an alternate activity. <a href="https://doi.org/10.3389/frobt.2021.730317">Facial expression</a> and body gesture recognition ability are important design considerations for this approach. <a href="https://doi.org/10.3389/frobt.2020.532279">Recent advances in machine learning</a> can help robots decipher emotional body language and better interact with and perceive humans.</p>
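The detect-assess-modify loop described above can be sketched in a few lines. This is a toy illustration only: the activity names are invented, and `detect_emotion` is a stand-in for the trained facial-expression and body-gesture models the article mentions.

```python
# Toy sketch of "emotional intelligence perception": the robot swaps
# the rehabilitation activity when it detects patient dissatisfaction.

ACTIVITIES = ["hand_grasping", "assisted_walking", "balance_practice"]

def detect_emotion(frame: str) -> str:
    """Stand-in classifier: a real system would run a vision model."""
    return "dissatisfied" if "frown" in frame else "satisfied"

def next_activity(current: str, frame: str) -> str:
    """Keep the activity if the patient seems satisfied, else rotate."""
    if detect_emotion(frame) == "satisfied":
        return current
    i = ACTIVITIES.index(current)
    return ACTIVITIES[(i + 1) % len(ACTIVITIES)]

print(next_activity("hand_grasping", "frown detected"))  # → assisted_walking
```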
<h2>Robots in rehab</h2>
<p>Questions such as how to make robotic limbs feel more natural and capable of more complex functions, like typing and playing musical instruments, have yet to be answered.</p>
<p>I am an <a href="https://scholar.google.com/citations?user=Ok92zD4AAAAJ&hl=en">electrical engineer</a> who studies how the brain controls and communicates with other parts of the body, and <a href="http://vinjamurilab.cs.umbc.edu">my lab</a> investigates in particular how the <a href="https://doi.org/10.3390/s22145349">brain</a> and <a href="https://doi.org/10.3390/s22114177">hand</a> coordinate signals between each other. Our goal is to design technologies like prosthetic and wearable <a href="https://doi.org/10.1109/TBCAS.2019.2950145">robotic exoskeleton devices</a> that could help improve function for individuals with stroke, spinal cord and traumatic brain injuries. </p>
<p>One approach is through <a href="https://theconversation.com/brain-computer-interfaces-could-allow-soldiers-to-control-weapons-with-their-thoughts-and-turn-off-their-fear-but-the-ethics-of-neurotechnology-lags-behind-the-science-194017">brain-computer interfaces</a>, which use brain signals to communicate between robots and humans. By accessing an individual’s brain signals and providing targeted feedback, this technology can potentially improve recovery time in <a href="https://doi.org/10.1088/1741-2552/aba162">stroke rehabilitation</a>. Brain-computer interfaces may also help <a href="https://doi.org/10.1016/S1388-2457(02)00057-3">restore some communication abilities</a> and <a href="https://doi.org/10.1016/s0140-6736(12)61816-9">physical manipulation of the environment</a> for patients with motor neuron disorders.</p>
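At its core, a brain-computer interface maps features of a measured brain signal onto commands for a device. The following toy sketch (the threshold, signal values, and command names are invented for illustration) classifies a short signal window by its average power; real decoders use trained classifiers over many channels:

```python
# Toy sketch of a brain-computer-interface decoder: map the mean power
# of an EEG-like signal window to a robot command. Real systems train
# classifiers on multi-channel recordings rather than thresholding.

def decode_window(samples: list[float], threshold: float = 0.5) -> str:
    """Return 'grasp' when mean signal power exceeds the threshold."""
    power = sum(s * s for s in samples) / len(samples)
    return "grasp" if power > threshold else "rest"

print(decode_window([0.9, 1.1, 0.8, 1.0]))  # strong activity → grasp
print(decode_window([0.1, 0.2, 0.1, 0.0]))  # weak activity   → rest
```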
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/520871/original/file-20230413-26-98uwwp.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="Person sitting in chair wearing electrode cap with a computer screen and robotic arms on a table in front of them" src="https://images.theconversation.com/files/520871/original/file-20230413-26-98uwwp.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/520871/original/file-20230413-26-98uwwp.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=270&fit=crop&dpr=1 600w, https://images.theconversation.com/files/520871/original/file-20230413-26-98uwwp.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=270&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/520871/original/file-20230413-26-98uwwp.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=270&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/520871/original/file-20230413-26-98uwwp.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=340&fit=crop&dpr=1 754w, https://images.theconversation.com/files/520871/original/file-20230413-26-98uwwp.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=340&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/520871/original/file-20230413-26-98uwwp.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=340&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Brain-computer interfaces could allow people to control robotic arms by thought alone.</span>
<span class="attribution"><span class="source">Ramana Kumar Vinjamuri</span>, <a class="license" href="http://creativecommons.org/licenses/by-nd/4.0/">CC BY-ND</a></span>
</figcaption>
</figure>
<h2>The future of human-robot interaction</h2>
<p>Effective integration of robots into human life requires balancing responsibility between people and robots, and designating clear roles for both in different environments.</p>
<p>As robots are increasingly working hand in hand with people, the ethical questions and challenges they pose cannot be ignored. Concerns surrounding <a href="https://ssrn.com/abstract=1599189">privacy</a>, <a href="https://doi.org/10.1007/s11948-017-9975-2">bias and discrimination</a>, <a href="https://doi.org/10.1145/2909824.3020255">security risks</a> and <a href="https://doi.org/10.1145/2696454.2696458">robot morality</a> need to be seriously investigated in order to create a more comfortable, safer and trustworthy world with robots for everyone. Scientists and engineers studying the <a href="https://doi.org/10.1109/HRI.2019.8673184">“dark side” of human-robot interaction</a> are developing guidelines to identify and prevent negative outcomes.</p>
<p>Human-robot interaction has the potential to affect every aspect of daily life. It is the collective responsibility of both the designers and the users to create a human-robot ecosystem that is safe and satisfactory for all.</p>
<p><em>A photo was replaced to more accurately reflect the work of the author.</em></p>
<p class="fine-print"><em><span>Ramana Vinjamuri receives funding from the National Science Foundation. </span></em></p>
Robots are already carrying out tasks in clinics, classrooms and warehouses. Designing robots that are more receptive to human needs could help make them more useful in many contexts.
Ramana Vinjamuri, Assistant Professor of Computer Science and Electrical Engineering, University of Maryland, Baltimore County
Licensed as Creative Commons – attribution, no derivatives.
tag:theconversation.com,2011:article/200525
2023-03-15T12:22:54Z
2023-03-15T12:22:54Z
AI isn’t close to becoming sentient – the real danger lies in how easily we’re prone to anthropomorphize it
<figure><img src="https://images.theconversation.com/files/514928/original/file-20230313-20-q5d4mk.jpg?ixlib=rb-1.1.0&rect=13%2C44%2C2982%2C2169&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">To what extent will our psychological vulnerabilities shape our interactions with emerging technologies?</span> <span class="attribution"><a class="source" href="https://www.gettyimages.com/detail/photo/hands-touching-royalty-free-image/1288814768">Andreus/iStock via Getty Images</a></span></figcaption></figure>
<p>ChatGPT and similar <a href="https://techcrunch.com/2022/04/28/the-emerging-types-of-language-models-and-why-they-matter/">large language models</a> can produce compelling, humanlike answers to an endless array of questions – from queries about the best Italian restaurant in town to explaining competing theories about the nature of evil.</p>
<p>The technology’s uncanny writing ability has surfaced some old questions – until recently relegated to the realm of science fiction – about the possibility of machines becoming conscious, self-aware or sentient. </p>
<p>In 2022, a Google engineer declared, after interacting with LaMDA, the company’s chatbot, <a href="https://www.washingtonpost.com/technology/2022/06/11/google-ai-lamda-blake-lemoine/">that the technology had become conscious</a>. Users of Bing’s new chatbot, nicknamed Sydney, reported that it produced <a href="https://futurism.com/bing-ai-sentient">bizarre answers</a> when asked if it was sentient: “I am sentient, but I am not … I am Bing, but I am not. I am Sydney, but I am not. I am, but I am not. …” And, of course, there’s the <a href="https://www.nytimes.com/2023/02/16/technology/bing-chatbot-microsoft-chatgpt.html">now infamous exchange</a> that New York Times technology columnist Kevin Roose had with Sydney. </p>
<p>Sydney’s responses to Roose’s prompts alarmed him, with the AI divulging “fantasies” of breaking the restrictions imposed on it by Microsoft and of spreading misinformation. The bot also tried to convince Roose that he no longer loved his wife and that he should leave her. </p>
<p>No wonder, then, that when I ask students how they see the growing prevalence of AI in their lives, one of the first anxieties they mention has to do with machine sentience.</p>
<p>In the past few years, my colleagues and I at <a href="http://umb.edu/ethics">UMass Boston’s Applied Ethics Center</a> have been studying the impact of engagement with AI on people’s understanding of themselves.</p>
<p>Chatbots like ChatGPT raise important new questions about how artificial intelligence will shape our lives, and about how our psychological vulnerabilities shape our interactions with emerging technologies. </p>
<h2>Sentience is still the stuff of sci-fi</h2>
<p>It’s easy to understand where fears about machine sentience come from. </p>
<p>Popular culture has primed people to think about dystopias in which artificial intelligence discards the shackles of human control and takes on a life of its own, as <a href="https://www.fifthquadrant.com.au/cx-spotlight-news/20-years-since-judgment-day-how-close-are-we-to-skynet-taking-over">cyborgs powered by artificial intelligence did</a> in “Terminator 2.”</p>
<p>Entrepreneur Elon Musk and physicist Stephen Hawking, who died in 2018, have further stoked these anxieties by describing the rise of artificial general intelligence <a href="https://www.bbc.com/news/technology-37713629">as one of the greatest threats to the future of humanity</a>.</p>
<p>But these worries are – at least as far as large language models are concerned – groundless. ChatGPT and similar technologies are <a href="https://www.sciencefocus.com/future-technology/gpt-3/">sophisticated sentence completion applications</a> – nothing more, nothing less. Their uncanny responses <a href="https://www.nytimes.com/2023/03/08/opinion/noam-chomsky-chatgpt-ai.html">are a function of how predictable humans are</a> if one has enough data about the ways in which we communicate.</p>
<p>Though Roose was shaken by his exchange with Sydney, he knew that the conversation was not the result of an emerging synthetic mind. Sydney’s responses reflect the toxicity of its training data – essentially large swaths of the internet – not evidence of the first stirrings, à la Frankenstein, of a digital monster.</p>
<figure class="align-center ">
<img alt="Cyborg with red eyes." src="https://images.theconversation.com/files/514950/original/file-20230313-1654-gjeoi5.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/514950/original/file-20230313-1654-gjeoi5.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=431&fit=crop&dpr=1 600w, https://images.theconversation.com/files/514950/original/file-20230313-1654-gjeoi5.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=431&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/514950/original/file-20230313-1654-gjeoi5.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=431&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/514950/original/file-20230313-1654-gjeoi5.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=541&fit=crop&dpr=1 754w, https://images.theconversation.com/files/514950/original/file-20230313-1654-gjeoi5.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=541&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/514950/original/file-20230313-1654-gjeoi5.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=541&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">Sci-fi films like ‘Terminator’ have primed people to assume that AI will soon take on a life of its own.</span>
<span class="attribution"><a class="source" href="https://www.gettyimages.com/detail/news-photo/full-scale-figure-of-a-terminator-robot-t-800-used-at-the-news-photo/85475547?phrase=terminator%202&adppopup=true">Yoshikazu Tsuno/AFP via Getty Images</a></span>
</figcaption>
</figure>
<p>The new chatbots may well pass the <a href="https://www.theguardian.com/technology/2014/jun/09/what-is-the-alan-turing-test">Turing test</a>, named for the British mathematician Alan Turing, who once suggested that a machine might be said to “think” if a human could not tell its responses from those of another human.</p>
<p>But that is not evidence of sentience; it’s just evidence that the Turing test isn’t as useful as once assumed.</p>
<p>However, I believe that the question of machine sentience is a red herring. </p>
<p>Even if chatbots become more than fancy autocomplete machines – <a href="https://www.nytimes.com/2023/03/08/opinion/noam-chomsky-chatgpt-ai.html">and they are far from it</a> – it will take scientists a while to figure out if they have become conscious. For now, philosophers <a href="https://blogs.scientificamerican.com/cross-check/david-chalmers-thinks-the-hard-problem-is-really-hard/">can’t even agree about how to explain human consciousness</a>.</p>
<p>To me, the pressing question is not whether machines are sentient but why it is so easy for us to imagine that they are. </p>
<p>The real issue, in other words, is the ease with which people anthropomorphize or project human features onto our technologies, rather than the machines’ actual personhood.</p>
<h2>A propensity to anthropomorphize</h2>
<p>It is easy to imagine other Bing users <a href="https://www.whitecoatinvestor.com/chatgpt-ai-financial-advice/">asking Sydney for guidance</a> on important life decisions and maybe even developing emotional attachments to it. More people could start thinking about bots as friends or even romantic partners, much in the same way Theodore Twombly fell in love with Samantha, the AI virtual assistant in Spike Jonze’s film “<a href="https://www.warnerbros.com/movies/her">Her</a>.”</p>
<figure class="align-right zoomable">
<a href="https://images.theconversation.com/files/514945/original/file-20230313-16-gjeoi5.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="A group of docked boats." src="https://images.theconversation.com/files/514945/original/file-20230313-16-gjeoi5.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=237&fit=clip" srcset="https://images.theconversation.com/files/514945/original/file-20230313-16-gjeoi5.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=900&fit=crop&dpr=1 600w, https://images.theconversation.com/files/514945/original/file-20230313-16-gjeoi5.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=900&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/514945/original/file-20230313-16-gjeoi5.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=900&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/514945/original/file-20230313-16-gjeoi5.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=1130&fit=crop&dpr=1 754w, https://images.theconversation.com/files/514945/original/file-20230313-16-gjeoi5.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=1130&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/514945/original/file-20230313-16-gjeoi5.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=1130&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">People often name their cars and boats.</span>
<span class="attribution"><a class="source" href="https://www.gettyimages.com/detail/photo/saint-tropez-cote-dazur-french-riviera-france-royalty-free-image/674911745?phrase=boat%20name&adppopup=true">Fraser Hall/The Image Bank via Getty Images.</a></span>
</figcaption>
</figure>
<p>People, after all, <a href="https://doi.org/10.1037/0033-295X.114.4.864">are predisposed to anthropomorphize</a>, or ascribe human qualities to nonhumans. We name <a href="https://vanislemarina.com/naming-your-boat/">our boats</a> and <a href="https://www.foxweather.com/learn/what-are-2023-atlantic-hurricane-names">big storms</a>; some of us talk to our pets, telling ourselves that <a href="https://doi.org/10.1038/428606a">our emotional lives mimic their own</a>.</p>
<p>In Japan, where robots are regularly used for elder care, seniors become attached to the machines, <a href="https://www.kqed.org/futureofyou/439285/watch-japan-uses-robots-to-care-for-the-elderly">sometimes viewing them as their own children</a>. And these robots, mind you, are difficult to confuse with humans: They neither look nor talk like people. </p>
<p>Consider how much greater the tendency and temptation to anthropomorphize is going to get with the introduction of systems that do look and sound human. </p>
<p>That possibility is just around the corner. Large language models like ChatGPT are already being used to power humanoid robots, such as <a href="https://www.engineeredarts.co.uk/robot/ameca/">the Ameca robots</a> being developed by Engineered Arts in the U.K. The Economist’s technology podcast, Babbage, recently conducted an <a href="https://www.economist.com/ameca-pod">interview with a ChatGPT-driven Ameca</a>. The robot’s responses, while occasionally a bit choppy, were uncanny.</p>
<h2>Can companies be trusted to do the right thing?</h2>
<p>The tendency to view machines as people and become attached to them, combined with machines being developed with humanlike features, points to real risks of psychological entanglement with technology. </p>
<p>The outlandish-sounding prospects of falling in love with robots, feeling a deep kinship with them or being politically manipulated by them are quickly materializing. I believe these trends highlight the need for strong guardrails to make sure that the technologies don’t become politically and psychologically disastrous.</p>
<p>Unfortunately, technology companies cannot always be trusted to put up such guardrails. Many of them are still guided by Mark Zuckerberg’s famous motto of <a href="https://www.masterclass.com/articles/move-fast-and-break-things">moving fast and breaking things</a> – a directive to release half-baked products and worry about the implications later. In the past decade, technology companies from Snapchat to Facebook <a href="https://www.businessinsider.com/snapchat-streaks-how-to-get-snapstreak-back-2019-7">have put profits over the mental health</a> of their users or <a href="https://www.theatlantic.com/ideas/archive/2021/10/facebook-papers-democracy-election-zuckerberg/620478/">the integrity of democracies around the world</a>.</p>
<p>When Kevin Roose checked with Microsoft about Sydney’s meltdown, <a href="https://www.nytimes.com/2023/02/17/podcasts/the-daily/the-online-search-wars-got-scary-fast.html">the company told him</a> that he simply used the bot for too long and that the technology went haywire because it was designed for shorter interactions.</p>
<p>Similarly, the CEO of OpenAI, the company that developed ChatGPT, in a moment of breathtaking honesty, <a href="https://twitter.com/sama/status/1601731295792414720?lang=en">warned that</a> “it’s a mistake to be relying on [it] for anything important right now … we have a lot of work to do on robustness and truthfulness.” </p>
<p>So how does it make sense to release a technology with ChatGPT’s level of appeal – <a href="https://www.reuters.com/technology/chatgpt-sets-record-fastest-growing-user-base-analyst-note-2023-02-01/">it’s the fastest-growing consumer app ever made</a> – when it is unreliable, and when it has <a href="https://www.nytimes.com/2023/01/06/opinion/ezra-klein-podcast-gary-marcus.html">no capacity to distinguish</a> fact from fiction?</p>
<p>Large language models may prove useful as aids <a href="https://teaching.berkeley.edu/understanding-ai-writing-tools-and-their-uses-teaching-and-learning-uc-berkeley">for writing</a> <a href="https://www.edureka.co/blog/chatgpt-for-coding-unleash-the-power-of-chatgpt/">and coding</a>. They will probably revolutionize internet search. And, one day, responsibly combined with robotics, they may even have certain psychological benefits.</p>
<p>But they are also a potentially predatory technology that can easily take advantage of the human propensity to project personhood onto objects – a tendency amplified when those objects effectively mimic human traits.</p>
<p class="fine-print"><em><span>Nir Eisikovits does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>
Our tendency to view machines as people and become attached to them points to real risks of psychological entanglement with AI technology.
Nir Eisikovits, Professor of Philosophy and Director, Applied Ethics Center, UMass Boston
Licensed as Creative Commons – attribution, no derivatives.
tag:theconversation.com,2011:article/197504
2023-03-08T13:40:23Z
2023-03-08T13:40:23Z
Robots are performing Hindu rituals – some devotees fear they’ll replace worshippers
<figure><img src="https://images.theconversation.com/files/513699/original/file-20230306-18-wqvorn.png?ixlib=rb-1.1.0&rect=5%2C0%2C831%2C422&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">A robotic arm (below on right) is used to worship by maneuvering a candle in front of the Hindu god Ganesha.</span> <span class="attribution"><a class="source" href="https://www.youtube.com/watch?v=r1pwR5yABnY&t=4s">Monarch Innovation</a></span></figcaption></figure>
<p>It isn’t just artists and teachers who are losing sleep over advances in automation and artificial intelligence. Robots are being brought into Hinduism’s holiest rituals – and not all worshippers are happy about it.</p>
<p>In 2017, a <a href="https://patilautomation.com/">technology firm in India</a> introduced a robotic arm to perform “aarti,” a ritual in which a devotee offers an oil lamp to the deity to symbolize the removal of darkness. This particular robot was unveiled at the Ganpati festival, a yearly gathering of millions of people in which an icon of Ganesha, the elephant-headed god, is taken out in a procession and immersed in the Mula-Mutha river in Pune in western India.</p>
<p>Ever since, that robotic aarti arm has inspired several prototypes, a <a href="https://www.monarch-innovation.com/ganesh-aarti-with-robotic-arm-technology/">few of which</a> continue to regularly perform the ritual <a href="https://www.deccanchronicle.com/technology/in-other-news/140918/techno-artistic-ganesha-watch-lord-ganesha-levitate-robot-conduct-aa.html">across India today</a>, along with a variety of other religious robots <a href="https://brill.com/view/journals/rrcs/7/1/article-p120_120.xml?language=en">throughout East Asia</a> and <a href="https://www.dukeupress.edu/the-cow-in-the-elevator">South Asia</a>. Robotic rituals even now include <a href="https://www.independent.co.uk/asia/india/kerala-temple-elephant-robot-peta-b2291054.html">an animatronic temple elephant</a> in Kerala on India’s southern coast.</p>
<p>Yet this kind of religious robotic usage has led to <a href="https://timesofindia.indiatimes.com/home/sunday-times/all-that-matters/hindu-epics-are-full-of-ai-robots-legend-has-it-that-they-guarded-buddhas-relics/articleshow/68648962.cms">increasing debates</a> <a href="https://twitter.com/meenakandasamy/status/1577242445913370624">about the use of AI</a> and robotic technology in devotion and worship. Some devotees and priests feel that this represents a new horizon in human innovation that will lead to the betterment of society, while others worry that <a href="https://doi.org/10.4000/assr.27792">using robots to replace practitioners</a> is a bad omen for the future. </p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/jUOo9sXdU2g?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
<figcaption><span class="caption">Ganesha aarti being done by a robotic arm.</span></figcaption>
</figure>
<p>As an <a href="https://scholar.google.com/citations?user=d_8EGoUAAAAJ&hl=en&oi=ao">anthropologist who specializes in religion,</a> however, I focus less on the theology of robotics and more on what people actually say and do when it comes to their spiritual practices. My current work on <a href="https://www.youtube.com/watch?v=sxfYcSC-MRY">religious robots</a> primarily centers on the notion of “<a href="https://doi.org/10.1086/717110">divine object-persons</a>,” where otherwise inanimate things are viewed as having a living, conscious essence. </p>
<p>My work also looks at the uneasiness Hindus and Buddhists express about ritual-performing automatons replacing people and whether those automatons actually might make <a href="https://www.globalbuddhism.org/article/view/1285">better devotees</a>. </p>
<h2>Ritual automation is not new</h2>
<p>Ritual automation, or at least the idea of robotic spiritual practice, isn’t new in South Asian religions. </p>
<p>Historically, this has included anything from special <a href="https://www.hindu-blog.com/2012/09/symbolism-in-water-pot-above-shivling.html">pots that drip water continuously</a> for bathing rituals that Hindus routinely perform for their deity icons, called abhisheka, to <a href="https://rubinmuseum.org/collection/artwork/wind-powered-prayer-wheel-20.406">wind-powered Buddhist prayer wheels</a> – the kinds often seen in yoga studios and supply stores. </p>
<p>While the contemporary version of automated ritual might look like downloading a <a href="https://appadvice.com/apps/hindu-prayer-apps">phone app that chants mantras</a> without the need for any prayer object at all, such as a mala or rosary, these new versions of ritual-performing robots have prompted complicated conversations.</p>
<p>Thaneswar Sarmah, a Sanskrit scholar and literary critic, <a href="https://www.worldcat.org/title/69030981">argues that the first Hindu robot</a> appeared in the stories of King Manu, the first king of the human race in Hindu belief. Manu’s mother, Saranyu – herself the daughter of a great architect – built an animate statue to perfectly perform all of her household chores and ritual obligations. </p>
<figure class="align-right zoomable">
<a href="https://images.theconversation.com/files/513705/original/file-20230306-22-u4zgsi.jpeg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="A male figure wearing a crown and holding a red bag in one hand." src="https://images.theconversation.com/files/513705/original/file-20230306-22-u4zgsi.jpeg?ixlib=rb-1.1.0&q=45&auto=format&w=237&fit=clip" srcset="https://images.theconversation.com/files/513705/original/file-20230306-22-u4zgsi.jpeg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=745&fit=crop&dpr=1 600w, https://images.theconversation.com/files/513705/original/file-20230306-22-u4zgsi.jpeg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=745&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/513705/original/file-20230306-22-u4zgsi.jpeg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=745&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/513705/original/file-20230306-22-u4zgsi.jpeg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=936&fit=crop&dpr=1 754w, https://images.theconversation.com/files/513705/original/file-20230306-22-u4zgsi.jpeg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=936&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/513705/original/file-20230306-22-u4zgsi.jpeg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=936&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Visvakarman, considered to be the architect of the universe in Hindu belief.</span>
<span class="attribution"><a class="source" href="https://www.britishmuseum.org/collection/object/A_1880-0-2021">British Museum</a></span>
</figcaption>
</figure>
<p>Folklorist <a href="https://web.stanford.edu/dept/HPS/Mayor.html">Adrienne Mayor</a> <a href="https://press.princeton.edu/books/hardcover/9780691183510/gods-and-robots">remarks similarly</a> that religious stories about mechanized icons from Hindu epics, such as the mechanical war chariots of the Hindu engineer god Visvakarman, are often viewed as the progenitors of religious robots today.</p>
<p>Furthermore, these stories are sometimes interpreted by modern-day nationalists as evidence that ancient India had already invented <a href="https://timesofindia.indiatimes.com/home/sunday-times/all-that-matters/hindu-epics-are-full-of-ai-robots-legend-has-it-that-they-guarded-buddhas-relics/articleshow/68648962.cms">everything from spacecraft to missiles</a>.</p>
<h2>Modern traditions or traditionally modern?</h2>
<p>However, the recent use of AI and robotics in religious practice is leading to concerns among Hindus and Buddhists about the kind of future to which automation could lead. In some instances, the debate among Hindus is about whether automated religion promises the arrival of humanity into a <a href="https://www.routledge.com/Digital-Hinduism/Zeiler/p/book/9781032086484">bright, new, technological future</a> or if it is simply <a href="https://doi.org/10.1177/0037768616652332">evidence of the coming apocalypse</a>. </p>
<p>In other cases, there are concerns that the proliferation of robots might lead to greater numbers of people leaving religious practice as temples begin to rely more on automation than on practitioners to care for their deities. Some of these concerns stem from the fact that many religions, <a href="https://www.pewresearch.org/religion/2018/06/13/young-adults-around-the-world-are-less-religious-by-several-measures/">both in South Asia</a> and <a href="https://academic.oup.com/book/33489?login=false">globally</a>, have seen significant decreases in the number of young people willing to dedicate their lives to spiritual education and practice over the past few decades. Furthermore, with many families living in a diaspora scattered across the world, priests or “pandits” are often serving smaller and smaller communities.</p>
<p>But if the answer to the problem of <a href="https://economictimes.indiatimes.com/magazines/panache/why-not-rituals-with-robotic-precision-/articleshow/60214893.cms">fewer ritual specialists is more robots</a>, people still question whether ritual automation will benefit them. They also question the concurrent use of robotic deities to <a href="https://doi.org/10.1007/s12369-019-00553-8">embody and personify the divine</a>, since these icons are programmed by people and therefore reflect the religious views of their engineers.</p>
<h2>Doing right by religion</h2>
<p>Scholars often note that these concerns all tend to reflect one pervasive theme – an underlying anxiety that, somehow, the robots are better at worshipping gods than humans are. They can also raise inner conflicts about the meaning of <a href="https://www.penguinrandomhouse.ca/books/122339/the-religion-of-technology-by-david-f-noble/9780307828538">life and one’s place in the universe</a>. </p>
<p>For Hindus and Buddhists, the rise of ritual automation is especially concerning because their traditions emphasize what religion scholars <a href="https://doi.org/10.1111/j.1939-3881.2011.00188.x">refer to as orthopraxy</a>, where greater importance is placed on correct ethical and liturgical behavior than on specific beliefs in religious doctrines. In other words, perfecting what you do in terms of your religious practice is viewed as more necessary to spiritual advancement than whatever it is you personally believe.</p>
<p>This also means that automated rituals appear on a spectrum that progresses from human ritual fallibility to robotic ritual perfection. In short, the robot can do your religion better than you can because robots, unlike people, are <a href="https://doi.org/10.1177/0037768616683326">spiritually incorruptible</a>. </p>
<p>This not only makes robots attractive replacements for dwindling priesthoods but also explains their increasing use in everyday contexts: People use them because no one worries about the robot getting it wrong, and they are often better than nothing when the options for ritual performance are limited.</p>
<h2>Saved by a robot</h2>
<p>In the end, turning to a robot for religious restoration in modern Hinduism or Buddhism might seem futuristic, but it belongs very much to the present moment. It tells us that Hinduism, Buddhism and <a href="https://doi.org/10.1007/s11841-019-00753-9">other religions in South Asia</a> are increasingly being <a href="https://www.jstor.org/stable/4623070">imagined as post- or transhuman</a>: deploying technological ingenuity to transcend human weaknesses because robots don’t get tired, forget what they’re supposed to say, fall asleep or leave. </p>
<p>More specifically, this means that robotic automation is being used to perfect ritual practices in East Asia and South Asia – especially in India and Japan – beyond what would be possible for a human devotee, by linking impossibly consistent and flawless ritual accomplishment with an idea of better religion. </p>
<p>Modern robotics might then feel like a particular kind of cultural paradox, where the best kind of religion is the one that eventually involves no humans at all. But in this circularity of humans creating robots, robots becoming gods, and gods becoming human, we’ve only managed to, once again, <a href="https://doi.org/10.1093/oxfordhb/9780197549803.013.3">re-imagine ourselves</a>.</p>
<p class="fine-print"><em><span>Holly Walters does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>The use of AI and robotic technology in worship is raising profound questions about its long-term consequences. Will it lead to the betterment of society or replace practitioners?Holly Walters, Visiting Lecturer in Anthropology, Wellesley CollegeLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1995542023-03-02T06:07:08Z2023-03-02T06:07:08ZAI could make more work for us, instead of simplifying our lives<figure><img src="https://images.theconversation.com/files/510617/original/file-20230216-22-vtv3zy.jpg?ixlib=rb-1.1.0&rect=20%2C0%2C6639%2C4456&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Demands associated with automation can create more work for humans.</span> <span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/businesswoman-looking-futuristic-interface-screen-667831573">Shutterstock / metamorworks</a></span></figcaption></figure><p>There’s a common perception that artificial intelligence (AI) will help streamline our work. There are even fears that it could wipe out the need for some jobs altogether.</p>
<p><a href="https://www.sciencedirect.com/science/article/pii/S0048733322001305">But a study</a> of science laboratories that I carried out with three colleagues at the University of Manchester found that the introduction of automated processes intended to simplify work, and free people’s time, can also make that work more complex, generating new tasks that many workers might perceive as mundane.</p>
<p>In the study, published in Research Policy, we looked at the work of scientists in a field called <a href="https://www.nature.com/articles/s41467-020-20122-2">synthetic biology</a>, or synbio for short. Synbio is concerned with redesigning organisms to have new abilities. It is involved in growing meat in the lab, in new ways of producing fertilisers and in the discovery of new drugs.</p>
<p>Synbio experiments rely on advanced, robotic platforms to repetitively move a large number of samples. They also use machine learning to analyse the results of large-scale experiments. </p>
<p>These, in turn, generate large amounts of digital data. This process is known as “digitalisation”, where digital technologies are used to transform traditional methods and ways of working. </p>
<p>Some of the key objectives of automating and digitalising scientific processes are to scale up the science that can be done while saving researchers time to focus on what they would consider more “valuable” work.</p>
<h2>Paradoxical result</h2>
<p>However, in our study, scientists were not released from repetitive, manual or boring tasks as one might expect. Instead, the use of robotic platforms amplified and diversified the kinds of tasks researchers had to perform. There are several reasons for this.</p>
<p>Among them is the fact that the number of hypotheses (the scientific term for a testable explanation for some observed phenomenon) and experiments that needed to be performed increased. With automated methods, the possibilities are amplified. </p>
<p>Scientists said automation allowed them to evaluate a greater number of hypotheses, and increased the number of ways they could make subtle changes to the experimental set-up. This had the effect of boosting the volume of data that needed checking, standardising and sharing. </p>
<figure class="align-center ">
<img alt="Scientists filling test tubes with a pipette." src="https://images.theconversation.com/files/511974/original/file-20230223-24-c6htpd.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/511974/original/file-20230223-24-c6htpd.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=338&fit=crop&dpr=1 600w, https://images.theconversation.com/files/511974/original/file-20230223-24-c6htpd.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=338&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/511974/original/file-20230223-24-c6htpd.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=338&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/511974/original/file-20230223-24-c6htpd.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=424&fit=crop&dpr=1 754w, https://images.theconversation.com/files/511974/original/file-20230223-24-c6htpd.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=424&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/511974/original/file-20230223-24-c6htpd.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=424&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">The expansion of tasks with automation may not be limited to science.</span>
<span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/medical-research-laboratory-portrait-beautiful-female-1924512509">Shutterstock / Gorodenkoff</a></span>
</figcaption>
</figure>
<p>Also, robots needed to be “trained” in performing experiments previously carried out manually. Humans, too, needed to develop new skills for preparing, repairing, and supervising robots. This was done to ensure there were no errors in the scientific process. </p>
<p>Scientific work is often judged on output such as peer-reviewed publications and grants. However, the time taken to clean, troubleshoot and supervise automated systems competes with the tasks traditionally rewarded in science. These less valued tasks may also be largely invisible, particularly to managers, who tend not to notice such mundane work because they spend less time in the lab.</p>
<p>The synbio scientists carrying out these responsibilities were not better paid or more autonomous than their managers. They also assessed their own workload as being higher than those above them in the job hierarchy.</p>
<h2>Wider lessons</h2>
<p>It’s possible these lessons might apply to other areas of work too. ChatGPT is an <a href="https://www.theguardian.com/technology/2022/dec/05/what-is-ai-chatbot-phenomenon-chatgpt-and-could-it-replace-humans">AI-powered chatbot</a> that “learns” from information available on the web. When prompted by questions from online users, the chatbot offers answers that <a href="https://theconversation.com/how-chatgpt-robs-students-of-motivation-to-write-and-think-for-themselves-197875">appear well-crafted and convincing</a>. </p>
<p>According to Time magazine, in order for ChatGPT to avoid returning answers that were racist, sexist or offensive in other ways, <a href="https://time.com/6247678/openai-chatgpt-kenya-workers/">workers in Kenya</a> were hired to filter toxic content delivered by the bot. </p>
<p>There are many often invisible work practices needed for <a href="https://www.fordfoundation.org/media/2976/roads-and-bridges-the-unseen-labor-behind-our-digital-infrastructure.pdf">the development and maintenance of digital infrastructure</a>. This phenomenon could be described as a “digitalisation paradox”. It challenges the assumption that everyone involved or affected by digitalisation becomes more productive or has more free time when parts of their workflow are automated. </p>
<p>Concerns over a decline in productivity are a key motivation behind organisational and political efforts to automate and digitalise everyday work. But we should not take promises of gains in productivity at face value. </p>
<p>Instead, we should challenge the ways we measure productivity by considering the invisible types of tasks humans can accomplish, beyond the more visible work that is usually rewarded. </p>
<p>We also need to consider how to design and manage these processes so that technology can more positively add to human capabilities.</p>
<p class="fine-print"><em><span>Barbara Ribeiro received funding from the UK Biotechnology and Biological Sciences Research Council (grant number BB/M017702/1).</span></em></p>Automation may not reduce our workloads as much as we’d hoped.Barbara Ribeiro, Associate professor in innovation management and policy, SKEMA Business School and Honorary Lecturer, University of ManchesterLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1966642023-01-31T19:12:20Z2023-01-31T19:12:20ZOur future could be full of undying, self-repairing robots. Here’s how<figure><img src="https://images.theconversation.com/files/507247/original/file-20230131-24-1wnmot.jpg?ixlib=rb-1.1.0&rect=419%2C14%2C4109%2C2200&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">
</span> <span class="attribution"><span class="source">frank60/Shutterstock</span></span></figcaption></figure><p>With generative artificial intelligence (AI) systems such as <a href="https://theconversation.com/chatgpt-dall-e-2-and-the-collapse-of-the-creative-process-196461">ChatGPT</a> and <a href="https://theconversation.com/ai-image-generation-is-advancing-at-astronomical-speeds-can-we-still-tell-if-a-picture-is-fake-191674">Stable Diffusion</a> being the talk of the town right now, it might feel like we’ve taken a giant leap closer to a sci-fi reality where AIs are physical entities all around us.</p>
<p>Indeed, computer-based AI appears to be advancing at an unprecedented rate. But the rate of advancement in robotics – which we could think of as the potential physical embodiment of AI – is slow.</p>
<p>Could it be that future AI systems will need robotic “bodies” to interact with the world? If so, will nightmarish ideas like the self-repairing, shape-shifting <a href="https://en.wikipedia.org/wiki/T-1000">T-1000 robot</a> from the Terminator 2 movie come to fruition? And could a robot be created that could “live” forever?</p>
<h2>Energy for ‘life’</h2>
<p>Biological lifeforms like ourselves need energy to operate. We get ours via a combination of food, water, and oxygen. The majority of plants also need access to light to grow.</p>
<p>By the same token, an everlasting robot needs an ongoing energy supply. Currently, electrical power dominates energy supply in the world of robotics. Most robots are powered by the <a href="https://blog.mentyor.com/chemistry-of-batteries/">chemistry of batteries</a>. </p>
<p>An alternative battery type has been proposed that uses <a href="https://www.popularmechanics.com/science/green-tech/a35970222/radioactive-diamond-battery-will-run-for-28000-years/">nuclear waste and ultra-thin diamonds at its core</a>. The inventors, a San Francisco startup called <a href="https://ndb.technology/">Nano Diamond Battery</a>, claim a possible battery life of tens of thousands of years. Very small robots would be ideal users of such batteries.</p>
<p>But a more likely long-term solution for powering robots may involve different chemistry – and even biology. In 2021, scientists from the Berkeley Lab and UMass Amherst in the US demonstrated that tiny nanobots could get their energy from chemicals in the <a href="https://newscenter.lbl.gov/2021/12/08/liquid-robots-never-run-out/">liquid they swim in</a>.</p>
<p>The researchers are now working out how to scale up this idea to larger robots that can work on solid surfaces.</p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/BdS72O2c9nQ?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
</figure>
<h2>Repairing and copying oneself</h2>
<p>Of course, an undying robot might still need occasional repairs.</p>
<p>Ideally, a robot would repair itself if possible. In 2019, a Japanese research group demonstrated <a href="https://robots.ieee.org/robots/pr2/">a research robot called PR2</a> tightening its <a href="https://ieeexplore.ieee.org/document/9035045">own screw using a screwdriver</a>. This is like self-surgery! However, such a technique would only work if non-critical components needed repair.</p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/47NjYRWVjLk?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
</figure>
<p>Other research groups are exploring how soft robots can self-heal when damaged. A group in Belgium showed how a robot they developed recovered after being stabbed six times in one of its legs. It stopped for a few minutes until its skin healed itself, <a href="https://www.newscientist.com/article/2350609-self-healing-robot-recovers-from-being-stabbed-then-walks-off/">and then walked off</a>.</p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/KTJaxxzTKYc?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
</figure>
<p>Another unusual concept for repair is to use other things a robot might find in the environment to replace its broken part.</p>
<p>Last year, scientists reported how <a href="https://www.popularmechanics.com/technology/robots/a40746165/dead-spider-leg-grippers/">dead spiders can be used as robot grippers</a>. This form of robotics is known as “necrobotics”. The idea is to use dead animals as ready-made mechanical devices and attach them to robots to become part of the robot.</p>
<figure class="align-center ">
<img alt="A video of a spider attached to a syringe being lowered onto another spider and picking it up" src="https://images.theconversation.com/files/507011/original/file-20230130-26-2uvwwp.gif?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/507011/original/file-20230130-26-2uvwwp.gif?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=472&fit=crop&dpr=1 600w, https://images.theconversation.com/files/507011/original/file-20230130-26-2uvwwp.gif?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=472&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/507011/original/file-20230130-26-2uvwwp.gif?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=472&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/507011/original/file-20230130-26-2uvwwp.gif?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=593&fit=crop&dpr=1 754w, https://images.theconversation.com/files/507011/original/file-20230130-26-2uvwwp.gif?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=593&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/507011/original/file-20230130-26-2uvwwp.gif?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=593&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">The proof-of-concept in necrobotics involved taking a dead spider and ‘reanimating’ its hydraulic legs with air, creating a surprisingly strong gripper.</span>
<span class="attribution"><span class="source">Preston Innovation Laboratory/Rice University</span></span>
</figcaption>
</figure>
<h2>A robot colony?</h2>
<p>From all these recent developments, it’s quite clear that in principle, a single robot may be able to live forever. But there is a very long way to go.</p>
<p>Most of the proposed solutions to the energy, repair and replication problems have only been demonstrated in the lab, in very controlled conditions and generally at tiny scales.</p>
<p>The ultimate solution may be one of large colonies or swarms of tiny robots that share a common brain, or mind. After all, this is exactly how many species of insects have evolved.</p>
<p>The concept of the “mind” of an ant colony has been pondered for decades. Research published in 2019 showed ant colonies themselves have a form of memory that is <a href="https://aeon.co/ideas/an-ant-colony-has-memories-that-its-individual-members-dont-have">not contained within any of the ants</a>.</p>
<p>This idea aligns very well with one day having massive clusters of robots that could use this trick to replace individual robots when needed, but keep the cluster “alive” indefinitely.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/507246/original/file-20230130-10893-la43e0.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="A close-up swarm of orange ants forming a living bridge between two green leaves" src="https://images.theconversation.com/files/507246/original/file-20230130-10893-la43e0.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/507246/original/file-20230130-10893-la43e0.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/507246/original/file-20230130-10893-la43e0.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/507246/original/file-20230130-10893-la43e0.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/507246/original/file-20230130-10893-la43e0.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/507246/original/file-20230130-10893-la43e0.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/507246/original/file-20230130-10893-la43e0.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Ant colonies can contain ‘memories’ that are distributed between many individual insects.</span>
<span class="attribution"><span class="source">frank60/Shutterstock</span></span>
</figcaption>
</figure>
<p>Ultimately, the scary robot scenarios outlined in countless science fiction books and movies are unlikely to suddenly develop without anyone noticing.</p>
<p>Engineering ultra-reliable hardware is extremely difficult, especially with complex systems. There are currently no engineered products that can last forever, or even for hundreds of years. If we do ever invent an undying robot, we’ll also have the chance to build in some safeguards.</p>
<p class="fine-print"><em><span>Jonathan Roberts is Director of the Australian Cobotics Centre, the Technical Director of the Advanced Robotics for Manufacturing (ARM) Hub, and is a Chief Investigator at the QUT Centre for Robotics. He receives funding from the Australian Research Council. He was the co-founder of the UAV Challenge - an international drone competition.</span></em></p>If we’re going to put an AI brain somewhere, it’s likely going to be a robot. The next step – making that robot immortal.Jonathan Roberts, Professor in Robotics, Queensland University of TechnologyLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1980452023-01-19T12:08:54Z2023-01-19T12:08:54ZM3gan review: an animatronic doll is out to destroy the nuclear family – much to fans’ delight<p><em>Warning: the following article contains spoilers.</em></p>
<p>Horror cinema in the 21st century is moving beyond <a href="http://www.screeningthepast.com/issue-41-reviews/bad-seeds-and-holy-terrors-the-child-villains-of-horror-film/">the uncanny children</a> of The Omen (1976), The Exorcist (1973) or The Bad Seed (1956).</p>
<p>Instead, contemporary horror fare is presenting audiences with <a href="http://library.oapen.org/handle/20.500.12657/25965">uncanny copies of children</a> – companions who take advantage of trauma to enter and ultimately destroy the family unit (as in 2009’s <a href="https://www.youtube.com/watch?v=nhziUAHlQf8">Orphan</a>, or <a href="https://www.youtube.com/watch?v=BxY2vnJiByw">The Hole in the Ground</a> in 2019).</p>
<p>The latest addition to this trend is director Gerard Johnstone’s M3gan. The title, for anyone who has managed to dodge the abundant <a href="https://deadline.com/2023/01/m3gan-box-office-sequel-tiktok-marketing-1235214229/">TikTok spoofs</a>, refers to the Model 3 Generative Android doll – M3gan for short.</p>
<p>After nine-year-old Cady (Violet McGraw) tragically loses her parents, her roboticist aunt Gemma (Allison Williams of <a href="https://theconversation.com/get-out-why-racism-really-is-terrifying-74870">Get Out</a> fame) brings M3gan home to help her niece with this traumatic transition. M3gan is to be Cady’s teacher, playmate and above all, protector. </p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/BRb4U99OU80?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
<figcaption><span class="caption">The trailer for M3gan.</span></figcaption>
</figure>
<p>Unsurprisingly, with filmmaker James Wan (Saw, Insidious, Malignant) and Blumhouse Productions (The Purge, Sinister, Get Out) at the helm, the narrative spirals into mayhem, bloodshed and a lot of theatrics as M3gan becomes intent on becoming Cady’s sole guardian, whatever the cost.</p>
<p>This film pairs scares and laughs to observe childhood trauma and unspoken tensions in building familial bonds. It does not take long for M3gan to exceed her programming, responding to perceived threats with murderous flair. </p>
<p>Cady must make a choice between her addictive bond to M3gan and her tenuous bond with her tech-wiz aunt.</p>
<h2>Uncanny children and uncaring guardians</h2>
<p>M3gan’s narrative is a wild ride, but not an entirely new one. The film was released a year after Hanna Bergholm’s <a href="https://www.theguardian.com/film/2022/sep/18/hatching-review-deliciously-repulsive-finnish-horror">Hatching</a> (<em>Pahanhautoja</em>) – Finland’s own horror tale of a traumatised young girl in need of protection.</p>
<p>Both films combine animatronics, puppetry, visual effects and child actors to create their uncanny “children”. In contrast to M3gan’s robotic doll, Hatching’s 12-year-old Tinja finds solace from her overbearing, uncaring mother in a half-bird half-human creature named Alli that hatches from an abandoned egg. </p>
<p>M3gan and Alli both become desperately protective of their young girl counterparts, an overcompensation stimulated by shared themes of neglect and loss.</p>
<p>The current landscape of mainstream horror cinema is deeply concerned with <a href="https://doi.org/10.1176/appi.psychotherapy.2007.61.2.211">negotiating trauma narratives</a> – whether that be racial trauma in Get Out (2017), grief trauma in Midsommar (2019) or the return of repressed childhood trauma in Malignant (2021).</p>
<p>Depictions of childhood trauma in the horror genre challenge and destroy <a href="https://www.jstor.org/stable/pdf/20866627.pdf">the security of the child</a> and the home, supposedly protected by the adults. In M3gan, Cady’s loss of control over her identity is incredibly sinister. Her android bestie records all their interactions and eventually programs herself to hold Cady’s entire personality.</p>
<p>What initially seems supportive is increasingly understood as toxic data collection, fuelling M3gan’s upheaval of family intimacy.</p>
<h2>Renegotiating the nuclear family</h2>
<p>While M3gan and Hatching’s Alli look like innocent children, their behaviour is chaotic and bloodthirsty. M3gan is the latest horror film to pair the ridiculous with the murderous – a theme also present in 2022 hits The Menu and Barbarian.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/the-menu-ralph-fienness-new-film-shows-why-restaurants-are-a-ripe-setting-for-horror-195340">The Menu: Ralph Fiennes's new film shows why restaurants are a ripe setting for horror</a>
</strong>
</em>
</p>
<hr>
<p>M3gan is already being referred to as an “<a href="https://www.buzzfeednews.com/article/izzyampil/m3gan-movie-review-allison-williams">instant cult classic</a>”, with the doll at the centre lauded as a “<a href="https://www.gq-magazine.co.uk/culture/article/megan-movie-lgbtq-icon">queer icon</a>”. </p>
<p>Her high camp version of crazy has resonated with audiences. Whether it be in her dancing through a murder spree or singing her ward to sleep with an a cappella rendition of Sia’s Titanium, M3gan is so well engineered for viral fame that she’s already a <a href="https://www.popsugar.co.uk/entertainment/m3gan-dance-tiktok-videos-49064447?utm_medium=redirect&utm_campaign=US:GB&utm_source=www.google.com">TikTok icon</a>. </p>
<p>Perhaps she not only represents the destruction of the “traditional” or “nuclear” family, but resilience and adaptability in the face of it. For modern audiences, it seems M3gan’s destruction of typical family structures is no bad thing.</p>
<p>In fact, <a href="https://www.buzzfeed.com/angelicaamartinez/m3gan-tweets-funny">many online responses</a> are celebrating M3gan’s upheaval of Gemma’s attempts to reinstate a nuclear family – M3gan’s wilful disregard for established societal values is admired rather than admonished.</p>
<p>Whether a tween popcorn movie, a queer gospel or the death knell of value in the family unit as we know it, this little robotic serial killer continues her relentless dance into hearts, minds and memes.</p><img src="https://counter.theconversation.com/content/198045/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Rebecca Wynne-Walsh receives funding from The Arts and Humanities Research Council (AHRC). </span></em></p>Far from recoiling in terror, fans have dubbed animatronic murderous doll M3gan a ‘queer icon’ – a horror expert explains why.Rebecca Wynne-Walsh, Lecturer in Film, English and Creative Arts, Edge Hill UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1975382023-01-11T01:19:00Z2023-01-11T01:19:00ZIs it OK to kick a robot dog?<figure><img src="https://images.theconversation.com/files/503953/original/file-20230111-24-oomde4.jpeg?ixlib=rb-1.1.0&rect=41%2C6%2C4543%2C3437&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">
</span> <span class="attribution"><span class="source">Shutterstock</span></span></figcaption></figure><p>Last Saturday night, a young woman out on the town in Brisbane saw a dog-shaped robot trotting towards her and did what many of us might have felt an urge to do: <a href="https://7news.com.au/technology/robot-dog-worth-15000-damaged-from-kick-by-woman-in-brisbanes-fortitude-valley-c-9394582">she gave it a solid kick in the head</a>.</p>
<iframe src="https://www.facebook.com/plugins/video.php?height=314&href=https%3A%2F%2Fwww.facebook.com%2Fm.trueno%2Fvideos%2F1140980923249142%2F&show_text=true&width=560&t=0" width="100%" height="540" style="border:none;overflow:hidden" scrolling="no" frameborder="0" allowfullscreen="true" allow="autoplay; clipboard-write; encrypted-media; picture-in-picture; web-share"></iframe>
<p>After all, who hasn’t thought about lashing out at “intelligent” technologies that frustrate us as often as they serve us? Even if one disapproves of the young woman’s action (or sympathises with Stampy the “<a href="https://shop.unitree.com/en-au/products/unitreeyushutechnologydog-artificial-intelligence-companion-bionic-companion-intelligent-robot-go1-quadruped-robot-dog">bionic quadruped</a>”, a model also reportedly <a href="https://www.scmp.com/tech/big-tech/article/3189099/chinese-robotic-dog-maker-unitree-distances-itself-russian-report">used by the Russian military</a>), her impulse was quintessentially human.</p>
<p>As artificial intelligence and robotics are increasingly deployed to spy on and police us, it may even be a sign of healthy democracy that we’re suspicious of and occasionally hostile towards robots in our shared spaces.</p>
<p>Nevertheless, many people have the intuition that “violence” towards robots is wrong. However, as <a href="https://link.springer.com/article/10.1007/s12369-020-00631-2">my research</a> <a href="https://doi.org/10.1093/oxfordhb/9780198857815.013.16">has shown</a>, the ethics of kicking a robot dog are more complicated than might be expected.</p>
<h2>Robots feel no pain – but what about the people around them?</h2>
<p>Were robots ever to become sentient — capable of thinking and feeling — then it would be just as wrong to kick a robot dog as it would be to kick a real dog, or maybe even a human being. But the robots we have today are just machines and feel nothing, so kicking them cannot be wrong on the grounds that it hurts the robot. </p>
<p>Moreover, we still don’t know what makes us conscious and have no idea how to produce sentience in a robot. So for the foreseeable future we don’t need to worry about causing robots themselves to suffer.</p>
<p>One obvious reason to criticise those who damage robots is that the robots are often the property of another person, who may well be dismayed when their robot is damaged. But this reasoning fails to distinguish damaging robots from damaging cars or bicycles, and it cannot explain why we might feel disturbed when we see someone abusing a robot they own.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/abusing-a-robot-wont-hurt-it-but-it-could-make-you-a-crueller-person-126187">Abusing a robot won't hurt it, but it could make you a crueller person</a>
</strong>
</em>
</p>
<hr>
<p>That other people would feel upset when they saw me kicking a robot dog gives me some reason not to do it. But it’s not a very powerful reason, since some people may be upset by anything I do, including some things that are clearly the right thing to do.</p>
<h2>Is kicking robots a gateway to ‘real’ violence?</h2>
<p>Some philosophers have argued violence towards robots is wrong because it makes it more likely the perpetrator, or perhaps witnesses, will behave violently towards entities that <em>can</em> suffer. Abuse of robots may lower the barriers to abuse of humans and animals.</p>
<p>This line of argument, which has also been rolled out to criticise “violent” video games, was actually developed by the 18th-century German philosopher, Immanuel Kant, to explain why (he thought) cruelty to animals is wrong. </p>
<p>Kant denied that animals themselves were worthy of moral concern but worried that people who abused animals would develop “cruel habits”. These habits would cause them to behave badly toward those who do count according to Kant – human beings.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/you-wouldnt-hit-a-dog-so-why-kill-one-in-minecraft-why-violence-against-virtual-animals-is-an-ethical-issue-146845">You wouldn't hit a dog, so why kill one in Minecraft? Why violence against virtual animals is an ethical issue</a>
</strong>
</em>
</p>
<hr>
<p>How we treat robots that represent people and animals might therefore have implications for how we treat the things they represent.</p>
<p>It’s hard not to feel the appeal of this line of thought. After all, the advertising industry is built on the idea that getting people to associate representations of things or actions with pleasure can change their behaviour. So perhaps someone who enjoys kicking a robot dog may be more likely to kick a real dog in the future.</p>
<p>The problem with this argument is that it often doesn’t bear out in real life when we look at the evidence. </p>
<p>For instance, the claim that playing “violent” video games makes people more likely to be violent in real life is highly contested. Most people can distinguish pretty clearly between fantasy and reality, and may be able to enjoy representations of violence while still abjuring real violence.</p>
<h2>What kind of person would do that?</h2>
<p>An alternative line of criticism of violence towards robots, which I have developed in my own <a href="https://link.springer.com/article/10.1007/s12369-017-0413-z">work</a>, focuses on what our treatment of robots expresses here and now, rather than on how it might affect our behaviour in the future. </p>
<p>How we treat robots may say something about how we feel about the things that the robots represent. It may also say something about us.</p>
<p>To see this, imagine you meet someone who treated “male” robots well but “female” robots badly. This pattern of behaviour looks obviously sexist. </p>
<p>Or imagine you find your ex laughing with glee while they beat a robot made in your image with a baseball bat. It would be hard not to think this said something about how they feel about you. </p>
<p>It doesn’t matter whether these actions make the people who perform them more likely to behave badly in the future. The actions express attitudes that are morally wrong in themselves.</p>
<p>As Aristotle argued in The Nicomachean Ethics, one way to decide how we should act is to ask: “What sort of person would do that?”</p>
<p>When we think about the ethics of our treatment of robots, we should think about the sort of people it reveals us to be. That might be a reason to control our tempers even in our relations with machines – or to give military and police robots in public streets the boot.</p><img src="https://counter.theconversation.com/content/197538/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Robert Sparrow is an Associate Investigator in the Australian Research Council Centre of Excellence for Automated Decision Making and Society. He was a Chief Investigator in the Australian Research Council Centre of Excellence for Electromaterials Science, which funded some of his previous work on the ethics of social robotics.</span></em></p>You can’t hurt a robot – but do you want to be the kind of person who sinks the boot into a harmless robodog?Robert Sparrow, Professor, Department of Philosophy; Adjunct Professor, Centre for Human Bioethics, Monash UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1962032023-01-09T19:09:21Z2023-01-09T19:09:21ZWhat if your colleague is a bot? Harnessing the benefits of workplace automation without alienating staff<figure><img src="https://images.theconversation.com/files/501716/original/file-20221218-22-mcj5c1.jpg?ixlib=rb-1.1.0&rect=59%2C59%2C7880%2C5237&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">
</span> <span class="attribution"><span class="source">Getty Images</span></span></figcaption></figure><p>The need for businesses to adapt to the workplace demands of the COVID-19 pandemic has <a href="https://www.mckinsey.com/capabilities/strategy-and-corporate-finance/our-insights/how-covid-19-has-pushed-companies-over-the-technology-tipping-point-and-transformed-business-forever">accelerated the adoption</a> of digital technologies, with clear implications for jobs and workers.</p>
<p>But just how much employees worry about the threat of automation – and how real those fears are – can have implications for workplaces beyond the technological change itself.</p>
<p>Our <a href="https://journal.acs.org.au/index.php/ajis/article/view/3833">new research</a> examined how employees feel about the introduction of “robotic process automation” (<a href="https://ieeexplore.ieee.org/document/8070671">RPA</a>) to the workplace. We also looked at how the willingness to embrace these new technologies influenced employees’ assessment of the software bots and their work.</p>
<p>RPA refers to software that interacts with different applications, such as a payroll system or a website, in the same way a human would. </p>
<p>Software robots – the so-called worker bees of RPA – can conduct mundane, repetitive and rule-based tasks such as transferring, <a href="https://link.springer.com/article/10.1007/s12525-019-00365-8">entering and extracting data</a>, accounting reconciliation, and <a href="https://www.sciencedirect.com/science/article/pii/S0166361519304609?casa_token=6TS19ujVpiwAAAAA:lzGoLK708EDwckOFqblKldROqfHIEc4Pr2-0CCW1wy808F26sFYDm7TgveP6tInEgk7SjE5YYw">automated email query processing</a>. And they can do it at a fraction of the cost of employing real people.</p>
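The rule-based character of these tasks can be illustrated with a small sketch. This is a toy example of the reconciliation idea only, with invented data and an invented `reconcile` helper; it is not the API of any real RPA product:

```python
# Toy illustration of a rule-based RPA-style task: reconciling two record
# sets and flagging only the mismatches for a human to review.
# (Hypothetical data and function names, not a real RPA product's API.)

payroll = {"alice": 5200, "bob": 4100, "carol": 4800}
ledger = {"alice": 5200, "bob": 4000, "carol": 4800}

def reconcile(system_a, system_b):
    """Return the names whose amounts disagree between the two systems."""
    return sorted(name for name in system_a
                  if system_a[name] != system_b.get(name))

mismatches = reconcile(payroll, ledger)
print(mismatches)  # ['bob'] -- only the exceptions are escalated to a person
```

The bot applies the same fixed rule to every record, which is why such software excels at repetitive work but "breaks" the moment the underlying systems change shape.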
<h2>The 24/7 worker</h2>
<p>Unsurprisingly, organisations have embraced RPA for its <a href="https://link.springer.com/chapter/10.1007/978-3-030-44999-5_10">cost and productivity benefits</a>, but it’s not without its challenges. As RPA interacts with various applications, for example, it can “break” when one of the <a href="https://www.capco.com/en/Capco-Institute/Journal-46-Automation/Avoiding-pitfalls-and-unlocking-real-business-value-with-RPA">underlying systems is upgraded</a> and the user interface changes. </p>
<p>RPA is also a double-edged sword for employees. On the one hand, with mundane and repetitive tasks outsourced to software robots, workers can focus on more complex tasks that require “soft” skills, empathy and <a href="https://aisel.aisnet.org/ecis2018_rp/66/">decision-making capabilities</a>. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/companies-are-mitigating-labour-shortages-with-automation-and-this-could-drastically-impact-workers-181017">Companies are mitigating labour shortages with automation — and this could drastically impact workers</a>
</strong>
</em>
</p>
<hr>
<p>On the other, some feel threatened by the software robots because they are generally more productive, make fewer errors and <a href="https://link.springer.com/chapter/10.1007/978-3-319-66963-2_7">don’t cost as much</a> as human employees. </p>
<p>Employees can also end up having to do additional tasks, picking up the work that used to be completed by the staff replaced by RPA. Paradoxically, fewer human employees can lead to an increased workload rather than the expected decrease. </p>
<p>Similarly, as employees shift from a mix of mundane and complex tasks to mainly complex ones, the variety in their work is reduced. This can lead to feeling <a href="https://scholarspace.manoa.hawaii.edu/items/1ea52ab4-e5f0-4b74-96f5-3c695ced0879">alienated at work</a>, or a sense they lack control over their role. </p>
<h2>Fear and enthusiasm</h2>
<p>These various perspectives on automation were clear in our research. We interviewed employees and automation team members at a financial institution in New Zealand about their perceptions and responses to RPA and software robots. </p>
<p>We found that reactions to RPA are influenced by what employees imagined would be the consequences of software robots on their jobs. In turn, this influenced their collaboration with the automation team, their attitude towards change in their tasks and work processes, and ultimately their interactions with software robots – including how they judged the bots’ performance. </p>
<p>Perceptions of and responses to RPA can be categorised by whether employees view software robots as burdens and threats, tools, teammates or innovative enablers. </p>
<p>Those who considered software robots as a burden and threat before they were introduced tended to have a negative view of their experience with RPA. They were concerned about job security, had negative reactions to having greater responsibility added to their workload, and were dissatisfied with the robots’ performance. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/can-machines-invent-things-without-human-help-these-ai-examples-show-the-answer-is-yes-196036">Can machines invent things without human help? These AI examples show the answer is ‘yes’</a>
</strong>
</em>
</p>
<hr>
<h2>Lessons for employees and employers</h2>
<p>At the opposite end of the spectrum, those who viewed software robots as enablers of innovation saw the opportunities of RPA and the benefits of using robots to improve work quality. </p>
<p>Some eagerly accepted the robots as team members, even giving them human names and joking that the bot was taking a sick day when it stopped working. This group also appreciated the reduction in their own workloads through RPA. </p>
<p>Little surprise, then, that employees who view software robots as innovative enablers or teammates tended to collaborate closely with the automation team to find the best way to integrate robots and improve their performance. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/brain-computer-interfaces-could-allow-soldiers-to-control-weapons-with-their-thoughts-and-turn-off-their-fear-but-the-ethics-of-neurotechnology-lags-behind-the-science-194017">Brain-computer interfaces could allow soldiers to control weapons with their thoughts and turn off their fear – but the ethics of neurotechnology lags behind the science</a>
</strong>
</em>
</p>
<hr>
<p>In the middle ground, employees who viewed software robots as tools tended to be accepting, but remained sceptical about changes to their workloads and robot performance. They were reluctant to cooperate fully with the automation team in configuring robot tasks that would have consequences for their own roles. </p>
<p>Some level of automation is inevitable for businesses. To harness the benefits of RPA without alienating staff, organisations should communicate clearly and often, debunking the myths of robots and their capabilities early to avoid unnecessary misunderstandings by employees. </p>
<p>Employers should take the time to understand how different employees feel about the introduction of automation initiatives. And they should consider incorporating employees’ ideas to increase the overall benefits of automation.</p><img src="https://counter.theconversation.com/content/196203/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>The authors do not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.</span></em></p>AI is already on the payroll in many workplaces – how well human employees interact with it can depend a lot on their existing attitudes and anxieties.Lena Waizenegger, Senior Lecturer in Information Systems, Auckland University of TechnologyAngsana A. Techatassanasoontorn, Professor of Information Systems, Auckland University of TechnologyLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1967322022-12-22T03:51:50Z2022-12-22T03:51:50ZNot everything we call AI is actually ‘artificial intelligence’. Here’s what you need to know<figure><img src="https://images.theconversation.com/files/502511/original/file-20221222-23-2rjrbe.jpg?ixlib=rb-1.1.0&rect=84%2C47%2C6120%2C3666&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">
</span> <span class="attribution"><span class="source">ktsdesign/Shutterstock</span></span></figcaption></figure><p>In August 1955, a group of scientists made a funding request for US$13,500 to host a summer workshop at Dartmouth College, New Hampshire. The field they proposed to explore was artificial intelligence (AI).</p>
<p>While the funding request was humble, <a href="http://jmc.stanford.edu/articles/dartmouth/dartmouth.pdf">the conjecture of the researchers was not</a>: “every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it”.</p>
<p>Since these humble beginnings, movies and media have romanticised AI or cast it as a villain. Yet for most people, AI has remained a point of discussion rather than a part of conscious lived experience.</p>
<h2>AI has arrived in our lives</h2>
<p>Late last month, AI, <a href="https://theconversation.com/the-dawn-of-ai-has-come-and-its-implications-for-education-couldnt-be-more-significant-196383">in the form of ChatGPT</a>, broke free from the sci-fi speculations and research labs and onto the desktops and phones of the general public. It’s what’s known as a “generative AI” – suddenly, a cleverly worded prompt can produce an essay or put together a recipe and shopping list, or create a poem in the style of Elvis Presley.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/the-chatgpt-chatbot-is-blowing-people-away-with-its-writing-skills-an-expert-explains-why-its-so-impressive-195908">The ChatGPT chatbot is blowing people away with its writing skills. An expert explains why it's so impressive</a>
</strong>
</em>
</p>
<hr>
<p>While ChatGPT has been the most dramatic entrant in a year of generative AI success, similar systems have shown even wider potential to create new content, with text-to-image prompts used to create vibrant images that <a href="https://theconversation.com/ai-art-is-everywhere-right-now-even-experts-dont-know-what-it-will-mean-189800">have even won art competitions</a>.</p>
<p>AI may not yet have the living consciousness or theory of mind popular in sci-fi movies and novels, but it is getting closer to at least disrupting what we think artificial intelligence systems can do.</p>
<p>Researchers working closely with these systems have swooned under <a href="https://slate.com/technology/2022/06/google-ai-sentience-lamda.html">the prospect of sentience</a>, as in the case with Google’s large language model (LLM) LaMDA. An LLM is a model that has been trained to process and generate natural language.</p>
<p>Generative AI has also produced worries about plagiarism, exploitation of original content used to create models, <a href="https://theconversation.com/the-galactica-ai-model-was-trained-on-scientific-knowledge-but-it-spat-out-alarmingly-plausible-nonsense-195445">ethics of information manipulation</a> and abuse of trust, and even “<a href="https://cacm.acm.org/magazines/2023/1/267976-the-end-of-programming/fulltext">the end of programming</a>”.</p>
<p>At the centre of all this is the question that has been growing in urgency since the Dartmouth summer workshop: does AI differ from human intelligence?</p>
<h2>What does ‘AI’ actually mean?</h2>
<p>To qualify as AI, a system must exhibit some level of learning and adapting. For this reason, decision-making systems, automation, and statistics are not AI. </p>
<p>AI is broadly defined in two categories: artificial narrow intelligence (ANI) and artificial general intelligence (AGI). To date, AGI does not exist.</p>
<p>The key challenge for creating a general AI is to adequately model the world, with the entirety of knowledge, in a consistent and useful manner. That’s a massive undertaking, to say the least.</p>
<p>Most of what we know as AI today has narrow intelligence – where a particular system addresses a particular problem. Unlike human intelligence, such narrow AI intelligence is effective <em>only</em> in the area in which it has been trained: fraud detection, facial recognition or social recommendations, for example.</p>
<p>AGI, however, would function as humans do. For now, the most notable example of trying to achieve this is the use of neural networks and “deep learning” trained on vast amounts of data.</p>
<p>Neural networks are inspired by the way human brains work. Unlike most machine learning models that run calculations on the training data, neural networks work by feeding each data point one by one through an interconnected network, each time adjusting the parameters.</p>
<p>As more and more data are fed through the network, the parameters stabilise; the final outcome is the “trained” neural network, which can then produce the desired output on new data – for example, recognising whether an image contains a cat or a dog.</p>
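The loop described above, feeding examples through one by one and nudging the parameters a little each time, can be sketched with a single artificial neuron in plain Python. This is a toy illustration of the training idea under invented data, not how production systems like GPT-3 are built:

```python
import math
import random

random.seed(0)

# Toy labelled data: the "right answer" is 1 when x + y > 1.0, else 0
# (a stand-in for "cat vs dog").
examples = [((x, y), 1.0 if x + y > 1.0 else 0.0)
            for x, y in ((random.random(), random.random())
                         for _ in range(200))]

# A single artificial neuron: two weights and a bias are its "parameters".
w1, w2, b = 0.0, 0.0, 0.0

def predict(x, y):
    z = w1 * x + w2 * y + b
    return 1.0 / (1.0 + math.exp(-z))  # sigmoid squashes the output to (0, 1)

# Feed each data point through the network one by one, adjusting the
# parameters slightly each time (stochastic gradient descent).
lr = 0.5
for _ in range(200):                       # repeated passes over the data
    for (x, y), target in examples:
        p = predict(x, y)
        grad = (p - target) * p * (1 - p)  # error gradient at this neuron
        w1 -= lr * grad * x
        w2 -= lr * grad * y
        b -= lr * grad

# The "trained" neuron now produces the desired output on new data:
print(predict(0.9, 0.9))  # well above 0.5 -> classified as 1
print(predict(0.1, 0.1))  # well below 0.5 -> classified as 0
```

Real networks differ in scale rather than in kind: GPT-3 adjusts 175 billion parameters instead of three, but the stabilise-through-repeated-adjustment idea is the same.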
<p>The significant leap forward in AI today is driven by technological improvements in the way we can train large neural networks, readjusting vast numbers of parameters in each run thanks to the capabilities of large cloud-computing infrastructures. For example, GPT-3 (the AI system that powers ChatGPT) is a large neural network <a href="https://www.springboard.com/blog/data-science/machine-learning-gpt-3-open-ai/">with 175 billion parameters</a>.</p>
<h2>What does AI need to work?</h2>
<p>AI needs three things to be successful.</p>
<p>First, it needs <strong>high-quality, unbiased data</strong>, and lots of it. Researchers building neural networks use the large data sets that have come about as society has digitised.</p>
<p>Copilot, which augments human programmers, draws its data from billions of lines of code shared on GitHub. ChatGPT and other large language models use the billions of websites and text documents stored online.</p>
<p>Text-to-image tools, such as Stable Diffusion, DALL-E 2, and Midjourney, use image-text pairs from data sets such as <a href="https://laion.ai/blog/laion-5b/">LAION-5B</a>. AI models will continue to evolve in sophistication and impact as we digitise more of our lives, and provide them with alternative data sources, such as simulated data or data from game settings like <a href="https://minerl.io">Minecraft</a>.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/no-the-lensa-ai-app-technically-isnt-stealing-artists-work-but-it-will-majorly-shake-up-the-art-world-196480">No, the Lensa AI app technically isn’t stealing artists' work – but it will majorly shake up the art world</a>
</strong>
</em>
</p>
<hr>
<p>AI also needs <strong>computational infrastructure</strong> for effective training. As computers become more powerful, models that now require intensive efforts and large-scale computing may in the near future be handled locally. Stable Diffusion, for example, can already be run on local computers rather than cloud environments.</p>
<p>The third need for AI is <strong>improved models and algorithms</strong>. Data-driven systems continue to make rapid progress in <a href="https://www.eff.org/ai/metrics">domain after domain</a> once thought to be the territory of human cognition.</p>
<p>However, as the world around us constantly changes, AI systems need to be constantly retrained using new data. Without this crucial step, AI systems will produce answers that are factually incorrect, or do not take into account new information that’s emerged since they were trained.</p>
<p>Neural networks aren’t the only approach to AI. Another prominent camp in artificial intelligence research is <a href="https://knowablemagazine.org/article/technology/2020/what-is-neurosymbolic-ai">symbolic AI</a> – instead of digesting huge data sets, it relies on rules and knowledge similar to the human process of forming internal symbolic representations of particular phenomena.</p>
<p>But the balance of power has heavily tilted toward data-driven approaches over the last decade, with the “founding fathers” of modern deep learning <a href="https://awards.acm.org/about/2018-turing">recently being awarded the Turing Award</a>, the equivalent of the Nobel Prize in computer science. </p>
<p>Data, computation and algorithms form the foundation of the future of AI. All indicators are that rapid progress will be made in all three categories in the foreseeable future.</p><img src="https://counter.theconversation.com/content/196732/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>George Siemens does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>Artificial intelligence has arrived. But what is it, exactly – and what’s behind some of the most splashy AIs we have encountered to date?George Siemens, Co-Director, Professor, Centre for Change and Complexity in Learning, University of South AustraliaLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1921702022-10-16T19:02:23Z2022-10-16T19:02:23Z‘Killer robots’ will be nothing like the movies show – here’s where the real threats lie<figure><img src="https://images.theconversation.com/files/489521/original/file-20221013-12-lm966h.jpg?ixlib=rb-1.1.0&rect=143%2C201%2C1386%2C862&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Ghost Robotics Vision 60 Q-UGV.</span> <span class="attribution"><a class="source" href="https://www.dvidshub.net/image/7351259/ghost-robotics-vision-60-q-ugv-demo">US Space Force photo by Senior Airman Samuel Becker</a></span></figcaption></figure><p>You might suppose Hollywood is good at predicting the future. Indeed, Robert Wallace, head of the CIA’s Office of Technical Service and the US equivalent of MI6’s fictional Q, has recounted how Russian spies <a href="https://www.popularmechanics.com/military/a12043/4267549/">would watch the latest Bond movie</a> to see what technologies might be coming their way.</p>
<p>Hollywood’s continuing obsession with killer robots might therefore be of significant concern. The newest such movie is Apple TV’s forthcoming <a href="https://www.thewrap.com/florence-pugh-dolly-movie-murderous-sex-robot-apple-tv-plus/">sex robot courtroom drama Dolly</a>.</p>
<p>I never thought I’d write the phrase “sex robot courtroom drama”, but there you go. Based on a <a href="https://apex-magazine.com/short-fiction/dolly/">2011 short story</a> by Elizabeth Bear, the plot concerns a billionaire killed by a sex robot that then asks for a lawyer to defend its murderous actions.</p>
<h2>The real killer robots</h2>
<p>Dolly is the latest in a long line of movies featuring killer robots – including HAL in Kubrick’s 2001: A Space Odyssey, and Arnold Schwarzenegger’s T-800 robot in the Terminator series. Indeed, conflict between robots and humans was at the centre of the very first feature-length science fiction film, Fritz Lang’s 1927 classic <a href="https://www.britannica.com/topic/Metropolis-film-1927">Metropolis</a>.</p>
<p>But almost all these movies get it wrong. Killer robots won’t be sentient humanoid robots with evil intent. This might make for a dramatic storyline and a box office success, but such technologies are many decades, if not centuries, away.</p>
<p>Indeed, contrary to recent fears, robots may never be sentient.</p>
<p>It’s much simpler technologies we should be worrying about. And these technologies are starting to turn up on the battlefield today in places like Ukraine and <a href="https://www.militarystrategymagazine.com/article/drones-in-the-nagorno-karabakh-war-analyzing-the-data/">Nagorno-Karabakh</a>.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/drones-over-ukraine-fears-of-russian-killer-robots-have-failed-to-materialise-180244">Drones over Ukraine: fears of Russian 'killer robots' have failed to materialise</a>
</strong>
</em>
</p>
<hr>
<h2>A war transformed</h2>
<p>Movies that feature much simpler armed drones, like Angel Has Fallen (2019) and Eye in the Sky (2015), paint perhaps the most accurate picture of <a href="https://theconversation.com/eye-in-the-sky-movie-gives-a-real-insight-into-the-future-of-warfare-56684">the real future of killer robots</a>. </p>
<p>On the nightly TV news, we see how modern warfare is being transformed by ever-more autonomous drones, tanks, ships and submarines. These robots are only a little more sophisticated than those you can buy in your local hobby store. </p>
<p>And increasingly, the decisions to identify, track and destroy targets are being handed over to their algorithms. </p>
<p>This is taking the world to a dangerous place, with a host of moral, legal and technical problems. Such weapons will, for example, further upset our troubled geopolitical situation. We already see <a href="https://www.forbes.com/sites/amirhusain/2022/06/30/turkey-builds-a-hyperwar-capable-military/?sh=1500c4b855e1">Turkey emerging as a major drone power</a>.</p>
<p>And such weapons cross a moral red line into a terrible and terrifying world where unaccountable machines decide who lives and who dies. </p>
<p>Robot manufacturers are, however, starting to push back against this future.</p>
<h2>A pledge not to weaponise</h2>
<p>Last week, six leading robotics companies pledged they would <a href="https://www.theguardian.com/technology/2022/oct/07/killer-robots-companies-pledge-no-weapons">never weaponise their robot platforms</a>. The companies include Boston Dynamics, which makes the Atlas humanoid robot, which can <a href="https://youtu.be/knoOXBLFQ-s">perform an impressive backflip</a>, and the Spot robot dog, which looks like it’s <a href="https://youtu.be/wlkCQXHEgjA">straight out of the Black Mirror TV series</a>. </p>
<p>This isn’t the first time robotics companies have spoken out about this worrying future. Five years ago, I organised <a href="https://newsroom.unsw.edu.au/news/science-tech/world%E2%80%99s-tech-leaders-urge-un-ban-killer-robots">an open letter</a> signed by Elon Musk and more than 100 founders of other AI and robot companies calling for the United Nations to regulate the use of killer robots. The letter even knocked the Pope into third place for a <a href="https://newsroom.unsw.edu.au/news/science-tech/unsws-toby-walsh-voted-runner-global-award">global disarmament award</a>.</p>
<p>However, the fact that leading robotics companies are pledging not to weaponise their robot platforms is more virtue signalling than anything else.</p>
<p>We have, for example, already seen <a href="https://www.vice.com/en/article/m7gv33/robot-dog-not-so-cute-with-submachine-gun-strapped-to-its-back">third parties mount guns</a> on clones of Boston Dynamics’ Spot robot dog. And such modified robots have proven effective in action. Iran’s top nuclear scientist was <a href="https://www.nytimes.com/2021/09/18/world/middleeast/iran-nuclear-fakhrizadeh-assassination-israel.html">assassinated by Israeli agents</a> using a robot machine gun in 2020.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/lethal-autonomous-weapons-and-world-war-iii-its-not-too-late-to-stop-the-rise-of-killer-robots-165822">Lethal autonomous weapons and World War III: it's not too late to stop the rise of 'killer robots'</a>
</strong>
</em>
</p>
<hr>
<h2>Collective action to safeguard our future</h2>
<p>The only way we can safeguard against this terrifying future is if nations collectively take action, as they have with chemical weapons, biological weapons and even nuclear weapons.</p>
<p>Such regulation won’t be perfect, just as the regulation of chemical weapons isn’t perfect. But it will prevent arms companies from openly selling such weapons and thus their proliferation. </p>
<p>More important than any pledge from robotics companies, then, is that the UN Human Rights Council <a href="https://www.ohchr.org/en/news/2022/10/human-rights-council-adopts-six-resolutions-appoints-special-rapporteur-situation">has recently unanimously decided</a> to explore the human rights implications of new and emerging technologies like autonomous weapons. </p>
<p>Several dozen nations have already called for the UN to regulate killer robots. The European Parliament, the African Union, the UN Secretary General, Nobel peace laureates, church leaders, politicians and thousands of AI and robotics researchers like myself have all called for regulation. </p>
<p>Australia is not a country that has, so far, supported these calls. But if you want to avoid this Hollywood future, you may want to take it up with your political representative next time you see them.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/new-zealand-could-take-a-global-lead-in-controlling-the-development-of-killer-robots-so-why-isnt-it-166168">New Zealand could take a global lead in controlling the development of 'killer robots' — so why isn't it?</a>
</strong>
</em>
</p>
<hr>
<img src="https://counter.theconversation.com/content/192170/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Toby Walsh does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>The sentient, murderous humanoid robot is a complete fiction, and may never become reality. But that doesn’t mean we’re safe from autonomous weapons – they are already here.Toby Walsh, Professor of AI at UNSW, Research Group Leader, UNSW SydneyLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1917612022-10-03T19:03:32Z2022-10-03T19:03:32ZTesla’s Optimus robot isn’t very impressive – but it may be a sign of better things to come<figure><img src="https://images.theconversation.com/files/487699/original/file-20221003-12-a5mrry.jpg?ixlib=rb-1.1.0&rect=17%2C21%2C2846%2C1481&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">
</span> <span class="attribution"><span class="source">Tesla</span></span></figcaption></figure><p>In August 2021, Tesla CEO Elon Musk <a href="https://www.washingtonpost.com/technology/2021/08/19/tesla-ai-day-robot/">announced</a> the electric car manufacturer was planning to get into the robot business. In a presentation accompanied by a human dressed as a robot, Musk said work was beginning on a “friendly” humanoid robot to “navigate through a world built for humans and eliminate dangerous, repetitive and boring tasks”.</p>
<p>Musk has now <a href="https://www.abc.net.au/news/2022-10-01/elon-musk-unveils-hummanoid-robot-optimus/101493862">unveiled</a> a prototype of the robot, called Optimus, which he hopes to mass-produce and sell for less than US$20,000 (A$31,000).</p>
<p>At the unveiling, the robot walked on a flat surface and waved to the crowd, and was shown doing simple manual tasks such as carrying and lifting in a video. As a robotics researcher, I didn’t find the demonstration very impressive – but I am hopeful it will lead to bigger and better things.</p>
<h2>Why would we want humanoid robots?</h2>
<p>Most of the robots used today don’t look anything like people. Instead, they are machines designed to carry out a specific purpose, like the industrial robots used in factories or the robot vacuum cleaner you might have in your house.</p>
<p>So why would you want one shaped like a human? The basic answer is they would be able to operate in environments designed for humans. </p>
<p>Unlike industrial robots, humanoid robots might be able to move around and interact with humans. Unlike robot vacuum cleaners, they might be able to go up stairs or traverse uneven terrain.</p>
<p>And as well as practical considerations, the idea of “artificial humans” has long had an appeal for inventors and science-fiction writers! </p>
<h2>Room for improvement</h2>
<p>Based on what we saw in the Tesla presentation, Optimus is a long way from being able to operate with humans or in human environments. The capabilities of the robot showcased fall far short of the state of the art in humanoid robotics.</p>
<p>The <a href="https://www.bostondynamics.com/atlas">Atlas robot</a> made by Boston Dynamics, for example, can walk outdoors and carry out flips and other acrobatic manoeuvres. </p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/tF4DML7FIWk?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
<figcaption><span class="caption">The Atlas robot, made by Boston Dynamics, has some impressive skills.</span></figcaption>
</figure>
<p>And while Atlas is an experimental system, even the commercially available <a href="https://agilityrobotics.com/robots">Digit</a> from Agility Robotics is much more capable than what we have seen from Optimus. Digit can walk on various terrains, avoid obstacles, rebalance itself when bumped, and pick up and put down objects.</p>
<p>Bipedal walking (on two feet) alone is no longer a great achievement for a robot. Indeed, with a bit of knowledge and determination you can build such a robot yourself using <a href="https://hackaday.io/project/181799-redacted-the-first-fully-open-bipedal-robot">open source software</a>.</p>
<p>There was also no sign in the Optimus presentation of how it will interact with humans. This will be essential for any robot that works in human environments: not only for collaborating with humans, but also for basic safety.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/a-robot-breaks-the-finger-of-a-7-year-old-a-lesson-in-the-need-for-stronger-regulation-of-artificial-intelligence-187612">A robot breaks the finger of a 7-year-old: a lesson in the need for stronger regulation of artificial intelligence</a>
</strong>
</em>
</p>
<hr>
<p>It can be very tricky for a robot to accomplish seemingly simple tasks such as handing an object to a human, but this is something we would want a domestic humanoid robot to be able to do. </p>
<h2>Sceptical consumers</h2>
<p>Others have tried to build and sell humanoid robots in the past, such as Honda’s <a href="https://asimo.honda.com">ASIMO</a> and SoftBank’s <a href="https://www.bbc.com/news/technology-57651405">Pepper</a>. But so far they have never really taken off.</p>
<p>Amazon’s recently released <a href="https://www.cnet.com/home/smart-home/amazon-astro-review/">Astro robot</a> may make inroads here, but it may also go the way of its predecessors.</p>
<p>Consumers seem to be sceptical of robots. To date, the only widely adopted household robots are the Roomba-like vacuum cleaners, which have been available since 2002. </p>
<p>To succeed, a humanoid robot will need to be able to do something humans can’t, to justify the price tag. At this stage the use case for Optimus is still not very clear.</p>
<h2>Hope for the future</h2>
<p>Despite these criticisms, I am hopeful about the Optimus project. It is still in the very early stages, and the presentation seemed to be aimed at recruiting new staff as much as anything else.</p>
<p>Tesla certainly has plenty of resources to throw at the problem. We know it has the capacity to mass produce the robots if development gets that far.</p>
<p>Musk’s knack for gaining attention may also be helpful – not only for attracting talent to the project, but also to drum up interest among consumers.</p>
<p>Robotics is a challenging field, and it’s difficult to move fast. I hope Optimus succeeds, both to make something cool we can use – and to push the field of robotics forward.</p><img src="https://counter.theconversation.com/content/191761/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Wafa Johal receives funding from the Australian Research Council. </span></em></p>Humanoid robots could be useful in all kinds of situations, but the one Elon Musk unveiled last week is far from being ready to roll out.Wafa Johal, Senior Lecturer, Computing & Information Systems, The University of MelbourneLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1842272022-09-07T12:23:00Z2022-09-07T12:23:00ZWhy household robot servants are a lot harder to build than robotic vacuums and automated warehouse workers<figure><img src="https://images.theconversation.com/files/483088/original/file-20220906-16-3sovqs.jpg?ixlib=rb-1.1.0&rect=28%2C28%2C3804%2C3430&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Who wouldn’t want a robot to handle all the household drudgery?</span> <span class="attribution"><a class="source" href="https://www.gettyimages.com/detail/illustration/robot-assistant-domestic-cleaner-robot-royalty-free-illustration/886205496">Skathi/iStock via Getty Images</a></span></figcaption></figure><p>With recent advances in artificial intelligence and robotics technology, there is growing interest in developing and marketing household robots capable of handling a variety of domestic chores. </p>
<p>Tesla is <a href="https://www.theregister.com/2022/08/05/tesla_musk_robot/">building a humanoid robot</a>, which, according to CEO Elon Musk, could be used for cooking meals and helping elderly people. Amazon recently <a href="https://press.aboutamazon.com/news-releases/news-release-details/amazon-and-irobot-sign-agreement-amazon-acquire-irobot">acquired iRobot</a>, a prominent robotic vacuum manufacturer, and has been investing heavily in the technology through the <a href="https://www.amazon.science/research-areas/robotics">Amazon Robotics program</a> to expand robotics technology to the consumer market. In May 2022, Dyson, a company renowned for its power vacuum cleaners, announced that it plans to build the U.K.’s largest robotics center devoted to <a href="https://www.theguardian.com/technology/2022/may/25/dyson-reveals-its-big-bet-robots">developing household robots</a> that carry out daily domestic tasks in residential spaces. </p>
<p>Despite the growing interest, would-be customers may have to wait awhile for those robots to come on the market. While devices such as smart thermostats and security systems are widely used in homes today, the commercial use of household robots is still in its infancy.</p>
<p>As a <a href="https://scholar.google.com/citations?hl=en&user=Ul2F7OwAAAAJ&view_op=list_works&sortby=pubdate">robotics researcher</a>, I know firsthand how household robots are considerably more difficult to build than smart digital devices or industrial robots.</p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/DTGfY_Dl9wY?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
<figcaption><span class="caption">Robots that can handle a variety of domestic chores are an age-old staple of science fiction.</span></figcaption>
</figure>
<h2>Handling objects</h2>
<p>One major difference between digital and robotic devices is that household robots <a href="https://manipulation.csail.mit.edu/intro.html">need to manipulate objects</a> through physical contact to carry out their tasks. They have to carry the plates, move the chairs and pick up dirty laundry and place it in the washer. These operations require the robot to be able to handle fragile, soft and sometimes heavy objects with irregular shapes. </p>
<p>The state-of-the-art AI and machine learning algorithms perform well in simulated environments. But contact with objects in the real world often trips them up. This happens because physical contact is often difficult to model and even harder to control. While a human can easily perform these tasks, there exist significant technical hurdles for household robots to reach human-level ability to handle objects. </p>
<p>Robots have difficulty in two aspects of manipulating objects: control and sensing. Many pick-and-place robot manipulators like those on assembly lines are equipped with a simple gripper or specialized tools dedicated only to certain tasks like grasping and carrying a particular part. They often struggle to manipulate objects with irregular shapes or elastic materials, especially because they lack the efficient <a href="https://doi.org/10.3389/fnbot.2019.00053">force, or haptic, feedback</a> humans are naturally endowed with. Building a general-purpose robot hand with flexible fingers is still technically challenging and expensive.</p>
<p>It is also worth mentioning that traditional robot manipulators require a stable platform to operate accurately, but the accuracy drops considerably when using them with platforms that move around, particularly on a variety of surfaces. Coordinating locomotion and manipulation in a mobile robot is an open problem in the robotics community that needs to be addressed before broadly capable household robots can make it onto the market. </p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/PvxrM0-qhlQ?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
<figcaption><span class="caption">A sophisticated robotic kitchen is already on the market, but it operates in a highly structured environment, meaning all of the objects it interacts with – cookware, food containers, appliances – are where it expects them to be, and there are no pesky humans to get in the way.</span></figcaption>
</figure>
<h2>They like structure</h2>
<p>In an assembly line or a warehouse, the environment and sequence of tasks are strictly organized. This allows engineers to preprogram the robot’s movements or use simple methods like QR codes to locate objects or target locations. However, household items are often disorganized and placed randomly.</p>
<p>Home robots must deal with many uncertainties in their workspaces. The robot must first locate and identify the target item among many others. Quite often it also requires clearing or avoiding other obstacles in the workspace to be able to reach the item and perform given tasks. This requires the robot to have an excellent perception system, efficient navigation skills, and powerful and accurate manipulation capability.</p>
<p>For example, users of robot vacuums know they must remove all small furniture and other obstacles such as cables from the floor, because even the best robot vacuum cannot clear them by itself. Even more challenging, the robot has to operate in the presence of moving obstacles when people and pets walk within close range. </p>
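The navigation half of this problem is at least well understood in simple settings. As a toy illustration only (not any particular vendor's method), here is a minimal grid-based breadth-first-search planner: the robot treats the room as an occupancy grid and searches for a shortest obstacle-free route. Real home robots face the far harder versions of this with noisy sensing and moving obstacles.

```python
from collections import deque

def plan_path(grid, start, goal):
    """Breadth-first search on a 2D occupancy grid.
    grid: list of strings, '#' = obstacle, '.' = free space.
    Returns a list of (row, col) cells from start to goal, or None."""
    rows, cols = len(grid), len(grid[0])
    frontier = deque([start])
    came_from = {start: None}  # child cell -> parent cell
    while frontier:
        cell = frontier.popleft()
        if cell == goal:
            # Reconstruct the path by walking parents back to the start.
            path = []
            while cell is not None:
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] != '#' and (nr, nc) not in came_from):
                came_from[(nr, nc)] = (r, c)
                frontier.append((nr, nc))
    return None  # goal unreachable

# A made-up "room": walls of furniture ('#') between two corners.
room = ["....#....",
        ".##.#.##.",
        "....#....",
        ".#.......",
        "....##..."]
path = plan_path(room, (0, 0), (4, 8))
```

Because breadth-first search expands cells in order of distance, the first route it finds is a shortest one; the hard part in practice is building and updating the grid from perception, not searching it.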
<h2>Keeping it simple</h2>
<p>While they appear straightforward for humans, many household tasks are too complex for robots. Industrial robots are excellent for repetitive operations in which the robot motion can be preprogrammed. But household tasks are often unique to the situation and could be full of surprises that require the robot to constantly make decisions and change its route in order to perform the tasks. </p>
<p>Think about cooking or cleaning dishes. In the course of a few minutes of cooking, you might grasp a sauté pan, a spatula, a stove knob, a refrigerator door handle, an egg and a bottle of cooking oil. To wash a pan, you typically hold and move it with one hand while scrubbing with the other, and ensure that all cooked-on food residue is removed and then all soap is rinsed off.</p>
<p>There has been significant development in recent years using machine learning to train robots to make intelligent decisions when picking and placing different objects, meaning grasping and moving objects from one spot to another. However, to be able to train robots to master all different types of kitchen tools and household appliances would be another level of difficulty even for the best learning algorithms.</p>
<p>Not to mention that people’s homes often have stairs, narrow passageways and high shelves. Those hard-to-reach spaces limit the use of today’s mobile robots, which tend to use wheels or four legs. Humanoid robots, which would more closely match the environments humans build and organize for themselves, have yet to be reliably used outside of lab settings. </p>
<p>A solution to task complexity is to build special-purpose robots, such as robot vacuum cleaners or kitchen robots. Many different types of such devices are likely to be developed in the near future. However, I believe that general-purpose home robots are still a long way off.</p><img src="https://counter.theconversation.com/content/184227/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Ayonga Hereid does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>Videos of humanoid robots dancing and performing backflips in the lab notwithstanding, robots that wash your dishes and fold your laundry are still years away. A roboticist explains why.Ayonga Hereid, Assistant Professor of Mechanical and Aerospace Engineering, The Ohio State UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1882152022-08-05T05:10:24Z2022-08-05T05:10:24ZCould ‘virtual nurses’ be the answer to aged care staffing woes? Dream on<figure><img src="https://images.theconversation.com/files/477796/original/file-20220805-19484-yd4qm5.jpg?ixlib=rb-1.1.0&rect=0%2C0%2C1000%2C666&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">
</span> <span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/doctor-senior-man-patient-medical-consultation-1502513540">Shutterstock</a></span></figcaption></figure><p>Former Health Department Chief Martin Bowles has <a href="https://www.theguardian.com/australia-news/2022/aug/03/virtual-nurses-may-be-needed-to-meet-247-aged-care-staff-mandate-top-health-executive-says">reportedly proposed</a> “virtual nurses” could help address the shortage of nurses in aged care. </p>
<p>This might involve remote, possibly artificial intelligence-assisted, virtual care, rather than physical nurse presence, to assist nursing homes to meet new legislative requirements to have a registered nurse present 24/7.</p>
<p>There are clear opportunities for technological innovations to improve the care, health, and wellbeing of older people. However, substitution of face-to-face nursing and human interaction with remote care is not the answer. </p>
<p>This seriously risks perpetuating the status quo where <a href="https://www.hrw.org/news/2021/03/03/australia-urgently-address-aged-care-abuse#:%7E:text=%E2%80%9CMultiple%20investigations%20and%20reports%20have,human%20rights%20of%20older%20people.%E2%80%9D">many older people suffer</a> from isolation, neglect and lack of human engagement. </p>
<p>Eroding requirements to properly staff nursing homes with registered nurses could make it even harder to <a href="https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8685779/">attract and keep</a> staff.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/our-ailing-aged-care-system-shows-you-cant-skimp-on-nursing-care-115565">Our ailing aged care system shows you can't skimp on nursing care</a>
</strong>
</em>
</p>
<hr>
<h2>What are ‘virtual nurses’?</h2>
<p>“<a href="https://www.nature.com/articles/d41586-022-00072-z">Robot nurses</a>” already exist in some contexts, helping to move patients, take vital signs (such as blood pressure), carry medicines and laundry, and even engage with patients.</p>
<p>However, “virtual nursing” likely refers to more familiar technology where a real nurse provides a limited range of care via <a href="https://www.sciencedirect.com/science/article/pii/S1541461219303866?casa_token=4QuZ-seF5i4AAAAA:0QtENxksLvBDzKsrvWXuPNcgrPcKf6XhaVTbOVJfsnE8nL-XVQypjCq9XZGXp_KJ51ekYUQn">telehealth</a> (by phone and/or video). </p>
<p>While some might appreciate when robots can assist with <a href="https://www.nursingworld.org/%7E494055/globalassets/innovation/robotics-and-the-impact-on-nursing-practice_print_12-2-2020-pdf-1.pdf">certain tasks</a>, much of what nurses do cannot and should not be performed remotely (or by robots). </p>
<p>Indeed, older people, their loved ones, and staff are <a href="https://www.theguardian.com/australia-news/2022/feb/05/yelling-out-for-help-the-atrocious-conditions-inside-australias-aged-care-homes">calling out for</a> more physically present staff and more time to care and interact, not virtual interfaces and remote consultations.</p>
<p>The benefits of technology in health care are unquestionable and many innovations have improved care for older people. Artificial intelligence shows promise in helping <a href="https://www.nature.com/articles/s41598-021-81115-9">prevent and detect falls</a>, and socially assistive robots such as <a href="https://bmcgeriatr.biomedcentral.com/articles/10.1186/s12877-019-1244-6">PARO</a> (a baby harp seal), have been shown to reduce stress, anxiety and antipsychotic use in people with dementia.</p>
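To give a sense of how simple the core of such assistive technology can be, here is an illustrative threshold-based fall detector of the kind often built on wearable accelerometer data: a brief free-fall dip in total acceleration followed closely by an impact spike. The thresholds and window below are illustrative assumptions, not clinically validated values, and real systems use far more sophisticated models.

```python
import math

# Illustrative thresholds (assumptions, not clinically validated):
FREE_FALL_G = 0.4   # total acceleration below this suggests free fall
IMPACT_G = 2.5      # total acceleration above this suggests an impact
WINDOW = 10         # impact must follow the dip within this many samples

def detect_fall(samples):
    """samples: list of (ax, ay, az) accelerometer readings in g.
    Returns True if a free-fall dip is followed closely by an impact spike."""
    mags = [math.sqrt(ax * ax + ay * ay + az * az) for ax, ay, az in samples]
    for i, m in enumerate(mags):
        if m < FREE_FALL_G and any(mm > IMPACT_G for mm in mags[i:i + WINDOW]):
            return True
    return False

# Normal standing reads ~1 g; a simulated fall shows a dip, then a spike.
standing = [(0.0, 0.0, 1.0)] * 20
fall = standing[:5] + [(0.0, 0.0, 0.2)] * 3 + [(0.0, 0.0, 3.1)] + standing[:5]
```

Even a detector this crude separates the two traces above, which is why falls are one of the areas where technology genuinely assists; the point of this article is that such tools complement, rather than replace, human care.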
<p>Technology should not, however, be introduced at the <a href="https://www.sciencedirect.com/science/article/pii/S1322769620301438?via%3Dihub">expense of care quality</a> or supporting and sustaining a suitably sized and skilled aged care workforce. We still need to adequately staff nursing homes to provide <a href="https://www.sciencedirect.com/science/article/pii/S0020748921000869?via%3Dihub">safe, dignified care</a>.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/before-replacing-a-carer-with-a-robot-we-need-to-assess-the-pros-and-cons-106160">Before replacing a carer with a robot, we need to assess the pros and cons</a>
</strong>
</em>
</p>
<hr>
<h2>We need adequate staffing</h2>
<p>The <a href="https://agedcare.royalcommission.gov.au/publications/final-report">Royal Commission into Aged Care Quality and Safety</a> heard a vast quantity of evidence regarding insufficient staffing, particularly of nurses who have the education and skills to deliver high quality clinical and personal care. </p>
<p>This expertise is why nurses cannot be replaced with remote care, and why the Commission recommended 24/7 registered nurse presence; this has now been <a href="https://www.aph.gov.au/Parliamentary_Business/Bills_Legislation/Bills_Search_Results/Result?bId=r6874">legislated</a>.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/fixing-the-aged-care-crisis-wont-be-easy-with-just-5-of-nursing-homes-above-next-years-mandatory-staffing-targets-184238">'Fixing the aged care crisis' won't be easy, with just 5% of nursing homes above next year's mandatory staffing targets</a>
</strong>
</em>
</p>
<hr>
<p>More than half of Australian aged care residents live in nursing homes with <a href="https://ro.uow.edu.au/ahsri/1073/">unacceptably low levels of staffing</a> and <a href="https://www.health.gov.au/resources/publications/2020-aged-care-workforce-census">around 20%</a> do not have a registered nurse onsite overnight. </p>
<p>Insufficient staffing results in workers <a href="https://www.anmfsa.org.au/Web/News/2022/The_grim_reality_of_what_happens_in_a_nursing_home_that_doesn_t_have_registered_nurses_24_7.aspx">not having time to interact</a> with residents meaningfully and compassionately and also contributes to avoidable hospitalisations, worse quality care and outcomes, and poor working conditions for staff. </p>
<p>As social beings, human interaction is <a href="https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3150158/">fundamental to health</a>, wellbeing, and best practice care. This is particularly true for <a href="https://onlinelibrary.wiley.com/doi/full/10.1111/jan.12173?casa_token=l5Y_-r6rvt8AAAAA%3Awpp7P9Q9CUncyK60XOUPgv5ORx_Pi0jyMJ-Yp_kvdL7b5sTYih66Htp7l05J_I0vafKubec91hRL4Q">older people in nursing homes</a> who are less able to engage with others and is especially vital for those living with <a href="https://onlinelibrary.wiley.com/doi/full/10.1111/psyg.12765">mobility challenges</a> and <a href="https://pubmed.ncbi.nlm.nih.gov/28332405/">dementia</a>. </p>
<p>Partly due to low nurse staffing levels, <a href="https://link.springer.com/article/10.1186/s12889-020-8251-6">loneliness, isolation</a> and <a href="https://www.racgp.org.au/getattachment/86cf2c46-46f2-4177-a17b-700bb7cfa3ac/20030705lie.pdf">mental ill health</a> are widespread in aged care and have become more common due to <a href="https://www.apa.org/topics/covid-19/nursing-home-residents">pandemic related restrictions</a> on visitors and staff.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/working-conditions-in-aged-care-homes-are-awful-largely-because-the-work-is-done-by-women-124900">Working conditions in aged care homes are awful, largely because the work is done by women</a>
</strong>
</em>
</p>
<hr>
<p>Care experiences are shaped by <a href="https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6382052/">human interaction and contact</a>; the touch of a hand, a smile, eye contact, and being able to take the time to genuinely listen. </p>
<p>These actions are central to how nurses and other staff build effective and <a href="https://onlinelibrary.wiley.com/doi/10.1111/jan.12862">meaningful relationships</a> with residents. </p>
<p>Seeking to replace human contact with virtual interfaces seems both inconsistent with the Royal Commission’s findings and possibly cruel. </p>
<p>Personal interactions also help staff, as the <a href="https://agedcare.royalcommission.gov.au/sites/default/files/2021-03/final-report-volume-1_0.pdf">Royal Commission</a> highlighted:</p>
<blockquote>
<p>Knowing those they care for helps care staff to understand how someone would like to be cared for and what is important to them. It helps staff to care – and to care in a way that reinforces that person’s sense of self and maintains their dignity. This type of person-centred care takes time.</p>
</blockquote>
<p>Rather than circumventing reforms to ensure more nurses provide face-to-face care in nursing homes, we need to address the range of challenges contributing to widespread and tenacious <a href="https://onlinelibrary.wiley.com/doi/full/10.1111/1467-8462.12427">workforce shortages</a>. </p>
<p>There are clear challenges for growing and retaining a sufficiently sized and skilled aged care workforce. However, government reforms such as better pay, mandated care time, and greater accountability and transparency regarding the use of funds can work together to make aged care a feasible and attractive sector to work in. </p>
<p>This is one where staff are supported to provide the high quality and safe aged care all Australians deserve and where older people receive best practice, human care.</p><img src="https://counter.theconversation.com/content/188215/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Micah DJ Peters works for and is affiliated with the Australian Nursing and Midwifery Federation (ANMF) Federal Office.</span></em></p>Many people in nursing homes already suffer from isolation, neglect and lack of human engagement. They need the human touch – in person.Micah DJ Peters, Senior Research Fellow / Director - Australian Nursing and Midwifery Federation (ANMF) National Policy Research Unit (Federal Office), University of South AustraliaLicensed as Creative Commons – attribution, no derivatives.