<h1>Human factors – The Conversation</h1>

<h1>Gliding, not searching: Here’s how to reset your view of ChatGPT to steer it to better results</h1>
<figure><img src="https://images.theconversation.com/files/538613/original/file-20230720-19-7o5oi7.jpg?ixlib=rb-1.1.0&rect=0%2C0%2C5202%2C3346&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Thinking of ChatGPT as a glider you pilot can help you use it more effectively.</span> <span class="attribution"><a class="source" href="https://www.gettyimages.com/detail/photo/couple-flying-glider-airplane-royalty-free-image/601799725">Colin Anderson Productions pty ltd/DigitalVision via Getty Images</a></span></figcaption></figure><p>ChatGPT has exploded in popularity, and people are using it to write <a href="https://www.forbes.com/sites/jodiecook/2023/06/15/train-chatgpt-to-write-like-you-in-5-easy-steps/?sh=5cd66aff530f">articles</a> and <a href="https://www.zdnet.com/article/how-to-use-chatgpt-to-write-an-essay/">essays</a>, <a href="https://seo.ai/blog/chatgpt-copywriting">generate marketing copy</a> and <a href="https://www.zdnet.com/article/how-to-use-chatgpt-to-write-code/">computer code</a>, or simply as a <a href="https://www.apa.org/monitor/2023/06/chatgpt-learning-tool">learning</a> or <a href="https://www.nichepursuits.com/chatgpt-for-research/">research tool</a>. However, most people don’t understand how it works or what it can do, so they are either unhappy with its results or not using it in a way that draws out its best capabilities.</p>
<p>I’m a <a href="https://engineering.tufts.edu/me/people/faculty/james-intriligator">human factors engineer</a>. A core principle in my field is <a href="https://uxmag.com/articles/human-factor-principles-in-ux-design">never blame the user</a>. Unfortunately, the ChatGPT search-box interface elicits the wrong <a href="https://doi.org/10.1177/001872088903100601">mental model</a> and leads users to believe that entering a simple question should lead to a comprehensive result, but that’s not how ChatGPT works. </p>
<p>Unlike a search engine, with static and stored results, ChatGPT never copies, retrieves or looks up information from anywhere. Rather, it generates every word anew. You send it a prompt, and based on its machine-learning training on massive amounts of text, <a href="https://towardsdatascience.com/how-chatgpt-works-the-models-behind-the-bot-1ce5fca96286">it creates an original answer</a>.</p>
<p>Most importantly, each chat retains context during a conversation, meaning that questions asked and answers provided earlier in the conversation <a href="https://www.zdnet.com/article/how-to-write-better-chatgpt-prompts/">will inform responses it generates later</a>. The answers, therefore, are malleable, and the user needs to participate in an iterative process to shape them into something useful.</p>
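<p>For readers who want to see the mechanics, here is a minimal sketch of what “retaining context” looks like when a chat model is called programmatically. It assumes the OpenAI Python client (version 1.x) and uses a placeholder model name; the key point is simply that the entire message history is resent with every request, which is how earlier turns shape later answers.</p>
<pre><code># Minimal sketch of a context-carrying chat loop.
# Assumptions: the OpenAI Python client (openai 1.x) is installed and an
# API key is set in the OPENAI_API_KEY environment variable; the model
# name below is a placeholder.
from openai import OpenAI

client = OpenAI()
history = []  # the conversation so far: every prior turn, both sides

def ask(prompt: str) -> str:
    history.append({"role": "user", "content": prompt})
    # The full history is sent on every call; the model has no memory
    # beyond what appears in this list.
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=history,
    )
    answer = response.choices[0].message.content
    history.append({"role": "assistant", "content": answer})
    return answer

print(ask("Write a tragic love story about Hershey's Kisses."))
print(ask("Now retell it in the style of Dr. Seuss."))  # relies on turn one
</code></pre>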
<p>Your mental model of a machine – how you conceive of it – is important for using it effectively. To understand how to shape a productive session with ChatGPT, think of it as a glider that takes you on journeys through knowledge and possibilities.</p>
<h2>Dimensions of knowledge</h2>
<p>You can begin by thinking of a specific dimension or space in a topic that intrigues you. If the topic were chocolate, for example, you might ask it to write a tragic love story about Hershey’s Kisses. The glider has been trained on essentially everything ever written about Kisses, and similarly it “knows” how to glide through all kinds of story spaces – so it will confidently take you on a flight through Hershey’s Kisses space to produce the desired story.</p>
<p>You might instead ask it to explain five ways in which chocolate is healthy and give the response in the style of Dr. Seuss. Your requests will launch the glider through different knowledge spaces – chocolate and health – toward a different destination – a story in a specific style.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/538604/original/file-20230720-17-qpgxqw.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="sections of a chocolate bar sit on top of a pile of cocoa beans" src="https://images.theconversation.com/files/538604/original/file-20230720-17-qpgxqw.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/538604/original/file-20230720-17-qpgxqw.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=409&fit=crop&dpr=1 600w, https://images.theconversation.com/files/538604/original/file-20230720-17-qpgxqw.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=409&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/538604/original/file-20230720-17-qpgxqw.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=409&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/538604/original/file-20230720-17-qpgxqw.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=514&fit=crop&dpr=1 754w, https://images.theconversation.com/files/538604/original/file-20230720-17-qpgxqw.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=514&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/538604/original/file-20230720-17-qpgxqw.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=514&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Your explorations with ChatGPT can span multiple areas of knowledge – for example, crossing chocolate with climate change, cuisine, health, international trade or romance fiction.</span>
<span class="attribution"><a class="source" href="https://newsroom.ap.org/detail/DEU/b19f79456b8d4abc8c98b4e3ad1bc29c/photo">AP Photo/Hermann J. Knippertz</a></span>
</figcaption>
</figure>
<p>To unlock ChatGPT’s full potential, you can learn to fly the glider through “<a href="https://www.mathopenref.com/transversal.html">transversal</a>” spaces – areas that cross multiple domains of knowledge. By guiding it through these domains, ChatGPT will learn both the scope and angle of your interest and will begin to adjust its response to provide better answers.</p>
<p>For example, consider this prompt: “Can you give me advice on getting healthy.” In that query, ChatGPT does not know who the “you” is, nor who “me” is, nor what you mean by “getting healthy.” Instead, try this: “Pretend you are a medical doctor, a nutritionist and a personal coach. Prepare a two-week food and exercise plan for a 56-year-old man to increase heart health.” With this, you have given the glider a more specific flight plan spanning areas of medicine, nutrition and motivation. </p>
<p>If you want something more precise, then you can activate a few more dimensions. For example, add in: “And I want to lose some weight and build muscle, and I want to spend 20 minutes a day on exercise, and I cannot do pull-ups and I hate tofu.” ChatGPT will provide output taking into account all of your activated dimensions. Each dimension can be presented together or in sequence. </p>
<h2>Flight plan</h2>
<p>The dimensions you add through prompts can be informed by answers ChatGPT has given along the way. Here’s an example: “Pretend you are an expert in cancer, nutrition and behavior change. Propose 8 behavior-change interventions to reduce cancer rates in rural communities.” ChatGPT will dutifully present eight interventions. </p>
<p>Let’s say three of the ideas look the most promising. You can follow up with a prompt to encourage more details and start putting it in a format that could be used for public messaging: “Combine concepts from ideas 4, 6 and 7 to create 4 new possibilities – give each a tagline, and outline the details.” Now let’s say intervention 2 seems promising. You can prompt ChatGPT to make it even better: “Offer six critiques of intervention 2 and then redesign it to address the critiques.”</p>
<p>ChatGPT does better if you first focus on and highlight dimensions you think are particularly important. For example, if you really care about the behavior-change aspect of the rural cancer rates scenario, you could force ChatGPT to get more nuanced and add more weight and depth to that dimension before you go down the path of interventions. </p>
<p>You could do this by first prompting: “Classify behavior-change techniques into 6 named categories. Within each, describe three approaches and name two important researchers in the category.” This will better activate the behavior-change dimension, letting ChatGPT incorporate this knowledge in subsequent explorations.</p>
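<p>Written out as a script, the flight plan above is simply a sequence of prompts sent within a single chat. The sketch below reuses the hypothetical <code>ask()</code> helper from the earlier example; the prompts are the ones from the text.</p>
<pre><code># The "flight plan" as one scripted conversation, reusing the ask()
# helper sketched earlier (illustrative only).
flight_plan = [
    # First activate the dimension you care about most...
    "Classify behavior-change techniques into 6 named categories. "
    "Within each, describe three approaches and name two important "
    "researchers in the category.",
    # ...then launch the main request...
    "Pretend you are an expert in cancer, nutrition and behavior change. "
    "Propose 8 behavior-change interventions to reduce cancer rates in "
    "rural communities.",
    # ...then iterate on the answers.
    "Combine concepts from ideas 4, 6 and 7 to create 4 new possibilities "
    "- give each a tagline, and outline the details.",
    "Offer six critiques of intervention 2 and then redesign it to "
    "address the critiques.",
]

for prompt in flight_plan:
    print(ask(prompt))  # each answer builds on everything said before
</code></pre>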
<p>There are many categories of prompt elements you can include to activate dimensions of interest. One is domains, like “machine learning approaches.” Another is expertise, like “respond as an economist with Marxist leanings.” And another is output style, like “write it as an essay for The Economist.” You can also specify audiences, like “create and describe 5 clusters of our customer-types and write a product description targeted to each one.”</p>
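<p>One way to picture these element categories is as slots in a prompt template. The toy builder below is only an illustration of that idea – every slot name and example value is hypothetical.</p>
<pre><code># Toy prompt builder composing the element categories named above.
# All slot names and example values are hypothetical.
def build_prompt(domain: str, expertise: str, style: str,
                 audience: str, task: str) -> str:
    return (
        f"Respond as {expertise}. "
        f"Draw on {domain}. "
        f"Write the result as {style}, aimed at {audience}. "
        f"Task: {task}"
    )

prompt = build_prompt(
    domain="machine learning approaches",
    expertise="an economist with Marxist leanings",
    style="an essay for The Economist",
    audience="non-specialist readers",
    task="assess how chatbots may reshape knowledge work.",
)
print(ask(prompt))  # reuses the ask() helper sketched earlier
</code></pre>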
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/cfqtFvWOfg0?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
<figcaption><span class="caption">ChatGPT and its cousins often just make up incorrect answers, reason enough to avoid thinking of them as search engines.</span></figcaption>
</figure>
<h2>Explorations, not answers</h2>
<p>By rejecting the search engine metaphor and instead embracing a transdimensional glider metaphor, you can better understand how ChatGPT works and navigate more effectively toward valuable insights.</p>
<p>The interaction with ChatGPT is best performed not as a simple or undirected question-and-answer session, but as an interactive conversation that progressively builds knowledge for both the user and the chatbot. The more information you provide to it about your interests, and the more feedback it gets on its responses, the better its answers and suggestions. The richer the journey, the richer the destination.</p>
<p>It is important, however, to use the information provided appropriately. The facts, details and references ChatGPT presents are not taken from verified sources. They are conjured based on its training on a <a href="https://datascience.columbia.edu/news/2023/columbia-perspectives-on-chatgpt/">vast but non-curated set of data</a>. ChatGPT will generate a medical diagnosis the same way it writes a Harry Potter story, which is to say it is a bit of an improviser. </p>
<p>You should always critically evaluate the specific information it provides and consider its output as explorations and suggestions rather than as hard facts. Treat its content as imaginative conjectures that require further verification, analysis and filtering by you, the human pilot.</p>
<p><em>This article was updated to include disclosure of the author’s consulting business.</em></p>
<p class="fine-print"><em><span>James Intriligator provides consulting services on business innovation, marketing and human factors, including the use of AI technologies. He plans to add prompt engineering services to this consultancy.</span></em></p>ChatGPT can be very useful – if you shift how you view it. The first step is to stop thinking of it as a chatty search engine.James Intriligator, Professor of the Practice, Tufts UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1988272023-01-31T19:31:56Z2023-01-31T19:31:56ZIn a world of limited resources, low-tech solutions are the future – providing we make them more user-friendly<figure><img src="https://images.theconversation.com/files/507068/original/file-20230130-18-wtdp0g.jpg?ixlib=rb-1.1.0&rect=0%2C7%2C1007%2C660&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Prototyping of a self-built stove.</span> <span class="attribution"><a class="source" href="https://photo.lowtechlab.org/picture.php?/712/category/43">Julien Lemaistre/Low-tech Lab</a></span></figcaption></figure><p>Semiconductors for electric vehicles have been in <a href="https://www.jpmorgan.com/insights/research/supply-chain-chip-shortage">short supply since 2020</a>. The causes are multiple, including water shortages in producing countries and increasingly high-tech models in Europe.</p>
<p>Could this be the opportunity to rethink our reliance on these technologies? Indeed, we are facing a paradox. In response to the ecological crisis, we tend to favour high-tech solutions, even though they increase the pressure on living environments, take a long time to implement, and are often produced in poor working conditions. In a world of limited resources, it is therefore appropriate to question our view of technology as the go-to solution to environmental challenges.</p>
<p>So-called <em>appropriate technologies</em> are those that are less complex, consume fewer resources, and have the least possible negative impact at a human and environmental level. They are one avenue of technical frugality to explore. This approach is becoming increasingly credible, with the emergence of a <a href="https://librairie.ademe.fr/cadic/6916/demarches_low-tech-rapport_publicv2.pdf">structured ecosystem in France</a> and its inclusion in one of the government’s plans to reach <a href="https://transitions2050.ademe.fr/generation-frugales">carbon neutrality by 2050</a>.</p>
<h2>A technological change but also a human one</h2>
<p>In a <a href="https://hal.science/hal-03598528">study we conducted among experts in France</a>, we propose to define appropriate technologies as:</p>
<blockquote>
<p>“A set of objects, services and practices whose design is constrained by the need to care for humans and the environments of production/use of which they are part.” </p>
</blockquote>
<p>We also sought to identify the characteristics that would help define the appropriate technology approach. These include a renewal of design methods, a psychological transformation, user empowerment, a tendency to favour de-automation, as well as a taste for radical usefulness and technical sustainability.</p>
<p>These characteristics reveal that appropriate technologies are not defined solely by a lower technical intensity than high tech, but rather by a global approach that also includes strong human and social dimensions. Because the appropriate-technology movement in France was popularised and developed by engineers, human factors are still rarely taken into account, or are limited to the question of social acceptability. Their impact on the organisation of work, ease-of-use issues and the way they change how human needs are satisfied remain to be explored.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/507374/original/file-20230131-12-gb670b.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/507374/original/file-20230131-12-gb670b.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/507374/original/file-20230131-12-gb670b.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=401&fit=crop&dpr=1 600w, https://images.theconversation.com/files/507374/original/file-20230131-12-gb670b.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=401&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/507374/original/file-20230131-12-gb670b.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=401&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/507374/original/file-20230131-12-gb670b.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=504&fit=crop&dpr=1 754w, https://images.theconversation.com/files/507374/original/file-20230131-12-gb670b.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=504&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/507374/original/file-20230131-12-gb670b.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=504&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Infographics representing the eight characteristics of appropriate technologies and their definition.</span>
<span class="attribution"><span class="source">Authors</span></span>
</figcaption>
</figure>
<h2>Barriers to use</h2>
<p>In a <a href="https://librairie.ademe.fr/cadic/6916/demarches_low-tech-rapport_publicv2.pdf">recent report</a>, France’s ecological transition agency <a href="https://www.ademe.fr/en/frontpage/">Ademe</a> identifies four barriers to the deployment of appropriate technologies: regulatory, cultural, economic and semantic. Missing from that list are usability barriers.</p>
<p>Theoretical models of technology acceptance – models that explain the factors that drive technology adoption – emphasise that the quality of interaction between users and artefacts can be a major barrier. Indeed, they identify utility and usability as significant determinants of technology use. In the case of appropriate technologies, they seem all the more important. </p>
<p>To identify these obstacles in more detail, <a href="https://hal.science/hal-03206053">we conducted a study</a> on the public’s representations. It was done in partnership with the <a href="https://lowtechlab.org/en">Low-tech Lab</a>, an association that aims to disseminate the appropriate technology approach. (In French, the term “low-techs” is commonly used to refer to “appropriate technologies”; the equivalent of the Low-tech Lab in the US would be the <a href="https://www.ncat.org">NCAT</a>.) Those surveyed saw the following factors as important conditions for the transition to appropriate technologies: their accessibility, the ability to use them autonomously, and the need for a psychological change – which confirms the importance of the obstacles related to use.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/507067/original/file-20230130-14-fmvzo7.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/507067/original/file-20230130-14-fmvzo7.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/507067/original/file-20230130-14-fmvzo7.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=283&fit=crop&dpr=1 600w, https://images.theconversation.com/files/507067/original/file-20230130-14-fmvzo7.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=283&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/507067/original/file-20230130-14-fmvzo7.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=283&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/507067/original/file-20230130-14-fmvzo7.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=356&fit=crop&dpr=1 754w, https://images.theconversation.com/files/507067/original/file-20230130-14-fmvzo7.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=356&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/507067/original/file-20230130-14-fmvzo7.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=356&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">The public’s representations of appropriate technologies based on a textual analysis. In addition to technical issues, appropriate technologies must respond to a societal transition issue. To make this transition toward the wider use of appropriate technologies, the participants mainly identify obstacles related to accessibility and ease of use.</span>
<span class="attribution"><span class="source">Authors</span></span>
</figcaption>
</figure>
<p>To go further, in <a href="https://hal.science/hal-03716402">another study</a>, we categorised the problems faced by users (actual and potential) of appropriate technologies.</p>
<h2>Recommendations for appropriate-technology designers</h2>
<p>In total, 14 categories of problems appear, such as performance, usefulness, pleasure, production/installation, know-how, safety and legal compliance.</p>
<p>These problems, and others, stem from two main factors. First, appropriate technologies require more user involvement than conventional technologies. They’re less automated and less digital, as the user takes over much of what is currently handled by automation or standardised industrial processes. This can be compensated for by, for example, making the unusual parts of the device with which the user must interact clearly visible and understandable.</p>
<p>Furthermore, appropriate technologies sometimes have a rudimentary/makeshift aspect. In a perspective of empowerment, but also of environmental sustainability, appropriate technologies are not necessarily manufactured and installed by professionals. This can have consequences on their safe use, the understandability of their functioning, and other aspects.</p>
<p>This is why we have formalised <a href="https://hal.science/hal-03716402">seven design recommendations</a> to allow those making appropriate technologies to become aware of these usability issues and thus avoid them. Our goal was to guide practitioners on the aspects related to the interaction between humans and appropriate technologies, while remaining in line with the appropriate technology “philosophy”. If some recommendations tackle the right level of technical intensity to propose, others aim at facilitating their use.</p>
<h2>Changing the way we design technologies and interactions</h2>
<p>While appropriate technologies appear relevant for addressing the ecological and social transition, the “human” obstacles to their wider adoption are not only caused by “mental representation” issues but also by concrete questions of use (accessibility, usability, etc.) that require us to think about technical sobriety in terms of user experience.</p>
<p>Other obstacles exist as well. For example, designers may not be inclined to design appropriate technologies, because this approach, which calls for “technological discernment”, is not the classic way for them to showcase their skills or those of their companies.</p>
<p>In recent years, <a href="https://www.sciencedirect.com/science/article/abs/pii/S0959652617305528">many</a> <a href="https://www.tandfonline.com/doi/full/10.1080/09537325.2021.1914834">studies</a> have tried to shed light on the appropriate technology approach. Let us however remain <a href="https://doi.org/10.46692/9781529213294">modest</a>. It will not be possible, nor desirable, to “solve” all user-experience problems. Indeed, appropriate technologies invite us to accept a measure of “friction”, to take time, to question needs and choose priorities. In short, they invite us not to reproduce the design practices that have contributed to the current environmental crisis.</p>
<hr>
<p><em>This article was co-written with Antoine Martin, PhD in human factors/ergonomics and cofounder of Sentier Ergonomie.</em></p>
<p class="fine-print"><em><span>Clément Colin co-founded a research agency that provides advice, research and training on appropriate technologies.</span></em></p>
<p><em>The deployment of low-tech solutions requires taking the human factor into account and changing design practices.</em></p>
<p class="fine-print"><em>Clément Colin, PhD candidate in ergonomics, Université de Lorraine. Licensed as Creative Commons – attribution, no derivatives.</em></p>

<h1>The safer you feel, the less safely you might behave – but research suggests ways to counteract this tendency</h1>
<figure><img src="https://images.theconversation.com/files/501595/original/file-20221216-27-821uce.jpg?ixlib=rb-1.1.0&rect=0%2C580%2C5615%2C3152&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Work-related safety precautions can lead to riskier behaviors on the job.</span> <span class="attribution"><a class="source" href="https://www.gettyimages.com/detail/photo/dangerous-jobs-royalty-free-image/157646267">TerryJ/E+ via Getty Images</a></span></figcaption></figure><p>Interventions designed to keep people safe can have hidden side effects. With an increased perception of safety, some people are more likely to take risks.</p>
<p>For example, some vehicle drivers <a href="https://doi.org/10.1016/0001-4575(88)90055-3">take more risks when they are buckled up</a> in a shoulder-and-lap belt. <a href="https://doi.org/10.1061/(ASCE)CO.1943-7862.0001812">Some construction workers step closer to the edge</a> of the roof because they are hooked to a fall-protection rope. Some parents of young children <a href="https://www.jstor.org/stable/1816378">take less care with medicine bottles</a> that are “childproof” and thus difficult to open.</p>
<p>Techniques designed to reduce harm can promote a false sense of security and increase risky behavior and unintentional injuries.</p>
<p>As <a href="https://scholar.google.com/citations?user=_bQ06DAAAAAJ&hl=en&oi=ao">civil</a> <a href="https://scholar.google.com/citations?user=bruDeeAAAAAJ&hl=en&oi=sra">engineers</a> and <a href="https://scholar.google.com/citations?user=H0ye5TgAAAAJ&hl=en&oi=ao">applied behavioral scientists</a>, we are interested in ways to improve workplace safety. Our ongoing research suggests that employers need to do more than provide injury-protection devices and mandate safety rules and procedures to follow. Job-site mottos like “safety is our priority” are not enough. Employers need to consider the crucial human dynamic that can counteract their desired injury-prevention effects – and tap into strategies that might get around this safety paradox.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/501878/original/file-20221219-18-n9w9cz.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="woman in car's driver's seat fastens her seat belt" src="https://images.theconversation.com/files/501878/original/file-20221219-18-n9w9cz.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/501878/original/file-20221219-18-n9w9cz.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/501878/original/file-20221219-18-n9w9cz.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/501878/original/file-20221219-18-n9w9cz.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/501878/original/file-20221219-18-n9w9cz.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/501878/original/file-20221219-18-n9w9cz.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/501878/original/file-20221219-18-n9w9cz.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Infamously, people may drive more recklessly after buckling up.</span>
<span class="attribution"><a class="source" href="https://www.gettyimages.com/detail/photo/female-caregiver-fastening-seat-belt-royalty-free-image/1343267514">Klaus Vedfelt/DigitalVision via Getty Images</a></span>
</figcaption>
</figure>
<h2>Why precautions can trigger more risks</h2>
<p>A well-established psychological phenomenon known as <a href="https://doi.org/10.1086/260352">risk compensation</a> or <a href="https://doi.org/10.1111/j.1539-6924.1982.tb01384.x">risk homeostasis</a> explains this safety paradox. An intervention designed to prevent or reduce unintentional injury decreases one’s perception of risk. Then that perception increases the person’s risk-taking behavior, especially when taking a risk has a benefit, such as comfort, convenience or getting a job done faster.</p>
<p>Just as thermostats have a set point and activate when the temperature deviates from normal, people maintain a target level of risk by adjusting their behavior. They balance potential risks and perceived benefits. </p>
<p>For instance, a driver may compensate for safety interventions like a vehicle shoulder-and-lap belt, an energy-absorbing steering column and an airbag by driving faster – trading off personal safety for time saved. The heightened odds of a crash at higher driving speeds don’t affect only the driver; they also put other vehicles, pedestrians and cyclists at more risk. An individual’s risk compensation can influence the injury-prevention impact of protective devices and safety-related rules and regulations for the population overall.</p>
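<p>To make the set-point idea concrete, here is a toy feedback loop. It is purely illustrative and not a model from the risk-homeostasis literature: perceived risk is assumed proportional to speed, a safety device lowers the risk felt per km/h, and the driver adjusts speed until perceived risk returns to a personal target.</p>
<pre><code># Toy illustration of risk homeostasis (not a model from the literature).
# Assumption: perceived risk grows linearly with speed; a safety device
# reduces the risk the driver feels per km/h, and the driver speeds up
# until perceived risk is back at the personal set point.
TARGET_RISK = 10.0      # the driver's comfortable risk level (arbitrary units)
RISK_PER_KMH = 0.10     # perceived risk per km/h, unbelted

def settled_speed(risk_per_kmh: float) -> float:
    speed = 60.0
    for _ in range(200):  # simple feedback loop toward the set point
        perceived = speed * risk_per_kmh
        speed += 0.5 * (TARGET_RISK - perceived)  # speed up when feeling "too safe"
    return speed

print(settled_speed(RISK_PER_KMH))        # baseline driver settles near 100 km/h
print(settled_speed(0.8 * RISK_PER_KMH))  # belted driver feels safer, settles near 125 km/h
</code></pre>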
<p>In our own research, we investigated the risk compensation phenomenon among construction workers using an immersive mixed-virtual reality scenario that simulated a roofing task. We asked participants to install asphalt shingles on a real 27-degree sloped roof within a virtual environment that conveyed the sense of being 20 feet off the ground. Then we monitored the workers’ actions and physiological responses while they completed roofing tasks under three levels of safety protection.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/501554/original/file-20221216-21-yu1bs2.JPG?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="man roped in with safety equipment uses a hammer on a sloped surface with virtual background" src="https://images.theconversation.com/files/501554/original/file-20221216-21-yu1bs2.JPG?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/501554/original/file-20221216-21-yu1bs2.JPG?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=450&fit=crop&dpr=1 600w, https://images.theconversation.com/files/501554/original/file-20221216-21-yu1bs2.JPG?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=450&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/501554/original/file-20221216-21-yu1bs2.JPG?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=450&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/501554/original/file-20221216-21-yu1bs2.JPG?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=566&fit=crop&dpr=1 754w, https://images.theconversation.com/files/501554/original/file-20221216-21-yu1bs2.JPG?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=566&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/501554/original/file-20221216-21-yu1bs2.JPG?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=566&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Inside a mixed-virtual reality world, roofers performed tasks that are normal parts of their job.</span>
<span class="attribution"><span class="source">Jesus M. de la Garza</span>, <a class="license" href="http://creativecommons.org/licenses/by-nd/4.0/">CC BY-ND</a></span>
</figcaption>
</figure>
<p>As expected, more safety interventions created a false sense of invulnerability in participants. Adding guardrails to the roof’s edge and providing a fall-arrest system for the roofer provided real protection and rightfully increased a sense of security, which resulted in participants’ stepping closer to the edge of the virtual roof, leaning over the edge, and spending more time exposing themselves to the risk of falling. Participants <a href="https://doi.org/10.1061/(ASCE)CO.1943-7862.0001812">increased their risk-taking behavior by as much as 55%</a>. This study provided empirical evidence that safety devices can implicitly encourage workers to take more risks.</p>
<p>One hypothesis that flows from our research is that educating people about the risk compensation effect could reduce their vulnerability to this phenomenon. Future studies are needed to test this possibility.</p>
<h2>A perception of choice matters</h2>
<p>A crucial consideration is whether people feel the decision to take precautions is their own.</p>
<p>In studies one of us conducted with a colleague, pizza-delivery drivers demonstrated <a href="https://doi.org/10.1901/jaba.1991.24-31">safer driving overall when they chose</a> <a href="https://doi.org/10.1037/0021-9010.82.2.253">to increase particular safe-driving behaviors</a>. For instance, drivers at one store participated in setting a goal to stop completely at intersections at least 80% of the time, while at another store management assigned drivers the 80% complete stopping goal. Drivers from both groups met that goal. But among the drivers who self-selected the target, there was a spillover effect: They increased their use of turn signals and lap-and-shoulder belts.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/501880/original/file-20221219-20-tjdv46.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="two women in masks sit outside talking" src="https://images.theconversation.com/files/501880/original/file-20221219-20-tjdv46.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/501880/original/file-20221219-20-tjdv46.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/501880/original/file-20221219-20-tjdv46.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/501880/original/file-20221219-20-tjdv46.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/501880/original/file-20221219-20-tjdv46.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/501880/original/file-20221219-20-tjdv46.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/501880/original/file-20221219-20-tjdv46.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Rather than get closer because masks provided a level of protection, people appeared to extend their safety behavior by maintaining social distancing.</span>
<span class="attribution"><a class="source" href="https://www.gettyimages.com/detail/photo/colleagues-meet-at-outdoor-cafe-during-covid-19-royalty-free-image/1285308734">SDI Productions/E+ via Getty Images</a></span>
</figcaption>
</figure>
<p><a href="https://doi.org/10.51390/vajbts.v1i1.17">A study early in the COVID-19 pandemic</a> identified a similar spillover or response generalization effect. People who wore a face mask outdoors where mask wearing was not mandated also maintained a greater interpersonal distance from others than did people without masks.</p>
<p>In this case, as with the delivery drivers, one safe behavior spilled over to another safe behavior – the opposite of risk compensation – when people had the perception of personal choice. We believe perceived choice was the critical human dynamic that influenced people to generalize their safety behavior rather than compensate for the reduction in risk.</p>
<p>Top-down rules and regulations can <a href="https://hackettpublishing.com/beyond-freedom-and-dignity">stifle a perception of choice</a> and actually motivate people to intentionally do things that flout a safety mandate in order to <a href="https://psycnet.apa.org/record/1967-08061-000">assert their individual freedom or personal choice</a>. People tend to bridle against the feeling of having a freedom taken away and will do what they can to regain it.</p>
<p>“Click It or Ticket” and other management attempts to dictate safety come with disadvantages that might negate any safety gains. Letting people feel they have a say in the matter can decrease the amount of risk compensation they experience and increase a safety spillover effect.</p>
<p class="fine-print"><em><span>Jesus M. de la Garza is a subject matter expert for ARTBA’s Safety Certification for Transportation Project Professionals (SCTPP) program.</span></em></p><p class="fine-print"><em><span>E. Scott Geller is an Alumni Distinguished Professor at Virginia Tech; a Senior Partner with Safety Performance Solutions, Inc., President of Make-A-Difference, LLC; and Co-Founder of GellerAC4P,Inc.</span></em></p><p class="fine-print"><em><span>Sogand Hasanzadeh received funding from National Science Foundation (Award Number 2049711 and
2049842) and the Electri International (NECA). </span></em></p>If you feel safer, you might take more risks – canceling out the benefits of various safety interventions. But educating people about this paradox and allowing for some personal choice might help.Jesus M. de la Garza, Professor of Civil Engineering and Director of the School of Civil & Environmental Engineering and Earth Sciences, Clemson UniversityE. Scott Geller, Alumni Distinguished Professor of Psychology and Director of the Center for Applied Behavior Systems, Virginia TechSogand Hasanzadeh, Assistant Professor of Civil Engineering, Purdue UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/711282017-01-12T13:38:58Z2017-01-12T13:38:58ZWhat a TV mixup can teach us about preventing fatal accidents<figure><img src="https://images.theconversation.com/files/152528/original/image-20170112-25850-6tz7q5.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">You're a mountaineer, right?</span> <span class="attribution"><span class="source">Shutterstock</span></span></figcaption></figure><p>When Todd Landman, a professor of political science, was recently invited to appear on the BBC Breakfast TV show, he didn’t expect to be asked to talk about mountaineering. He’d originally been invited on to the tightly scheduled and carefully planned programme to discuss Donald Trump.</p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/158lk0zci2M?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
</figure>
<p>But a series of simple mistakes led to an awkward on-air exchange as the presenters realised he wasn’t Leslie Binns, mountain climber and former British soldier from North Yorkshire, but an academic with an American accent – much to the delight of the <a href="http://www.telegraph.co.uk/news/2017/01/07/bbc-breakfast-blunder-think-have-wrong-guest/">rest of the press</a> and social media.</p>
<p><div data-react-class="Tweet" data-react-props="{"tweetId":"817658114392604672"}"></div></p>
<p>These kinds of mistakes aren’t rare. Many will remember a similar mixup when the BBC infamously interviewed Congolese IT worker Guy Goma, instead of British journalist Guy Kewney, about a court case between Apple and the Beatles.</p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/e6Y2uQn_wvc?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
</figure>
<p>No individual was to blame for Todd’s mixup. Due to a bizarre set of coincidences and misunderstandings, the BBC researchers and presenters genuinely thought Todd was Leslie Binns, despite the careful planning that had gone into the programme. So why do simple mistakes lead to major errors in such a complicated system with so many checks? Answering this question can provide vital insight into how to prevent much more serious problems occurring, for example in medical operations or air traffic control.</p>
<p>The discipline of <a href="http://www.ergonomics.org.uk/what-is-ergonomics/">human factors</a> can help us here. Studying human factors helps us to understand how we can design products and systems to take into account human capabilities and limitations. People frequently make decisions based on incomplete information, and use rough rules known as “heuristics” to jump to the most likely decision or hypothesis. The problem with the decisions we make using these rules is that they can be susceptible to our <a href="http://www.nobelprize.org/nobel_prizes/economic-sciences/laureates/2002/kahneman-facts.html">personal biases</a>. </p>
<p>In Todd’s case, neither he nor the BBC staff and presenters expected there to be a mixup. This led them to experience bias in the way that they interpreted each other during short conversations before the interview. Todd was greeted by a female staff member saying “Hello, Leslie, you’re wearing a suit”, referring to his lack of mountaineering equipment. Todd interpreted this as meaning “Hello, [my name is] Leslie”, and a conversational comment about his attire. Due to the biases in their perception and decision making, as well as the time pressure in the studio, neither realised a mistake had been made.</p>
<h2>Confirmation bias</h2>
<p>In systems where safety is critical, the consequences of such biases or behaviours can be much more serious than the mild embarrassment and amusement that resulted from the BBC Breakfast mixup. In 1989, the pilots of <a href="http://www.bbc.co.uk/news/uk-england-leicestershire-25548016">British Midland Flight 92</a> believed that they had received an alert of a fire in the right-hand engine after misinterpreting their displays. When they shut off this engine, the vibration that they had previously been experiencing stopped.</p>
<p>This led to a confirmation bias in their decision making – their action led to the result they expected so they thought they had solved the problem. Sadly they had misinterpreted the information they had received in the cockpit and shut off the wrong engine. The plane crashed on the approach to the airport and 47 passengers lost their lives. </p>
<p>Decision making does not happen in a social vacuum. We conform to social norms and behave in a way that fits in with others within our social or work setting. Just as Todd had to work out how to confront the misunderstanding as he realised it was happening, just before his interview was about to start, we can feel uncomfortable about challenging or discussing decisions in some settings where we feel intimidated, or where others are clearly in positions of authority.</p>
<figure class="align-center ">
<img alt="" src="https://images.theconversation.com/files/152523/original/image-20170112-25847-6l0fgr.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/152523/original/image-20170112-25847-6l0fgr.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/152523/original/image-20170112-25847-6l0fgr.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/152523/original/image-20170112-25847-6l0fgr.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/152523/original/image-20170112-25847-6l0fgr.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/152523/original/image-20170112-25847-6l0fgr.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/152523/original/image-20170112-25847-6l0fgr.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">Er, guys, this is a takeaway menu.</span>
<span class="attribution"><span class="source">Shutterstock</span></span>
</figcaption>
</figure>
<p>For example, in hospitals, patients and junior staff can sometimes treat senior doctors as infallible. The case of <a href="http://www.chfg.org/wp-content/uploads/2010/11/ElaineBromileyAnonymousReport.pdf">Elaine Bromiley</a>, who died after her airway became blocked during a routine operation, sadly demonstrated the impact that failure in communications in the operating theatre can have. Many factors contributed to this incident, but one <a href="http://lifeinthefastlane.com/lessons-bromiley-case/">that was highlighted</a> was that the nurses in the operating theatre became aware of the problem before doctors had acknowledged the seriousness of the situation. Unfortunately, the nurses felt unable to broach the issue with the doctors.</p>
<p>In UK hospitals, effort is now made to ensure that medical decisions are discussed by staff and are more likely to be challenged if someone – even more junior colleagues – thinks those decisions are wrong.</p>
<p>In the Flight 92 accident, passengers heard the pilot announce that there was a problem with the right-hand engine, but could see through their windows that the problem was on the left. Survivors of the crash later said that they <a href="http://www.bbc.co.uk/news/uk-england-leicestershire-25548016">noticed this mismatch</a> between what they could see and what the captain had said, but that it didn’t cross their mind that an error like this could happen, or that the action of the captain should be challenged. </p>
<p>When something unexpected happens in a resilient system, we use our cognitive and social skills to respond in a way to try to ensure that no harm is done. This is known as <a href="http://www.independent.co.uk/sport/olympics/clive-woodward-from-world-cup-to-t-cup-7213915.html">thinking clearly under pressure</a> or unconscious competence. The most resilient complex systems take both human and technological capabilities into account. They are designed to be efficient while anticipating the human behaviours that might occur and incorporating design features that try to prevent errors, such as formal checks and structured communication.</p>
<p>The challenge is to make sure those design features prevent errors without slowing people down or introducing social awkwardness. When we find ourselves confronted with a potential mistake, we need to feel comfortable enough to pluck up the courage to politely but firmly say: “I think you’ve got the wrong guest, sir”.</p>
<p class="fine-print"><em><span>Sarah Sharples receives funding from the Engineering and Physical Sciences Research Council and The Health Foundation. She is Past President of the Chartered Institute of Ergonomics and Human Factors. </span></em></p><p class="fine-print"><em><span>Todd Landman receives funding from the Economic and Social Research Council. He is affiliated with the Royal Society of Arts and the Magic Circle. </span></em></p>Complex systems, from TV shows to hospitals, have plenty of checks and procedures, so why do things still go wrong?Sarah Sharples, Professor of Human Factors and Associate Pro-Vice-Chancellor for Research and Knowledge Exchange, Past President of Chartered Institute of Ergonomics and Human Factors, University of NottinghamLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/588462016-05-13T00:58:56Z2016-05-13T00:58:56ZIs it time for a presidential technoethics commission?<figure><img src="https://images.theconversation.com/files/121764/original/image-20160509-20590-1ueg43t.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Who owns your thoughts? And other important questions raised by technology.</span> <span class="attribution"><a class="source" href="http://www.shutterstock.com/pic-416811445/stock-photo-bring-creativity-for-your-business-business-vision-headhunter-concepts-business-intelligence.html?src=Q3JdP8-jxUAT_Bk6U07tyg-1-31">Hands and brain via shutterstock.com</a></span></figcaption></figure><p>A recent New York Times article highlighted the <a href="http://www.nytimes.com/2016/04/01/technology/us-textile-industry-turns-to-tech-as-gateway-to-revival.html">growing integration of technologies and textiles</a>, displaying a photograph of a delicate golden nest of optical fiber. The article reported that this new “functional fabric” has the added quality that it “acts as an optical bar code to identify who is wearing it.”</p>
<p>Is this a feature or a bug? This smart material would certainly be a new milestone in the march of technology and the marketplace to erode personal privacy. Would a suit made of this material need to come with a warning label? Just because we have the technological capability to do something like this, should we?</p>
<p>Similar questions could have been asked about putting GPS technology in our mobile phones, <a href="http://www.nytimes.com/2016/01/10/opinion/sunday/drone-regulations-should-focus-on-safety-and-privacy.html?_r=0">drones in the air</a> and the “cookies” resident on our devices to dutifully record and transmit our online activity. Right now, those conversations happen in corporate boardrooms, the media, fictional books and films, and academic settings. But there isn’t a broad national conversation around the ethics of the steady digital encroachment on our lives. Is it time to create a presidential commission on technoethics? </p>
<h2>Elevating the discussion</h2>
<p>Such a commission might take its cue from the presidential committees that have been <a href="http://bioethics.gov/history">put in place over the past 50 years</a> to study the issues that have come up about biological research. In 2001, President George W. Bush created the President’s Council on Bioethics (PCBE) to address concerns about genomics work and genetic engineering, largely inspired by advances in stem cell research and cloning. </p>
<p>The PCBE couldn’t halt research, and neither can its successor, the current <a href="http://www.bioethics.gov">Presidential Commission for the Study of Bioethical Issues</a>. But it does provide a high-profile forum for important conversations among a broad and changing group of scientists, ethicists and humanists. Their discussions in turn inform local, state and national policymakers about the possible ethical implications of groundbreaking research in biology. </p>
<p>Results take the form of commission votes and reports. For example, after six months of lively public debate, the 2001 PCBE ultimately voted 10-7 to recommend a moratorium – rather than an outright ban – on biomedical cloning based on stem cells, an outcome that seemed to have <a href="http://www.nytimes.com/2002/07/11/us/bush-s-bioethics-advisory-panel-recommends-moratorium-not-ban-cloning-research.html">great influence in the national conversation</a>.</p>
<p>University groups and other review boards overseeing research projects look to the commission’s reports as indicators of the best thinking about moral, social and ethical considerations. At Dartmouth, for example, we have a <a href="https://www.dartmouth.edu/%7Ecphs/">Committee for the Protection of Human Subjects</a> that regularly discusses wider issues during their review of research proposals that involve people. As corresponding groups at universities nationwide consider approving or suggesting modifications to proposed study designs, they can guide, or at least nudge, researchers toward the norms identified by the commission.</p>
<h2>Turning to technology</h2>
<p>When it comes to modern technologies, those types of conversations seem to be less dialogue and more broad statement. For example, <a href="http://www.independent.co.uk/news/science/stephen-hawking-transcendence-looks-at-the-implications-of-artificial-intelligence-but-are-we-taking-9313474.html">various scientists in the news increasingly warn us</a> about the dangers of artificial intelligence. </p>
<p>They express concern about the coming of “<a href="http://singularity.com/">the singularity</a>,” the term coined by inventor and technology pioneer Ray Kurzweil to denote the time at which machine intelligence surpasses human intelligence. It’s unclear how that moment would be determined: for example, while some forms of intelligence are ready-made for a “<a href="http://chronicle.com/article/Never-Mind-Turing-Tests-What/232183/">Terminator Test</a>” (machines are already better than humans at doing arithmetic and <a href="http://www.nature.com/news/google-ai-algorithm-masters-ancient-game-of-go-1.19234">playing Go</a>), others (such as <a href="https://theconversation.com/looking-for-art-in-artificial-intelligence-56335">artistic creation</a> or social intelligence) seem outside that kind of competitive context. But the fact remains that we are thoughtlessly deploying technologies with little concern for, or debate around, their context and implications for society.</p>
<p>At what point does concern trump convenience? For a little “thought experiment” consider the “<a href="http://dx.doi.org/10.1038/nature17637">mind-reading</a>” technologies being researched in neuroscience labs around the world. They aim to recognize brain activity related to identifying images and words.</p>
<p>One day, a technology could automatically take that activity measurement, interpret it and store it on your computer or in the cloud. (Imagine the marketing potential of such a service: “With DayDreamer, you’ll never lose a single great idea!”) </p>
<p>How does society decide who owns the thoughts? Right now it might depend on whether you had those thoughts at work or at home. Maybe your employer wants you to use this software because of the fear of missing a good idea that you may have dismissed. It’s easy to imagine government agencies having similar desires.</p>
<figure class="align-left zoomable">
<a href="https://images.theconversation.com/files/121768/original/image-20160509-20595-wqcft3.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/121768/original/image-20160509-20595-wqcft3.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=237&fit=clip" srcset="https://images.theconversation.com/files/121768/original/image-20160509-20595-wqcft3.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=273&fit=crop&dpr=1 600w, https://images.theconversation.com/files/121768/original/image-20160509-20595-wqcft3.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=273&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/121768/original/image-20160509-20595-wqcft3.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=273&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/121768/original/image-20160509-20595-wqcft3.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=343&fit=crop&dpr=1 754w, https://images.theconversation.com/files/121768/original/image-20160509-20595-wqcft3.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=343&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/121768/original/image-20160509-20595-wqcft3.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=343&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Movies like ‘Ex Machina’ address how humans interact with artificial intelligences.</span>
<span class="attribution"><a class="source" href="https://www.flickr.com/photos/bagogames/16153704398">bagogames/flickr</a>, <a class="license" href="http://creativecommons.org/licenses/by/4.0/">CC BY</a></span>
</figcaption>
</figure>
<p>Literature and film seem to be the formats that of late have done the most to expose the possibilities of our new technologies. Movies like <a href="http://www.imdb.com/title/tt0470752/">“Ex Machina”</a> and <a href="http://www.imdb.com/title/tt1798709/?ref_=fn_al_tt_1">“Her”</a> provide much to think about regarding our interactions with machine intelligence. Books like <a href="http://supersadtruelovestory.com/">“Super Sad True Love Story”</a> and <a href="http://www.mcsweeneys.net/articles/a-brief-q-a-with-dave-eggers-about-his-new-novel-the-circle">“The Circle”</a> (coming out soon as a movie) raise all kinds of questions about privacy and transparency. Works like these continue a long tradition of art as a spur to broad societal (and classroom) debate. Indeed, I even made written responses to those books part of a network-data analysis course I taught last winter at Dartmouth College. Why not couple such informal and episodic exposures with formal, open and considered debate?</p>
<p>A presidential technoethics commission would provide that opportunity. Worries about “the singularity” might lead to a report on the implications of an unregulated Internet of Things as well as a <a href="https://theconversation.com/are-robots-taking-our-jobs-56537">larger robotic workforce</a>. Privacy presents another obvious topic of conversation. As more of our lives are lived online (either knowingly or unknowingly), the basic definition of “private life” has been slowly transformed both by security concerns and corporate interests. The steady erosion of “self” in this regard has been subtle but pervasive, cloaked as a collection of “enhancements of user experiences” and “security concerns.” </p>
<p>With initiatives such as “<a href="https://www.theguardian.com/technology/right-to-be-forgotten">the right to be forgotten</a>” and <a href="http://money.cnn.com/2015/04/15/technology/google-europe-anti-trust-lawsuit/">recent search-fixing lawsuits against Google</a>, the European Union has taken to the courts to address the societal implications of collecting information and concentrating control over its management. In the U.S., too, there are obvious and significant legal considerations, <a href="https://www.aclu.org/issues/national-security/privacy-and-surveillance">especially around civil liberties</a> and possibly even <a href="http://www.huffingtonpost.com/dan-rockmore/too-big-to-search_b_7211898.html">implications for the pursuit of knowledge</a>. The Apple-FBI clash over phone unlocking suggests that <a href="http://www.people-press.org/2016/02/22/more-support-for-justice-department-than-for-apple-in-dispute-over-unlocking-iphone/">a significant fraction of the American public trusts industry more than the government</a> when it comes to digital policies. </p>
<p>If history is any guide, corporate interests are not always well aligned with social goods (see – among others – the <a href="http://fortune.com/2015/09/26/auto-industry-scandals/">auto</a> and <a href="http://www.who.int/tobacco/media/en/TobaccoExplained.pdf">tobacco</a> industries, and <a href="https://consumerist.com/2016/05/06/court-facebook-must-face-facial-recognition-privacy-lawsuit/">social media</a>), although of course <a href="http://money.cnn.com/2016/04/26/pf/chobani/">sometimes they are</a>. Regardless, a commission with representation from a range of constituencies, engaged in open conversation, might serve to illuminate the various interests and concerns. </p>
<h2>Discussion, not restriction</h2>
<p>All that said, it is important that the formation of a commission not serve as administrative and corporate cover for the imposition of controls and policies. The act of creating a forum for debate should not serve as justification for issuing orders.</p>
<p>A former colleague who served on the President’s Council on Bioethics, formed in 2001, tells me it was created to consider the implications of advances in bioengineering related to “tampering with the human condition.” Even without going so far as considering personal thoughts, questions around wearables, privacy, transparency, information ownership and access, and workplace transformation, along with their implications for self-definition, self-improvement and human interaction, are surely at the foundation of any consideration of the human condition. </p>
<p>There are important ethical implications for new and imagined digital technologies. People are beginning to address those implications head-on. But we should not leave the decisions to disconnected venues of limited scope. Rather, as with the inner workings of human biology, we should come up with social norms through a high-profile, public, collaborative process.</p>
<p class="fine-print"><em><span>Daniel N. Rockmore is the William H. Neukom 1964 Professor of Computational Science at Dartmouth College where he is also a Professor of Mathematics and Computer Science and Director of The Neukom Institute for Computational Science. He is a member of the Science Steering Committee of The Santa Fe Institute and also sits on the Advisory Board of the Center for Brains, Minds and Machines at MIT. Some of his research is currently funded by the National Science Foundation. His work previously has been supported by the Sloan Foundation, National Institutes of Health and the Air Force Office of Scientific Research. </span></em></p>New and imagined digital technologies have important ethical implications. We should devise relevant social norms through a high-profile, public, collaborative process.Daniel N. Rockmore, Professor, Department of Mathematics, Computational Science, and Computer Science, Dartmouth CollegeLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/398352015-04-29T09:57:34Z2015-04-29T09:57:34ZSelf-driving cars will need people, too<figure><img src="https://images.theconversation.com/files/79524/original/image-20150427-18143-x7wk5.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Don't do away with that human driver at the wheel.</span> <span class="attribution"><a class="source" href="https://www.flickr.com/photos/lokan/14503213895">LoKan Sardari</a>, <a class="license" href="http://creativecommons.org/licenses/by-nc-sa/4.0/">CC BY-NC-SA</a></span></figcaption></figure><p>Self-driving cars are expected to revolutionize the automobile industry. Rapid advances have led to working prototypes faster than most people expected. The anticipated benefits of this emerging technology include <a href="http://www.usatoday.com/story/money/cars/2015/03/04/mckinsey-self-driving-benefits/24382405/">safer, faster</a> and more <a href="http://www.scientificamerican.com/article/self-driving-cars-could-cut-greenhouse-gas-pollution/">eco-friendly</a> transportation.</p>
<p>Until now, the public dialogue about self-driving cars has centered mostly on technology. The public’s been led to believe that engineers will soon <a href="http://www.theregister.co.uk/2015/03/18/musk_self_driving_cars/">remove humans from driving</a>. But researchers in the field of human factors — experts on how people interact with machines — have shown that <a href="http://dx.doi.org/10.1518/001872008X312198">we shouldn’t ignore the human element</a> of automated driving.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/79526/original/image-20150427-18126-1jfw3ny.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/79526/original/image-20150427-18126-1jfw3ny.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/79526/original/image-20150427-18126-1jfw3ny.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=398&fit=crop&dpr=1 600w, https://images.theconversation.com/files/79526/original/image-20150427-18126-1jfw3ny.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=398&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/79526/original/image-20150427-18126-1jfw3ny.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=398&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/79526/original/image-20150427-18126-1jfw3ny.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=501&fit=crop&dpr=1 754w, https://images.theconversation.com/files/79526/original/image-20150427-18126-1jfw3ny.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=501&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/79526/original/image-20150427-18126-1jfw3ny.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=501&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Hop in, I’ll give you a ride.</span>
<span class="attribution"><a class="source" href="https://www.flickr.com/photos/gmanviz/16467802971">Gust</a>, <a class="license" href="http://creativecommons.org/licenses/by-nc-nd/4.0/">CC BY-NC-ND</a></span>
</figcaption>
</figure>
<h2>High expectations for removing human drivers</h2>
<p><a href="http://hfs.sagepub.com/content/39/2/230.short">Automation</a> is the technical term for when a machine – here a complex array of sensors and computers – takes over a task that was formerly accomplished by a human being. Many people assume that automation can replace the person altogether. For example, Google, a leader in the self-driving car quest, has <a href="http://www.nytimes.com/2014/05/28/technology/googles-next-phase-in-driverless-cars-no-brakes-or-steering-wheel.html">removed steering wheels</a> from prototype cars. Mercedes-Benz <a href="https://www.mercedes-benz.com/en/mercedes-benz/innovation/research-vehicle-f-015-luxury-in-motion/">promotional materials</a> show self-driving vehicles with rear-facing front seats. The hype on self-driving cars implies that the driver will be unneeded and free to ignore the road. </p>
<p>The public also has begun to embrace this notion. <a href="http://dx.doi.org/10.2139/ssrn.2506579">Studies</a> show that people want to engage in activities such as reading, watching movies, or napping in self-driving cars, and also that automation <a href="http://drivingassessment.uiowa.edu/sites/default/files/DA2013/Papers/015_Llaneras_0.pdf">encourages these distractions</a>. A <a href="http://dx.doi.org/10.1016/j.trf.2014.04.009">study in France</a> even indicated that riding while intoxicated was a perceived benefit.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/79530/original/image-20150427-18126-xsvex7.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/79530/original/image-20150427-18126-xsvex7.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/79530/original/image-20150427-18126-xsvex7.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=495&fit=crop&dpr=1 600w, https://images.theconversation.com/files/79530/original/image-20150427-18126-xsvex7.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=495&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/79530/original/image-20150427-18126-xsvex7.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=495&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/79530/original/image-20150427-18126-xsvex7.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=622&fit=crop&dpr=1 754w, https://images.theconversation.com/files/79530/original/image-20150427-18126-xsvex7.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=622&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/79530/original/image-20150427-18126-xsvex7.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=622&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Wheels squealing, passengers too?</span>
<span class="attribution"><a class="source" href="https://www.flickr.com/photos/jurvetson/5499949739">Steve Jurvetson</a>, <a class="license" href="http://creativecommons.org/licenses/by/4.0/">CC BY</a></span>
</figcaption>
</figure>
<h2>Automation still requires people</h2>
<p>Unfortunately, these expectations will be difficult to fulfill. Handing control of a process to a computer rarely eliminates the need for human involvement. The reliability of automated systems is imperfect. Tech innovators know from experience that automation will fail at least some of the time. Anticipating inevitable automation glitches, Google recently <a href="http://www.washingtonpost.com/blogs/innovations/wp/2015/04/07/how-google-is-making-sure-cows-wont-foil-its-self-driving-cars/">patented a system</a> in which the computers in “stuck” self-driving cars will contact a remote assistance center for human help.</p>
<p>Yet the perception that self-driving cars will perform flawlessly has a strong foothold in the public consciousness already. One commentator recently predicted <a href="https://orgtheory.wordpress.com/2015/04/06/driverless-cars-and-the-end-of-death/">the end of automotive deaths</a>. Another calculated <a href="http://www.forbes.com/sites/modeledbehavior/2014/11/08/the-massive-economic-benefits-of-self-driving-cars/">the economic windfall</a> of “free time” during the commute. Self-driving technologies will undoubtedly be engineered with high reliability in mind, but will it be high enough to cut the human out of the loop entirely? </p>
<p>A recent example was widely reported in the media as an indicator of the readiness of self-driving technology. A Delphi-engineered self-driving vehicle completed a <a href="http://www.wired.com/2015/04/delphi-autonomous-car-cross-country/">cross-country trip</a>. The technology drove 99% of the way without any problems. This sounds impressive — the human engineers monitoring from behind the wheel took emergency control of the vehicle in only a handful of instances, such as when a police car was present on the shoulder or a construction zone was painted with unusual line markings. </p>
<p>These scenarios are infrequent, but they’re not especially unusual for a long road trip. In large-scale deployment, however, a low individual automation failure rate multiplied by <a href="http://www.latimes.com/business/autos/la-fi-hy-ihs-automotive-average-age-car-20140609-story.html">hundreds of millions of vehicles on US highways</a> will result in a nontrivial number of problems. Further, today’s most advanced prototypes are supported by teams of engineers dedicated to keeping a single vehicle safely on the road. Individual high-tech pit crews won’t be possible for every self-driving car on the road of the future.</p>
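<p>A quick back-of-the-envelope calculation shows why. The numbers below are illustrative assumptions – a takeover rate loosely inspired by the cross-country demo, plus rough fleet and mileage figures – not measured data:</p>
<pre><code># Back-of-the-envelope estimate of fleet-scale automation failures.
# Every number here is an illustrative assumption, not measured data.

interventions_per_mile = 1 / 3000    # assume one human takeover per 3,000 miles
vehicles = 250_000_000               # assume roughly 250 million US vehicles
miles_per_vehicle_per_day = 30       # assume about 30 miles per vehicle per day

daily_miles = vehicles * miles_per_vehicle_per_day
daily_interventions = daily_miles * interventions_per_mile
print(f"{daily_interventions:,.0f} situations needing human help per day")
# Prints: 2,500,000 -- millions of events daily, despite a rare failure rate
</code></pre>
<p>Even if the assumed failure rate were a hundred times lower, tens of thousands of daily events would remain – far more than dedicated support crews could plausibly handle.</p>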
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/79531/original/image-20150427-18128-167o6gf.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/79531/original/image-20150427-18128-167o6gf.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/79531/original/image-20150427-18128-167o6gf.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=399&fit=crop&dpr=1 600w, https://images.theconversation.com/files/79531/original/image-20150427-18128-167o6gf.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=399&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/79531/original/image-20150427-18128-167o6gf.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=399&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/79531/original/image-20150427-18128-167o6gf.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=502&fit=crop&dpr=1 754w, https://images.theconversation.com/files/79531/original/image-20150427-18128-167o6gf.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=502&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/79531/original/image-20150427-18128-167o6gf.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=502&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">How quickly can you seize the reins back from autopilot?</span>
<span class="attribution"><a class="source" href="https://www.flickr.com/photos/mike_miley/3709623921">H Michael Miley</a>, <a class="license" href="http://creativecommons.org/licenses/by-sa/4.0/">CC BY-SA</a></span>
</figcaption>
</figure>
<h2>People need to be able to take control</h2>
<p>How will flaws in automation technology be addressed? Despite Google’s remote assistance center patent, the best option remains intervention by the human driver. But engineering human interactions with self-driving cars will be a significant challenge.</p>
<p>We can draw insights from aviation, as many elements of piloting planes already have been taken over by computers. Automation works well for <a href="http://dx.doi.org/10.1109/3468.844354">routine, repetitive tasks</a>, especially when the consequences of automation mistakes are minor – think automatic sewing machines or dishwashers. The stakes are higher when automation failures can cause harm. People may <a href="http://hfs.sagepub.com/content/39/2/230.short">rely too much</a> on imperfect automation or become out of practice and unable to perform tasks <a href="http://www.sciencedaily.com/releases/2014/12/141201125035.htm">the old-fashioned way</a> when needed.</p>
<p>Several recent plane accidents have been attributed to <a href="http://www.wsj.com/articles/SB10001424052702304439804579204202526288042">failures in the ways pilots interact with automation</a>, such as when pilots in correctable situations have <a href="http://www.vanityfair.com/news/business/2014/10/air-france-flight-447-crash">responded inappropriately</a> when automation fails. A term – <a href="http://hfs.sagepub.com/content/39/4/553.short">automation surprises</a> – has even been coined to describe when pilots lose track of what the automation is doing. This is a quintessential human factors problem, characterized not by flaws with either automation or pilots per se, but instead by failures in the design of the human-technology interaction.</p>
<p>When machines take over, the work required of the human is typically not removed — <a href="http://hfs.sagepub.com/content/37/2/381.abstract">sometimes it is not even reduced</a> — as compared to before the automation was implemented. Rather, the job becomes different. Instead of manual work, the human is relegated to the role of a monitor – one who constantly watches to detect and correct technology failures. The problem is that people are not especially well-suited for this <a href="http://dx.doi.org/10.1518/001872008X312152">tedious job</a>. It’s not surprising that drivers retaking manual control from automation <a href="http://dx.doi.org/10.1007/978-3-319-05990-7_11">need up to 40 seconds</a> to return to normal, baseline driving behaviors. </p>
<h2>Tech + driver = cooperative effort</h2>
<p>All of this is not to say that self-driving cars will fail to deliver benefits; they will undoubtedly transform the driving experience. But to develop this promising technology, human factors must be considered. For example, <a href="http://www.emeraldinsight.com/doi/abs/10.1016/S1479-3601%2802%2902004-0">multimodal displays</a> that use a combination of visual, auditory, and tactile (touch) information may be useful for keeping the driver informed about what the automation is doing. <a href="http://books.google.com/books?id=dElPH0ruR-sC&lpg=PA147&ots=EGiNn08QwF&dq=%22adaptive%20automation%22&lr&pg=PA147#v=onepage&q=%22adaptive%20automation%22&f=false">Adaptive automation</a> – where the computer strategically gives some control of the car back to the driver at regular intervals – may be able to keep the human engaged and ready to respond when needed.</p>
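<p>To make the adaptive-automation idea concrete, here is a minimal sketch of a scheduling loop that periodically invites the driver to resume manual control. Everything in it is an illustrative assumption – the vehicle and driver_monitor interfaces, the intervals and the attentiveness check – rather than a description of any real system:</p>
<pre><code># A minimal sketch of adaptive automation: the system periodically hands
# some control back to the driver to keep them engaged and in practice.
# The vehicle/driver_monitor interfaces, intervals and checks are all
# hypothetical placeholders, not any real vehicle API.

import time

ENGAGEMENT_INTERVAL_S = 300   # assumed: offer manual control every 5 minutes
HANDBACK_DURATION_S = 60      # assumed: driver takes over for one minute

def adaptive_automation_loop(vehicle, driver_monitor):
    last_handback = time.monotonic()
    while vehicle.is_running():
        now = time.monotonic()
        # Hand control back on a schedule, but only when driving conditions
        # are benign and the driver appears attentive enough to take over.
        if (now - last_handback > ENGAGEMENT_INTERVAL_S
                and vehicle.conditions_are_benign()
                and driver_monitor.is_attentive()):
            vehicle.request_manual_control(duration_s=HANDBACK_DURATION_S)
            last_handback = now
        time.sleep(1.0)   # poll once per second
</code></pre>
<p>The design choice worth noting is that control is returned on the system’s initiative, under benign conditions, rather than only in emergencies – the moments when an out-of-practice driver is least prepared to respond.</p>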
<p>The technology-centric expectations currently being fostered overlook the substantial body of science on the human element of automation. If other examples of automation, including aviation, can provide any insight, focusing on technology to the exclusion of the human it serves may be counterproductive.</p>
<p>Instead, engineers, researchers, and the general public should see vehicle automation as a cooperative effort between humans and technology — one in which the human plays a vital, active role. A key element will likely be designing new, innovative ways to keep the driver in the loop and informed about the status of automated systems. In other words, “self-driving” cars will need people, too.</p>
<p class="fine-print"><em><span>Michael Nees is a member of the Human Factors and Ergonomics Society.</span></em></p>Experts in the field of human factors – how people interact with machines – warn that “self-driving” cars need to be more of a cooperative effort between human driver and tech than the hype would suggest.Michael Nees, Assistant Professor of Psychology, Lafayette College Licensed as Creative Commons – attribution, no derivatives.