<h1>From a ‘deranged’ provocateur to IBM’s failed AI superproject: the controversial story of how data has transformed healthcare</h1>
<figure><img src="https://images.theconversation.com/files/503380/original/file-20230106-16-mc22tn.png?ixlib=rb-1.1.0&rect=42%2C47%2C1076%2C845&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">US health data pioneer Ernest Codman at work on his national registry of patient outcomes, 1925.</span> <span class="attribution"><span class="source">Roy Mabrey/Boston Medical Library</span></span></figcaption></figure>
<p>Just over a decade ago, artificial intelligence (AI) made one of its showier forays into the public’s consciousness when <a href="https://www.nytimes.com/2011/02/17/science/17jeopardy-watson.html?action=click&module=RelatedLinks&pgtype=Article">IBM’s Watson computer</a> appeared on the American quiz show <a href="https://en.wikipedia.org/wiki/Jeopardy!">Jeopardy!</a> The studio audience was made up of IBM employees, and Watson’s exhibition performance against two of the show’s most successful contestants was televised to a national viewership across three evenings. In the end, the machine triumphed comfortably.</p>
<p>One of Watson’s opponents, <a href="https://www.jeopardy.com/about/cast/ken-jennings">Ken Jennings</a>, who went on to make a career on the back of his gameshow prowess, showed grace – or was it deference? – in defeat, jotting down this commentary to accompany his final answer: “I, for one, welcome our new computer overlords.”</p>
<p>In fact, his phrase had been poached from another American television mainstay, The Simpsons. Jennings’ wry pop culture reference signalled Watson’s reception less as computer overlord and more as technological curio. But that was not how IBM saw it. On the back of this very public success, in 2011 IBM turned Watson toward one of the most lucrative but untapped industries for AI: healthcare.</p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/P18EdAKuC1U?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
</figure>
<p>What followed over the next decade was a series of ups and downs – but <a href="https://www.nytimes.com/2021/07/16/technology/what-happened-ibm-watson.html">mostly downs</a> – that exemplified the promise, but also the numerous shortcomings, of applying AI to healthcare. The Watson health odyssey finally ended in 2022 when it was <a href="https://slate.com/technology/2022/01/ibm-watson-health-failure-artificial-intelligence.html">sold off “for parts”</a>.</p>
<p>There is much to learn from this story about why AI and healthcare seemed so well-suited, and why that potential has proved so difficult to realise. But first we need to revisit the controversial origins of data use in this field, long before electronic computers were invented, and meet one of its American pioneers, <a href="https://www.facs.org/about-acs/archives/past-highlights/codmanhighlight/">Ernest Amory Codman</a> – an elite by birth, a surgeon by training, and a provocateur by nature.</p>
<h2>Data’s role in the birth of modern medicine</h2>
<p>While the general utility of data had been clear for several centuries, its collection and use on a massive scale was a feature of the 19th century. By the 1850s, collecting census data had become commonplace. Its use was not merely descriptive; it offered a way to make determinations about how to govern.</p>
<p>The 19th century marked the first time that, as US systems expert <a href="https://medium.com/@sjtmartin/big-data-a-19th-century-problem-9d58c3e6495b">Shawn Martin</a> explains, “managers felt the need to tie the information that society collected to things like performance [and] productivity”. This applied to public health as well, where “big data” played a critical role in establishing relationships between populations, their habits and environment (both at home and work), and disease.</p>
<figure class="align-right zoomable">
<a href="https://images.theconversation.com/files/503116/original/file-20230104-18-x3rgmh.jpeg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="Old street map of London" src="https://images.theconversation.com/files/503116/original/file-20230104-18-x3rgmh.jpeg?ixlib=rb-1.1.0&q=45&auto=format&w=237&fit=clip" srcset="https://images.theconversation.com/files/503116/original/file-20230104-18-x3rgmh.jpeg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=563&fit=crop&dpr=1 600w, https://images.theconversation.com/files/503116/original/file-20230104-18-x3rgmh.jpeg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=563&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/503116/original/file-20230104-18-x3rgmh.jpeg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=563&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/503116/original/file-20230104-18-x3rgmh.jpeg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=707&fit=crop&dpr=1 754w, https://images.theconversation.com/files/503116/original/file-20230104-18-x3rgmh.jpeg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=707&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/503116/original/file-20230104-18-x3rgmh.jpeg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=707&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">John Snow’s groundbreaking map of cholera cases in central London, 1854.</span>
<span class="attribution"><a class="source" href="https://commons.wikimedia.org/wiki/File:Snow-cholera-map-1.jpg">Wikimedia</a></span>
</figcaption>
</figure>
<p>A well-known example is <a href="https://theconversation.com/sewage-alerts-the-long-history-of-using-maps-to-hold-water-companies-to-account-189013">John Snow’s discovery</a> of the source of a cholera outbreak in London’s Soho neighbourhood in 1854. Now considered one of epidemiology’s founding fathers, Snow canvassed door to door asking whether the families within had had cholera. His analysis came chiefly in the re-organisation of the data he collected – its plotting on a map – such that a pattern might emerge. This ultimately established not just the extent of the outbreak but also its source, the <a href="https://lookup.london/john-snow-water-pump/">Broad Street water pump</a>.</p>
<p>For Boston-born Codman, an outspoken medical reformer working at the beginning of the 20th century, such use of data to understand disease was up there as “one of the greatest moments in medicine”.</p>
<hr>
<figure class="align-right ">
<img alt="" src="https://images.theconversation.com/files/288776/original/file-20190820-170910-8bv1s7.png?ixlib=rb-1.1.0&q=45&auto=format&w=237&fit=clip" srcset="https://images.theconversation.com/files/288776/original/file-20190820-170910-8bv1s7.png?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=600&fit=crop&dpr=1 600w, https://images.theconversation.com/files/288776/original/file-20190820-170910-8bv1s7.png?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=600&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/288776/original/file-20190820-170910-8bv1s7.png?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=600&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/288776/original/file-20190820-170910-8bv1s7.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=754&fit=crop&dpr=1 754w, https://images.theconversation.com/files/288776/original/file-20190820-170910-8bv1s7.png?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=754&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/288776/original/file-20190820-170910-8bv1s7.png?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=754&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption"></span>
</figcaption>
</figure>
<p><strong><em>This article is part of Conversation Insights</em></strong>
<br><em>The Insights team generates <a href="https://theconversation.com/uk/topics/insights-series-71218">long-form journalism</a> derived from interdisciplinary research. The team is working with academics from different backgrounds who have been engaged in projects aimed at tackling societal and scientific challenges.</em></p>
<hr>
<p>Though Codman was involved in <a href="https://qualitysafety.bmj.com/content/11/1/104">many data-driven reforms</a> during his controversial career, one of the most successful was the <a href="https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2758960/">Registry of Bone Sarcoma</a>, which he established in 1920. His goal was to collect and analyse all of the cases of bone cancer (or suspected bone cancer) from across the US, and to use these to establish diagnostic criteria, therapeutic effectiveness and a standardised nomenclature.</p>
<p>There were a few rules for this registry. Individual doctors who contributed had to send x-rays, case reports and, if possible, tissue samples for examination by the registry’s consulting pathologists and Codman himself. This would ensure both the accuracy and uniformity of pathological analysis. The effort was a success which grew over time: by 1954, when the American College of Surgeons sought a new home for the registry, it contained an impressive 2,400 complete, cross-referenced cases.</p>
<figure class="align-right zoomable">
<a href="https://images.theconversation.com/files/503197/original/file-20230105-12-ydlxj6.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="Man's portrait" src="https://images.theconversation.com/files/503197/original/file-20230105-12-ydlxj6.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=237&fit=clip" srcset="https://images.theconversation.com/files/503197/original/file-20230105-12-ydlxj6.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=858&fit=crop&dpr=1 600w, https://images.theconversation.com/files/503197/original/file-20230105-12-ydlxj6.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=858&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/503197/original/file-20230105-12-ydlxj6.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=858&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/503197/original/file-20230105-12-ydlxj6.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=1078&fit=crop&dpr=1 754w, https://images.theconversation.com/files/503197/original/file-20230105-12-ydlxj6.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=1078&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/503197/original/file-20230105-12-ydlxj6.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=1078&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Ernest Codman.</span>
<span class="attribution"><a class="source" href="https://collections.nlm.nih.gov/catalog/nlm:nlmuid-101412473-img">National Library of Medicine</a></span>
</figcaption>
</figure>
<p>On the face of it, Codman’s decision to focus on bone cancer was baffling. It was neither a pressing nor a common concern for doctors across the US. But the disease’s relative rarity was one reason he chose it. Codman felt the amount of data received from his nationwide request would not be overwhelming for his small team of researchers to analyse.</p>
<p>Perhaps more importantly, he knew that studying bone cancer would raise the ire of far fewer of his colleagues than a more common disease might. In a clinical atmosphere in which expertise was understood as a combination of long experience with a dash of intuition – the physician’s “art” – Codman’s touting of data as a better way to obtain knowledge about a disease and its treatment was already being met with vociferous opposition.</p>
<p>It didn’t help that he tended to be inflammatory and provocative in the pursuit of his data-driven goals. At a medical meeting in Boston in 1915, he launched a surprise attack on his fellow practitioners. In the middle of this staid affair, Codman unveiled an <a href="https://protomag.com/policy/the-codman-affair/">8ft cartoon</a> lampooning his colleagues for their apathy toward healthcare reform and, as he saw it, their wilful ignorance of the limitations of the profession. As one (former) friend put it in the event’s aftermath, Codman’s only hope was that people would take the “charitable” view and consider him not an enemy of the profession but merely “mentally deranged”.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/503228/original/file-20230105-24-nw1pdg.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="Satirical cartoon titled The Back Bay Golden Goose Ostrich of an ostrich with head in the sand laying eggs being caught by group of men." src="https://images.theconversation.com/files/503228/original/file-20230105-24-nw1pdg.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/503228/original/file-20230105-24-nw1pdg.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=284&fit=crop&dpr=1 600w, https://images.theconversation.com/files/503228/original/file-20230105-24-nw1pdg.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=284&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/503228/original/file-20230105-24-nw1pdg.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=284&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/503228/original/file-20230105-24-nw1pdg.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=357&fit=crop&dpr=1 754w, https://images.theconversation.com/files/503228/original/file-20230105-24-nw1pdg.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=357&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/503228/original/file-20230105-24-nw1pdg.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=357&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Codman’s 8ft cartoon lampooned medical practices in the early 20th century.</span>
<span class="attribution"><a class="source" href="https://ia800807.us.archive.org/1/items/b29812161/b29812161.pdf">From The Shoulder by E.A. Codman</a></span>
</figcaption>
</figure>
<p>Undeterred, Codman continued this pugnacious approach to his pioneering work. In a 1922 letter to the prestigious Boston Medical and Surgical Journal, he complained that the surgeons of Massachusetts had been particularly unhelpful to his registry. He explained that he had – politely – asked the 5,494 physicians in the state to “drop him a postal stating whether or not he knew of a case” so that Codman could acquire “the best statistics ever obtained on the frequency of the disease”. To his chagrin, he had received only 19 responses in nearly two years. Needling the journal’s editors and readers simultaneously, he asked:</p>
<blockquote>
<p>Is this because your Journal is not read? … [Or] because of the indifference of the medical profession as to whether the frequency of bone sarcoma is known or not?</p>
</blockquote>
<p>Codman proposed a questionnaire that would allow the journal to see whether the problem was its lack of readership, or his colleagues’ “inertia, procrastination, disapproval, opposition or disinterest”. A subsequent editorial in response to Codman’s proposal was surprisingly magnanimous:</p>
<blockquote>
<p>Whether we will it or not, we are obliged to be irritated, amused or instructed, according to our temperaments, by Dr Codman. Our advice is to be instructed.</p>
</blockquote>
<h2>An end to elitism?</h2>
<p>Despite the establishment’s resistance, submissions to Codman’s registry began to grow such that by 1924, he had enough material to make preliminary comments about bone cancer. For one thing, he had succeeded in standardising the much-contested matter of the proper nomenclature for the disease. This, he exulted, was so significant that it should be likened to the “rising of the sun”.</p>
<figure class="align-right zoomable">
<a href="https://images.theconversation.com/files/503198/original/file-20230105-26-vbjshz.png?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="Hand-written data diary" src="https://images.theconversation.com/files/503198/original/file-20230105-26-vbjshz.png?ixlib=rb-1.1.0&q=45&auto=format&w=237&fit=clip" srcset="https://images.theconversation.com/files/503198/original/file-20230105-26-vbjshz.png?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=892&fit=crop&dpr=1 600w, https://images.theconversation.com/files/503198/original/file-20230105-26-vbjshz.png?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=892&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/503198/original/file-20230105-26-vbjshz.png?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=892&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/503198/original/file-20230105-26-vbjshz.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=1121&fit=crop&dpr=1 754w, https://images.theconversation.com/files/503198/original/file-20230105-26-vbjshz.png?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=1121&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/503198/original/file-20230105-26-vbjshz.png?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=1121&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Codman made this chart of his own life in data.</span>
<span class="attribution"><a class="source" href="https://ia800807.us.archive.org/1/items/b29812161/b29812161.pdf">From The Shoulder by E.A.Codman</a></span>
</figcaption>
</figure>
<p>The registry also offered up many pieces of “impersonal proof”, as Codman called his data-driven findings, of the rightness of certain theories that individual physicians had promoted. Claims, for example, that combined treatments of “surgery, mixed toxins and radium” were more effective than treatments that relied on any of these alone were borne out by the data.</p>
<p>The registry, as Codman’s colleague <a href="https://en.wikipedia.org/wiki/Joseph_Colt_Bloodgood">Joseph Colt Bloodgood</a> <a href="https://www.nejm.org/doi/full/10.1056/NEJM192911142012003">put it</a>, “excited great interest” among practitioners, and not just because it had “influenced the entire medical world to pay more attention to bone tumours”. More importantly, it provided a new model for how to do medical work. Another admiring colleague responded to Bloodgood: </p>
<blockquote>
<p>The work of the registry [is] one of the outstanding American contributions to surgical pathology. As a method of study, it shows the necessity of very wide experience before a surgeon is capable of handling intelligently cases of this disease … [It] is impossible for any single individual to claim finality of this sort.</p>
</blockquote>
<p>This emphasis on “very wide experience” over the experience of “any single individual” points to another critical reason to prefer data, according to Codman. His goal in changing the method by which medical knowledge was made was not just to get better results. By seeking to undo the image of medicine as an “art” that depended on the wisdom of a select group of preternaturally talented individuals, Codman also threatened to undo the class-ridden reality that underlay this public veneer.</p>
<p>As the efficiency engineer Frank Gilbreth implied in a 1913 article in <a href="https://babel.hathitrust.org/cgi/pt?id=mdp.39015038046010&view=1up&seq=428&q1=Gilbreth">the American Magazine</a>, if it was true that medicine required no specific intrinsic gifts (monetary or otherwise), then absolutely anybody – whatever their class, race or background – could do it, including “bricklayers, shovellers and dock-wallopers” who were currently shut out of such “high-brow” occupations.</p>
<p>Codman was even more pointed. If data was used to evaluate the outcomes of his physician colleagues, he insisted, it would show that the quality of doctors and hospitals was generally poor. He <a href="https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2758959/">sniped</a> that they excelled chiefly in “making dying men think they are getting better, concealing the gravity of serious diseases, and exaggerating the importance of minor illnesses to suit the occasion”.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/503240/original/file-20230105-18-glcfyu.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="Postcard of large, neoclassical, stone building." src="https://images.theconversation.com/files/503240/original/file-20230105-18-glcfyu.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/503240/original/file-20230105-18-glcfyu.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=376&fit=crop&dpr=1 600w, https://images.theconversation.com/files/503240/original/file-20230105-18-glcfyu.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=376&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/503240/original/file-20230105-18-glcfyu.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=376&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/503240/original/file-20230105-18-glcfyu.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=473&fit=crop&dpr=1 754w, https://images.theconversation.com/files/503240/original/file-20230105-18-glcfyu.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=473&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/503240/original/file-20230105-18-glcfyu.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=473&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Codman admitted his own social advantages in joining Harvard Medical School.</span>
<span class="attribution"><a class="source" href="https://upload.wikimedia.org/wikipedia/commons/f/fe/Harvard_Medical_School%2C_Boston%2C_Mass_%28NYPL_b12647398-74267%29.tiff">Detroit Publishing Company/Wikimedia</a></span>
</figcaption>
</figure>
<p>“Nepotism, pull and politics” were the order of the day in medicine, Codman wrote in one of his most scathing takedowns of his colleagues at the Massachusetts General Hospital. Yet he made himself the centrepiece of this critique, conceding that his <a href="https://www.shoulderdoc.co.uk/article/907">entrance to Harvard Medical School</a> had come on the back of “friends and relatives among the well-to-do”. The only difference, he suggested, was that he was willing to own up to it, and to subject himself and his work to the scrutiny of data.</p>
<h2>Data’s unflattering view of medicine</h2>
<p>Codman was not the only person having a come-to-Jesus moment with data over this period. In the 1920s, the American social science researchers <a href="https://www.britannica.com/biography/Robert-Lynd-and-Helen-Lynd">Robert and Helen Lynd</a> collected data in the small US town of Muncie, Indiana, as a way of creating a picture of the <a href="https://www.c-span.org/video/?197089-1/the-averaged-american">“averaged American”</a>.</p>
<p>By the 1930s, the similarly-minded <a href="http://www.massobs.org.uk/about/history-of-mo">Mass Observation project</a> took off in Britain, intending to collect data about everyday life so as to create an “anthropology of ourselves”. Crucially, both reflected the thinking that also drove Codman: that the right way to know something – a people, a disease – was to produce what seemed a suitably representative average. And this meant the amalgamation of often quite diverse and wide-ranging characteristics and their compression into a single, standard, efficient unit.</p>
<p>The turn from describing representative averages to learning from these averages is probably best articulated in the work of pollsters, whose door-to-door interrogations were aimed at helping a nation to know itself by statistics. In 1948, inspired by their failure to correctly predict the outcome of the US presidential election – one of the <a href="https://www.latimes.com/archives/la-xpm-1998-nov-01-mn-38174-story.html">most famous psephological errors</a> in the nation’s history – pollsters such as George Gallup and Elmo Roper began to rethink their analytic methods, spinning away from quota sampling and towards random sampling.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/503126/original/file-20230104-130036-m60jcp.jpeg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="Satirical cartoon of Harry Truman looking at poll results showing he will lose election while his opponent says 'What's the use of going through with the election?'" src="https://images.theconversation.com/files/503126/original/file-20230104-130036-m60jcp.jpeg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/503126/original/file-20230104-130036-m60jcp.jpeg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=575&fit=crop&dpr=1 600w, https://images.theconversation.com/files/503126/original/file-20230104-130036-m60jcp.jpeg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=575&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/503126/original/file-20230104-130036-m60jcp.jpeg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=575&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/503126/original/file-20230104-130036-m60jcp.jpeg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=723&fit=crop&dpr=1 754w, https://images.theconversation.com/files/503126/original/file-20230104-130036-m60jcp.jpeg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=723&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/503126/original/file-20230104-130036-m60jcp.jpeg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=723&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">The 1948 election was one of the most famous psephological errors in US history.</span>
<span class="attribution"><a class="source" href="https://upload.wikimedia.org/wikipedia/commons/0/06/Truman-Dewey-polls-1948.jpg">Clifford K. Berryman/Wikimedia</a></span>
</figcaption>
</figure>
<p>At the same time, thanks primarily to its <a href="https://mitpress.mit.edu/9780262550284/the-closed-world/">military applications</a>, the science of computing began to gather pace. And the growing fascination with knowing the world via data combined with the unparalleled ability of computers to crunch it appeared a match made in heaven.</p>
<p>In a late-in-life <a href="https://archive.org/details/b29812161/page/n7/mode/2up">preface</a> to his 1934 data-driven magnum opus on the anatomy of the shoulder, Codman had comforted himself with the thought that he was a man ahead of his time. And indeed, just a few years after his death in 1940, statistical analysis began to pick up steam in medicine.</p>
<p>Over the next two decades, figures such as <a href="https://en.wikipedia.org/wiki/Ronald_Fisher">Sir Ronald Fisher</a>, the geneticist and statistician remembered for suggesting randomisation as an antidote to bias, and his English compatriot <a href="https://en.wikipedia.org/wiki/Austin_Bradford_Hill">Sir Austin Bradford Hill</a>, who demonstrated the connection between smoking and lung cancer, also pushed forward the integration of statistical analysis into medicine. </p>
<figure class="align-right ">
<img alt="Man's face" src="https://images.theconversation.com/files/503455/original/file-20230106-15-6gv3oj.png?ixlib=rb-1.1.0&q=45&auto=format&w=237&fit=clip" srcset="https://images.theconversation.com/files/503455/original/file-20230106-15-6gv3oj.png?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=661&fit=crop&dpr=1 600w, https://images.theconversation.com/files/503455/original/file-20230106-15-6gv3oj.png?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=661&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/503455/original/file-20230106-15-6gv3oj.png?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=661&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/503455/original/file-20230106-15-6gv3oj.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=830&fit=crop&dpr=1 754w, https://images.theconversation.com/files/503455/original/file-20230106-15-6gv3oj.png?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=830&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/503455/original/file-20230106-15-6gv3oj.png?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=830&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">Archie Cochrane.</span>
<span class="attribution"><a class="source" href="https://community.cochrane.org/archie-cochrane-name-behind-cochrane">Cardiff University Library/Cochrane Archive</a></span>
</figcaption>
</figure>
<p>However, it would take many more years for word to leak out that, by data’s measure, both the methodologies of medical research and much of medicine itself were ineffective. In a movement led in part by the outspoken Scottish epidemiologist <a href="https://en.wikipedia.org/wiki/Archie_Cochrane">Archie Cochrane</a>, this unflattering statistical view of medicine finally saw the light of day in the 1960s and 70s.</p>
<p>Cochrane went so far as to say that medicine was based on “a level of guesswork” so great that any return to health after a medical intervention was more a “tribute to the sheer survival power of the minds and bodies” of patients than anything else. Aghast at the revelations embedded in Cochrane’s 1972 book, <a href="https://www.nuffieldtrust.org.uk/research/effectiveness-and-efficiency-random-reflections-on-health-services">Random Reflections on Health Services</a>, the Guardian journalist Ann Shearer <a href="https://www.proquest.com/docview/185551386/4D45D85AC6E94604PQ/1?accountid=11862">wrote</a>:</p>
<blockquote>
<p>Isn’t it … more than fair to ask what on Earth we – and more particularly, the medical They – have been doing all these years to let the health machine develop with such a lack of quality control?</p>
</blockquote>
<p>The answer dates back to Codman’s bone cancer registry half a century earlier. The medical establishment on both sides of the Atlantic had been avoiding with all their might the scrutiny that data would bring.</p>
<h2>Computers finally acquire medical currency</h2>
<p>Despite their increasing ubiquity in the 1970s and 80s, computers had still only haltingly joined the medical mainstream. Though a <a href="https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2793587/">smattering of AI applications</a> began to appear in healthcare in the 1970s, it was only in the 1990s that computers really started to acquire some medical currency.</p>
<p>In a page borrowed straight from Codman’s time, the pioneering American biomedical informatician <a href="https://en.wikipedia.org/wiki/Edward_H._Shortliffe">Edward Shortliffe</a> noted in 1993 that the <a href="https://pubmed.ncbi.nlm.nih.gov/8358494/">future of AI in medicine</a> depended on the realisation that “the practice of medicine is inherently an information-management task”.</p>
<p>In the US, the Institute of Medicine and the <a href="https://www.loc.gov/item/lcwaN0016849/">President’s Information Technology Advisory Council</a> released <a href="https://www.ncbi.nlm.nih.gov/books/NBK222268/">reports</a> highlighting the failures of medicine to fully embrace information technology. By 2004, a newly appointed national coordinator for health information technology was <a href="https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2793587/">charged with</a> the herculean task of establishing an electronic medical record for all Americans by 2014. </p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/503222/original/file-20230105-26-ii9kzs.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="Man operating early computer" src="https://images.theconversation.com/files/503222/original/file-20230105-26-ii9kzs.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/503222/original/file-20230105-26-ii9kzs.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=436&fit=crop&dpr=1 600w, https://images.theconversation.com/files/503222/original/file-20230105-26-ii9kzs.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=436&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/503222/original/file-20230105-26-ii9kzs.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=436&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/503222/original/file-20230105-26-ii9kzs.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=548&fit=crop&dpr=1 754w, https://images.theconversation.com/files/503222/original/file-20230105-26-ii9kzs.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=548&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/503222/original/file-20230105-26-ii9kzs.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=548&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">An IBM System 360 computer in 1969.</span>
<span class="attribution"><a class="source" href="https://upload.wikimedia.org/wikipedia/commons/2/20/1969._IBM_System_360_computer._Automated_Data_Processing_Center._USDA_South_Building%2C_Washington%2C_DC._%2834271384522%29.jpg">USDA Forest Service via Wikimedia Commons</a></span>
</figcaption>
</figure>
<p>This explosion of interest in bringing computers into healthcare made it an enticing and potentially lucrative area for investment. So it is no surprise that IBM celebrated Watson’s winning turn on Jeopardy! in 2011 by putting it to work on an <a href="https://www.nytimes.com/2021/07/16/technology/what-happened-ibm-watson.html">oncology-focused programme</a> with multiple US-based clinical partners selected on the basis of their access to medical data.</p>
<p>The idea was laudable. Watson would do what <a href="https://theconversation.com/uk/topics/machine-learning-algorithms-103181">machine learning algorithms</a> do best: mining the massive amounts of data these institutions had at their disposal, searching for patterns that would help to improve treatment. But the complexity of cancer and the frustratingly unique responses of patients to it, yoked together by data systems that were sometimes incomplete and sometimes incompatible with each other or with machine learning’s methods more generally, limited Watson’s ability to be useful.</p>
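<p><em>To get a feel for what that pattern-searching involves – and why incomplete records matter so much – here is a deliberately toy sketch in Python. Everything in it is invented for illustration (the treatment labels, the records, the field names); it is not IBM’s code, merely a minimal caricature of counting outcomes per treatment and discarding records too incomplete to use.</em></p>
<pre><code>
# A toy sketch (not IBM's code): "mining" a tiny set of invented patient
# records for a pattern, and seeing how incomplete records limit it.
from collections import Counter

# Hypothetical, made-up records: treatment received and whether the
# patient responded. None marks a missing field, as in messy real data.
records = [
    {"treatment": "combined", "responded": True},
    {"treatment": "surgery_only", "responded": False},
    {"treatment": "combined", "responded": True},
    {"treatment": "radium_only", "responded": False},
    {"treatment": "combined", "responded": None},   # outcome never recorded
    {"treatment": None, "responded": True},         # treatment field missing
]

# A naive "pattern miner": response rate per treatment, using only
# records complete enough to count.
usable = [r for r in records if r["treatment"] is not None and r["responded"] is not None]
totals, successes = Counter(), Counter()
for r in usable:
    totals[r["treatment"]] += 1
    if r["responded"]:
        successes[r["treatment"]] += 1

for treatment in totals:
    rate = successes[treatment] / totals[treatment]
    print(f"{treatment}: {rate:.0%} response rate over {totals[treatment]} usable cases")

print(f"Dropped {len(records) - len(usable)} of {len(records)} records as incomplete.")
</code></pre>
<p><em>Scaled up to real hospital records, with dozens of inconsistently coded fields, that final “dropped as incomplete” line is roughly where the trouble described here begins.</em></p>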
<p>One sorry example was Watson’s <a href="https://www.mdanderson.org/publications/annual-report/annual-report-2013/the-oncology-expert-advisor.html">Oncology Expert Advisor</a>, a collaboration with the MD Anderson Cancer Center in Houston, Texas. This had begun its life as a “bedside diagnostic tool” that pored through patient records, scientific literature and doctors’ notes in order to make real-time treatment recommendations. Unfortunately, Watson couldn’t “read” the doctors’ notes. While good at mining the scientific literature, it couldn’t apply these large-scale discussions to the specifics of the individuals in front of it. By 2017, the project had been <a href="https://www.forbes.com/sites/matthewherper/2017/02/19/md-anderson-benches-ibm-watson-in-setback-for-artificial-intelligence-in-medicine/?sh=202871377485">shelved</a>.</p>
<p>Elsewhere, at New York City’s famed Memorial Sloan Kettering Cancer Center, clinicians found a more elaborate – and infinitely more problematic – way forward. Rather than relying on the retrospective data that is machine learning’s usual fodder, clinicians invented <a href="https://gizmodo.com/ibm-watson-reportedly-recommended-cancer-treatments-tha-1827868882">new “synthetic” cases</a> that were, by virtue of having been invented, infinitely less messy and more complete than any real data could be.</p>
<p>The project re-litigated the “data v expertise” debate of Codman’s time – once more in Codman’s favour – since this invented data had built into it the specifics of cancer treatment as understood by a small group of clinicians at a single hospital. Bias, in other words, was programmed directly in, and those engaged in training the system knew it.</p>
<p>Viewing historical patient data as too narrow, they reasoned that replacing it with data reflecting their own collective experience, intuition and judgment would build the latest and greatest treatments into Watson for Oncology. Of course, this didn’t work any better in the early 21st century than it had in the early 20th.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/503225/original/file-20230105-26-18ivrq.png?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="Room-size black box behind glass lit with purple lights" src="https://images.theconversation.com/files/503225/original/file-20230105-26-18ivrq.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/503225/original/file-20230105-26-18ivrq.png?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=402&fit=crop&dpr=1 600w, https://images.theconversation.com/files/503225/original/file-20230105-26-18ivrq.png?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=402&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/503225/original/file-20230105-26-18ivrq.png?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=402&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/503225/original/file-20230105-26-18ivrq.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=505&fit=crop&dpr=1 754w, https://images.theconversation.com/files/503225/original/file-20230105-26-18ivrq.png?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=505&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/503225/original/file-20230105-26-18ivrq.png?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=505&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">An early prototype of IBM Watson in 2011.</span>
<span class="attribution"><a class="source" href="https://upload.wikimedia.org/wikipedia/commons/2/22/IBM_Watson.PNG">Clockready/Wikimedia</a>, <a class="license" href="http://creativecommons.org/licenses/by-sa/4.0/">CC BY-SA</a></span>
</figcaption>
</figure>
<p>Furthermore, while these clinicians sidestepped the problem of real data’s impenetrable messiness, treatment options available at a wealthy hospital in Manhattan were far removed from those available in the other localities that Watson was meant to serve. The contrast was perhaps starkest when Watson was introduced to <a href="https://www.statnews.com/2016/08/19/ibm-watson-cancer-asia/">other parts of the world</a>, only to find the treatment regimens it recommended either didn’t exist or were not in keeping with the local and national infrastructures governing how healthcare was done there.</p>
<p>Even in the US, the consensus, as one unnamed physician in Florida reported back to IBM, was that Watson was a <a href="https://gizmodo.com/ibm-watson-reportedly-recommended-cancer-treatments-tha-1827868882">“piece of shit”</a>. Most of the time, it either told clinicians what they already knew or offered up advice that was incompatible with local conditions or the specifics of a patient’s illness. At best, it offered up a snapshot of the views of a select few clinicians at a moment in time, now reified as “facts” that ought to apply uniformly and everywhere they went.</p>
<p>Many of the elegies written to mark Watson’s sell-off in 2022, after it had failed to make good on its promise in healthcare, attributed its downfall to the same kind of overpromise and under-delivery that has spelled the end for many health technology start-ups.</p>
<p>Some maintained that the scaling-up of Watson from gameshow savant to oncological wunderkind might have been successful with more time. Perhaps. But in 2011, time was of the essence. Capitalising on the goodwill toward Watson and IBM that Jeopardy! had created, and blazing the trail into the lucrative but technologically backward world of healthcare, meant striking first and fast.</p>
<p>Watson’s high-profile failure highlights an overlooked barrier to modern, data-driven healthcare. In its encounters with real, human patients, Watson stirred up the same anxieties that Codman had encountered – difficult questions about what it is exactly that medicine produces: care, and the human touch that comes with it; or cure, and the information management tasks that play a critical role here?</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/ai-can-excel-at-medical-diagnosis-but-the-harder-task-is-to-win-hearts-and-minds-first-63782">AI can excel at medical diagnosis, but the harder task is to win hearts and minds first</a>
</strong>
</em>
</p>
<hr>
<p>A <a href="https://jamanetwork.com/journals/jamanetworkopen/fullarticle/2791851?resultClick=1">2019 study</a> of US patient perspectives of AI’s role in healthcare gave these concerns some statistical shape. Though some felt optimistic about AI’s potential to improve healthcare, a vast majority gave voice to fundamental misgivings about relinquishing medicine to machine learning algorithms that could not explain the logic they employed to reach their diagnosis. Surely the absence of a physician’s judgment would increase the risk of misdiagnosis?</p>
<p>The persistence of this worry has quite often resulted in caveating the work of machine learning with reassurances that humans are still in charge. In a 2020 <a href="https://news.microsoft.com/en-gb/2020/12/09/a-microsoft-ai-tool-is-helping-to-speed-up-cancer-treatment-and-addenbrookes-will-be-the-first-hospital-in-the-world-to-use-it/">report</a> on the InnerEye project, for example, which used retrospective data to identify tumours on patient scans, Yvonne Rimmer, a clinical oncologist at Addenbrooke’s Hospital in Cambridge, addressed this concern:</p>
<blockquote>
<p>It’s important for patients to know that the AI is helping me in my professional role. It’s not replacing me in the process. I doublecheck everything the AI does, and can change it if I need to.</p>
</blockquote>
<h2>Data’s uncertain role in the future of healthcare</h2>
<p>Today, whether a doctor gives you your diagnosis or you get it from a computer, that diagnosis is not primarily based on the intuition, judgment or experience of either doctor or patient. It’s driven by data that has made our cultures of mainstream care relatively more uniform and of a higher standard. Just as Codman foresaw, the introduction of data in medicine has also forced a greater degree of transparency, both in terms of methodologies and effectiveness.</p>
<p>However, the more important – and potentially intractable – problem with this modern approach to health is its lack of representation. As the Sloan Kettering dalliance with Watson began to show, datasets are not the “impersonal proofs” that Codman took them to be.</p>
<p>Even under less egregiously subjective conditions, data undeniably replicates and concretises the biases of society itself. As MIT computer scientist Marzyeh Ghassemi explains, data offers the “<a href="https://news.mit.edu/2022/marzyeh-ghassemi-explores-downside-machine-learning-health-care-0201">sheen of objectivity</a>” while replicating the ethnic, racial, gender and age biases of institutionalised medicine. Thus the tools, tests and techniques that are based on this data are also not impartial. </p>
<p>Ghassemi highlights the inaccuracy of pulse oximeters, often calibrated on light-skinned individuals, for those with darker skin. Others might note the outcry over the <a href="https://www.thelancet.com/journals/lancet/article/PIIS0140-6736(19)30510-0/fulltext">gender bias in cardiology</a>, spelled out especially in higher mortality rates for women who have heart attacks. </p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/C22JlzHlLJQ?wmode=transparent&start=8" frameborder="0" allowfullscreen=""></iframe>
<figcaption><span class="caption">The landmark human genome announcement in 2000.</span></figcaption>
</figure>
<p>The list goes on and on. Remember the human genome project, that big data triumph which has, according to the US National Institutes of Health <a href="https://www.genome.gov/human-genome-project">website</a>, “accelerated the study of human biology and improved the practice of medicine”? It almost exclusively drew upon genetic studies of white Europeans. According to <a href="https://precisionmedicine.ucsf.edu/%E2%80%9Cwicked-problem%E2%80%9D-racism-and-race-precision-medicine">Esteban Burchard</a> at the University of California, San Francisco: </p>
<blockquote>
<p>96% of genetic studies have been done on people with European origin, even though Europeans make up less than 12% of the world’s population … The human genome project should have been called the European genome project.</p>
</blockquote>
<p>A lack of representative data has implications for big data projects across the board – not least for <a href="https://www.fda.gov/medical-devices/in-vitro-diagnostics/precision-medicine">precision medicine</a>, which is widely touted as the antidote to the problems of impersonal, algorithm-driven healthcare.</p>
<p>Precision or “personalised” medicine seeks to address one of the essential perceived drawbacks of data-based medicine by locating finer-grained commonalities between smaller and smaller subsets of the population. By focusing on data at a genetic and cellular level, it may yet counter the criticism that the data-driven approach of recent decades is too blunt and insensitive a tool, such that “even the most frequently prescribed drugs for the most common conditions have very limited efficacy”, according to computational biologist <a href="https://www.jstor.org/stable/26601761#metadata_info_tab_contents">Chloe-Agathe Azencott</a>. </p>
<p>But personalised medicine still feeds on the same depersonalised data as medicine more generally, so it too is handicapped by data’s biases. And even if it could step beyond the problems of biased data – <a href="https://theconversation.com/extent-of-institutional-racism-in-british-universities-revealed-through-hidden-stories-118097">and, indeed, institutions</a> – the question of its role in the future of our everyday healthcare does not end there.</p>
<p>Even taking the utopian view that personalised medicine might make possible treatments as individual as we are, pharmaceutical companies won’t develop these treatments unless they are profitable. And that requires either prices so high that only the wealthiest of us could afford them, or a market so big that these companies can “<a href="https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2918032/">achieve the requisite return on investment</a>”. Truly individualised care is not really on the table.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/in-defence-of-imprecise-medicine-the-benefits-of-routine-treatments-for-common-diseases-128440">In defence of ‘imprecise’ medicine: the benefits of routine treatments for common diseases</a>
</strong>
</em>
</p>
<hr>
<p>If our goal in healthcare is to help more people by being more representative, more inclusive and more attentive to individual difference in the medical everyday of diagnosis and treatment, big data isn’t going to help us out. At least not as things currently stand.</p>
<p>For the story of healthcare data to date has pointed us squarely in the other direction, towards homogenisation and standardisation as medical goals. Laudable as the rationales for that focus have been at different moments in our history, the expectation that machine learning will enable all of us to live longer, healthier lives remains something of a pipe dream. Right now it is still us humans, not our computer overlords, who hold most sway over our individual health outcomes.</p>
<p><em>Dr Caitjan Gainty is a winner of The Conversation’s <a href="https://theconversation.com/sir-paul-curran-award-for-academic-communication-2021-goes-to-caitjan-gainty-175125">Sir Paul Curran award for academic communication</a></em></p>
<hr>
<figure class="align-center ">
<img alt="" src="https://images.theconversation.com/files/313478/original/file-20200204-41481-1n8vco4.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/313478/original/file-20200204-41481-1n8vco4.png?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=112&fit=crop&dpr=1 600w, https://images.theconversation.com/files/313478/original/file-20200204-41481-1n8vco4.png?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=112&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/313478/original/file-20200204-41481-1n8vco4.png?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=112&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/313478/original/file-20200204-41481-1n8vco4.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=140&fit=crop&dpr=1 754w, https://images.theconversation.com/files/313478/original/file-20200204-41481-1n8vco4.png?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=140&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/313478/original/file-20200204-41481-1n8vco4.png?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=140&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption"></span>
</figcaption>
</figure>
<p><em>For you: more from our <a href="https://theconversation.com/uk/topics/insights-series-71218?utm_source=TCUK&utm_medium=linkback&utm_campaign=TCUKengagement&utm_content=InsightsUK">Insights series</a>:</em></p>
<ul>
<li><p><em><a href="https://theconversation.com/the-discovery-of-insulin-a-story-of-monstrous-egos-and-toxic-rivalries-172820?utm_source=TCUK&utm_medium=linkback&utm_campaign=TCUKengagement&utm_content=InsightsUK">The discovery of insulin: a story of monstrous egos and toxic rivalries
</a></em></p></li>
<li><p><em><a href="https://theconversation.com/james-mccune-smith-new-discovery-reveals-how-first-african-american-doctor-fought-for-womens-rights-in-glasgow-166233?utm_source=TCUK&utm_medium=linkback&utm_campaign=TCUKengagement&utm_content=InsightsUK">James McCune Smith: new discovery reveals how first African American doctor fought for women’s rights in Glasgow
</a></em></p></li>
<li><p><em><a href="https://theconversation.com/drugs-robots-and-the-pursuit-of-pleasure-why-experts-are-worried-about-ais-becoming-addicts-163376?utm_source=TCUK&utm_medium=linkback&utm_campaign=TCUKengagement&utm_content=InsightsUK">Drugs, robots and the pursuit of pleasure – why experts are worried about AIs becoming addicts
</a></em></p></li>
<li><p><em><a href="https://theconversation.com/the-inside-story-of-recovery-how-the-worlds-largest-covid-19-trial-transformed-treatment-and-what-it-could-do-for-other-diseases-184772?utm_source=TCUK&utm_medium=linkback&utm_campaign=TCUKengagement&utm_content=InsightsUK">The inside story of Recovery: how the world’s largest COVID-19 trial transformed treatment – and what it could do for other diseases
</a></em></p></li>
</ul>
<p><em>To hear about new Insights articles, join the hundreds of thousands of people who value The Conversation’s evidence-based news. <a href="https://theconversation.com/uk/newsletters/the-daily-newsletter-2?utm_source=TCUK&utm_medium=linkback&utm_campaign=TCUKengagement&utm_content=InsightsUK"><strong>Subscribe to our newsletter</strong></a>.</em></p>
<p class="fine-print"><em><span>Caitjan Gainty does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>To understand the potential for machine learning to transform medicine, we must go back to the controversial origins of data use in healthcareCaitjan Gainty, Senior Lecturer in the History of Science, Technology and Medicine, King's College LondonLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/916722018-02-13T20:22:01Z2018-02-13T20:22:01ZNo, artificial intelligence won’t steal your children’s jobs – it will make them more creative and productive<figure><img src="https://images.theconversation.com/files/207946/original/file-20180226-120971-1ty599s.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">The South Korean go player Lee Sedol after a 2016 match against Google's artificial-intelligence program AlphaGo. Sedol, ranked 9th in the world, lost 4-1. </span> <span class="attribution"><span class="source">Lee Jin-man/Flickr</span>, <a class="license" href="http://creativecommons.org/licenses/by/4.0/">CC BY</a></span></figcaption></figure><blockquote>
<p>“Whatever your job is, the chances are that one of these machines can do it faster or better than you can.”</p>
</blockquote>
<p>No, this is not a 2018 headline about self-driving cars or one of IBM’s new supercomputers. Instead, it was published by the <a href="https://issuu.com/bloomsburypublishing/docs/electronic_dreams_extract"><em>Daily Mirror</em> in 1955</a>, when a computer took up as much space as a large kitchen and had less power than a pocket calculator. They were called “electronic brains” back then, and evoked both hope and fear. And more than 20 years later, little had changed: in a <a href="http://podplayer.net/#/?id=9183280">1978 BBC documentary</a> about silicon chips, one commentator argued that “They are the reason why Japan is abandoning its shipbuilding and why our children will grow up without jobs to go to”.</p>
<h2>Artificial intelligence hype is not new</h2>
<p>If one types “artificial intelligence” (AI) into <a href="https://books.google.com/ngrams/graph?content=%22Artificial+Intelligence%22&year_start=1960&year_end=2010&corpus=15&smoothing=3&share=&direct_url=t1%3B%2C%22%20Artificial%20Intelligence%20%22%3B%2Cc0">Google Books’ Ngram Viewer</a> – a tool that shows how often a term appeared in print between 1800 and 2008 – it is clear that our modern-day hype, optimism and deep concern about AI are by no means a novelty.</p>
<figure class="align-center ">
<img alt="" src="https://images.theconversation.com/files/206209/original/file-20180213-44630-8tobd.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/206209/original/file-20180213-44630-8tobd.png?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=265&fit=crop&dpr=1 600w, https://images.theconversation.com/files/206209/original/file-20180213-44630-8tobd.png?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=265&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/206209/original/file-20180213-44630-8tobd.png?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=265&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/206209/original/file-20180213-44630-8tobd.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=333&fit=crop&dpr=1 754w, https://images.theconversation.com/files/206209/original/file-20180213-44630-8tobd.png?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=333&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/206209/original/file-20180213-44630-8tobd.png?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=333&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">Searches for the term ‘artificial intelligence’ on Google Books’ Ngram viewer.</span>
<span class="attribution"><span class="license">Author provided</span></span>
</figcaption>
</figure>
<p>The history of AI is a long series of booms and busts. The first “AI spring” took place between 1956 and 1974, with pioneers such as the young <a href="https://web.media.mit.edu/%7Eminsky/">Marvin Minsky</a>. This was followed by the “first AI winter” (1974-1980), when disillusion with the gap between <a href="https://theconversation.com/what-is-machine-learning-76759">machine learning</a> and human cognitive capacities first led to disinvestment and disinterest in the topic. A second boom (1980-1987) was followed by another “winter” (1987-2001). Since the 2000s we’ve been surfing the third “AI spring”.</p>
<p>There are plenty of reasons to believe this latest wave of interest in AI will prove more durable. <a href="https://www.gartner.com/smarterwithgartner/top-trends-in-the-gartner-hype-cycle-for-emerging-technologies-2017/">According to Gartner Research</a>, technologies typically go from a “peak of inflated expectations” through a “trough of disillusionment” until they finally reach a “plateau of productivity”. AI-intensive technologies such as virtual assistants, the Internet of Things, smart robots and augmented data discovery are about to reach the peak. <a href="https://theconversation.com/deep-learning-and-neural-networks-77259">Deep learning</a>, machine learning and cognitive expert advisors are expected to reach the plateau of mainstream applications in two to five years.</p>
<h2>Narrow intelligence</h2>
<p>We finally seem to have enough computing power to credibly develop what is called “narrow AI”, of which all the aforementioned technologies are examples. These are not to be confused with “artificial general intelligence” (AGI), which scientist and futurologist Ray Kurzweil called <a href="http://www.kurzweilai.net/are-we-spiritual-machines-ray-kurzweil-critics-strong-ai">“strong AI”</a>. Some of the most advanced AI systems to date, such as IBM’s Watson supercomputer or <a href="https://www.theguardian.com/technology/2017/may/23/alphago-google-ai-beats-ke-jie-china-go">Google’s AlphaGo</a>, are examples of narrow AI. They can be trained to perform complex tasks such as identifying cancerous skin patterns or playing the ancient Chinese strategy game of Go. They are very far, however, from being capable of everyday general-intelligence tasks such as gardening, arguing or inventing a children’s story. </p>
<p>The cautionary prophecies of visionaries like <a href="https://en.wikipedia.org/wiki/Open_Letter_on_Artificial_Intelligence">Elon Musk, Bill Gates and Stephen Hawking</a> are really early warnings about the dangers of AGI, but that is not something our children will be confronted with. Their immediate partners will be of the narrow AI kind, and the future of labour depends on how well we equip them to use computers as cognitive partners.</p>
<figure class="align-center ">
<img alt="" src="https://images.theconversation.com/files/206206/original/file-20180213-44627-13cbbah.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/206206/original/file-20180213-44627-13cbbah.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=382&fit=crop&dpr=1 600w, https://images.theconversation.com/files/206206/original/file-20180213-44627-13cbbah.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=382&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/206206/original/file-20180213-44627-13cbbah.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=382&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/206206/original/file-20180213-44627-13cbbah.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=480&fit=crop&dpr=1 754w, https://images.theconversation.com/files/206206/original/file-20180213-44627-13cbbah.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=480&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/206206/original/file-20180213-44627-13cbbah.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=480&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">Hype cycle for emerging technologies.</span>
<span class="attribution"><a class="source" href="https://www.gartner.com/smarterwithgartner/top-trends-in-the-gartner-hype-cycle-for-emerging-technologies-2017/">Gartner Research</a></span>
</figcaption>
</figure>
<h2>Better together</h2>
<p><a href="http://www.kasparov.com/">Garry Kasparov</a> – the chess grandmaster who was <a href="http://www-03.ibm.com/ibm/history/ibm100/us/en/icons/deepblue/">defeated by IBM’s Deep Blue computer in 1997</a> – calls this human-machine cooperation “augmented intelligence”. He compares this “augmentation” to the mythic image of a centaur: combine a quadruped’s horsepower with the intuition of a human mind. To illustrate the potential of centaurs, he describes a freestyle chess tournament in 2005 in which any combination of human-machine teams was possible. <a href="http://www.nybooks.com/articles/2010/02/11/the-chess-master-and-the-computer/">In his words</a>:</p>
<blockquote>
<p>“The winner was revealed to be not a grandmaster with a state-of-the-art PC but a pair of amateur American chess players using three computers at the same time. Their skill at manipulating and ‘coaching’ their computers to look very deeply into positions effectively counteracted the superior chess understanding of their grand-master opponents and the greater computational power of other participants. Weak human + machine + better process was superior to a strong computer alone and, more remarkably, superior to a strong human + machine + inferior process. Human strategic guidance combined with the tactical acuity of a computer was overwhelming.”</p>
</blockquote>
<p>Human-machine cognitive partnerships can amplify what each partner does best: humans are great at making intuitive and creative decisions based on <em>knowledge</em>, while computers are good at sifting through large amounts of data to produce information that feeds into human knowledge and decision-making. We use this combination of narrow AI and uniquely human cognitive and motor skills every day, often without realising it. A few examples:</p>
<ul>
<li><p>Using Internet search engines to find content (videos, images, articles) that will be helpful in preparing for a school assignment. Then combining them in creative ways in a multimedia slide presentation. </p></li>
<li><p>Using a translation algorithm to produce a first draft of a document in a different language, then manually improving the style and grammar of the final document.</p></li>
<li><p>Driving a car to an unknown destination using a smartphone GPS application to navigate through alternative routes based on real-time traffic information.</p></li>
<li><p>Relying on a movie-streaming platform to shortlist films you are going to appreciate based on your recent history; making the final choice based on mood, social context, serendipity.</p></li>
</ul>
<p>Netflix is a <a href="https://blog.kissmetrics.com/how-netflix-uses-analytics/">great example</a> of this collaboration at its best. By using machine-learning algorithms to analyse how often and how long people watch their content, they can determine how engaging each story component is to certain audiences. This information is used by screenwriters, producers and directors to better understand what and how to create new content. Virtual-reality technology allows content creators to experiment with different storytelling perspectives before they ever shoot a single scene. </p>
<p>Likewise, architects can rely on computers to adjust the functional aspects of their work. Software engineers can focus on the overall system structure while machines provide ready-to-use code snippets and libraries to speed up the process. Marketers rely on big data and visualisation tools to better understand customer needs and develop better products and services. None of these tasks could be accomplished by AI without human guidance. Conversely, this AI support has enormously amplified human creativity and productivity, making it possible to achieve better-quality solutions at lower cost.</p>
<h2>Losses and gains</h2>
<p>As innovation accelerates, thousands of jobs <em>will</em> disappear, just as happened in previous industrial revolutions. Machines powered by narrow AI algorithms can already perform certain 3D tasks (“dull, dirty and dangerous”) much better than humans. This may create enormous pain for those who lose their jobs over the next few years, particularly if they don’t acquire the computer-related skills that would enable them to find more creative opportunities. We must learn from the previous waves of creative destruction if we are to mitigate human suffering and increasing inequality. </p>
<p>For example, some statistics indicate that as much as <a href="https://www.cnbc.com/2016/09/02/driverless-cars-will-kill-the-most-jobs-in-select-us-states.html">3% of the population</a> in developed countries work as drivers. When automated cars become a reality in the next 15 to 25 years, we must offer the people who become <a href="https://en.wikipedia.org/wiki/Structural_unemployment">“structurally unemployed”</a> some form of compensation income, training and repositioning opportunities.</p>
<p>Fortunately, the <a href="http://www.nytimes.com/2000/06/10/your-money/half-a-century-later-economists-creative-destruction-theory-is.html">Schumpeterian waves of destructive innovation</a> also create jobs. History has shown that disruptive innovations are not always a zero-sum game. In the long run, the loss of low-added-value jobs to machines can have a positive impact on the overall quality of life of most workers. The <a href="http://www.aei.org/publication/what-atms-bank-tellers-rise-robots-and-jobs/">ATM paradox</a> is a good example of this. As the use of automatic teller machines spread in the 1980s and ‘90s, many predicted massive unemployment in the banking sector. Instead, ATMs created more jobs, because the cost of opening new branches decreased. The number of branches multiplied, as did the portfolio of banking products. Thanks to automation, going to the bank offers a much better customer experience than in previous decades, and jobs in the industry became better paid and of higher quality. </p>
<p>A similar phenomenon happened with the textile industry in the 19th century. Better human-machine coordination boosted productivity and created customer value, increasing the overall market size and creating new employment opportunities. Likewise, we may predict that as low-quality jobs continue to disappear, AI-assisted jobs will emerge to fulfil the increasing demand for more productive, ecological and creative products. More productivity may mean shorter work weeks and more time for family and entertainment, which may lead to more sustainable forms of value creation and, ultimately, more jobs. </p>
<h2>Adapting to the future</h2>
<p>This optimistic scenario assumes, however, that education systems will do a better job of preparing our children to become good at what humans do best: creative and critical thinking. Less learning-by-heart (after all, most information is one Google search away) and more learning-by-doing. Fewer clerical skills and more philosophical insights about human nature and how to cater to its infinite needs for art and culture. As Apple co-founder and CEO Steve Jobs <a href="https://thenextweb.com/apple/2011/09/20/the-top-20-most-inspiring-steve-jobs-quotes/">famously said</a>:</p>
<blockquote>
<p>“What made the Macintosh great was that the people working on it were musicians and poets and artists and zoologists and historians who also happened to be the best computer scientists in the world.” </p>
</blockquote>
<figure class="align-center ">
<img alt="" src="https://images.theconversation.com/files/206220/original/file-20180213-44636-1mqszle.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/206220/original/file-20180213-44636-1mqszle.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=300&fit=crop&dpr=1 600w, https://images.theconversation.com/files/206220/original/file-20180213-44636-1mqszle.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=300&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/206220/original/file-20180213-44636-1mqszle.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=300&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/206220/original/file-20180213-44636-1mqszle.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=377&fit=crop&dpr=1 754w, https://images.theconversation.com/files/206220/original/file-20180213-44636-1mqszle.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=377&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/206220/original/file-20180213-44636-1mqszle.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=377&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">Apple founder and CEO Steve Jobs.</span>
</figcaption>
</figure>
<p>To become creative and critical thinkers, our children will need knowledge and wisdom more than raw data points. They need to ask “why?”, “how?” and “what if?” more often than “what?”, “who?” and “when?” And they must construct this knowledge by relying on databases as cognitive partners as soon as they learn how to read and write. Constructivist methods such as the <a href="https://theconversation.com/explainer-what-is-the-hybrid-classroom-and-is-it-the-future-of-education-37611">“flipped classroom”</a> approach are a good step in that direction. In flipped classrooms, students are told to search for specific content on the web at home and to come to class ready to apply what they learned in a collaborative project supervised by the teacher. Thus they do their “homework” (exercises) in class and have web “lectures” at home, optimising class time for what computers cannot help them do: create, develop and apply complex ideas collaboratively with their peers.</p>
<p>Thus, the future of human-machine collaboration looks less like the scenario in the <em>Terminator</em> movies and more like a <a href="https://www.youtube.com/watch?v=lG7DGMgfOb8"><em>Minority Report</em></a>-style “augmented intelligence”. There will be jobs if we adapt the education system to equip our children to do what humans are good at: to think critically and creatively, to develop knowledge and wisdom, to appreciate and create beautiful works of art. That does not mean it will be a painless transition. Machines and automation will likely take away millions of low-quality jobs, as has happened in the past. But better-quality jobs will likely replace them, requiring less physical effort and shorter hours to deliver better results. At least until artificial general intelligence becomes a reality – then all bets are off. But this will likely be our great-grandchildren’s problem.</p><img src="https://counter.theconversation.com/content/91672/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Marcos Lima ne travaille pas, ne conseille pas, ne possède pas de parts, ne reçoit pas de fonds d'une organisation qui pourrait tirer profit de cet article, et n'a déclaré aucune autre affiliation que son organisme de recherche.</span></em></p>The history of human-machine collaboration suggests that AI will evolve into a “cognitive partner” to humankind rather than as all-powerful, all-knowing, labour replacing robots.Marcos Lima, Responsable de la filière Marketing Innovation and Distribution, EMLV (Ecole de Management Léonard de Vinci), Pôle Léonard de VinciLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/801532017-06-27T14:16:11Z2017-06-27T14:16:11ZTech firms want to detect your emotions and expressions, but people don’t like it<figure><img src="https://images.theconversation.com/files/175833/original/file-20170627-24776-a208m8.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">
</span> <span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/businessman-using-mobile-phone-wearing-carton-178588133">Sergey Nivens</a></span></figcaption></figure><p>As revealed in a <a href="http://pdfaiw.uspto.gov/.aiw?docid=20150242679&PageNum=3&IDKey=47BC4614A23D&HomeUrl=http://appft.uspto.gov/netacgi/nph-Parser?Sect1=PTO1%26Sect2=HITOFF%26d=PG01%26p=1%26u=/netahtml/PTO/srchnum.html%26r=1%26f=G%26l=50%26s1=20150242679.PGNR.%26OS=%26RS=">patent filing</a>, Facebook is interested in using webcams and smartphone cameras to <a href="http://www.independent.co.uk/life-style/gadgets-and-tech/news/facebook-plans-to-watch-users-through-webcams-spy-patent-application-social-media-a7779711.html">read our emotions, and track expressions and reactions</a>. The idea is that by understanding emotional behaviour, Facebook can show us more of what we react positively to in our Facebook news feeds and less of what we do not – whether that’s friends’ holiday photos, or advertisements.</p>
<p>This might appear innocuous, but consider some of the detail. In addition to smiles, joy, amazement, surprise, humour and excitement, the patent also lists negative emotions. Possibly being read for signs of disappointment, confusion, indifference, boredom, anger, pain and depression is neither innocent, nor fun.</p>
<p>In fact, Facebook is no stranger to using data about emotions. Some readers might remember the furore when <a href="https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4066473/">Facebook secretly tweaked users’ news feeds</a> to understand “emotional contagion”. This meant that when users logged into their Facebook pages, some were shown content in their news feeds with a greater number of positive words and others were shown content deemed sadder than average. This changed the emotional behaviour of those users who were “infected”.</p>
<p>Given that Facebook has <a href="https://www.theverge.com/2017/2/1/14474534/facebook-earnings-q4-fourth-quarter-2016">around two billion users</a>, this patent to read emotions via cameras is important. But there is a bigger story, which is that the largest technology companies have been buying, researching and developing these applications for some time. </p>
<h2>Watching you feel</h2>
<p>For example, <a href="http://fortune.com/2016/01/07/apple-emotient-acquisition/">Apple</a> bought Emotient in 2016, a firm that pioneered facial coding software to read emotions. <a href="https://azure.microsoft.com/en-gb/services/cognitive-services/emotion/">Microsoft</a> offers its own “cognitive services”, and <a href="https://www.ibm.com/blogs/bluemix/2016/10/watson-has-more-accurate-emotion-detection/">IBM’s Watson</a> is also a key player in industrial efforts to read emotions. It’s possible that <a href="https://www.technologyreview.com/s/601654/amazon-working-on-making-alexa-recognize-your-emotions/">Amazon’s Alexa voice-activated assistant</a> could soon be listening for signs of emotions, too.</p>
<p>This is not the end though: interest in emotions is not just about screens and worn devices, but also our environments. Consider retail, where increasingly the goal is to understand who we are and what we think, feel and do. Somewhat reminiscent of Steven Spielberg’s 2002 film Minority Report, <a href="https://www.eyeqinsights.com/go/">eyeQ Go</a>, for example, measures facial emotional responses as people look at goods at shelf-level.</p>
<p>What these and other examples show is that we are witnessing a rise of interest in our emotional lives, encompassing any situation where it might be useful for a machine to know how a person feels. Some less obvious examples include <a href="https://www.mysteryvibe.com/en">emotion-reactive sex toys</a>, the use of video cameras by lawyers to <a href="https://www.affectiva.com/success-story/mediarebel/">identify emotions in witness testimony</a>, and <a href="https://www.affectiva.com/what/uses/automotive/">in-car cameras and emotion analysis</a> to prevent accidents (and presumably to lower insurance rates).</p>
<figure class="align-right zoomable">
<a href="https://images.theconversation.com/files/175876/original/file-20170627-24776-ldbbe.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/175876/original/file-20170627-24776-ldbbe.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=237&fit=clip" srcset="https://images.theconversation.com/files/175876/original/file-20170627-24776-ldbbe.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=906&fit=crop&dpr=1 600w, https://images.theconversation.com/files/175876/original/file-20170627-24776-ldbbe.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=906&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/175876/original/file-20170627-24776-ldbbe.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=906&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/175876/original/file-20170627-24776-ldbbe.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=1138&fit=crop&dpr=1 754w, https://images.theconversation.com/files/175876/original/file-20170627-24776-ldbbe.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=1138&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/175876/original/file-20170627-24776-ldbbe.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=1138&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">How long till machines can tell what we can?</span>
<span class="attribution"><a class="source" href="https://pixabay.com/en/girl-eyes-makeup-sexy-glamor-237871/">jura-photography</a></span>
</figcaption>
</figure>
<h2>Users are not happy</h2>
<p>In a report assessing the rise of <a href="https://drive.google.com/file/d/0BzU2NrGCFp7qd0tOWjJDcFgxdGc/view">“emotion AI” and what I term “empathic media”</a>, I point out that this is not innately bad. There are already <a href="http://www.gamasutra.com/blogs/ErinReynolds/20160511/272295/Biofeedback_and_Gaming_The_Future_Is_Upon_Us_Seriously.php">games that use emotion-based biofeedback</a>, which take advantage of eye-trackers, facial coding and wearable heart rate sensors. These are a lot of fun, so the issue is not the technology itself but how it is used. Does it enhance, serve or exploit? After all, the scope to make emotions and intimate human life machine-readable has to be treated cautiously.</p>
<p>The report covers views from industry, policymakers, lawyers, regulators and NGOs, but it’s useful to consider what ordinary people say. I conducted a survey of 2,000 people and asked questions about emotion detection in social media, digital advertising outside the home, gaming, interactive movies through tablets and phones, and using voice and emotion analysis through smartphones.</p>
<p>I found that more than half (50.6%) of UK citizens are “not OK” with any form of emotion capture technology, while just under a third (30.6%) feel “OK” with it, as long as the emotion-sensitive application does not identify the individual. A mere 8.2% are “OK” with having data about their emotions connected with personally identifiable information, while 10.4% “don’t know”. That such a small proportion are happy for emotion-recognition data to be connected with personally identifying information about them is pretty significant considering what Facebook is proposing.</p>
<p>But do the young care? I found that younger people are twice as likely to be “OK” with emotion detection as the oldest people. But we should not take this to mean they are “OK” with having data about emotions linked with personally identifiable information. Only 13.8% of 18- to 24-year-olds accept this. Younger people are open to new forms of media experiences, but they want meaningful control over the process. Facebook and others, take note.</p>
<h2>New frontiers, new regulation?</h2>
<p>So what should be done about these types of technologies? UK and European law is being strengthened, especially given the introduction of the <a href="https://ico.org.uk/for-organisations/data-protection-reform/overview-of-the-gdpr/">General Data Protection Regulation</a>. While this has little to say about emotions, there are strict codes on the use of personal data and information about the body (biometrics), especially when used to infer mental states (as Facebook have proposed to do). </p>
<p>This leaves us with a final problem: what if the data used to read emotions is not strictly personal? What if shop cameras pick out expressions in such a way as to detect emotion, but not identify a person? This is what retailers are proposing and, as it stands, there is nothing in the law to prevent them.</p>
<p>I suggest we need to tackle the following question: are citizens and the reputation of the industries involved best served by covert surveillance of emotions?</p>
<p>If the answer is no, then codes of practice need to be amended immediately. The ethics of emotion capture, and of rendering bodies passively machine-readable, do not hinge on personal identification but on something more important. Ultimately, this is a matter of human dignity, and about what kind of environment we want to live in. </p>
<p>There’s nothing inherently wrong with technology that interacts with emotions. The question is whether it can be shaped to serve, enhance and entertain, rather than exploit. And given that survey respondents of all ages are rightfully wary, it’s a question that the people should be involved in answering.</p><img src="https://counter.theconversation.com/content/80153/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Andrew McStay receives funding from AHRC and ESRC.</span></em></p>If Facebook already knows how you feel from reading what you post, soon it will know from reading the expressions on your face.Andrew McStay, Reader in Advertising and Digital Media, Bangor UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/654462016-09-26T15:50:23Z2016-09-26T15:50:23ZA supercomputer just made the world’s first AI-created film trailer – here’s how well it did<figure><img src="https://images.theconversation.com/files/139292/original/image-20160926-31870-1y6fvz6.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption"></span> <span class="attribution"><span class="source">20th Century Fox</span></span></figcaption></figure><p>More people have been talking about the trailer for the sci-fi/horror film Morgan than the movie itself. It’s partly because the commercial and <a href="https://www.theguardian.com/film/2016/sep/04/morgan-sci-fi-thriller-review-kate-mara-toby-jones-ai">critical response</a> to the film has been <a href="http://www.empireonline.com/movies/morgan/review/">less than lukewarm</a>, and partly because the clip was the first to be created entirely by artificial intelligence.</p>
<p>At the request of the filmmakers at 20th Century Fox, IBM used its <a href="http://www.ibm.com/watson/">supercomputer Watson</a> to <a href="http://www.wired.co.uk/article/ibm-watson-ai-film-trailer">build a trailer</a> from the final version of Morgan, which tells the story of an artificially created human. First Watson was fed background information on the horror genre in the form of a <a href="http://www.cbronline.com/news/big-data/analytics/horror-movie-morgan-trailer-gets-the-ibm-artificial-intelligence-treatment-4996128">hundred film trailers</a>. It used visual and aural analysis in order to identify the images, sounds, and emotions that are usually found in frightening and suspenseful trailers.</p>
<p>Watson then analysed Morgan and identified the key moments of plot action from which a trailer of the film could be generated. Only the final act of putting the sounds and images together to create the trailer required human intervention. </p>
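<p>As a rough illustration of the kind of selection step described above – scoring candidate scenes for suspense and keeping the strongest ones – here is a toy sketch. It is not IBM’s actual pipeline; the scene data and the “suspense” scores (assumed to come from some audio-visual emotion model) are hypothetical.</p>
<pre><code>
# Toy sketch: pick the most "suspenseful" scenes for a trailer, up to a time budget.
# Illustrative only - not Watson's method; scores and scenes are made up.
from dataclasses import dataclass

@dataclass
class Scene:
    start: float      # seconds into the film
    end: float
    suspense: float   # assumed output of an audio-visual emotion model, 0..1

def pick_trailer_moments(scenes, max_total=90.0):
    """Greedily keep the highest-scoring scenes until the time budget is spent."""
    chosen, total = [], 0.0
    for scene in sorted(scenes, key=lambda s: s.suspense, reverse=True):
        length = scene.end - scene.start
        if total + length > max_total:
            continue
        chosen.append(scene)
        total += length
    return sorted(chosen, key=lambda s: s.start)  # restore chronological order

scenes = [Scene(120, 135, 0.91), Scene(300, 320, 0.40), Scene(2400, 2420, 0.88)]
print(pick_trailer_moments(scenes))
</code></pre>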
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/gJEzuYynaiw?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
</figure>
<p>So how did Watson do? The trailer features the familiar visual and narrative devices that have long been staples of horror films: the reclusive “mad” scientist, the businesslike “investigator”, and the eerie soundtrack, including the main theme and a lullaby that evokes themes of childhood and innocence (contrasted with images of physical violence and bloodshed). In fact, the iconography featured in Watson’s trailer reaffirms what many film theorists say are the <a href="https://books.google.co.uk/books?id=scE4AAAAQBAJ&pg=PA127&dq=generic+conventions+of+horror+films+mad+scientist&hl=en&sa=X&ved=0ahUKEwjcipilj6zPAhWDLMAKHVpsBOUQ6AEIHDAA#v=onepage&q=generic%20conventions%20of%20horror%20films%20mad%20scientist&f=false">generic conventions of horror films</a>, based on iconic examples such as the 1931 version of Frankenstein.</p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/BN8K-4osNb0?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
</figure>
<p>But is the purpose of a film trailer just to repeat the generic conventions that characterise a film? While some trailers clearly do this, or simply trumpet the presence of star actors, others highlight the film’s spectacular possibilities. Early film trailers often described the wonders of the emerging technology of cinema, such as <a href="https://www.youtube.com/watch?v=mW6GfJ5Tvms">synchronised sound (Vitaphone)</a> <a href="https://www.youtube.com/watch?v=a-P_Ira6kgE">and Technicolor</a>, and many still underline the historical moment of the film. Others focus on explaining the story and conveying <a href="https://books.google.co.uk/books?id=1xxRqocKb6cC&pg=PA30&dq=suggestive+aesthetic,+structural+and+thematic+motifs&hl=en&sa=X&ved=0ahUKEwiz7NDTl4_PAhWGBsAKHfW-BC4Q6AEIHjAA#v=onepage&q=suggestive%20aesthetic%2C%20structural%20and%20thematic%20motifs&f=false">the movie’s look, feel and themes</a> for the prospective audience. </p>
<h2>Capturing horror themes</h2>
<p>The Watson trailer for Morgan succeeds in identifying the aesthetic and thematic motifs of the film, as well as the emotional charges that underpin them. For example, it references a trope of the horror genre made familiar by films such as The Exorcist (1973) and The Omen (1976), which dispels the presumed innocence of children. In the Watson trailer we see this represented with images of Morgan’s first birthday contrasted with images of bloody violence. Meanwhile, the use of lines of dialogue such as “I have to say goodbye to mother” is clearly based on the supercomputer’s ability to identify Freudian themes from well-known examples in the horror genre, <a href="https://www.academia.edu/12043112/The_use_of_Freudian_themes_in_Alfred_Hitchcocks_Psycho_and_Vertigo">most notably Psycho</a> (1960). </p>
<p>What Watson doesn’t do is give viewers a clear understanding of the story (or provide any of the other historical functions of Hollywood trailers). The difference becomes obvious if you compare the Watson-made trailer with the film’s “official” (human-made) clip, which reveals three narrative threads of the storyline, as well as using many of the stock motifs identified by Watson. </p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/rqmHSR0bFU8?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
</figure>
<p>By showing clips of three different parts of the story, the official trailer creates a series of enigmatic questions to arouse the viewers’ interest. What is kept behind the scratched glass wall? What kind of creature is the titular artificial being Morgan? Will the danger implied by the images of death be contained?</p>
<p>The Watson trailer doesn’t manage such a sophisticated retelling of the story. Based on its analysis of horror movie trailers, the supercomputer has created a striking visual and aural collage with a remarkably perceptive selection of images. But the official trailer is more than a random collection of visual and sound motifs. It is a film about the film, and is structured to communicate with its intended viewership by using a gift that the supercomputer doesn’t yet possess – the gift of narrative.</p><img src="https://counter.theconversation.com/content/65446/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Suman Ghosh does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>IBM’s Watson watched hundreds of horror movie trailers and then created its own for the new film Morgan.Suman Ghosh, Senior Lecturer in Film Studies, Bath Spa UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/631732016-08-23T01:16:15Z2016-08-23T01:16:15ZHarried doctors can make diagnostic errors: They need time to think<figure><img src="https://images.theconversation.com/files/134843/original/image-20160819-30383-dhnhr6.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Thinking too fast?</span> <span class="attribution"><a class="source" href="http://www.shutterstock.com/pic-168769280.html">ER image via www.shutterstock.com.</a></span></figcaption></figure><p>When a person goes to the doctor, there’s usually one thing they want: a diagnosis. Once a diagnosis is made, a path toward wellness can begin.</p>
<p>In some cases, diagnoses are fairly obvious. But in others, they aren’t.</p>
<p>Consider the following: A 50-year-old man with a history of high blood pressure goes to the emergency room with sudden chest pain and difficulty breathing. </p>
<p>Concerned that these are symptoms of a heart attack, the ER physician orders an electrocardiogram and blood tests. The tests are negative, but sometimes heart attacks don’t show up on these tests. Since every minute counts, he prescribes a blood thinner to save the patient’s life. </p>
<p>Unfortunately, the diagnosis and decision were wrong. The patient was not having a heart attack. He had a tear in his aorta (known as an aortic dissection) – a less obvious but equally dangerous condition.</p>
<p>It’s not a far-fetched scenario. </p>
<p>“Three’s Company” star <a href="http://johnritterfoundation.org/ritter-rules/">John Ritter</a> died from an aortic tear that doctors initially <a href="http://articles.latimes.com/2008/mar/15/local/me-ritter15">diagnosed</a> and <a href="http://www.today.com/id/23723123/ns/today-today_news/t/john-ritters-widow-jury-has-spoken/#.V7tucmXhrUk">treated as a heart attack</a>. </p>
<p>With over three decades of combined experience caring for patients in hospital settings, we have faced our share of <a href="http://dx.doi.org/10.1056/NEJM200001063420107">diagnostic dilemmas</a>. Determined to improve our practice and those of other physicians, we are studying ways to prevent diagnostic errors as part of a project funded by the federal government’s <a href="http://www.ahrq.gov">Agency for Healthcare Research and Quality</a>. Below, we describe some of the challenges – and possible solutions – to improving diagnosis.</p>
<h2>The flawed thought processes that result in errors</h2>
<p>When physicians learn to make diagnoses in medical school, they are trained to initiate a mental calculus, analyzing symptoms and considering the possible conditions and illnesses that may cause them. For instance, chest pain could indicate a problem with the cardiovascular or respiratory system. Keeping in mind these systems, students then ask what conditions may cause these problems, focusing first on the most life-threatening ones such as heart attack, pulmonary embolism, collapsed lung or aortic tears.</p>
<p>Once tests rule these out, less dangerous diagnoses such as heartburn or muscle injury are considered. This process of sifting through possibilities to explain a patient’s symptoms is called generating a “differential diagnosis.” </p>
<p>Although the ER physician in our example could have stopped to generate a differential diagnosis, this is easier said than done. With time and experience, mental shortcuts overshadow this time-consuming process and mistakes may result. </p>
<p>One such shortcut is “<a href="http://dx.doi.org/10.1056/NEJMcps052993">anchoring bias</a>.” This is the tendency to rely upon the first piece of information obtained – or the initial diagnosis considered – regardless of subsequent information that might suggest other possibilities. </p>
<p>Anchoring is compounded by availability bias, another mental shortcut in which we overestimate the likelihood of events based on memory or experiences. </p>
<p>Thus, an ER doctor who frequently sees patients with heart attacks <a href="http://dx.doi.org/10.1136/bmj.d4487">might anchor on this diagnosis</a> when evaluating a middle-aged man with cardiac risk factors presenting with chest pain. We doctors also tend to stop exploring something once we’ve reached a tentative conclusion, a bias called premature closure. So, even if a diagnosis doesn’t fit perfectly, we tend not to change our minds to explore other possibilities.</p>
<h2>How can we minimize diagnostic errors?</h2>
<p><a href="http://www.nobelprize.org/nobel_prizes/economic-sciences/laureates/2002/kahneman-bio.html">Daniel Kahneman</a>, who won a Nobel Prize in 2002 for his work on human judgment and decision-making, argues that people have two systems that drive everyday thinking: fast and slow. </p>
<p>The fast thinking, known as System 1, is automatic, effortless and fueled by emotion. The slow system of thinking, or System 2, is deliberative, effortful and logical. Medical students are trained to use both systems: by toggling back and forth, physicians can thus harness their training, experience and intuition to craft a <a href="http://www.ncbi.nlm.nih.gov/pubmed/12915363">logic-driven diagnosis</a>. </p>
<p>So why don’t physicians just do this routinely?</p>
<p>In some cases, System 1 thinking is all that is necessary. For example, a physician who sees a young child with fever and the typical rash of chicken pox can easily make this diagnosis without slowing down or thinking about alternatives.</p>
<p>However, some physicians don’t use System 2 thinking when they need to because their workload makes it hard. Really hard. </p>
<p>In an <a href="http://cbssm.med.umich.edu/what-we-do/research-projects/enhancing-patient-safety-through-cognition-communication-m-safety-lab">ongoing study</a>, we have recorded first-hand how time pressures make it hard for doctors to stop and think. In addition to the incessant pace of work and physical distractions, there is substantial variation in how information is collected, presented and synthesized to inform diagnosis. </p>
<p>It is thus abundantly clear that physicians often do not have the time to do this type of toggling back and forth <a href="http://dx.doi.org/10.1136/bmjqs-2011-000149">during patient care</a>. Rather, they are often multitasking when making diagnoses, work that almost always leads to System 1 thinking. </p>
<figure class="align-center ">
<img alt="" src="https://images.theconversation.com/files/134841/original/image-20160819-30363-icaf73.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/134841/original/image-20160819-30363-icaf73.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/134841/original/image-20160819-30363-icaf73.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/134841/original/image-20160819-30363-icaf73.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/134841/original/image-20160819-30363-icaf73.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/134841/original/image-20160819-30363-icaf73.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/134841/original/image-20160819-30363-icaf73.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">Technology is a help, but not a fix.</span>
<span class="attribution"><a class="source" href="http://www.shutterstock.com/pic-171498107/stock-photo-doctor-working-on-his-computer-and-with-mobile-phone-in-the-office-he-is-wearing-blue-uniform-surgeon-uniform.html?src=Tpx3gGXCWNFgE56-h7nrDg-1-36">Doctor image via www.shutterstock.com.</a></span>
</figcaption>
</figure>
<h2>Can technology help?</h2>
<p>Technology seems like a promising solution to diagnostic errors. After all, computers do not suffer from cognitive traps like humans do.</p>
<p>Software tools that provide a list of potential diagnoses for symptoms and group collaboration platforms that allow physicians to engage with others to discuss cases <a href="http://dx.doi.org/10.1136/bmjqs-2013-001884">appear promising</a> in preventing diagnostic errors.</p>
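<p>To make the idea of such decision-support tools concrete, here is a minimal, purely illustrative sketch: given a set of presenting symptoms, rank candidate conditions by how many of their typical features match. The tiny knowledge base is hypothetical, vastly simpler than any real product, and not medical advice.</p>
<pre><code>
# Toy sketch of a symptom-to-differential lookup. Illustrative only;
# the condition/feature table below is a hypothetical stand-in for a
# real, clinically validated knowledge base.
KNOWLEDGE_BASE = {
    "heart attack":      {"chest pain", "shortness of breath", "sweating"},
    "aortic dissection": {"chest pain", "shortness of breath", "tearing back pain"},
    "muscle strain":     {"chest pain", "pain on movement"},
}

def differential(symptoms):
    """Return candidate conditions, best match first."""
    symptoms = set(symptoms)
    scored = [(len(symptoms & features), condition)
              for condition, features in KNOWLEDGE_BASE.items()]
    return [condition for score, condition in sorted(scored, reverse=True) if score > 0]

print(differential(["chest pain", "shortness of breath"]))
# A real tool would also weight features, flag "can't-miss" diagnoses and cite evidence.
</code></pre>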
<p>IBM’s Watson is also helping doctors make <a href="http://www.businessinsider.com/ibms-watson-may-soon-be-the-best-doctor-in-the-world-2014-4">the right diagnosis</a>. There is even an XPrize to create technology that can diagnose 13 health conditions while <a href="http://tricorder.xprize.org">fitting in the palm of a hand</a>. It may not be too long before a computer <a href="http://www.nytimes.com/2012/12/04/health/quest-to-eliminate-diagnostic-lapses.html?_r=0">will make better diagnoses than physicians.</a></p>
<p>But technology won’t solve the organizational and workflow problems physicians face today. Based on 200 hours of observing clinical teams and asking them what could be done to improve diagnosis as part of an ongoing research project, two remedies appear necessary: time and space. </p>
<p>Carving out timeouts from “busy work” for dedicated “thinking time” is a key need. Within this period, a diagnostic checklist may be <a href="http://www.improvediagnosis.org/page/Checklist">useful</a>. Although they vary in scope and content, these checklists encourage physicians to engage System 2 thinking and improve data synthesis and decision-making. One such tool is the <a href="http://c.ymcdn.com/sites/www.improvediagnosis.org/resource/resmgr/Take_2_-_BThink_Do_clinician.pdf">Take 2, Think Do</a> framework, which asks physicians to take two minutes to reflect on the diagnosis, decide if they need to reexamine facts or assumptions, and then act accordingly.</p>
<p>Second, physicians need a quiet place to think, somewhere free from distraction. Working with colleagues in architecture, we are examining how best to create such environments. This is no small challenge. Hospitals have limited physical footprints, and medical culture makes it hard for doctors to duck into quiet spaces to think. But redesigning workflow and space could have an important impact on diagnosis. How do we know? The physicians we followed said so. In the words of one:</p>
<blockquote>
<p>“if we had a place where the pager could be silent for a few minutes, where I could review my [patient] list and think through labs, recommendations and plans, I know I could be a better diagnostician.” </p>
</blockquote>
<p>This approach may prove particularly valuable in high-stress, more chaotic environments such as the ER or intensive care unit.</p>
<p>A future with <a href="http://www.nationalacademies.org/hmd/%7E/media/Files/Report%20Files/2015/Improving-Diagnosis/DiagnosticError_ReportBrief.pdf">fewer diagnostic errors</a> – and the negative consequences of them – appears possible. Stopping to think about our thoughts and employing the power of modern technology is a combination that may lead us to the correct diagnosis more frequently. These changes will help physicians deliver better care and save lives – a future we can all look forward to.</p><img src="https://counter.theconversation.com/content/63173/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Vineet Chopra receives funding from the Agency for Healthcare Research and Quality to study diagnostic errors. </span></em></p><p class="fine-print"><em><span>Sanjay Saint receives funding from the Agency for Healthcare Research and Quality to study diagnostic errors. </span></em></p>Cognitive traps can steer doctors away from the right diagnosis.Vineet Chopra, Assistant Professor of Internal Medicine and Research Scientist, University of MichiganSanjay Saint, George Dock Professor of Medicine, University of MichiganLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/582052016-06-13T01:59:21Z2016-06-13T01:59:21ZComputers may be evolving but are they intelligent?<figure><img src="https://images.theconversation.com/files/122979/original/image-20160518-13496-1bp8xq8.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Computers may be smarter than humans at some things, but are they intelligent?</span> <span class="attribution"><span class="source">Shutterstock/Olga Nikonova </span></span></figcaption></figure><p><em>The final in our <a href="https://theconversation.com/au/topics/computing-turns-60">Computing turns 60</a> series, to mark the 60th anniversary of the first computer in an Australian university, looks at how intelligent the technology has become.</em></p>
<hr>
<p>The term “artificial intelligence” (AI) was <a href="https://web.archive.org/web/20080830093710/http://news.cnet.com/Getting-machines-to-think-like-us/2008-11394_3-6090207.html">first used</a> back in 1956 in the <a href="http://www.dartmouth.edu/%7Evox/0607/0724/ai50.html">title of a workshop</a> of scientists at Dartmouth, an Ivy League college in the United States.</p>
<p>At that pioneering workshop, attendees discussed how computers would soon perform all human activities requiring intelligence, including playing chess and other games, composing great music and translating text from one language to another language. These pioneers were wildly optimistic, though their aspirations were unsurprising. </p>
<p>Trying to build intelligent machines has <a href="http://www.computerhistory.org/timeline/ai-robotics/">long been a human preoccupation</a>, both with calculating machines and in literature. Early computers from the 1940s were commonly described as electronic brains and thinking machines.</p>
<h2>The Turing test</h2>
<p>The father of computer science, Britain’s Alan Turing, was in no doubt that computers would one day think. His landmark <a href="http://www.loebner.net/Prizef/TuringArticle.html">1950 article</a> introduced the Turing test, a challenge to see if an intelligent machine could convince a human that it wasn’t in fact a machine.</p>
<p>Research into AI from the 1950s through to the 1970s focused on writing programs for computers to perform tasks that required human intelligence. An early example was the American computer game pioneer Arthur Samuel’s <a href="http://www-03.ibm.com/ibm/history/ibm100/us/en/icons/ibm700series/impacts/">program for playing checkers</a>. The program improved by analysing winning positions, and rapidly learned to play checkers much better than Samuel himself.</p>
<p>But what worked for checkers failed to produce good programs for more complicated games such as chess and go.</p>
<p>Another early AI research project tackled introductory calculus problems, specifically symbolic integration. Several years later, symbolic integration became a solved problem and programs for it were no longer labelled as AI.</p>
<h2>Speech recognition? Not yet</h2>
<p>In contrast to checkers and integration, programs undertaking language translation and speech recognition made little progress. No method emerged that could effectively use the processing power of computers of the time.</p>
<p>Interest in AI surged in the 1980s through expert systems. Success was reported with programs performing medical diagnosis, analysing geological maps for minerals, and configuring computer orders, for example.</p>
<p>Though useful for narrowly defined problems, the expert systems were neither robust nor general, and required detailed knowledge from experts to develop. The programs did not display general intelligence.</p>
<p>After a surge of AI start-up activity, commercial and research interest in AI receded in the 1990s.</p>
<h2>Speech recognition</h2>
<p>In the meantime, as computer processing power grew, computer speech recognition and language processing by computers improved considerably. New algorithms were developed that focused on statistical modelling techniques rather than emulating human processes.</p>
<p>Progress has continued with voice-controlled personal assistants such as Apple’s <a href="http://www.apple.com/ios/siri/">Siri</a> and <a href="https://support.google.com/websearch/answer/2940021?hl=en">Ok Google</a>. And translation software can give the gist of an article.</p>
<p>But no one believes that the computer truly understands language at present, despite the considerable developments in areas such as <a href="https://theconversation.com/the-future-of-chatbots-is-more-than-just-small-talk-53293">chat-bots</a>. There are definite limits to what Siri and Ok Google can process, and translations lack subtle context. </p>
<p>Another task considered a challenge for AI in the 1970s was face recognition. Programs then were hopeless.</p>
<p>Today, by contrast, Facebook can identify people from <a href="https://www.facebook.com/help/463455293673370/">several tags</a>. And camera software <a href="http://www.popularmechanics.com/technology/gadgets/how-to/a1857/4218937/">recognises faces</a> well. But it is advanced statistical methods, rather than intelligence, that make this possible.</p>
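<p>The point about statistical methods is easy to demonstrate: the face detectors shipped in consumer software are, at heart, classifiers trained on large image collections. Below is a minimal sketch using OpenCV’s bundled Haar-cascade detector; the image path is a placeholder, and this classic detector is an illustration of the statistical approach rather than whatever any particular camera or Facebook actually runs.</p>
<pre><code>
# Minimal sketch: off-the-shelf statistical face detection with OpenCV.
# "photo.jpg" is a placeholder path; the Haar cascade is a trained
# statistical model, not anything resembling human-style understanding.
import cv2

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)
image = cv2.imread("photo.jpg")
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
print(f"Found {len(faces)} face(s)")
for (x, y, w, h) in faces:
    cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)  # draw a box
cv2.imwrite("photo_with_faces.jpg", image)
</code></pre>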
<h2>Clever but not intelligent – yet</h2>
<p>In task after task, after detailed analysis, we are able to develop general algorithms that are efficiently implemented on the computer, rather than the computer learning for itself.</p>
<p>In <a href="http://www-03.ibm.com/ibm/history/ibm100/us/en/icons/deepblue/">chess</a> and, very recently in <a href="http://www.bbc.com/news/technology-35761246">go</a>, computer programs have beaten champion human players. The feat is impressive and clever techniques have been used, without leading to general intelligent capability. </p>
<p>Admittedly, champion chess players are not necessarily champion go players. Perhaps being expert in one type of problem solving is not a good marker of intelligence.</p>
<p>The final example to consider before looking to the future is <a href="http://www-03.ibm.com/ibm/history/ibm100/us/en/icons/watson/">Watson</a>, developed by IBM. Watson famously defeated human champions in the television game show Jeopardy.</p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/P18EdAKuC1U?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
</figure>
<h2>Dr Watson?</h2>
<p>IBM is now applying its <a href="http://www.ibm.com/smarterplanet/us/en/ibmwatson/">Watson</a> technology, with claims it will make accurate <a href="http://www.businessinsider.com.au/ibms-watson-may-soon-be-the-best-doctor-in-the-world-2014-4">medical diagnoses</a> by reading all medical research reports.</p>
<p>I am uncomfortable with Watson making medical decisions. I am happy it can correlate evidence, but that is a long way from understanding a medical condition and making a diagnosis.</p>
<p>Similarly, there have been claims a computer will <a href="http://www.usability.gov/get-involved/blog/2010/01/adaptive-web-based-learning-environments.html">improve teaching</a> by matching student errors to known mistakes and misconceptions. But it takes an insightful teacher to understand what is happening with children and what is motivating them, and that is lacking for the moment. </p>
<p>There are many areas in which human judgement should remain in force, such as legal decisions and launching military weapons.</p>
<p>Advances in computing over the past 60 years have hugely expanded the range of tasks computers can perform that were once thought to require intelligence. But I believe we have a long way to go before we create a computer that can match human intelligence. </p>
<p>On the other hand, I am comfortable with autonomous cars for driving from one place to another. Let us keep working on making computers better and more useful, and not worry about trying to replace us.</p><img src="https://counter.theconversation.com/content/58205/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Leon Sterling receives funding from the Australian Research Council. </span></em></p>Computing has been getting much smarter since the idea of artificial intelligent was first thought of 60 years ago. But are computers intelligent?Leon Sterling, Professor emeritus, Swinburne University of TechnologyLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/102011-03-27T21:33:16Z2011-03-27T21:33:16ZHave computers finally eclipsed their creators?<figure><img src="https://images.theconversation.com/files/68/original/robot.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Could our days at the top of the brain chain be numbered?</span> <span class="attribution"><span class="source">AAP</span></span></figcaption></figure><p>In February this year, game shows got that little bit harder. And at the same time, artificial intelligence took another step towards the ultimate goal of creating and perhaps exceeding human-level intelligence. </p>
<p>Jeopardy! is a long running and somewhat back-to-front American quiz show in which contestants are presented with trivia clues in the form of answers, and must reply in the form of a question.</p>
<p><strong>Host:</strong> “Tickets aren’t needed for this ‘event’, a black hole’s boundary from which matter can’t escape.”<br>
<strong>Watson:</strong> “What is event horizon?”<br>
<strong>Host:</strong> “Wanted for killing Sir Danvers Carew; appearance – pale and dwarfish; seems to have a split personality.”<br>
<strong>Watson:</strong> “Who is Hyde?”<br>
<strong>Host:</strong> “Even a broken one of these on your wall is right twice a day.”<br>
<strong>Watson:</strong> “What is clock?”</p>
<p>In case you didn’t see the news, Watson is a computer assembled by IBM at their research lab in New York State. It is a behemoth of 90 servers with 2880 cores and 16 Terabytes of RAM. </p>
<p>Watson was named in honour of IBM’s founder, T.J. Watson. However, befitting the word play found in many of the questions, the name also hints at Sherlock Holmes’ capable assistant, Dr Watson.</p>
<p>Watson’s two competitors in this Man versus Computer match were no slouches. First up was Brad Rutter. Brad is the biggest all-time money winner on Jeopardy! with over US$3 million in prize money. </p>
<p>Also competing was Ken Jennings, holder of the longest winning streak on the show. In 2004, Ken won 74 games in a row before being knocked from his pedestal. </p>
<p>Despite this formidable competition, Watson easily won the US$1 million prize over three days of competition. Chalk up another loss to humanity.</p>
<p>This isn’t the first time a computer has beaten man. Famously, the then World Chess Champion Garry Kasparov was beaten by IBM’s Deep Blue computer in 1997. </p>
<p>But there have been other, perhaps less well known, examples before these two momentous and IBM-centered events.</p>
<p>In 1979, Hans Berliner’s BKG program from Carnegie Mellon University beat Luigi Villa at backgammon. It thereby became the first computer program ever to defeat a world champion in any game.</p>
<p>In 1996, the Chinook program, written by a team from the University of Alberta, won the Man vs. Machine World Checkers Championship beating the Grandmaster checkers player, Don Lafferty. </p>
<p>Arguably Chinook’s greater triumph was against Marion Tinsley, who is often considered to be the greatest checkers player ever. Tinsley never lost a World Championship match, and lost only seven games in his entire 45-year career, two of them to Chinook. </p>
<p>In their final match, Tinsley and Chinook were drawn, but Tinsley had to withdraw due to ill health, and died shortly after. </p>
<p>Sadly we shall never know if Chinook would have gone on to draw or win. But the outcome is now somewhat immaterial as the University of Alberta team have improved their program to the point that it plays perfectly. </p>
<p>They exhaustively showed that their program could never be defeated. “Exhaustive” is the correct term here since it required years of computation on more than 200 computers to explore all the possible games.</p>
<p>More recently, in 2006, the program Quackle defeated former World Champion David Boys at Scrabble in a Human-Computer Showdown in Toronto. </p>
<p>Boys is reported to have remarked that losing to a machine is still better than <em>being</em> a machine. However, that sounds like sour grapes to me.</p>
<p>Man’s defeats have not been limited to games and game shows. Man has started to lose out to computers in many other areas. </p>
<p>Computers are replacing humans in making decisions in many businesses. For example, Visa, Mastercard and American Express all use artificial intelligence programs called neural networks to detect millions of dollars’ worth of credit card fraud. </p>
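<p>The card networks’ actual systems are proprietary, but the general idea can be sketched in a few lines using the scikit-learn library. Everything below – the features, the numbers, the data – is synthetic and purely illustrative; it is not any real fraud model.</p>
<pre><code># A toy sketch of neural-network fraud detection on made-up data.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Invented features: [amount in dollars, hours since last purchase, km from home]
legit = rng.normal(loc=[60, 24, 10], scale=[40, 12, 15], size=(500, 3))
fraud = rng.normal(loc=[900, 1, 4000], scale=[400, 1, 1500], size=(50, 3))
X = np.vstack([legit, fraud])
y = np.array([0] * len(legit) + [1] * len(fraud))  # 1 means fraudulent

# A small neural network learns to separate the two classes.
model = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
model.fit(X, y)

# Score a new transaction: $1,200, two hours after the last one, 3,500 km away.
suspect = np.array([[1200, 2, 3500]])
print("fraud probability:", model.predict_proba(suspect)[0, 1])
</code></pre>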
<p>There are many other examples, from the mainstream to the esoteric, where computers are performing as well as or better than humans. In 2008, a team of Swiss, Hungarian and French researchers demonstrated that machine-learning algorithms were better at classifying dog barks than human animal lovers. </p>
<p>Computers have even started to make an impact on creative activities. One small example is found in my own research. </p>
<p>In 2002, the HR computer program written by Simon Colton, a PhD student I was supervising, invented a new type of number. The properties of this number have since been explored by human mathematicians.</p>
<p>However, computers still have a long way to go. Watson made a few mistakes en route to victory, many of which provide insight into the inner workings of its algorithms.</p>
<p><strong>Host:</strong> “Its largest airport was named for a World War II hero; its second largest, for a World War II battle.”<br>
<strong>Watson:</strong> “What is Toronto???”</p>
<p>The question was in the category “US cities”. As Rutter and Jennings knew, the correct answer is Chicago, home to O’Hare and Midway airports. </p>
<p>The multiple question marks signify Watson’s doubt about the answer. Toronto has Pearson International Airport, and a number of Pearsons fought bravely in various wars. </p>
<p>To add to the confusion, there are US cities called Toronto in Illinois, Indiana, Iowa, Kansas, Missouri, Ohio and South Dakota. This mistake illustrates that Watson doesn’t deal in black and white, 0 or 1. It calculates probabilities. </p>
<p>In fact, one of the most interesting aspects of Watson was how it used these probabilities to play strategically, deciding when to answer and how much to bet.</p>
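<p>Watson’s real strategy engine was far more sophisticated, but the basic idea can be caricatured in a few lines: a single estimate of the probability of being right drives both whether to buzz in and how much to wager. The thresholds and formulas below are my own illustrative inventions, not Watson’s.</p>
<pre><code># A toy sketch, not Watson's actual strategy engine.

def should_buzz(confidence, threshold=0.5):
    """Buzz only when the estimated chance of being right clears a threshold,
    because a wrong answer costs the clue's full dollar value."""
    return confidence >= threshold

def wager(confidence, bankroll):
    """Stake a larger share of the bankroll the more confident we are.
    Betting w has expected gain w * (2 * confidence - 1), so only bet
    meaningfully when confidence is above one half."""
    edge = max(0.0, 2 * confidence - 1)   # crude expected-value edge
    return int(bankroll * edge * 0.5)     # cap the stake at half the bankroll

print(should_buzz(0.32))       # False: too uncertain, stay silent
print(should_buzz(0.87))       # True: confident enough to buzz
print(wager(0.90, 10_000))     # 4000: risk 4,000 of a 10,000 bankroll
</code></pre>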
<p>If you’re feeling a little depressed, don’t worry. Man is still well ahead of computers under many measures. </p>
<p>The human brain consumes only around 20 watts of power. This is a big energy burden for a member of the animal kingdom (and demonstrates the value we get from being smart), but it is minuscule compared to the 350,000 watts used by Watson – roughly 17,500 times as much. </p>
<p>Per watt, man is still well ahead, and computers remain very poor at some of the tasks we take for granted: seeing danger ahead on a dark, winding road; understanding a conversation at a noisy cocktail party; telling funny jokes; falling hopelessly in love.</p>
<p>Watson does tell us that artificial intelligence is making great advances in areas such as natural language understanding (getting computers to understand text) and probabilistic reasoning (getting computers to deal with uncertainty). </p>
<p>Beating game show contestants is perhaps not of immense value to mankind. In fact, you might be a little disappointed that computers are taking away another of life’s pleasures. </p>
<p>But the same technologies can and will be put to many other practical uses. </p>
<p>They can help doctors understand the vast medical literature and diagnose better. They can help lawyers understand the vast literature in case law and reduce the cost of seeking justice. </p>
<p>And you and I will see similar technology in search engines very soon. In fact, try out this query in Google today: “What is the population of Australia?”. Google understands the question and links directly to some tagged data and a graph showing the growth in the number of people in this lucky country.</p>
<p>Of course, you might worry where this will all end. Are machines going to take over man? Unfortunately, science fiction here is already science fact. </p>
<p>Computers are in control of many parts of our lives. And there are a few cases where computers have made life and (more importantly) death decisions. </p>
<p>In 2007, a software bug led to an automated anti-aircraft cannon killing nine South African soldiers and injuring 14 others. In his 2005 book The Singularity is Near, the futurist Ray Kurzweil predicted that artificial intelligence would approach a technological singularity in around 40 years. </p>
<p>He argues that computers will reach and then quickly exceed the intelligence of humans, and that progress will “snowball” as computers redesign themselves and exploit their many technical advantages. The movie of Ray’s book is coming to a theatre near you soon.</p>
<p>Fortunately, I do not share Ray’s concerns. There are several problems with his argument. </p>
<p>There is, for instance, no reason to suppose there is anything special about exceeding human intelligence. Let me give an analogy. </p>
<p>Airplanes long ago exceeded birds at flying quickly, but you won’t be flying any faster today than you did a decade ago. If you’ll excuse the terrible pun, the speed of flying has stalled. </p>
<p>In addition, there are various fundamental laws that may limit computers, such as the speed of light. Indeed, chip designers are already struggling to keep up with past rates of improvement. Nevertheless, I predict that there are many exciting advances still to come from artificial intelligence.</p>
<p>Finally, if you want to have a go at beating Watson yourself, try out the <a href="http://www.nytimes.com/interactive/2010/06/16/magazine/watson-trivia-game.html">interactive web site</a>.</p><img src="https://counter.theconversation.com/content/10/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Toby Walsh does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>In February this year, game shows got that little bit harder. And at the same time, artificial intelligence took another step towards the ultimate goal of creating and perhaps exceeding human-level intelligence…Toby Walsh, Professor, Research Director, Data61Licensed as Creative Commons – attribution, no derivatives.