<h1>From a ‘deranged’ provocateur to IBM’s failed AI superproject: the controversial story of how data has transformed healthcare</h1>
<figure><img src="https://images.theconversation.com/files/503380/original/file-20230106-16-mc22tn.png?ixlib=rb-1.1.0&rect=42%2C47%2C1076%2C845&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">US health data pioneer Ernest Codman at work on his national registry of patient outcomes, 1925.</span> <span class="attribution"><span class="source">Roy Mabrey/Boston Medical Library</span></span></figcaption></figure>
<p>Just over a decade ago, artificial intelligence (AI) made one of its showier forays into the public’s consciousness when <a href="https://www.nytimes.com/2011/02/17/science/17jeopardy-watson.html?action=click&module=RelatedLinks&pgtype=Article">IBM’s Watson computer</a> appeared on the American quiz show <a href="https://en.wikipedia.org/wiki/Jeopardy!">Jeopardy!</a> The studio audience was made up of IBM employees, and Watson’s exhibition performance against two of the show’s most successful contestants was televised to a national viewership across three evenings. In the end, the machine triumphed comfortably.</p>
<p>One of Watson’s opponents, <a href="https://www.jeopardy.com/about/cast/ken-jennings">Ken Jennings</a>, who went on to make a career on the back of his gameshow prowess, showed grace – or was it deference? – in defeat, jotting down this commentary to accompany his final answer: “I, for one, welcome our new computer overlords.”</p>
<p>In fact, his phrase had been poached from another American television mainstay, The Simpsons. Jennings’ wry pop culture reference signalled Watson’s reception less as computer overlord and more as technological curio. But that was not how IBM saw it. On the back of this very public success, in 2011 IBM turned Watson toward one of the most lucrative but untapped industries for AI: healthcare.</p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/P18EdAKuC1U?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
</figure>
<p>What followed over the next decade was a series of ups and downs – but <a href="https://www.nytimes.com/2021/07/16/technology/what-happened-ibm-watson.html">mostly downs</a> – that exemplified the promise, but also the numerous shortcomings, of applying AI to healthcare. The Watson health odyssey finally ended in 2022 when it was <a href="https://slate.com/technology/2022/01/ibm-watson-health-failure-artificial-intelligence.html">sold off “for parts”</a>.</p>
<p>There is much to learn from this story about why AI and healthcare seemed so well-suited, and why that potential has proved so difficult to realise. But first we need to revisit the controversial origins of data use in this field, long before electronic computers were invented, and meet one of its American pioneers, <a href="https://www.facs.org/about-acs/archives/past-highlights/codmanhighlight/">Ernest Amory Codman</a> – an elite by birth, a surgeon by training, and a provocateur by nature.</p>
<h2>Data’s role in the birth of modern medicine</h2>
<p>While the utility of data in a general way had already been clear for several centuries, its collection and use on a massive scale was a feature of the 19th century. By the 1850s, collecting census data had become commonplace. Its use was not merely descriptive; it formed a way to make determinations about how to govern.</p>
<p>The 19th century marked the first time that, as US systems expert <a href="https://medium.com/@sjtmartin/big-data-a-19th-century-problem-9d58c3e6495b">Shawn Martin</a> explains, “managers felt the need to tie the information that society collected to things like performance [and] productivity”. This applied to public health as well, where “big data” played a critical role in establishing relationships between populations, their habits and environment (both at home and work), and disease.</p>
<figure class="align-right zoomable">
<a href="https://images.theconversation.com/files/503116/original/file-20230104-18-x3rgmh.jpeg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="Old street map of London" src="https://images.theconversation.com/files/503116/original/file-20230104-18-x3rgmh.jpeg?ixlib=rb-1.1.0&q=45&auto=format&w=237&fit=clip" srcset="https://images.theconversation.com/files/503116/original/file-20230104-18-x3rgmh.jpeg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=563&fit=crop&dpr=1 600w, https://images.theconversation.com/files/503116/original/file-20230104-18-x3rgmh.jpeg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=563&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/503116/original/file-20230104-18-x3rgmh.jpeg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=563&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/503116/original/file-20230104-18-x3rgmh.jpeg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=707&fit=crop&dpr=1 754w, https://images.theconversation.com/files/503116/original/file-20230104-18-x3rgmh.jpeg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=707&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/503116/original/file-20230104-18-x3rgmh.jpeg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=707&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">John Snow’s groundbreaking map of cholera cases in central London, 1854.</span>
<span class="attribution"><a class="source" href="https://commons.wikimedia.org/wiki/File:Snow-cholera-map-1.jpg">Wikimedia</a></span>
</figcaption>
</figure>
<p>A well-known example is <a href="https://theconversation.com/sewage-alerts-the-long-history-of-using-maps-to-hold-water-companies-to-account-189013">John Snow’s discovery</a> of the source of a cholera outbreak in London’s Soho neighbourhood in 1854. Now considered one of epidemiology’s founding fathers, Snow canvassed door to door asking whether the families within had had cholera. His analysis came chiefly in the re-organisation of the data he collected – its plotting on a map – such that a pattern might emerge. This ultimately established not just the extent of the outbreak but also its source, the <a href="https://lookup.london/john-snow-water-pump/">Broad Street water pump</a>.</p>
<p>For Boston-born Codman, an outspoken medical reformer working at the beginning of the 20th century, such use of data to understand disease was up there as “one of the greatest moments in medicine”.</p>
<hr>
<p><strong><em>This article is part of Conversation Insights</em></strong>
<br><em>The Insights team generates <a href="https://theconversation.com/uk/topics/insights-series-71218">long-form journalism</a> derived from interdisciplinary research. The team is working with academics from different backgrounds who have been engaged in projects aimed at tackling societal and scientific challenges.</em></p>
<hr>
<p>Though Codman was involved in <a href="https://qualitysafety.bmj.com/content/11/1/104">many data-driven reforms</a> during his controversial career, one of the most successful was the <a href="https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2758960/">Registry of Bone Sarcoma</a>, which he established in 1920. His goal was to collect and analyse all of the cases of bone cancer (or suspected bone cancer) from across the US, and to use these to establish diagnostic criteria, therapeutic effectiveness and a standardised nomenclature.</p>
<p>There were a few rules for this registry. Individual doctors who contributed had to send x-rays, case reports and, if possible, tissue samples for examination by the registry’s consulting pathologists and Codman himself. This would ensure both the accuracy and uniformity of pathological analysis. The effort was a success which grew over time: by 1954, when the American College of Surgeons sought a new home for the registry, it contained an impressive 2,400 complete, cross-referenced cases.</p>
<figure class="align-right zoomable">
<a href="https://images.theconversation.com/files/503197/original/file-20230105-12-ydlxj6.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="Man's portrait" src="https://images.theconversation.com/files/503197/original/file-20230105-12-ydlxj6.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=237&fit=clip" srcset="https://images.theconversation.com/files/503197/original/file-20230105-12-ydlxj6.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=858&fit=crop&dpr=1 600w, https://images.theconversation.com/files/503197/original/file-20230105-12-ydlxj6.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=858&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/503197/original/file-20230105-12-ydlxj6.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=858&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/503197/original/file-20230105-12-ydlxj6.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=1078&fit=crop&dpr=1 754w, https://images.theconversation.com/files/503197/original/file-20230105-12-ydlxj6.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=1078&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/503197/original/file-20230105-12-ydlxj6.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=1078&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Ernest Codman.</span>
<span class="attribution"><a class="source" href="https://collections.nlm.nih.gov/catalog/nlm:nlmuid-101412473-img">National Library of Medicine</a></span>
</figcaption>
</figure>
<p>On the face of it, Codman’s decision to focus on bone cancer was baffling. It was neither a pressing nor a common concern for doctors across the US. But the disease’s relative rarity was one reason he chose it. Codman felt the amount of data received from his nationwide request would not be overwhelming for his small team of researchers to analyse.</p>
<p>Perhaps more importantly, he knew that studying bone cancer would raise the ire of far fewer of his colleagues than a more common disease might. In a clinical atmosphere in which expertise was understood as a combination of long experience with a dash of intuition – the physician’s “art” – Codman’s touting of data as a better way to obtain knowledge about a disease and its treatment was already being met with vociferous opposition.</p>
<p>It didn’t help that he tended to be inflammatory and provocative in the pursuit of his data-driven goals. At a medical meeting in Boston in 1915, he launched a surprise attack on his fellow practitioners. In the middle of this staid affair, Codman unveiled an <a href="https://protomag.com/policy/the-codman-affair/">8ft cartoon</a> lampooning his colleagues for their apathy toward healthcare reform and, as he saw it, their wilful ignorance of the limitations of the profession. As one (former) friend put it in the event’s aftermath, Codman’s only hope was that people would take the “charitable” view and consider him not an enemy of the profession but merely “mentally deranged”.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/503228/original/file-20230105-24-nw1pdg.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="Satirical cartoon titled The Back Bay Golden Goose Ostrich of an ostrich with head in the sand laying eggs being caught by group of men." src="https://images.theconversation.com/files/503228/original/file-20230105-24-nw1pdg.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/503228/original/file-20230105-24-nw1pdg.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=284&fit=crop&dpr=1 600w, https://images.theconversation.com/files/503228/original/file-20230105-24-nw1pdg.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=284&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/503228/original/file-20230105-24-nw1pdg.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=284&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/503228/original/file-20230105-24-nw1pdg.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=357&fit=crop&dpr=1 754w, https://images.theconversation.com/files/503228/original/file-20230105-24-nw1pdg.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=357&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/503228/original/file-20230105-24-nw1pdg.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=357&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Codman’s 8ft cartoon lampooned medical practices in the early 20th century.</span>
<span class="attribution"><a class="source" href="https://ia800807.us.archive.org/1/items/b29812161/b29812161.pdf">From The Shoulder by E.A. Codman</a></span>
</figcaption>
</figure>
<p>Undeterred, Codman continued this pugnacious approach to his pioneering work. In a 1922 letter to the prestigious Boston Medical and Surgical Journal, he complained that the surgeons of Massachusetts had been particularly unhelpful to his registry. He explained that he had – politely – asked the 5,494 physicians in the state to “drop him a postal stating whether or not he knew of a case” so that Codman could acquire “the best statistics ever obtained on the frequency of the disease”. To his chagrin, he had received only 19 responses in nearly two years. Needling the journal’s editors and readers simultaneously, he asked:</p>
<blockquote>
<p>Is this because your Journal is not read? … [Or] because of the indifference of the medical profession as to whether the frequency of bone sarcoma is known or not?</p>
</blockquote>
<p>Codman proposed a questionnaire that would allow the journal to see whether the problem was its lack of readership, or his colleagues’ “inertia, procrastination, disapproval, opposition or disinterest”. A subsequent editorial in response to Codman’s proposal was surprisingly magnanimous:</p>
<blockquote>
<p>Whether we will it or not, we are obliged to be irritated, amused or instructed, according to our temperaments, by Dr Codman. Our advice is to be instructed.</p>
</blockquote>
<h2>An end to elitism?</h2>
<p>Despite the establishment’s resistance, submissions to Codman’s registry began to grow such that by 1924, he had enough material to make preliminary comments about bone cancer. For one thing, he had succeeded in standardising the much-contested matter of the proper nomenclature for the disease. This, he exulted, was so significant that it should be likened to the “rising of the sun”.</p>
<figure class="align-right zoomable">
<a href="https://images.theconversation.com/files/503198/original/file-20230105-26-vbjshz.png?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="Hand-written data diary" src="https://images.theconversation.com/files/503198/original/file-20230105-26-vbjshz.png?ixlib=rb-1.1.0&q=45&auto=format&w=237&fit=clip" srcset="https://images.theconversation.com/files/503198/original/file-20230105-26-vbjshz.png?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=892&fit=crop&dpr=1 600w, https://images.theconversation.com/files/503198/original/file-20230105-26-vbjshz.png?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=892&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/503198/original/file-20230105-26-vbjshz.png?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=892&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/503198/original/file-20230105-26-vbjshz.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=1121&fit=crop&dpr=1 754w, https://images.theconversation.com/files/503198/original/file-20230105-26-vbjshz.png?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=1121&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/503198/original/file-20230105-26-vbjshz.png?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=1121&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Codman made this chart of his own life in data.</span>
<span class="attribution"><a class="source" href="https://ia800807.us.archive.org/1/items/b29812161/b29812161.pdf">From The Shoulder by E.A.Codman</a></span>
</figcaption>
</figure>
<p>The registry also offered up many pieces of “impersonal proof”, as Codman called his data-driven findings, of the rightness of certain theories that individual physicians had promoted. Claims, for example, that combined treatments of “surgery, mixed toxins and radium” were more effective than treatments that relied on any of these alone were borne out by the data.</p>
<p>The registry, as Codman’s colleague <a href="https://en.wikipedia.org/wiki/Joseph_Colt_Bloodgood">Joseph Colt Bloodgood</a> <a href="https://www.nejm.org/doi/full/10.1056/NEJM192911142012003">put it</a>, “excited great interest” among practitioners, and not just because it had “influenced the entire medical world to pay more attention to bone tumours”. More importantly, it provided a new model for how to do medical work. Another admiring colleague responded to Bloodgood: </p>
<blockquote>
<p>The work of the registry [is] one of the outstanding American contributions to surgical pathology. As a method of study, it shows the necessity of very wide experience before a surgeon is capable of handling intelligently cases of this disease … [It] is impossible for any single individual to claim finality of this sort.</p>
</blockquote>
<p>This emphasis on “very wide experience” over the experience of “any single individual” points to another critical reason to prefer data, according to Codman. His goal in changing the method by which medical knowledge was made was not just to get better results. By seeking to undo the image of medicine as an “art” that depended on the wisdom of a select group of preternaturally talented individuals, Codman also threatened to undo the class-ridden reality that underlay this public veneer.</p>
<p>As the efficiency engineer Frank Gilbreth implied in a 1913 article in <a href="https://babel.hathitrust.org/cgi/pt?id=mdp.39015038046010&view=1up&seq=428&q1=Gilbreth">the American Magazine</a>, if it was true that medicine required no specific intrinsic gifts (monetary or otherwise), then absolutely anybody – whatever their class, race or background – could do it, including “bricklayers, shovellers and dock-wallopers” who were currently shut out of such “high-brow” occupations.</p>
<p>Codman was even more pointed. If data was used to evaluate the outcomes of his physician colleagues, he insisted, it would show that the quality of doctors and hospitals was generally poor. He <a href="https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2758959/">sniped</a> that they excelled chiefly in “making dying men think they are getting better, concealing the gravity of serious diseases, and exaggerating the importance of minor illnesses to suit the occasion”.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/503240/original/file-20230105-18-glcfyu.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="Postcard of large, neoclassical, stone building." src="https://images.theconversation.com/files/503240/original/file-20230105-18-glcfyu.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/503240/original/file-20230105-18-glcfyu.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=376&fit=crop&dpr=1 600w, https://images.theconversation.com/files/503240/original/file-20230105-18-glcfyu.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=376&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/503240/original/file-20230105-18-glcfyu.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=376&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/503240/original/file-20230105-18-glcfyu.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=473&fit=crop&dpr=1 754w, https://images.theconversation.com/files/503240/original/file-20230105-18-glcfyu.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=473&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/503240/original/file-20230105-18-glcfyu.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=473&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Codman admitted his own social advantages in joining Harvard Medical School.</span>
<span class="attribution"><a class="source" href="https://upload.wikimedia.org/wikipedia/commons/f/fe/Harvard_Medical_School%2C_Boston%2C_Mass_%28NYPL_b12647398-74267%29.tiff">Detroit Publishing Company/Wikimedia</a></span>
</figcaption>
</figure>
<p>“Nepotism, pull and politics” were the order of the day in medicine, Codman wrote in one of his most scathing takedowns of his colleagues at the Massachusetts General Hospital. Yet he made himself the centrepiece of this critique, conceding that his <a href="https://www.shoulderdoc.co.uk/article/907">entrance to Harvard Medical School</a> had come on the back of “friends and relatives among the well-to-do”. The only difference, he suggested, was that he was willing to own up to it, and to subject himself and his work to the scrutiny of data.</p>
<h2>Data’s unflattering view of medicine</h2>
<p>Codman was not the only person having a come-to-Jesus moment with data over this period. In the 1920s, the American social science researchers <a href="https://www.britannica.com/biography/Robert-Lynd-and-Helen-Lynd">Robert and Helen Lynd</a> collected data in the small US town of Muncie, Indiana, as a way of creating a picture of the <a href="https://www.c-span.org/video/?197089-1/the-averaged-american">“averaged American”</a>.</p>
<p>By the 1930s, the similarly-minded <a href="http://www.massobs.org.uk/about/history-of-mo">Mass Observation project</a> took off in Britain, intending to collect data about everyday life so as to create an “anthropology of ourselves”. Crucially, both reflected the thinking that also drove Codman: that the right way to know something – a people, a disease – was to produce what seemed a suitably representative average. And this meant the amalgamation of often quite diverse and wide-ranging characteristics and their compression into a single, standard, efficient unit.</p>
<p>The turn from describing representative averages to learning from these averages is probably best articulated in the work of pollsters, whose door-to-door interrogations were aimed at helping a nation to know itself by statistics. In 1948, prompted by their failure to correctly predict the outcome of the US presidential election – one of the <a href="https://www.latimes.com/archives/la-xpm-1998-nov-01-mn-38174-story.html">most famous psephological errors</a> in the nation’s history – pollsters such as George Gallup and Elmo Roper began to rethink their analytic methods, moving away from quota sampling and towards random sampling.</p>
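The methodological shift the pollsters made can be sketched in a few lines of Python. Everything below is invented for illustration – the population, the groups, the quotas and the preferences are hypothetical – but it shows the structural difference: a quota sample fixes how many people of each type are interviewed while leaving the choice of individuals to the interviewer, whereas a simple random sample gives every member of the population an equal chance of selection.

```python
import random

random.seed(42)

# Toy population: each person belongs to a group and holds a preference.
# Groups, sizes and preference rates are invented for illustration only.
population = [
    {"group": "urban" if i < 6_000 else "rural",
     "prefers_a": random.random() < (0.55 if i < 6_000 else 0.40)}
    for i in range(10_000)
]

def quota_sample(pop, quotas):
    """Fill a fixed quota per group. Which individuals fill the quota is
    left to the interviewer (modelled here as 'first people encountered'),
    which is exactly where unmeasured selection bias can creep in."""
    sample, counts = [], {g: 0 for g in quotas}
    for person in pop:
        g = person["group"]
        if counts[g] < quotas[g]:
            sample.append(person)
            counts[g] += 1
    return sample

def simple_random_sample(pop, n):
    """Every member of the population has an equal chance of selection."""
    return random.sample(pop, n)

quota = quota_sample(population, {"urban": 500, "rural": 500})
simple = simple_random_sample(population, 1_000)
print(len(quota), len(simple))
```

The 1948 failure is usually attributed to that interviewer discretion: within each quota cell, interviewers systematically reached some kinds of voters more easily than others, a bias that random selection removes by design.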
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/503126/original/file-20230104-130036-m60jcp.jpeg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="Satirical cartoon of Harry Truman looking at poll results showing he will lose election while his opponent says 'What's the use of going through with the election?'" src="https://images.theconversation.com/files/503126/original/file-20230104-130036-m60jcp.jpeg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/503126/original/file-20230104-130036-m60jcp.jpeg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=575&fit=crop&dpr=1 600w, https://images.theconversation.com/files/503126/original/file-20230104-130036-m60jcp.jpeg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=575&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/503126/original/file-20230104-130036-m60jcp.jpeg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=575&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/503126/original/file-20230104-130036-m60jcp.jpeg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=723&fit=crop&dpr=1 754w, https://images.theconversation.com/files/503126/original/file-20230104-130036-m60jcp.jpeg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=723&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/503126/original/file-20230104-130036-m60jcp.jpeg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=723&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">The 1948 election was one of the most famous psephological errors in US history.</span>
<span class="attribution"><a class="source" href="https://upload.wikimedia.org/wikipedia/commons/0/06/Truman-Dewey-polls-1948.jpg">Clifford K. Berryman/Wikimedia</a></span>
</figcaption>
</figure>
<p>At the same time, thanks primarily to its <a href="https://mitpress.mit.edu/9780262550284/the-closed-world/">military applications</a>, the science of computing began to gather pace. And the growing fascination with knowing the world via data combined with the unparalleled ability of computers to crunch it appeared a match made in heaven.</p>
<p>In a late-in-life <a href="https://archive.org/details/b29812161/page/n7/mode/2up">preface</a> to his 1934 data-driven magnum opus on the anatomy of the shoulder, Codman had comforted himself with the thought that he was a man ahead of his time. And indeed, just a few years after his death in 1940, statistical analysis began to pick up steam in medicine.</p>
<p>Over the next two decades, figures such as <a href="https://en.wikipedia.org/wiki/Ronald_Fisher">Sir Ronald Fisher</a>, the geneticist and statistician remembered for suggesting randomisation as an antidote to bias, and his English compatriot <a href="https://en.wikipedia.org/wiki/Austin_Bradford_Hill">Sir Austin Bradford Hill</a>, who demonstrated the connection between smoking and lung cancer, also pushed forward the integration of statistical analysis into medicine. </p>
<figure class="align-right ">
<img alt="Man's face" src="https://images.theconversation.com/files/503455/original/file-20230106-15-6gv3oj.png?ixlib=rb-1.1.0&q=45&auto=format&w=237&fit=clip" srcset="https://images.theconversation.com/files/503455/original/file-20230106-15-6gv3oj.png?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=661&fit=crop&dpr=1 600w, https://images.theconversation.com/files/503455/original/file-20230106-15-6gv3oj.png?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=661&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/503455/original/file-20230106-15-6gv3oj.png?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=661&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/503455/original/file-20230106-15-6gv3oj.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=830&fit=crop&dpr=1 754w, https://images.theconversation.com/files/503455/original/file-20230106-15-6gv3oj.png?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=830&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/503455/original/file-20230106-15-6gv3oj.png?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=830&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">Archie Cochrane.</span>
<span class="attribution"><a class="source" href="https://community.cochrane.org/archie-cochrane-name-behind-cochrane">Cardiff University Library/Cochrane Archive</a></span>
</figcaption>
</figure>
<p>However, it would take many more years for word to leak out that, by data’s measure, both the methodologies of medical research and much of medicine itself were ineffective. In a movement led in part by the outspoken Scottish epidemiologist <a href="https://en.wikipedia.org/wiki/Archie_Cochrane">Archie Cochrane</a>, this unflattering statistical view of medicine finally saw the light of day in the 1960s and 70s.</p>
<p>Cochrane went so far as to say that medicine was based on “a level of guesswork” so great that any return to health after a medical intervention was more a “tribute to the sheer survival power of the minds and bodies” of patients than anything else. Aghast at the revelations embedded in Cochrane’s 1972 book, <a href="https://www.nuffieldtrust.org.uk/research/effectiveness-and-efficiency-random-reflections-on-health-services">Random Reflections on Health Services</a>, the Guardian journalist Ann Shearer <a href="https://www.proquest.com/docview/185551386/4D45D85AC6E94604PQ/1?accountid=11862">wrote</a>:</p>
<blockquote>
<p>Isn’t it … more than fair to ask what on Earth we – and more particularly, the medical They – have been doing all these years to let the health machine develop with such a lack of quality control?</p>
</blockquote>
<p>The answer dates back to Codman’s bone cancer registry half a century earlier: the medical establishment on both sides of the Atlantic had been resisting, with all its might, the scrutiny that data would bring.</p>
<h2>Computers finally acquire medical currency</h2>
<p>Despite becoming increasingly ubiquitous in the 1970s and 80s, computers had still only haltingly joined the medical mainstream. Though a <a href="https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2793587/">smattering of AI applications</a> began to appear in healthcare in the 1970s, it was only in the 1990s that computers really started to acquire some medical currency.</p>
<p>In a page borrowed straight from Codman’s time, the pioneering American biomedical informatician <a href="https://en.wikipedia.org/wiki/Edward_H._Shortliffe">Edward Shortliffe</a> noted in 1993 that the <a href="https://pubmed.ncbi.nlm.nih.gov/8358494/">future of AI in medicine</a> depended on the realisation that “the practice of medicine is inherently an information-management task”.</p>
<p>In the US, the Institute of Medicine and the <a href="https://www.loc.gov/item/lcwaN0016849/">President’s Information Technology Advisory Council</a> released <a href="https://www.ncbi.nlm.nih.gov/books/NBK222268/">reports</a> highlighting the failures of medicine to fully embrace information technology. By 2004, a newly appointed national coordinator for health information technology was <a href="https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2793587/">charged with</a> the herculean task of establishing an electronic medical record for all Americans by 2014. </p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/503222/original/file-20230105-26-ii9kzs.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="Man operating early computer" src="https://images.theconversation.com/files/503222/original/file-20230105-26-ii9kzs.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/503222/original/file-20230105-26-ii9kzs.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=436&fit=crop&dpr=1 600w, https://images.theconversation.com/files/503222/original/file-20230105-26-ii9kzs.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=436&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/503222/original/file-20230105-26-ii9kzs.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=436&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/503222/original/file-20230105-26-ii9kzs.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=548&fit=crop&dpr=1 754w, https://images.theconversation.com/files/503222/original/file-20230105-26-ii9kzs.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=548&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/503222/original/file-20230105-26-ii9kzs.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=548&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">An IBM System 360 computer in 1969.</span>
<span class="attribution"><a class="source" href="https://upload.wikimedia.org/wikipedia/commons/2/20/1969._IBM_System_360_computer._Automated_Data_Processing_Center._USDA_South_Building%2C_Washington%2C_DC._%2834271384522%29.jpg">USDA Forest Service via Wikimedia Commons</a></span>
</figcaption>
</figure>
<p>This explosion of interest in bringing computers into healthcare made it an enticing and potentially lucrative area for investment. So it is no surprise that IBM celebrated Watson’s winning turn on Jeopardy! in 2011 by putting it to work on an <a href="https://www.nytimes.com/2021/07/16/technology/what-happened-ibm-watson.html">oncology-focused programme</a> with multiple US-based clinical partners selected on the basis of their access to medical data.</p>
<p>The idea was laudable. Watson would do what <a href="https://theconversation.com/uk/topics/machine-learning-algorithms-103181">machine learning algorithms</a> do best: mining the massive amounts of data these institutions had at their disposal, searching for patterns that would help to improve treatment. But the complexity of cancer and the frustratingly unique responses of patients to it, yoked together by data systems that were sometimes incomplete and sometimes incompatible with each other or with machine learning’s methods more generally, limited Watson’s ability to be useful.</p>
<p>One sorry example was Watson’s <a href="https://www.mdanderson.org/publications/annual-report/annual-report-2013/the-oncology-expert-advisor.html">Oncology Expert Advisor</a>, a collaboration with the MD Anderson Cancer Center in Houston, Texas. This had begun its life as a “bedside diagnostic tool” that pored over patient records, scientific literature and doctors’ notes in order to make real-time treatment recommendations. Unfortunately, Watson couldn’t “read” the doctors’ notes. While good at mining the scientific literature, it couldn’t apply these large-scale discussions to the specifics of the individuals in front of it. By 2017, the project had been <a href="https://www.forbes.com/sites/matthewherper/2017/02/19/md-anderson-benches-ibm-watson-in-setback-for-artificial-intelligence-in-medicine/?sh=202871377485">shelved</a>.</p>
<p>Elsewhere, at New York City’s famed Memorial Sloan Kettering Cancer Center, clinicians found a more elaborate – and infinitely more problematic – way forward. Rather than relying on the retrospective data that is machine learning’s usual fodder, clinicians invented <a href="https://gizmodo.com/ibm-watson-reportedly-recommended-cancer-treatments-tha-1827868882">new “synthetic” cases</a> that were, by virtue of having been invented, infinitely less messy and more complete than any real data could be.</p>
<p>The project re-litigated the “data v expertise” debate of Codman’s time – once more in Codman’s favour – since this invented data had built into it the specifics of cancer treatment as understood by a small group of clinicians at a single hospital. Bias, in other words, was programmed directly in, and those engaged in training the system knew it.</p>
<p>Viewing historical patient data as too narrow, they rationalised that replacing this with data that reflected their own collective experience, intuition and judgment could build into Watson For Oncology the latest and greatest treatments. Of course, this didn’t work any better in the early 21st century than it had in the early 20th. </p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/503225/original/file-20230105-26-18ivrq.png?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="Room-size black box behind glass lit with purple lights" src="https://images.theconversation.com/files/503225/original/file-20230105-26-18ivrq.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/503225/original/file-20230105-26-18ivrq.png?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=402&fit=crop&dpr=1 600w, https://images.theconversation.com/files/503225/original/file-20230105-26-18ivrq.png?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=402&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/503225/original/file-20230105-26-18ivrq.png?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=402&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/503225/original/file-20230105-26-18ivrq.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=505&fit=crop&dpr=1 754w, https://images.theconversation.com/files/503225/original/file-20230105-26-18ivrq.png?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=505&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/503225/original/file-20230105-26-18ivrq.png?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=505&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">An early prototype of IBM Watson in 2011.</span>
<span class="attribution"><a class="source" href="https://upload.wikimedia.org/wikipedia/commons/2/22/IBM_Watson.PNG">Clockready/Wikimedia</a>, <a class="license" href="http://creativecommons.org/licenses/by-sa/4.0/">CC BY-SA</a></span>
</figcaption>
</figure>
<p>Furthermore, while these clinicians sidestepped the problem of real data’s impenetrable messiness, treatment options available at a wealthy hospital in Manhattan were far removed from those available in the other localities that Watson was meant to serve. The contrast was perhaps starkest when Watson was introduced to <a href="https://www.statnews.com/2016/08/19/ibm-watson-cancer-asia/">other parts of the world</a>, only to find the treatment regimens it recommended either didn’t exist or were not in keeping with the local and national infrastructures governing how healthcare was done there.</p>
<p>Even in the US, the consensus, as one unnamed physician in Florida reported back to IBM, was that Watson was a <a href="https://gizmodo.com/ibm-watson-reportedly-recommended-cancer-treatments-tha-1827868882">“piece of shit”</a>. Most of the time, it either told clinicians what they already knew or offered up advice that was incompatible with local conditions or the specifics of a patient’s illness. At best, it offered up a snapshot of the views of a select few clinicians at a moment in time, now reified as “facts” that ought to apply uniformly and everywhere they went.</p>
<p>Many of the elegies written to mark Watson’s selling-off in 2022, after it had failed to make good on its promise in healthcare, attributed its downfall to the same kind of overpromise and under-delivery that has spelled the end for many health technology start-ups.</p>
<p>Some maintained that the scaling-up of Watson from gameshow savant to oncological wunderkind might have been successful with more time. Perhaps. But in 2011, time was of the essence. To capitalise on the goodwill toward Watson and IBM that Jeopardy! had created, to be the trailblazer into the lucrative but technologically backward world of healthcare, had meant striking first and fast.</p>
<p>Watson’s high-profile failure highlights an overlooked barrier to modern, data-driven healthcare. In its encounters with real, human patients, Watson stirred up the same anxieties that Codman had encountered – difficult questions about what it is exactly that medicine produces: care, and the human touch that comes with it; or cure, and the information management tasks that play a critical role here?</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/ai-can-excel-at-medical-diagnosis-but-the-harder-task-is-to-win-hearts-and-minds-first-63782">AI can excel at medical diagnosis, but the harder task is to win hearts and minds first</a>
</strong>
</em>
</p>
<hr>
<p>A <a href="https://jamanetwork.com/journals/jamanetworkopen/fullarticle/2791851?resultClick=1">2019 study</a> of US patient perspectives of AI’s role in healthcare gave these concerns some statistical shape. Though some felt optimistic about AI’s potential to improve healthcare, a vast majority gave voice to fundamental misgivings about relinquishing medicine to machine learning algorithms that could not explain the logic they employed to reach their diagnoses. Surely the absence of a physician’s judgment would increase the risk of misdiagnosis?</p>
<p>The persistence of this worry has quite often resulted in caveating the work of machine learning with reassurances that humans are still in charge. In a 2020 <a href="https://news.microsoft.com/en-gb/2020/12/09/a-microsoft-ai-tool-is-helping-to-speed-up-cancer-treatment-and-addenbrookes-will-be-the-first-hospital-in-the-world-to-use-it/">report</a> on the InnerEye project, for example, which used retrospective data to identify tumours on patient scans, Yvonne Rimmer, a clinical oncologist at Addenbrooke’s Hospital in Cambridge, addressed this concern:</p>
<blockquote>
<p>It’s important for patients to know that the AI is helping me in my professional role. It’s not replacing me in the process. I double-check everything the AI does, and can change it if I need to.</p>
</blockquote>
<h2>Data’s uncertain role in the future of healthcare</h2>
<p>Today, whether a doctor gives you your diagnosis or you get it from a computer, that diagnosis is not primarily based on the intuition, judgment or experience of either doctor or patient. It’s driven by data that has made our cultures of mainstream care relatively more uniform and of a higher standard. Just as Codman foresaw, the introduction of data in medicine has also forced a greater degree of transparency, both in terms of methodologies and effectiveness.</p>
<p>However, the more important – and potentially intractable – problem with this modern approach to health is its lack of representation. As the Sloan Kettering dalliance with Watson began to show, datasets are not the “impersonal proofs” that Codman took them to be.</p>
<p>Even under less egregiously subjective conditions, data undeniably replicates and concretises the biases of society itself. As MIT computer scientist Marzyeh Ghassemi explains, data offers the “<a href="https://news.mit.edu/2022/marzyeh-ghassemi-explores-downside-machine-learning-health-care-0201">sheen of objectivity</a>” while replicating the ethnic, racial, gender and age biases of institutionalised medicine. Thus the tools, tests and techniques that are based on this data are also not impartial. </p>
<p>Ghassemi highlights the inaccuracy of pulse oximeters, often calibrated on light-skinned individuals, for those with darker skin. Others might note the outcry over the <a href="https://www.thelancet.com/journals/lancet/article/PIIS0140-6736(19)30510-0/fulltext">gender bias in cardiology</a>, spelled out especially in higher mortality rates for women who have heart attacks. </p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/C22JlzHlLJQ?wmode=transparent&start=8" frameborder="0" allowfullscreen=""></iframe>
<figcaption><span class="caption">The landmark human genome announcement in 2000.</span></figcaption>
</figure>
<p>The list goes on and on. Remember the human genome project, that big data triumph which has, according to the US National Institutes of Health <a href="https://www.genome.gov/human-genome-project">website</a>, “accelerated the study of human biology and improved the practice of medicine”? It almost exclusively drew upon genetic studies of white Europeans. According to <a href="https://precisionmedicine.ucsf.edu/%E2%80%9Cwicked-problem%E2%80%9D-racism-and-race-precision-medicine">Esteban Burchard</a> at the University of California, San Francisco: </p>
<blockquote>
<p>96% of genetic studies have been done on people with European origin, even though Europeans make up less than 12% of the world’s population … The human genome project should have been called the European genome project.</p>
</blockquote>
<p>A lack of representative data has implications for big data projects across the board – not least for <a href="https://www.fda.gov/medical-devices/in-vitro-diagnostics/precision-medicine">precision medicine</a>, which is widely touted as the antidote to the problems of impersonal, algorithm-driven healthcare.</p>
<p>Precision or “personalised” medicine seeks to address one of the essential perceived drawbacks of data-based medicine by locating finer-grained commonalities between smaller and smaller subsets of the population. By focusing on data at a genetic and cellular level, it may yet counter the criticism that the data-driven approach of recent decades is too blunt and insensitive a tool, such that “even the most frequently prescribed drugs for the most common conditions have very limited efficacy”, according to computational biologist <a href="https://www.jstor.org/stable/26601761#metadata_info_tab_contents">Chloe-Agathe Azencott</a>. </p>
<p>But personalised medicine still feeds on the same depersonalised data as medicine more generally, so it too is handicapped by data’s biases. And even if it could step beyond the problems of biased data – <a href="https://theconversation.com/extent-of-institutional-racism-in-british-universities-revealed-through-hidden-stories-118097">and, indeed, institutions</a> – the question of its role in the future of our everyday healthcare does not end there.</p>
<p>Even taking the utopian view that personalised medicine might make possible treatments as individual as we are, pharmaceutical companies won’t develop these treatments unless they are profitable. And that requires either prices so high that only the wealthiest of us could afford them, or a market so big that these companies can “<a href="https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2918032/">achieve the requisite return on investment</a>”. Truly individualised care is not really on the table.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/in-defence-of-imprecise-medicine-the-benefits-of-routine-treatments-for-common-diseases-128440">In defence of ‘imprecise’ medicine: the benefits of routine treatments for common diseases</a>
</strong>
</em>
</p>
<hr>
<p>If our goal in healthcare is to help more people by being more representative, more inclusive and more attentive to individual difference in the medical everyday of diagnosis and treatment, big data isn’t going to help us out. At least not as things currently stand.</p>
<p>For the story of healthcare data to date has pointed us squarely in the other direction, towards homogenisation and standardisation as medical goals. Laudable as the rationales for such a focus for medicine have been at different moments in our history, our expectations for the potential for machine learning to enable all of us to live longer, healthier lives remain something of a pipe dream. Right now it is still us humans, not our computer overlords, who hold most sway over our individual health outcomes.</p>
<p><em>Dr Caitjan Gainty is a winner of The Conversation’s <a href="https://theconversation.com/sir-paul-curran-award-for-academic-communication-2021-goes-to-caitjan-gainty-175125">Sir Paul Curran award for academic communication</a></em></p>
<hr>
<figure class="align-center ">
<img alt="" src="https://images.theconversation.com/files/313478/original/file-20200204-41481-1n8vco4.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/313478/original/file-20200204-41481-1n8vco4.png?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=112&fit=crop&dpr=1 600w, https://images.theconversation.com/files/313478/original/file-20200204-41481-1n8vco4.png?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=112&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/313478/original/file-20200204-41481-1n8vco4.png?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=112&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/313478/original/file-20200204-41481-1n8vco4.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=140&fit=crop&dpr=1 754w, https://images.theconversation.com/files/313478/original/file-20200204-41481-1n8vco4.png?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=140&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/313478/original/file-20200204-41481-1n8vco4.png?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=140&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption"></span>
</figcaption>
</figure>
<p><em>For you: more from our <a href="https://theconversation.com/uk/topics/insights-series-71218?utm_source=TCUK&utm_medium=linkback&utm_campaign=TCUKengagement&utm_content=InsightsUK">Insights series</a>:</em></p>
<ul>
<li><p><em><a href="https://theconversation.com/the-discovery-of-insulin-a-story-of-monstrous-egos-and-toxic-rivalries-172820?utm_source=TCUK&utm_medium=linkback&utm_campaign=TCUKengagement&utm_content=InsightsUK">The discovery of insulin: a story of monstrous egos and toxic rivalries
</a></em></p></li>
<li><p><em><a href="https://theconversation.com/james-mccune-smith-new-discovery-reveals-how-first-african-american-doctor-fought-for-womens-rights-in-glasgow-166233?utm_source=TCUK&utm_medium=linkback&utm_campaign=TCUKengagement&utm_content=InsightsUK">James McCune Smith: new discovery reveals how first African American doctor fought for women’s rights in Glasgow
</a></em></p></li>
<li><p><em><a href="https://theconversation.com/drugs-robots-and-the-pursuit-of-pleasure-why-experts-are-worried-about-ais-becoming-addicts-163376?utm_source=TCUK&utm_medium=linkback&utm_campaign=TCUKengagement&utm_content=InsightsUK">Drugs, robots and the pursuit of pleasure – why experts are worried about AIs becoming addicts
</a></em></p></li>
<li><p><em><a href="https://theconversation.com/the-inside-story-of-recovery-how-the-worlds-largest-covid-19-trial-transformed-treatment-and-what-it-could-do-for-other-diseases-184772?utm_source=TCUK&utm_medium=linkback&utm_campaign=TCUKengagement&utm_content=InsightsUK">The inside story of Recovery: how the world’s largest COVID-19 trial transformed treatment – and what it could do for other diseases
</a></em></p></li>
</ul>
<p><em>To hear about new Insights articles, join the hundreds of thousands of people who value The Conversation’s evidence-based news. <a href="https://theconversation.com/uk/newsletters/the-daily-newsletter-2?utm_source=TCUK&utm_medium=linkback&utm_campaign=TCUKengagement&utm_content=InsightsUK"><strong>Subscribe to our newsletter</strong></a>.</em></p>
<p class="fine-print"><em><span>Caitjan Gainty does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>
<p><em>To understand the potential for machine learning to transform medicine, we must go back to the controversial origins of data use in healthcare.</em></p>
<p><em>Caitjan Gainty, Senior Lecturer in the History of Science, Technology and Medicine, King's College London. Licensed as Creative Commons – attribution, no derivatives.</em></p>
<h2>How the US census led to the first data processing company 125 years ago – and kick-started America’s computing industry</h2>
<p><em>Published 2021-12-01.</em></p>
<figure><img src="https://images.theconversation.com/files/434761/original/file-20211130-27-1uk0tsc.jpg?ixlib=rb-1.1.0&rect=0%2C7%2C2394%2C2307&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">This electromechanical machine, used in the 1890 U.S. census, was the first automated data processing system.</span> <span class="attribution"><a class="source" href="https://www.flickr.com/photos/niallkennedy/6414584">Niall Kennedy/Flickr</a>, <a class="license" href="http://creativecommons.org/licenses/by-nc/4.0/">CC BY-NC</a></span></figcaption></figure>
<p>The U.S. Constitution requires that a population count be conducted at the beginning of every decade. </p>
<p>This census has always been charged with political significance, and continues to be. That’s clear from <a href="https://www.cnn.com/2020/09/09/politics/census-challenges/index.html">the controversies in the run-up to the 2020 census</a>. </p>
<p>But it’s less widely known how important the census has been in developing the U.S. computer industry, a story that I tell in my book, “<a href="https://jhupbooks.press.jhu.edu/title/republic-numbers">Republic of Numbers: Unexpected Stories of Mathematical Americans through History</a>.” That history includes the founding of the first automated data processing company, the <a href="https://www.smithsonianmag.com/smithsonian-institution/herman-holleriths-tabulating-machine-2504989/">Tabulating Machine Company</a>, 125 years ago on December 3, 1896.</p>
<h2>Population growth</h2>
<p>The only use of the census clearly specified in the Constitution is to allocate seats in the House of Representatives. More populous states get more seats. </p>
<p>A minimalist interpretation of the census mission would require reporting only the overall population of each state. But the census has never confined itself to this.</p>
<p>A complicating factor emerged right at the beginning, with the Constitution’s distinction between “free persons” and “<a href="http://www.digitalhistory.uh.edu/disp_textbook.cfm?smtID=3&psid=163">three-fifths of all other persons</a>.” This was the Founding Fathers’ infamous mealy-mouthed compromise between those states with a large number of enslaved persons and those states where relatively few lived. </p>
<p><a href="https://www.census.gov/history/www/through_the_decades/index_of_questions/1790_1.html">The first census</a>, in 1790, also made nonconstitutionally mandated distinctions by age and sex. In subsequent decades, many other personal attributes were probed as well: occupational status, marital status, educational status, place of birth and so on.</p>
<p>As the country grew, each census required greater effort than the last, not merely to collect the data but also to compile it into usable form. <a href="https://www.jstor.org/stable/24987147?seq=1#page_scan_tab_contents">The processing of the 1880 census</a> was not completed until 1888. </p>
<p>It had become a mind-numbingly boring, error-prone, clerical exercise of a magnitude rarely seen. </p>
<p>Since the population was evidently continuing to grow at a rapid pace, those with sufficient imagination could foresee that processing the 1890 census would be gruesome indeed without some change in procedure. </p>
<p><iframe id="1Onyi" class="tc-infographic-datawrapper" src="https://datawrapper.dwcdn.net/1Onyi/1/" height="400px" width="100%" style="border: none" frameborder="0"></iframe></p>
<h2>A new invention</h2>
<p>John Shaw Billings, a physician assigned to assist the Census Office with compiling health statistics, had closely observed the immense tabulation efforts required to deal with the raw data of 1880. He expressed his concerns to a young mechanical engineer assisting with the census, Herman Hollerith, a recent graduate of the Columbia School of Mines. </p>
<p>On Sept. 23, 1884, the U.S. Patent Office recorded a submission from the 24-year-old Hollerith, titled “<a href="https://pdfpiw.uspto.gov/.piw?PageNum=0&docid=00395782&IDKey=73D9506C5930%0D%0A&HomeUrl=http%3A%2F%2Fpatft.uspto.gov%2Fnetacgi%2Fnph-Parser%3FSect1%3DPTO1%2526Sect2%3DHITOFF%2526d%3DPALL%2526p%3D1%2526u%3D%25252Fnetahtml%25252FPTO%25252Fsrchnum.htm%2526r%3D1%2526f%3DG%2526l%3D50%2526s1%3D0395782.PN.%2526OS%3DPN%2F0395782%2526RS%3DPN%2F0395782">Art of Compiling Statistics</a>.”</p>
<figure class="align-right zoomable">
<a href="https://images.theconversation.com/files/434755/original/file-20211130-19-16o80z7.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="an old black and white photograph showing a man seated at a wooden desk-like machine looking at a bank of indicator dials" src="https://images.theconversation.com/files/434755/original/file-20211130-19-16o80z7.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=237&fit=clip" srcset="https://images.theconversation.com/files/434755/original/file-20211130-19-16o80z7.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=709&fit=crop&dpr=1 600w, https://images.theconversation.com/files/434755/original/file-20211130-19-16o80z7.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=709&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/434755/original/file-20211130-19-16o80z7.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=709&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/434755/original/file-20211130-19-16o80z7.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=891&fit=crop&dpr=1 754w, https://images.theconversation.com/files/434755/original/file-20211130-19-16o80z7.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=891&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/434755/original/file-20211130-19-16o80z7.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=891&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">The Hollerith electric tabulating machine in use in 1902.</span>
<span class="attribution"><a class="source" href="https://www.census.gov/history/img/1902_Hollerith_electric_tabulating_machine.jpg">United States Census Bureau</a></span>
</figcaption>
</figure>
<p>By progressively improving the ideas of this initial submission, Hollerith would decisively win an 1889 competition to improve the processing of the 1890 census. </p>
<p>The <a href="https://www.census.gov/history/www/innovations/technology/the_hollerith_tabulator.html">technological solutions</a> devised by Hollerith involved a suite of mechanical and electrical devices. The first crucial innovation was to translate data on handwritten census tally sheets to patterns of holes punched in cards. As Hollerith phrased it, in the 1889 revision of his patent application,</p>
<blockquote>
<p>“A hole is thus punched corresponding to person, then a hole according as person is a male or female, another recording whether native or foreign born, another either white or colored, &c.”</p>
</blockquote>
<p>This process required developing special machinery to ensure that holes could be punched with accuracy and efficiency. </p>
<p>Hollerith then devised a machine to “read” the card by probing it with pins, so that only where there was a hole would the pin pass through the card to make an electrical connection, advancing the appropriate counter. </p>
<p>For example, if a card for a white male farmer passed through the machine, a counter for each of these categories would be increased by one. The card was made sturdy enough to allow passage through the card reading machine multiple times, for counting different categories or checking results.</p>
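<p>The tallying logic Hollerith mechanised with pins and circuits maps neatly onto a few lines of modern code. The sketch below is a hypothetical simulation for illustration only (the category names and card encoding are invented, not IBM's original hole layout): each card is represented as the set of its punched positions, and each pass through the "reader" advances one counter per hole, just as the essay describes.</p>

```python
from collections import Counter

def tabulate(cards):
    """Pass every card through the 'reader': each punched hole
    closes a circuit and advances the matching counter by one."""
    counters = Counter()
    for card in cards:
        for hole in card:
            counters[hole] += 1
    return counters

# Hypothetical cards: each set holds the punched hole positions,
# one position per census category (sex, colour, occupation, ...).
cards = [
    {"male", "white", "farmer"},
    {"female", "white", "teacher"},
    {"male", "foreign-born", "farmer"},
]

totals = tabulate(cards)
print(totals["farmer"])  # -> 2
```

<p>As in the original machinery, the same deck can be run through again for a different cross-tabulation without re-punching the cards, which is what made Hollerith's sturdy, re-readable cards such an advance over hand tallying.</p>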
<p>The count proceeded so rapidly that the <a href="https://play.google.com/books/reader?id=MGZqAAAAMAAJ&pg=GBS.PA1">state-by-state numbers needed for congressional apportionment</a> were certified before the end of November 1890. </p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/292233/original/file-20190912-190021-1a7j7d1.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/292233/original/file-20190912-190021-1a7j7d1.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/292233/original/file-20190912-190021-1a7j7d1.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=470&fit=crop&dpr=1 600w, https://images.theconversation.com/files/292233/original/file-20190912-190021-1a7j7d1.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=470&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/292233/original/file-20190912-190021-1a7j7d1.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=470&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/292233/original/file-20190912-190021-1a7j7d1.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=590&fit=crop&dpr=1 754w, https://images.theconversation.com/files/292233/original/file-20190912-190021-1a7j7d1.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=590&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/292233/original/file-20190912-190021-1a7j7d1.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=590&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">This ‘mechanical punch card sorter’ was used for the 1950 census.</span>
<span class="attribution"><a class="source" href="https://www.census.gov/library/photos/machinists_technicians_5.html">U.S. Census Bureau</a></span>
</figcaption>
</figure>
<h2>Rise of the punched card</h2>
<p>After his census success, <a href="https://www.worldcat.org/title/computer-a-history-of-the-information-machine/oclc/1110437971?referer=br&ht=edition">Hollerith went into business selling this technology</a>. The company he founded, the Tabulating Machine Company, would, after he retired, become International Business Machines (IBM). IBM led the way in perfecting card technology for recording and tabulating large sets of data for a variety of purposes. </p>
<p>By the 1930s, many businesses were using cards for record-keeping procedures, such as payroll and inventory. Some data-intensive scientists, especially astronomers, were also finding the cards convenient. IBM had by then standardized an 80-column card and had developed keypunch machines that would change little for decades. </p>
<p>Card processing became one leg of the mighty computer industry that blossomed after World War II, and IBM for a time would be the third-largest corporation in the world. Card processing served as a scaffolding for vastly more rapid and space-efficient purely electronic computers that now dominate, with little evidence remaining of the old regime. </p>
<figure class="align-right zoomable">
<a href="https://images.theconversation.com/files/292229/original/file-20190912-190061-1af81fk.png?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/292229/original/file-20190912-190061-1af81fk.png?ixlib=rb-1.1.0&q=45&auto=format&w=237&fit=clip" srcset="https://images.theconversation.com/files/292229/original/file-20190912-190061-1af81fk.png?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=1334&fit=crop&dpr=1 600w, https://images.theconversation.com/files/292229/original/file-20190912-190061-1af81fk.png?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=1334&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/292229/original/file-20190912-190061-1af81fk.png?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=1334&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/292229/original/file-20190912-190061-1af81fk.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=1676&fit=crop&dpr=1 754w, https://images.theconversation.com/files/292229/original/file-20190912-190061-1af81fk.png?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=1676&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/292229/original/file-20190912-190061-1af81fk.png?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=1676&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">A blue IBM punch card.</span>
<span class="attribution"><a class="source" href="https://commons.wikimedia.org/wiki/File:Blue-punch-card-front.png">Gwern/Wikimedia Commons</a></span>
</figcaption>
</figure>
<p>Those who have grown up knowing computers only as easily portable devices, to be operated by the touch of a finger or even by voice, may be unfamiliar with the room-size computers of the 1950s and ’60s. On those machines, the primary means of loading data and instructions was to create a deck of cards at a keypunch machine and then feed that deck into a card reader. This persisted as the default procedure for many computers well into the 1980s. </p>
<p><a href="https://www.worldcat.org/title/grace-hopper-navy-admiral-and-computer-pioneer/oclc/19516564&referer=brief_results">As computer pioneer Grace Murray Hopper recalled</a> about her early career, “Back in those days, everybody was using punched cards, and they thought they’d use punched cards forever.”</p>
<p>Hopper had been an important member of the team that created the first commercially viable general-purpose computer, the Universal Automatic Computer, or UNIVAC, one of the card-reading behemoths. Appropriately enough, the first UNIVAC, delivered in 1951, went to the U.S. Census Bureau, still hungry to improve its data processing capabilities.</p>
<p>No, computer users would not use punched cards forever, but they used them through the Apollo Moon-landing program and the height of the Cold War. Hollerith would likely have recognized the direct descendants of his 1890s census machinery almost 100 years later. </p>
<p><em>This is an updated version of an article originally published on October 15, 2019.</em></p>
<p>[ <em>You’re smart and curious about the world. So are The Conversation’s authors and editors.</em> <a href="https://theconversation.com/us/newsletters?utm_source=TCUS&utm_medium=inline-link&utm_campaign=newsletter-text&utm_content=youresmart">You can read us daily by subscribing to our newsletter</a>. ]</p>
<p class="fine-print"><em><span>David Lindsay Roberts does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>As the country grew, each census required greater effort than the last. That problem led to the invention of the punched card – and the birth of an industry.David Lindsay Roberts, Adjunct Professor of Mathematics, Prince George's Community CollegeLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1435172020-07-28T21:31:03Z2020-07-28T21:31:03ZLawmakers keen to break up ‘big tech’ like Amazon and Google need to realize the world has changed a lot since Microsoft and Standard Oil<figure><img src="https://images.theconversation.com/files/350020/original/file-20200728-13-10an37w.jpg?ixlib=rb-1.1.0&rect=0%2C0%2C3250%2C2096&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">House lawmakers grilled these four CEOs on July 29.</span> <span class="attribution"><span class="source">AP Photo</span></span></figcaption></figure><p>Big tech is back in the spotlight. </p>
<p>The chief executives of Amazon, Apple, Facebook and Google <a href="https://www.nytimes.com/2020/07/28/technology/amazon-apple-facebook-google-antitrust-hearing.html?action=click&module=Top%20Stories&pgtype=Homepage">testified before Congress</a> on July 29 to defend their market dominance from accusations they’re stifling rivals. Lawmakers and regulators are increasingly talking about <a href="https://www.wsj.com/articles/justice-department-is-preparing-antitrust-investigation-of-google-11559348795">antitrust action</a> and possibly breaking the companies up into smaller pieces. </p>
<p>I study the effects of <a href="https://sites.tufts.edu/digitalplanet">digital technologies on lives</a> and livelihoods across 90 countries. I believe <a href="https://www.politico.com/2020-election/candidates-views-on-the-issues/technology/tech-competition-antitrust/">advocates</a> of breaking up big technology companies, as well as <a href="https://www.weforum.org/agenda/2019/07/these-are-some-of-the-best-quotes-about-technology-monopolies-in-2019/">opponents</a>, are both falling prey to some serious myths and misconceptions. </p>
<h2>Myth 1: Comparing Google with Standard Oil</h2>
<p>Arguments for and against antitrust action <a href="https://www.nytimes.com/1998/10/19/business/microsoft-trial-precedents-previous-antitrust-cases-leave-room-for-both-sides.html">often use earlier cases</a> as reference points.</p>
<p>The massive <a href="https://theconversation.com/for-tech-giants-a-cautionary-tale-from-19th-century-railroads-on-the-limits-of-competition-91616">19th-century monopoly Standard Oil</a>, for example, has been referred to as the “<a href="https://www.nytimes.com/2018/02/20/magazine/the-case-against-google.html">Google of its day</a>.” Others recall the 1990s <a href="https://www.nytimes.com/2018/05/18/opinion/microsoft-antitrust-case.html">antitrust case against Microsoft</a>. </p>
<p>Those cases may seem similar to today’s situation, but this era is different in one crucial way: the global technology marketplace. </p>
<p>Currently, there are two “big tech” clusters. One is in the U.S., dominated by <a href="https://theconversation.com/big-tech-isnt-one-big-monopoly-its-5-companies-all-in-different-businesses-92791">Google, Amazon, Facebook and Apple</a>. The other is in China, dominated by <a href="https://singularityhub.com/2018/08/17/baidu-alibaba-and-tencent-the-rise-of-chinas-tech-giants/">Baidu, Alibaba, Tencent, Huawei</a> and <a href="https://www.nytimes.com/2020/07/26/technology/tiktok-china-ban-model.html">TikTok</a>-maker ByteDance. </p>
<p>This global market is subject to very different political and policy pressures than regulators faced when dealing with Standard Oil and Microsoft. For example, the Chinese government <a href="https://www.scmp.com/tech/china-tech/article/2120913/china-recruits-baidu-alibaba-and-tencent-ai-national-team">has blocked most</a> of the U.S. companies from entering its market. And the <a href="https://www.bloomberg.com/news/articles/2018-06-27/alibaba-pulls-back-in-u-s-amid-trump-crackdown-on-chinese-investment">U.S. government has done likewise</a>, blacklisting some Chinese outfits over perceived national security threats while discouraging others.</p>
<p>Since the COVID-19 outbreak, the Chinese government <a href="https://www.amnesty.org/en/latest/news/2020/04/how-china-used-technology-to-combat-covid-19-and-tighten-its-grip-on-citizens/">has doubled down</a> on championing its own technology companies.</p>
<p>U.S. companies’ size and data accumulation capabilities give the country economic and political influence around the globe. If the U.S. technology giants are broken up, the result would be a vastly uneven global playing field, pitting fragmented U.S. companies against consolidated state-protected Chinese firms.</p>
<h2>Myth 2: Antitrust is about money</h2>
<p>There are two main views of antitrust action among legal experts. </p>
<p>One focuses on consumer welfare, which has been the prevailing approach federal lawyers have taken <a href="https://www.jstor.org/stable/724991">since the 1960s</a>. The other suggests that regulators should look at the <a href="https://www.yalelawjournal.org/note/amazons-antitrust-paradox">underlying structure of the market</a> and potential for <a href="https://www.pbwt.com/antitrust-update-blog/a-brief-overview-of-the-new-brandeis-school-of-antitrust-law">powerful players to exploit</a> their positions.</p>
<p>Those two sides seem to agree that price plays a key role. People who argue against breaking up the tech giants point out that Facebook and Google provide services that are <a href="https://slate.com/technology/2019/06/facebook-big-tech-antitrust-breakup-mistake.html">free to the consumer</a>, and that Amazon’s marketplace power drives its products’ costs down. On the other side, though, are those who say that <a href="https://www.yalelawjournal.org/note/amazons-antitrust-paradox">having low or no prices</a> is evidence that these companies are artificially lowering consumer costs to draw users into company-controlled systems that are <a href="https://techcrunch.com/2019/02/04/why-no-one-really-quits-google-or-facebook/">hard to leave</a>.</p>
<p>Both sides are missing the fact that the monetary price is less relevant as a measure of what users pay in the technology industry than it is in other types of business. Users <a href="https://theconversation.com/how-much-is-your-data-worth-to-tech-companies-lawmakers-want-to-tell-you-but-its-not-that-easy-to-calculate-119716">pay for digital products with their data</a>, rather than just money.</p>
<p>Regulators shouldn’t focus only on the monetary costs to the users. Rather, they should ask whether users are being asked for more data than is strictly necessary, whether information is being collected in <a href="https://theconversation.com/7-in-10-smartphone-apps-share-your-data-with-third-party-services-72404">intrusive or abusive ways</a> and whether customers are <a href="https://www.axios.com/mark-warner-josh-hawley-dashboard-tech-data-4ee575b4-1706-4d05-83ce-d62621e28ee1.html">getting good value in exchange for their data</a>.</p>
<h2>Myth 3: Trust-busting is all or nothing</h2>
<p>There aren’t just two ways for this debate to end, with either a breakup of one or more technology giants or simply leaving things as they are for the market to develop further. </p>
<p>In my view, the best outcome is right in the middle: the errant company is sued and compelled to make necessary changes but isn’t broken up, and the very fact that the government filed a lawsuit prompts other companies to change course. That is exactly what happened in past cases against the Bell System, IBM and Microsoft. </p>
<p>In the 1956 federal consent decree against the <a href="https://www.beatriceco.com/bti/porticus/bell/bellsystem_history.html">Bell System</a> telephone company, for example, which settled a seven-year legal saga, the company wasn’t split up. Instead, Bell was required to <a href="https://economics.yale.edu/sites/default/files/how_antitrust_enforcement.pdf">license all its patents royalty-free</a> to other businesses. This meant that some of the most profound technological innovations in history – including the <a href="https://www.computerhistory.org/atchm/who-invented-the-transistor/">transistor</a>, the <a href="https://www.popsci.com/article/science/invention-solar-cell/">solar cell</a> and the <a href="https://www.photonics.com/Articles/A_History_of_the_Laser_1960_-_2019/a42279">laser</a> – became widely available, yielding computers, solar power and other technologies that are crucial to the modern world. When the Bell System was <a href="https://www.cio.com/article/3267826/breaking-up-is-hard-to-do-why-the-bell-system-breakup-isn-t-a-model-for-tech.html">eventually broken up</a> in 1982, it did not do nearly as much to spread <a href="https://si.wsj.net/public/resources/images/BF-AV826_ATT_16U_20171120171814.jpg">innovation and competition</a> as the agreement that kept the Bells together a quarter-century earlier. </p>
<p>The antitrust action against IBM lasted 13 years and didn’t break up the company. However, as part of its efforts to avoid appearing to be a monopoly, IBM agreed to <a href="https://www.cnet.com/news/ibm-and-microsoft-antitrust-then-and-now/">separate pricing for its hardware and software products</a>, previously sold as an indivisible bundle. This created an opening for entrepreneurs Bill Gates and Paul Allen to found a new software-only company called Microsoft. The surge of software innovation that followed can trace its origins clearly to the IBM settlement. </p>
<p>Two decades later, Microsoft was itself the target of an antitrust action. In the resulting settlement, <a href="https://www.theverge.com/2018/9/6/17827042/antitrust-1990s-microsoft-google-aol-monopoly-lawsuits-history">Microsoft agreed to ensure its products were compatible</a> with competitors’ software. That made room in the emerging internet marketplace for web browsers, the predecessors of Apple’s Safari, Mozilla’s Firefox and Google Chrome.</p>
<p>Even Margrethe Vestager, the European Union’s top antitrust official and frequent tech-giant nemesis, has said that “<a href="https://www.nytimes.com/2018/02/20/magazine/the-case-against-google.html">antitrust prosecutions are part of how technology grows</a>.” But that doesn’t mean they all have to achieve their most extreme ends and be broken up. </p>
<h2>Myth 4: COVID-19 and the end of tech bashing</h2>
<p>The current pandemic has highlighted the value of the technological innovations of the big tech companies. </p>
<p>Americans are relying more than ever on the internet and online shopping and delivery, while <a href="https://www.google.com/covid19/mobility/">mobility data</a> has been critical in gauging social distancing behaviors and guiding policy. <a href="https://sites.tufts.edu/digitalplanet/covid-19-hotspots-rural-america/">Digital tools</a> for tracking coronavirus cases, deaths and social distancing behaviors in the smallest counties <a href="https://gisanddata.maps.arcgis.com/apps/opsdashboard/index.html#/bda7594740fd40299423467b48e9ecf6">have circulated widely</a>, and social media and smartphone videos were <a href="https://www.nytimes.com/2020/06/18/technology/social-media-protests.html">crucial</a> to the recent protests and calls for social justice. </p>
<p>Altogether, this has led to a <a href="https://www.coindesk.com/public-opinion-shifts-on-big-tech-and-privacy-during-pandemic">softening</a> of <a href="https://www.coindesk.com/public-opinion-shifts-on-big-tech-and-privacy-during-pandemic">public opinion toward big tech</a> and <a href="https://www.theverge.com/interface/2020/3/26/21193902/tech-backlash-covid-19-coronavirus-google-facebook-amazon">calls</a> for an <a href="https://www.brookings.edu/techstream/covid-and-the-future-of-techlash/">end to talk</a> of <a href="https://www.mercurynews.com/2020/04/09/opinion-covid-19-response-will-end-all-the-big-tech-bashing/">breaking them up</a>. </p>
<p>But the pandemic has also revealed numerous digital fault lines: differences in access by <a href="https://hbr.org/2020/04/which-countries-were-and-werent-ready-for-remote-work">country</a>, <a href="https://sites.tufts.edu/digitalplanet/how-digital-disparities-across-the-us-disproportionately-hurt-black-and-latinx-communities/">race</a> and <a href="https://sites.tufts.edu/digitalplanet/urban-rural-divide-in-the-us-during-covid-19/">region</a>; the ability of tech companies to <a href="https://www.theguardian.com/technology/2020/jul/27/california-investigations-amazon-workers-coronavirus">exploit labor</a>; and potential for new kinds of misuse of <a href="https://www.brookings.edu/techstream/the-dangers-of-tech-driven-solutions-to-covid-19/">data</a>. </p>
<p>Far from giving the technology industry a free pass, the pandemic is an opportunity to take a more balanced view. Yes, let’s celebrate Silicon Valley’s value, but let’s not turn a blind eye to the problems its companies create or worsen. </p>
<p>During the hearings, you’ll likely hear politicians accentuate the bad stuff, while the tech CEOs paint an overly rosy picture of themselves. Antitrust is complicated enough without these misconceptions clouding the judgment of either side. </p>
<p><em>This is an updated and expanded version of an <a href="https://theconversation.com/3-myths-to-bust-about-breaking-up-big-tech-119283">article originally published</a> on July 17, 2019.</em></p>
<p class="fine-print"><em><span>Bhaskar Chakravorti has founded and directs the Institute for Business in the Global Context at Fletcher/Tufts that has received funding from Mastercard, Microsoft, the Gates Foundation, the Rockefeller Foundation, Omidyar Network and the Onassis Foundation. He is a Non-Resident Senior Fellow at Brookings India and a Senior Advisor on Digital Inclusion at the Mastercard Center for Inclusive Growth.</span></em></p>As the government considers antitrust action against big US technology companies, a global business scholar identifies four myths that need busting first.Bhaskar Chakravorti, Dean of Global Business, The Fletcher School, Tufts UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/807072020-06-03T12:15:52Z2020-06-03T12:15:52ZPhysicists hunt for room-temperature superconductors that could revolutionize the world’s energy system<figure><img src="https://images.theconversation.com/files/331514/original/file-20200429-51495-1gds604.jpg?ixlib=rb-1.1.0&rect=44%2C0%2C6886%2C4285&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Wind turbines and solar panels in Southern California.</span> <span class="attribution"><a class="source" href="https://www.gettyimages.com/detail/photo/wind-turbines-and-solar-panels-royalty-free-image/1133686786?adppopup=true">4kodiak/E+ via Getty Images</a></span></figcaption></figure><p>Waste heat is all around you. On a small scale, if your phone or laptop feels warm, that’s because some of the energy powering the device is being transformed into unwanted heat. </p>
<p>On a larger scale, electric grids lose over <a href="https://www.eia.gov/tools/faqs/faq.php?id=105&t=3">5% of their energy</a> to transmission along high-power lines. In an electric power industry that generated more than <a href="https://www.statista.com/statistics/190548/revenue-of-the-us-electric-power-industry-since-1970/#statisticContainer">US$400 billion in 2018</a>, that’s a tremendous amount of wasted money. </p>
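<p>Using the article’s own figures – a 5% transmission loss against roughly $400 billion of 2018 industry revenue – a rough sketch of the dollars at stake (an illustrative upper bound, since revenue is not a direct measure of the cost of the lost energy):</p>

```python
# Rough scale of US transmission losses, using the figures cited above.
industry_revenue = 400e9   # US electric power industry revenue, 2018 ($)
loss_fraction = 0.05       # over 5% of energy lost in transmission

wasted = industry_revenue * loss_fraction
print(f"${wasted / 1e9:.0f} billion")  # on the order of $20 billion a year
```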
<p>Globally, the computer systems of Google, Microsoft, Facebook and others require enormous amounts of energy to power massive cloud servers and data centers. <a href="https://www.greenbiz.com/article/microsoft-facebook-take-plunge-novel-cloud-cooling-approaches">Even more energy</a>, to power water and air cooling systems, is required to offset the heat generated by these computers. </p>
<p>Where does this wasted heat come from? Electrons. These elementary particles of an atom move around and interact with other electrons and atoms. Because they have an electric charge, as they move through a material – like metals, which can easily conduct electricity – they scatter off other atoms and generate heat. </p>
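<p>Macroscopically, that scattering shows up as Joule heating: the power dissipated in a conductor scales with the square of the current, P = I²R. A toy illustration (both numbers below are hypothetical, chosen only to show the formula, not taken from the article):</p>

```python
# Joule heating in a resistive conductor: P = I^2 * R.
# Both values are hypothetical, for illustration only.
current = 1000.0      # amperes flowing through a transmission line
resistance = 0.5      # ohms of line resistance

power_lost = current ** 2 * resistance
print(f"{power_lost / 1e3:.0f} kW dissipated as heat")  # 500 kW
```

Because the loss grows with the square of the current, grids transmit at very high voltage to keep current – and hence heat – low.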
<p>Superconductors are materials that address this problem by allowing energy to flow efficiently through them without generating unwanted heat. They have great potential and many cost-effective applications. They operate magnetically levitated trains, generate magnetic fields for MRI machines and recently have been used to build <a href="https://www.scientificamerican.com/article/hands-on-with-googles-quantum-computer/">quantum computers</a>, though a fully operating one does not yet exist.</p>
<p>But superconductors have an essential problem when it comes to other practical applications: They operate at ultra-low temperatures. There are no room-temperature superconductors. That “room-temperature” part is what scientists have been working on for more than a century. Billions of dollars have funded research to solve this problem. Scientists around the world, <a href="https://scholar.google.com/citations?user=B_5QhO4AAAAJ&hl=en">including me</a>, are trying to understand the physics of superconductors and how they can be enhanced.</p>
<figure class="align-center ">
<img alt="" src="https://images.theconversation.com/files/331829/original/file-20200430-42942-1p6pah7.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/331829/original/file-20200430-42942-1p6pah7.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=397&fit=crop&dpr=1 600w, https://images.theconversation.com/files/331829/original/file-20200430-42942-1p6pah7.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=397&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/331829/original/file-20200430-42942-1p6pah7.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=397&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/331829/original/file-20200430-42942-1p6pah7.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=498&fit=crop&dpr=1 754w, https://images.theconversation.com/files/331829/original/file-20200430-42942-1p6pah7.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=498&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/331829/original/file-20200430-42942-1p6pah7.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=498&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">The U.S. power grid sheds heat at a loss of billions of dollars each year.</span>
<span class="attribution"><a class="source" href="https://www.gettyimages.com/detail/photo/multiple-power-lines-on-overhead-towers-royalty-free-image/639773156?adppopup=true">Douglas Sacha/Moment via Getty Images</a></span>
</figcaption>
</figure>
<h2>Understanding the mechanism</h2>
<p>A superconductor is a material, such as a pure metal like aluminum or lead, that when cooled to ultra-low temperatures allows electricity to move through it with absolutely zero resistance. How a material becomes a superconductor at the microscopic level is not a simple question. It took the scientific community more than four decades to understand the phenomenon and formulate a <a href="https://journals.aps.org/pr/abstract/10.1103/PhysRev.108.1175">successful theory of superconductivity</a>, published in 1957.</p>
<p>While physicists worked to understand the mechanism of superconductivity, chemists mixed different elements, such as the rare metal niobium and tin, following recipes guided by earlier experiments, to discover new and stronger superconductors. There was progress, but mostly incremental. </p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/331828/original/file-20200430-42929-ksbrj3.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/331828/original/file-20200430-42929-ksbrj3.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/331828/original/file-20200430-42929-ksbrj3.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/331828/original/file-20200430-42929-ksbrj3.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/331828/original/file-20200430-42929-ksbrj3.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/331828/original/file-20200430-42929-ksbrj3.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/331828/original/file-20200430-42929-ksbrj3.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/331828/original/file-20200430-42929-ksbrj3.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Copper rods.</span>
<span class="attribution"><a class="source" href="https://www.gettyimages.com/detail/news-photo/copper-rods-used-to-machine-parts-are-stacked-on-a-shelf-at-news-photo/1179652884?adppopup=true">Scott Olson/Getty Images News via Getty Images</a></span>
</figcaption>
</figure>
<p>Simply put, superconductivity occurs when two electrons bind together at low temperatures. They form the building block of superconductors, the Cooper pair. Elementary physics and chemistry tell us that electrons repel each other. This holds true even for a potential superconductor like lead when it is above a certain temperature. </p>
<p>When the temperature falls to a certain point, though, the electrons become more amenable to pairing up. Instead of one electron opposing the other, a kind of “glue” emerges to hold them together. </p>
<h2>Keeping matter cool</h2>
<p>Discovered in 1911, the first superconductor was mercury (Hg), the liquid metal in old-fashioned thermometers. In order for mercury to become a superconductor, it had to be cooled to ultra-low temperatures. <a href="https://www.nobelprize.org/prizes/physics/1913/onnes/biographical/">Kamerlingh Onnes</a> was the first scientist to figure out exactly how to do that – by compressing and liquefying helium gas. Once the helium becomes a liquid, its temperature drops to -452 degrees Fahrenheit. </p>
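<p>On the absolute scale physicists use, that -452 degrees Fahrenheit is only about 4 kelvin above absolute zero. A quick conversion using the standard formula (nothing here is specific to the article):</p>

```python
def fahrenheit_to_kelvin(f):
    """Convert Fahrenheit to Kelvin: K = (F - 32) * 5/9 + 273.15."""
    return (f - 32) * 5 / 9 + 273.15

# Boiling point of helium, as quoted in the text.
helium_bp = fahrenheit_to_kelvin(-452)
print(f"{helium_bp:.1f} K")  # about 4.3 K (commonly quoted as 4.2 K)
```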
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/333710/original/file-20200508-49556-1d4es8f.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/333710/original/file-20200508-49556-1d4es8f.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/333710/original/file-20200508-49556-1d4es8f.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=511&fit=crop&dpr=1 600w, https://images.theconversation.com/files/333710/original/file-20200508-49556-1d4es8f.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=511&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/333710/original/file-20200508-49556-1d4es8f.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=511&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/333710/original/file-20200508-49556-1d4es8f.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=642&fit=crop&dpr=1 754w, https://images.theconversation.com/files/333710/original/file-20200508-49556-1d4es8f.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=642&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/333710/original/file-20200508-49556-1d4es8f.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=642&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Quicksilver or mercury, the only metal that is liquid at room temperature.</span>
<span class="attribution"><a class="source" href="https://www.gettyimages.com/detail/photo/quicksilver-royalty-free-image/93292637?adppopup=true">videophoto/E+ via Getty Images</a></span>
</figcaption>
</figure>
<p>When Onnes was <a href="https://en.wikipedia.org/wiki/Heike_Kamerlingh_Onnes">experimenting with mercury</a>, he discovered that when it was placed inside a liquid helium container and cooled to very low temperatures, its electric resistance – the material’s opposition to electric current, measured in ohms – suddenly dropped to zero. Not close to zero, but zero exactly. No resistance, no heat waste.</p>
<p>This meant that an electric current, once generated, would flow continuously with nothing to stop it, at least in the lab. Many superconducting materials were soon discovered, but practical applications were another matter. </p>
<p>These superconductors shared one problem – they needed to be cooled down, and the energy required to cool a material to its superconducting state was too expensive for everyday applications. By the early 1980s, research on superconductors had nearly come to a standstill. </p>
<h2>A surprising discovery</h2>
<p>In a dramatic turn of events, a new kind of superconductor material was discovered in 1986 at <a href="https://www.zurich.ibm.com/">IBM in Zurich, Switzerland</a>. Within months, superconductors operating at less extreme temperatures were being synthesized globally. The material was a kind of ceramic. </p>
<p>These new ceramic superconductors were made of copper and oxygen mixed with other elements such as lanthanum, barium and bismuth. They contradicted everything physicists thought they knew about making superconductors. Researchers had been looking for very good conductors, yet these ceramics were nearly insulators, meaning that very little electrical current can flow through. Magnetism destroyed conventional superconductors, yet these were themselves magnets. </p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/x6OhDE_AYaw?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
</figure>
<p>Scientists were seeking materials where electrons were free to move around, yet in these materials, the electrons were locked in and confined. The scientists at IBM, <a href="https://www.nobelprize.org/prizes/physics/1987/summary/">Alex Müller and Georg Bednorz</a>, had actually discovered a new kind of superconductor. These were the high-temperature superconductors. And they played by their own rules. </p>
<h2>Elusive solutions</h2>
<p>Scientists now have a new challenge. Three decades after the high-temperature superconductors were discovered, we are still struggling to understand how they work at the microscopic level. Creative experiments are being conducted every day in universities and research labs around the world. </p>
<p>In my laboratory, we have built a microscope known as a <a href="https://www.youtube.com/watch?v=Yi6q1j_QjSc&feature=youtube">scanning tunneling microscope</a> that helps our research team “see” the electrons at the surface of the material. This allows us to understand how electrons bind and form superconductivity at an atomic scale. </p>
<p>We have come a long way in our research and now know that electrons also pair up in these high-temperature superconductors. There is great value and utility in answering how high-temperature superconductors work because that may be the route to room-temperature superconductivity. If we succeed in making a room-temperature superconductor, then we can address the billions of dollars that it costs in wasted heat to transmit energy from power plants to cities. </p>
<p>More remarkably, solar energy harvested in the vast empty deserts around the world could be stored and transmitted without any loss of energy, which could power cities and dramatically reduce greenhouse gas emissions. The potential is hard to imagine. Finding the glue for room-temperature superconductors is the next million-dollar question.</p>
<p>[<em>You’re too busy to read everything. We get it. That’s why we’ve got a weekly newsletter.</em> <a href="https://theconversation.com/us/newsletters/weekly-highlights-61?utm_source=TCUS&utm_medium=inline-link&utm_campaign=newsletter-text&utm_content=weeklybusy">Sign up for good Sunday reading.</a> ]</p>
<p class="fine-print"><em><span>Pegor Aynajian received funding from National Science Foundation (NSF) CAREER under Award No. DMR-1654482.</span></em></p>Generating energy usually means wasted heat. Superconductors let the electrons flow with zero waste – but so far scientists only know how to get them to work at ultra-low temperatures.Pegor Aynajian, Associate Professor of Physics, Binghamton University, State University of New YorkLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1300262020-01-24T13:38:15Z2020-01-24T13:38:15ZWinning worker hearts and minds is key to companies achieving their green goals<figure><img src="https://images.theconversation.com/files/311777/original/file-20200124-81346-1svitz5.jpg?ixlib=rb-1.1.0&rect=12%2C162%2C3999%2C2508&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Every employee needs to embrace a company's sustainability mission if it hopes to succeed. </span> <span class="attribution"><span class="source">pcruciatti/Shutterstock.com</span></span></figcaption></figure><p>A lot of companies say they care about the environment and commit to certain goals but don’t end up doing much about it.</p>
<p>A <a href="https://corpgov.law.harvard.edu/2018/12/03/state-of-integrated-and-sustainability-reporting-2018/">whopping 78% of companies</a> in the Standard & Poor’s 500, for example, issued sustainability reports in 2018 and <a href="https://www.cbd.int/financial/2017docs/un2017-scr100.pdf">66% of all U.S. companies committed</a> to the U.N.’s Sustainable Development Goals in 2017 through either explicit statements about the goals or implicit actions that support them. </p>
<p>But <a href="https://www.mckinsey.com/business-functions/sustainability/our-insights/sustainabilitys-deepening-imprint">relatively few</a> say they’ve actually embedded the sustainability goals into their business strategies or into departments such as communications, <a href="https://www.greenbiz.com/article/why-human-resources-your-sustainability-ally">human resources</a> and <a href="https://www.supplychaindigital.com/supply-chain/sustainability-supply-chain-key-cost-saving-and-efficiency-hsbc-report-finds">supply chain management</a>, corporate functions that can play a huge role in boosting sustainability. A <a href="https://www.bain.com/insights/achieving-breakthrough-results-in-sustainability">2016 report</a> found that just 2% of companies actually achieve their sustainability goals. </p>
<p>This matters because the <a href="https://thehill.com/policy/energy-environment/411444-trump-administration-doubles-down-on-climate-skepticism">Trump administration’s skepticism</a> about the threat of climate change has made it clear that the federal government won’t be leading the charge to avert the worst of it. That means it’s up to companies to pick up the baton. </p>
<p>So what separates companies that succeed at becoming more sustainable from those that fail? </p>
<p>I spoke with over 100 CEOs, managers and regular employees at 25 multinational companies that have committed to becoming more sustainable in hopes of answering that question. My research, published in my book “<a href="https://smallactionsbigdifference.net">Small Actions, Big Difference</a>,” suggests it begins with a shared purpose – and winning over employee hearts and minds. </p>
<h2>Elevating sustainability</h2>
<p>Part of the problem is that companies have made <a href="https://www.nytimes.com/1970/09/13/archives/article-15-no-title.html">profit maximization</a> their primary purpose for decades. That has made all other aims, such as sustainability, secondary and separate from a company’s main mission.</p>
<p>The result has been that companies tend to departmentalize sustainability efforts, depriving the company of the ingenuity and passion of the employee base in addressing one of the most complex problems of our times. Since sustainability permeates every aspect of a company’s operations – from procurement to disposal – it’s vital to embed a purpose promoting it in every department. </p>
<p>Perhaps not surprisingly, companies that want to achieve goals like reducing their carbon footprint or waste tend to do better when they make sustainability an integral part of their core purpose and communicate this commitment to the entire staff. That’s clear from a recent analysis I conducted of environmental, social and governance performance data on <a href="https://my.refinitiv.com/content/dam/myrefinitiv/products/9753/en/BrochuresandF/ASSET4assetmasterExecutiveFactsheet_a4.pdf">over 3,000 companies during a 10-year period</a>. I found that companies that said they have an “overarching vision” that combines financial goals with social and environmental ones tended to perform better on a measure of their impact on the environment. They tended to perform better financially as well.</p>
<p>Why? Because workers like a corporate purpose that trumps profit. Research has shown <a href="https://static1.squarespace.com/static/5c03c5ab96e76fd25bee4c32/t/5d6b98cfc6d43900015b3f74/1567332560551/Harvard+Business+Review+August+2019.pdf">articulating a purpose</a> beyond profit <a href="https://books.google.com/books?hl=en&lr=&id=Ls1HOwAi3lcC&oi=fnd&pg=PR1&dq=corporate+purpose+employee&ots=7pkhnQgVHX&sig=XN3ju8FMgr4e3yB8AY5KIYBL8u0#v=onepage&q=corporate%20purpose%20employee&f=false">resonates</a> with a company’s workforce. </p>
<p>For my book, I spent countless hours over a period of five years interviewing executives, middle managers and factory workers to try to understand what separates the companies making successful strides in reducing their environmental impact from those still struggling. </p>
<p>What I learned from the reams of interview data that I collected and transcribed is that the successful companies instill a sense of “sustainability ownership” in their employees so that everyone – from the mailroom to the boardroom – picks up the baton as part of his or her day job. And it all starts with defining a corporate purpose, the all-important question of “why do we do what we do,” something that three companies did particularly well. </p>
<figure class="align-center ">
<img alt="" src="https://images.theconversation.com/files/311694/original/file-20200123-162185-kfl1ll.jpg?ixlib=rb-1.1.0&rect=7%2C169%2C4913%2C3105&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/311694/original/file-20200123-162185-kfl1ll.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=399&fit=crop&dpr=1 600w, https://images.theconversation.com/files/311694/original/file-20200123-162185-kfl1ll.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=399&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/311694/original/file-20200123-162185-kfl1ll.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=399&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/311694/original/file-20200123-162185-kfl1ll.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=502&fit=crop&dpr=1 754w, https://images.theconversation.com/files/311694/original/file-20200123-162185-kfl1ll.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=502&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/311694/original/file-20200123-162185-kfl1ll.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=502&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">Employee buy-in has been vital to Unilever’s success in becoming more sustainable.</span>
<span class="attribution"><span class="source">John Thys/AFP via Getty Images</span></span>
</figcaption>
</figure>
<h2>Saving lives by selling soap</h2>
<p>When Paul Polman took over as CEO of consumer goods giant Unilever in 2009, he realized that the company had to transition to a new business model that accounted for the environmental and social realities of today’s world in order to survive.</p>
<p>Working with his leadership team, he came up with a <a href="https://www.unileverusa.com/sustainable-living/">new purpose for Unilever</a>: “to make sustainable living commonplace,” which was widely communicated to all workers using a variety of means from company YouTube channels to embedding “sustainability ambassadors” throughout the company. </p>
<p>The effort worked. Employees I spoke with clearly internalized and appreciated the new corporate purpose and culture. One factory worker in India put it succinctly: “I would rather save lives than sell soap.” </p>
<p>Corporate executives credit this integration with Unilever’s success in becoming a greener company. From 2008 to 2018, the <a href="https://www.unilever.com/Images/uslp-performance-summary-2018_tcm244-536032_en.pdf">company says it cut greenhouse gas emissions by 52%</a>, water use by 44% and waste by 97%. Like the financial results companies report, sustainability figures are audited and verified by accounting firms. </p>
<figure class="align-center ">
<img alt="" src="https://images.theconversation.com/files/311771/original/file-20200124-81403-11nkt14.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/311771/original/file-20200124-81403-11nkt14.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=451&fit=crop&dpr=1 600w, https://images.theconversation.com/files/311771/original/file-20200124-81403-11nkt14.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=451&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/311771/original/file-20200124-81403-11nkt14.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=451&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/311771/original/file-20200124-81403-11nkt14.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=567&fit=crop&dpr=1 754w, https://images.theconversation.com/files/311771/original/file-20200124-81403-11nkt14.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=567&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/311771/original/file-20200124-81403-11nkt14.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=567&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">Former British Prime Minister Tony Blair, left, and Marks & Spencer CEO Stuart Rose, far right, discuss how to fight global warming in 2007.</span>
<span class="attribution"><span class="source">Leon Neal/AFP via Getty Images</span></span>
</figcaption>
</figure>
<h2>No plan B</h2>
<p>British retailer Marks & Spencer began incorporating sustainability into its operations in 2007 <a href="https://global.marksandspencer.com/plan-a/">under the provocative name</a> “Plan A” – because “there is no Plan B for our one planet,” the company said. </p>
<p>From my interviews I learned the company uses a variety of strategies to ensure the mission is embraced by every employee, in part by appealing to the heart. For example, Marks and Spencer sponsors trips into local communities where their stores are located to show the impact of a changing climate and organizes informal after-work drinks at local pubs to discuss the crisis in a personalized way.</p>
<p>The efforts have paid off. For example, the company says <a href="https://corporate.marksandspencer.com/documents/reports-results-and-publications/plan-a-reports/plan-a-performance-update-2019">carbon emissions have plunged</a> 75% since 2007 and waste is down 35% since 2009, with none being sent to a landfill. </p>
<h2>Appeals to the head</h2>
<p>At IBM, environmental goal setting has long been an integral part of the company’s sustainability strategy. In contrast to Marks and Spencer’s appeal to an employee’s heart, however, IBM primarily appeals to the head – and the bottom line – as you might expect from an information technology company. </p>
<p>When discussing proposed goals with business units, IBM’s corporate staff identifies opportunities for cost savings as well as revenue growth. This helps employees gain an understanding of the environmental drivers and objectives behind each goal as well as the business and societal benefits. </p>
<p>For example, consolidating multiple computer servers that aren’t well utilized into one larger and more energy-efficient server not only reduces energy demand and greenhouse gas emissions but also frees up space, electricity and cooling capacity to support new business. </p>
<p>Seeing this kind of data motivates workers to innovate on the sustainability front because they’re able to see how it’ll lead to more money and environmental well-being for the company – and ultimately greater financial rewards and a sense of contributing to a greater cause as well.</p>
<p>These types of initiatives helped IBM <a href="https://www.ibm.com/ibm/environment/annual/IBMEnvReport_2018.pdf">reduce its carbon emissions</a> by a third from 2005 to 2018 and its nonhazardous waste by 68% since 2014. Almost 90% of the remaining waste gets recycled. </p>
<figure class="align-center ">
<img alt="" src="https://images.theconversation.com/files/311773/original/file-20200124-81411-14ep985.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/311773/original/file-20200124-81411-14ep985.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/311773/original/file-20200124-81411-14ep985.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/311773/original/file-20200124-81411-14ep985.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/311773/original/file-20200124-81411-14ep985.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/311773/original/file-20200124-81411-14ep985.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/311773/original/file-20200124-81411-14ep985.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">Amazon and other tech employees staged a walkout during the Global Climate Strike in 2019, another example of how many people prefer to work for companies that have a higher purpose.</span>
<span class="attribution"><span class="source">Karen Ducey/Getty Images</span></span>
</figcaption>
</figure>
<h2>A higher purpose</h2>
<p>My interviews, <a href="https://hbr.org/2018/11/9-out-of-10-people-are-willing-to-earn-less-money-to-do-more-meaningful-work">countless surveys</a> and <a href="https://link.springer.com/article/10.1007/s10869-010-9159-4">scholarly research</a> show employees – particularly younger ones – prefer to work at companies that serve a higher purpose. </p>
<p>The good news is that companies <a href="https://www.businessroundtable.org/business-roundtable-redefines-the-purpose-of-a-corporation-to-promote-an-economy-that-serves-all-americans">are increasingly vowing</a> to pursue more than just profits and incorporating issues like protecting the environment and their communities into their purposes. </p>
<p>But it’s not enough to make promises. And even companies that sincerely want to do better can find it hard if they don’t bring their employees along for the ride. Small actions can lead to a big difference. </p>
<p class="fine-print"><em><span>CB Bhattacharya has previously consulted for some of the organizations he researched for "Small Actions, Big Difference." He also founded the Sustainable Business Roundtable at ESMT Berlin and the Center for Sustainable Business at the University of Pittsburgh, both of which had some organizations researched for the book as members, including IBM.
</span></em></p>Companies that want to reduce their environmental footprint need to ensure their entire workforce feels a shared sense of purpose.CB Bhattacharya, Professor of Sustainability and Ethics, University of PittsburghLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1273092020-01-20T17:44:22Z2020-01-20T17:44:22ZGoogle claims to have invented a quantum computer, but IBM begs to differ<figure><img src="https://images.theconversation.com/files/304839/original/file-20191203-67002-chsvk1.jpg?ixlib=rb-1.1.0&rect=7%2C23%2C5114%2C3002&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Quantum computing would signify an immense shift in processing power, but how close are we to achieving it?</span> <span class="attribution"><span class="source">(Shutterstock)</span></span></figcaption></figure><p>On Oct. 23, 2019, Google published a paper in the journal <em>Nature</em> entitled “<a href="https://doi.org/10.5061/dryad.k6t1rj8">Quantum supremacy using a programmable superconducting processor</a>.” The tech giant announced its achievement of a much vaunted goal: quantum supremacy. </p>
<p>This perhaps ill-chosen term (<a href="https://www.quantamagazine.org/john-preskill-explains-quantum-supremacy-20191002/">coined by physicist John Preskill</a>) is meant to convey the huge speedup that processors based on quantum-mechanical systems are predicted to exhibit, relative to even the fastest classical computers.</p>
<p>Google’s benchmark was achieved on a new type of quantum processor, code-named Sycamore, consisting of 54 independently addressable superconducting junction devices (of which only 53 were working for the demonstration). </p>
<p>Each of these devices allows the storage of one bit of quantum information. In contrast to the bits in a classical computer, which can only store one of two states (0 or 1 in the digital language of binary code), a quantum bit, or qubit, can store information in a coherent superposition state which can be considered to contain fractional amounts of both 0 and 1. </p>
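<p>What “fractional amounts of both 0 and 1” means can be made concrete in a few lines of code. The sketch below is an illustration added here, not part of the original article: it models a single qubit as a pair of complex amplitudes whose squared magnitudes give the probabilities of reading out 0 or 1, assuming Python with NumPy.</p>

```python
import numpy as np

# A classical bit is exactly 0 or 1. A qubit is described by two complex
# amplitudes (alpha, beta) with |alpha|^2 + |beta|^2 = 1; measurement
# yields 0 with probability |alpha|^2 and 1 with probability |beta|^2.

def qubit(alpha, beta):
    """Return a normalized single-qubit state vector."""
    state = np.array([alpha, beta], dtype=complex)
    return state / np.linalg.norm(state)

# An equal superposition: "fractional amounts" of both 0 and 1.
plus = qubit(1, 1)
probs = np.abs(plus) ** 2
print(probs)  # [0.5 0.5] -- a 50/50 chance of reading 0 or 1
```

<p>Sycamore manipulates 53 such qubits jointly; the hard part is keeping them all coherent in hardware.</p>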
<p>Sycamore uses technology developed by the <a href="https://web.physics.ucsb.edu/%7Emartinisgroup/">superconductivity research group of physicist John Martinis at the University of California, Santa Barbara</a>. The entire Sycamore system must be kept at cryogenic temperatures using special helium dilution refrigeration technology. Because of the immense challenge involved in keeping such a large system near the absolute zero of temperature, it is a technological tour de force. </p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/306392/original/file-20191211-95125-r0n3lh.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/306392/original/file-20191211-95125-r0n3lh.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/306392/original/file-20191211-95125-r0n3lh.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/306392/original/file-20191211-95125-r0n3lh.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/306392/original/file-20191211-95125-r0n3lh.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/306392/original/file-20191211-95125-r0n3lh.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/306392/original/file-20191211-95125-r0n3lh.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/306392/original/file-20191211-95125-r0n3lh.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Researchers at Google have been working on a quantum computer, which would revolutionize the industry.</span>
<span class="attribution"><span class="source">(Shutterstock)</span></span>
</figcaption>
</figure>
<h2>Contentious findings</h2>
<p>The Google researchers demonstrated that the performance of their quantum processor in sampling the output of a pseudo-random quantum circuit was vastly better than a classical computer chip — like the kind in our laptops — could achieve. Just how vastly became a point of contention, and the story was not without intrigue. </p>
<p>An inadvertent leak of the Google group’s paper on the NASA Technical Reports Server (NTRS) occurred a month prior to publication, during the blackout period when <em>Nature</em> prohibits discussion by the authors regarding as-yet-unpublished papers. The lapse was momentary, but long enough that <a href="https://www.ft.com/content/b9bb4e54-dbc1-11e9-8f9b-77216ebe1f17"><em>The Financial Times</em></a>, <a href="https://www.theverge.com/2019/9/23/20879485/google-quantum-supremacy-qubits-nasa"><em>The Verge</em></a> and other outlets picked up the story. </p>
<p>A well-known quantum computing blog by computer scientist Scott Aaronson contained some <a href="https://www.scottaaronson.com/blog/?p=4317">oblique references to the leak</a>. The reason for this obliqueness became clear when the paper was finally published online and Aaronson could at last reveal himself to be one of the reviewers.</p>
<h2>Challenges to Google’s story</h2>
<p>The story had a further controversial twist when the Google group’s claims were immediately countered by IBM’s quantum computing group. IBM shared <a href="https://arxiv.org/abs/1910.09534">a preprint posted on the ArXiv</a> (an online repository for academic papers that have yet to go through peer review) and <a href="https://www.ibm.com/blogs/research/2019/10/on-quantum-supremacy/">a blog post dated Oct. 21, 2019</a> (note the date!). </p>
<p>While the Google group had claimed that a classical (super)computer would require 10,000 years to simulate the same 53-qubit random quantum circuit sampling task that their Sycamore processor could do in 200 seconds, the IBM researchers showed a method that could reduce the classical computation time to a mere matter of days. </p>
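<p>The arithmetic behind the disputed claims is simple to check. This back-of-the-envelope sketch is added for illustration; the 2.5-day figure is IBM’s estimate from its preprint, and the comparison assumes Python.</p>

```python
# Google: ~10,000 years of classical compute vs. 200 s on Sycamore.
# IBM's counter-estimate: roughly 2.5 days on the Summit supercomputer.
SECONDS_PER_YEAR = 365.25 * 24 * 3600

google_classical = 10_000 * SECONDS_PER_YEAR   # seconds
ibm_classical = 2.5 * 24 * 3600                # seconds
sycamore = 200                                 # seconds

print(f"Speedup implied by Google: {google_classical / sycamore:.2e}x")
print(f"Speedup implied by IBM:    {ibm_classical / sycamore:.0f}x")
```

<p>Either way the quantum processor comes out ahead; the dispute is over whether the gap is a factor of roughly a billion or roughly a thousand.</p>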
<p>However, the IBM classical computation would have to be carried out on the world’s fastest supercomputer — the IBM-developed Summit OLCF-4 at Oak Ridge National Laboratory in Tennessee — with clever use of secondary storage to achieve this benchmark.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/306395/original/file-20191211-95173-1tgmq2o.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/306395/original/file-20191211-95173-1tgmq2o.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/306395/original/file-20191211-95173-1tgmq2o.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=480&fit=crop&dpr=1 600w, https://images.theconversation.com/files/306395/original/file-20191211-95173-1tgmq2o.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=480&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/306395/original/file-20191211-95173-1tgmq2o.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=480&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/306395/original/file-20191211-95173-1tgmq2o.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=603&fit=crop&dpr=1 754w, https://images.theconversation.com/files/306395/original/file-20191211-95173-1tgmq2o.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=603&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/306395/original/file-20191211-95173-1tgmq2o.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=603&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">The Summit OLCF-4 supercomputer was developed by IBM for use at Oak Ridge National Laboratory.</span>
<span class="attribution"><a class="source" href="https://www.flickr.com/photos/olcf/42957291821/in/photolist-NsW4ML-25mPCpZ-JkN2vk-28rZmfr-YYYjk1-282ZTzq-271XTpf-271XZao-26JSfsB-25mPBPa-287nqxR-FENxmy-22HVvNY-227b4AU-XgBEPE-W6iPRi-XZZrnP-28rxs9o-XqcFKR-28rZmpK-H4EmiH-27ZDEwH-26JSngB-279g4ti-25moRES-28vVuuM">Carlos Jones/ORNL</a>, <a class="license" href="http://creativecommons.org/licenses/by/4.0/">CC BY</a></span>
</figcaption>
</figure>
<p>While of great interest to researchers like myself working on hardware technologies related to quantum information, and important in terms of establishing academic bragging rights, the IBM-versus-Google aspect of the story is probably less relevant to the general public interested in all things quantum. </p>
<p>For the average citizen, the mere fact that a 53-qubit device could beat the world’s fastest supercomputer (containing more than 10,000 multi-core processors) is undoubtedly impressive. Now we must try to imagine what may come next.</p>
<h2>Quantum futures</h2>
<p>The reality of quantum computing today is that very impressive strides have been made on the hardware front. A wide array of credible quantum computing hardware platforms now exist, including <a href="https://ionq.com">ion traps</a>, <a href="https://quantuminstitute.yale.edu/publications/what-makes-great-qubit-diamonds-and-ions-could-hold-answer">superconducting device arrays</a> similar to those in Google’s Sycamore system and <a href="https://www.aps.org/publications/apsnews/200805/diamond.cfm">isolated electrons trapped in NV-centres in diamond</a>. </p>
<p>These and other systems are all now in play, each with benefits and drawbacks. So far researchers and engineers have been making steady technological progress in developing these different hardware platforms for quantum computing.</p>
<p>What has lagged quite a bit behind are custom-designed algorithms (computer programs) designed to run on quantum computers and able to take full advantage of possible quantum speed-ups. While several notable quantum algorithms exist — <a href="https://www.scottaaronson.com/blog/?p=208">Shor’s algorithm for factorization</a>, for example, which has applications in cryptography, and <a href="https://www.cs.cmu.edu/%7Eodonnell/quantum15/lecture04.pdf">Grover’s algorithm</a>, which might prove useful in database search applications — the total set of quantum algorithms remains rather small. </p>
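<p>The flavor of Grover’s speed-up can be seen without any quantum hardware at all: for an unstructured search over N items, a classical scan needs on the order of N lookups, while Grover’s algorithm needs roughly (π/4)√N quantum queries. A minimal query-count comparison, added here for illustration in Python:</p>

```python
import math

def classical_queries(n_items):
    """Worst-case lookups for a classical unstructured search."""
    return n_items

def grover_queries(n_items):
    """Approximate optimal number of Grover iterations: (pi/4) * sqrt(N)."""
    return math.ceil((math.pi / 4) * math.sqrt(n_items))

for n in (10**6, 10**9, 10**12):
    print(f"N={n:>13}: classical ~{classical_queries(n):,}, "
          f"Grover ~{grover_queries(n):,}")
```

<p>This quadratic gap is real but modest next to the exponential advantage Shor’s algorithm promises for factoring.</p>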
<p>Much of the early interest (and funding) in quantum computing was spurred by the possibility of quantum-enabled advances in cryptography and code-breaking. A huge number of online interactions ranging from confidential communications to financial transactions require secure and encrypted messages, and modern cryptography relies on the difficulty of factoring large numbers to achieve this encryption. </p>
<p>Quantum computing could be very disruptive in this space, as Shor’s algorithm could make code-breaking much faster, while quantum-based encryption methods would allow detection of any eavesdroppers. </p>
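<p>Why factoring matters here: RSA-style encryption publishes a modulus n = p × q and relies on the classical difficulty of recovering p and q. The toy sketch below (illustrative only, using a textbook-sized modulus) shows the brute-force classical approach, whose running time grows exponentially in the number of digits of n; Shor’s algorithm would do the same job in polynomial time on a quantum computer.</p>

```python
def factor(n):
    """Return the smallest prime factor of n by trial division."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d
        d += 1
    return n  # n itself is prime

# 3233 = 53 * 61, a classic toy RSA modulus; real moduli have hundreds
# of digits, which is what defeats trial division in practice.
n = 3233
p = factor(n)
print(p, n // p)  # 53 61
```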
<p>The interest various agencies have in unbreakable codes for secure military and financial communications has been a major driver of research in quantum computing. It is worth noting that all these code-making and code-breaking applications of quantum computing ignore to some extent the fact that no system is perfectly secure; there will always be a backdoor, because there will always be a non-quantum human element that can be compromised.</p>
<h2>Quantum applications</h2>
<p>More appealing for the non-espionage and non-hacker communities — in other words, the rest of us — are the possible applications of quantum computation to solve very difficult problems that are effectively unsolvable using classical computers. </p>
<p>Ironically, many of these problems emerge when we try to use classical computers to solve quantum-mechanical problems, <a href="https://www.dwavesys.com/media-coverage/ieee-spectrum-vw-solves-quantum-chemistry-problems-d-wave-machine">such as quantum chemistry problems that could be relevant for drug design</a> and various challenges in condensed matter physics including a number related to high-temperature superconductivity. </p>
<p>So where are we in the wonderful and wild world of quantum computation? </p>
<p>In recent years, we have had many convincing demonstrations that qubits can be created, stored, manipulated and read using a number of futuristic-sounding quantum hardware platforms. But the algorithms lag. So while the prospect of quantum computing is fascinating, it will likely be a long time before we have quantum equivalents of the silicon chips that power our versatile modern computing devices. </p>
<p class="fine-print"><em><span>Michael Bradley receives funding from NSERC, for research on plasma processing techniques for new quantum materials, with applications in quantum information.</span></em></p>A paper published by researchers at Google claimed that they had achieved quantum supremacy, but leaks and counter-claims have created a stir.Michael Bradley, Professor of Physics & Engineering Physics, University of SaskatchewanLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1265032019-12-03T12:58:03Z2019-12-03T12:58:03ZA quantum computing future is unlikely, due to random hardware errors<figure><img src="https://images.theconversation.com/files/304539/original/file-20191130-156095-1m7msum.jpg?ixlib=rb-1.1.0&rect=603%2C11%2C3069%2C2121&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Will quantum computers ever reliably best classical computers?</span> <span class="attribution"><a class="source" href="https://www.shutterstock.com/image-illustration/3d-render-qubits-1015677376">Amin Van/Shutterstock.com</a></span></figcaption></figure><figure class="align-right zoomable">
<a href="https://images.theconversation.com/files/303588/original/file-20191125-74588-1s17qy1.png?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/303588/original/file-20191125-74588-1s17qy1.png?ixlib=rb-1.1.0&q=45&auto=format&w=237&fit=clip" srcset="https://images.theconversation.com/files/303588/original/file-20191125-74588-1s17qy1.png?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=801&fit=crop&dpr=1 600w, https://images.theconversation.com/files/303588/original/file-20191125-74588-1s17qy1.png?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=801&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/303588/original/file-20191125-74588-1s17qy1.png?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=801&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/303588/original/file-20191125-74588-1s17qy1.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=1007&fit=crop&dpr=1 754w, https://images.theconversation.com/files/303588/original/file-20191125-74588-1s17qy1.png?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=1007&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/303588/original/file-20191125-74588-1s17qy1.png?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=1007&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Artist’s rendition of the Google processor.</span>
<span class="attribution"><a class="source" href="https://ai.googleblog.com/2019/10/quantum-supremacy-using-programmable.html">Forest Stearns, Google AI Quantum Artist in Residence</a>, <a class="license" href="http://creativecommons.org/licenses/by-nd/4.0/">CC BY-ND</a></span>
</figcaption>
</figure>
<p><a href="https://www.nature.com/articles/s41586-019-1666-5">Google announced</a> this fall to much fanfare that it had demonstrated “quantum supremacy” – that is, it performed a specific quantum computation far faster than the best classical computers could achieve. IBM <a href="https://www.quantamagazine.org/google-and-ibm-clash-over-quantum-supremacy-claim-20191023/">promptly critiqued the claim</a>, saying that its own classical supercomputer could perform the computation at <a href="https://www.ibm.com/blogs/research/2019/10/on-quantum-supremacy/">nearly the same speed with far greater fidelity</a> and, therefore, the Google announcement should be taken “with a large dose of skepticism.”</p>
<p>This wasn’t the first time someone cast doubt on quantum computing. Last year, <a href="https://scholar.google.com/citations?user=EsHfKvUAAAAJ&hl=en&oi=sra">Michel Dyakonov</a>, a theoretical physicist at the University of Montpellier in France, offered a slew of technical reasons <a href="https://spectrum.ieee.org/computing/hardware/the-case-against-quantum-computing">why practical quantum supercomputers will never be built</a> in an article in IEEE Spectrum, the flagship journal of electrical and computer engineering.</p>
<p>So how can you make sense of what is going on?</p>
<p>As someone who has worked on <a href="https://arxiv.org/abs/quant-ph/0206144">quantum computing</a> for <a href="https://arxiv.org/abs/quant-ph/0503027">many years</a>, I believe that due to the inevitability of random errors in the hardware, useful quantum computers are unlikely to ever be built. </p>
<h2>What’s a quantum computer?</h2>
<p>To understand why, you need to understand how quantum computers work since they’re fundamentally different from classical computers.</p>
<p>A classical computer uses 0s and 1s to store data. These numbers could be voltages on different points in a circuit. But a quantum computer works on quantum bits, also known as qubits. You can picture them as waves that are associated with amplitude and phase.</p>
<p>Qubits have special properties: They can exist in superposition, where they are both 0 and 1 at the same time, and they may be entangled so they share physical properties even though they may be separated by large distances. It’s a behavior that does not exist in the world of classical physics. The <a href="https://en.wikipedia.org/wiki/Quantum_superposition">superposition vanishes when the experimenter interacts</a> with the quantum state. </p>
<p>Due to superposition, a quantum computer with 100 qubits can represent 2<sup>100</sup> solutions simultaneously. For certain problems, this exponential parallelism can be harnessed to create a tremendous speed advantage. Some <a href="https://www.technologyreview.com/s/613596/how-a-quantum-computer-could-break-2048-bit-rsa-encryption-in-8-hours/">code-breaking problems could be solved exponentially faster on a quantum machine</a>, for example.</p>
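<p>To get a feel for that exponential growth, here is a minimal Python sketch (an illustration, not from the article) of how much memory a classical computer would need just to store the state of an n-qubit system, assuming one 16-byte complex amplitude per basis state:</p>

```python
# A classical simulator must hold the full quantum state: an n-qubit
# register has 2**n complex amplitudes. Assuming each amplitude is a
# 16-byte complex number (two 64-bit floats):
def state_vector_bytes(n_qubits: int) -> int:
    return (2 ** n_qubits) * 16

print(state_vector_bytes(10))   # 16384 bytes -- trivial
print(state_vector_bytes(30))   # ~17 GB -- already a large machine
print(state_vector_bytes(100))  # astronomically large
```

<p>Each extra qubit doubles the storage required, which is why direct classical simulation becomes hopeless long before 100 qubits.</p>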
<p>There is another, narrower approach to quantum computing called <a href="https://en.wikipedia.org/wiki/Quantum_annealing">quantum annealing</a>, where qubits are used to speed up optimization problems. D-Wave Systems, based in Canada, has built optimization systems that use qubits for this purpose, but critics also claim that these systems <a href="https://www.theverge.com/2014/6/19/5824336/google-s-quantum-computer-just-flunked-its-first-big-test">are no better than classical computers</a>.</p>
<p>Regardless, companies and countries are investing massive amounts of money in quantum computing. China has developed a <a href="https://www.scmp.com/news/china/society/article/2110563/china-building-worlds-biggest-quantum-research-facility">new quantum research facility worth US$10 billion</a>, while the European Union has developed a €1 billion ($1.1 billion) <a href="https://ec.europa.eu/digital-single-market/en/news/european-commission-will-launch-eu1-billion-quantum-technologies-flagship">quantum master plan</a>. The United States’ <a href="https://www.technologyreview.com/f/612679/president-trump-has-signed-a-12-billon-law-to-boost-us-quantum-tech/">National Quantum Initiative Act</a> provides $1.2 billion to promote quantum information science over a five-year period.</p>
<p>Breaking encryption algorithms is a powerful motivating factor for many countries – if they could do it successfully, it would give them an enormous intelligence advantage. But these investments are also promoting fundamental research in physics. </p>
<p><a href="https://www.predictiveanalyticstoday.com/what-is-quantum-computing/">Many companies are pushing to build quantum computers</a>, including Intel and Microsoft in addition to Google and IBM. These companies are trying to build hardware that replicates the circuit model of classical computers. However, current experimental systems have fewer than 100 qubits. To achieve useful computational performance, you probably need machines with hundreds of thousands of qubits.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/303589/original/file-20191125-74588-j0a746.png?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/303589/original/file-20191125-74588-j0a746.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/303589/original/file-20191125-74588-j0a746.png?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=395&fit=crop&dpr=1 600w, https://images.theconversation.com/files/303589/original/file-20191125-74588-j0a746.png?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=395&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/303589/original/file-20191125-74588-j0a746.png?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=395&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/303589/original/file-20191125-74588-j0a746.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=496&fit=crop&dpr=1 754w, https://images.theconversation.com/files/303589/original/file-20191125-74588-j0a746.png?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=496&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/303589/original/file-20191125-74588-j0a746.png?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=496&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Google’s Sycamore processor has only 54 qubits.</span>
<span class="attribution"><a class="source" href="https://ai.googleblog.com/2019/10/quantum-supremacy-using-programmable.html">Erik Lucero, Research Scientist and Lead Production Quantum Hardware, Google</a>, <a class="license" href="http://creativecommons.org/licenses/by-nd/4.0/">CC BY-ND</a></span>
</figcaption>
</figure>
<h2>Noise and error correction</h2>
<p>The mathematics that underpin quantum algorithms is well established, but there are daunting engineering challenges that remain. </p>
<p>For computers to function properly, they must correct all small random errors. In a quantum computer, such errors arise from the non-ideal circuit elements and the interaction of the qubits with the environment around them. For these reasons, the qubits can lose coherence in a fraction of a second and, therefore, the computation must be completed in even less time. If random errors – which are inevitable in any physical system – are not corrected, the computer’s results will be worthless.</p>
<p>In classical computers, small amounts of noise are corrected by taking advantage of a concept known as thresholding, which works like the rounding of numbers. For example, in the transmission of integers where the error is known to be less than 0.5, a received value of 3.45 can be corrected to 3.</p>
<p>Further errors can be corrected by introducing redundancy. Thus, if 0 and 1 are transmitted as 000 and 111, then at most one bit-error during transmission can be corrected easily: A received 001 would be interpreted as 0, and a received 101 would be interpreted as 1.</p>
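<p>Both classical techniques are simple enough to demonstrate directly. The following Python sketch (an illustration, not from the article) implements thresholding by rounding and the three-bit repetition code decoded by majority vote:</p>

```python
# Thresholding: if the transmission error is known to be below 0.5,
# rounding to the nearest integer recovers the value that was sent.
def threshold(received: float) -> int:
    return round(received)

# Repetition code: 0 -> 000, 1 -> 111; decode by majority vote,
# which corrects any single flipped bit in the block.
def encode(bit: int) -> str:
    return str(bit) * 3

def decode(block: str) -> int:
    return 1 if block.count("1") >= 2 else 0

print(threshold(3.45))  # 3
print(decode("001"))    # 0 -- one bit flipped, corrected
print(decode("101"))    # 1
```

<p>As the article notes, neither trick carries over directly to qubits: an unknown quantum state cannot be copied, so quantum error correction needs a fundamentally different construction.</p>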
<p>Quantum error correction codes are a generalization of the classical ones, but there are crucial differences. For one, the unknown qubits cannot be copied to incorporate redundancy as an error correction technique. Furthermore, errors present within the incoming data before the error-correction coding is introduced cannot be corrected. </p>
<h2>Quantum cryptography</h2>
<p>While the problem of noise is a serious challenge in the implementation of quantum computers, it is less severe in quantum cryptography, which deals with single qubits that can remain isolated from the environment for a significant amount of time. Using quantum cryptography, two users can exchange the very large numbers known as keys, which secure data, without anyone being able to break the key exchange system. Such key exchange could help secure communications between satellites and naval ships. But the actual encryption algorithm used after the key is exchanged remains classical, and therefore the encryption is theoretically no stronger than classical methods.</p>
<p>Quantum cryptography is being commercially used in a limited sense for high-value banking transactions. But because the two parties must be authenticated using classical protocols, and since a chain is only as strong as its weakest link, it’s not that different from existing systems. Banks are still using a classical-based authentication process, which itself could be used to exchange keys without loss of overall security. </p>
<p>Quantum cryptography technology <a href="https://www.extremetech.com/extreme/287094-quantum-cryptography#disqus_thread">must shift its focus to quantum transmission of information</a> if it’s going to become significantly more secure than existing cryptography techniques. </p>
<h2>Commercial-scale quantum computing challenges</h2>
<p>While quantum cryptography holds some promise if the problems of quantum transmission can be solved, I doubt the same holds true for generalized quantum computing. Error-correction, which is fundamental to a multi-purpose computer, is such a significant challenge in quantum computers that I don’t believe they’ll ever be built at a commercial scale.</p>
<p class="fine-print"><em><span>Subhash Kak does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>Google claims quantum supremacy – IBM says not so fast. One researcher explains why he doesn’t see quantum computers outpacing classical computers any time soon … and maybe not ever.Subhash Kak, Regents Professor of Electrical and Computer Engineering, Oklahoma State UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1280282019-11-29T15:15:48Z2019-11-29T15:15:48ZBlockchain’s first revolutionary product could be online ID<figure><img src="https://images.theconversation.com/files/304449/original/file-20191129-95226-wwnfc6.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">No more fakes. </span> <span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/network-security-system-concept-fingerprint-inside-726504361">Black Salmon</a></span></figcaption></figure><p>In <a href="https://fortune.com/2019/11/20/paypal-ceo-dan-schulman-libra/">an interview</a>, PayPal’s chief executive, Dan Schulman, recently discussed the prospects for <a href="https://theconversation.com/is-blockchain-all-hype-a-financier-and-supply-chain-expert-discuss-106584">blockchain</a> – the encrypted, decentralised online ledger system that underpins Bitcoin and <a href="https://www.wired.com/story/guide-blockchain/">myriad</a> other cutting-edge projects. While talking about blockchain’s potential for improving how people make payments around the world, Schulman said: </p>
<blockquote>
<p>We think there’s a lot of promise to blockchain technology … but it really needs to do something that the traditional rails [of the international payments system] can’t do. Most people think that blockchain is about efficiency, but the system today is pretty efficient. There are middlemen sometimes in between, but the rails of it are pretty efficient. So we think a lot of the neat stuff that can happen on blockchain is around identity, for example.</p>
</blockquote>
<p>Schulman was referencing <a href="https://coincentral.com/blockchain-vs-paypal/">a debate</a> around payments that has been going on for a few years. Without getting too technical, the main benefits of blockchain payments are that they are not controlled by middlemen, so there are no fees to pay; and transactions can’t be hacked and changed once they are on the ledger. </p>
<p>But at the same time, they are not yet as quick at processing transactions as the traditional system – which as Schulman argues, is fast enough in any case. Regulation is also a major issue: people on either end of a blockchain payment are completely anonymous. This presents major issues for everything from money laundering to being able to reclaim payments if you accidentally credit the wrong address. </p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/304452/original/file-20191129-95207-oh3t7x.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/304452/original/file-20191129-95207-oh3t7x.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/304452/original/file-20191129-95207-oh3t7x.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/304452/original/file-20191129-95207-oh3t7x.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/304452/original/file-20191129-95207-oh3t7x.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/304452/original/file-20191129-95207-oh3t7x.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/304452/original/file-20191129-95207-oh3t7x.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/304452/original/file-20191129-95207-oh3t7x.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Dan Schulman: not desperate for blockchain payments.</span>
<span class="attribution"><a class="source" href="https://www.flickr.com/photos/fortunebrainstormtech/27672237934/in/photolist-gZF8LV-PYrbTE-JaijKW-2c35Reb-DSjNeR-PYrtFC-JYMwWD-2c7yDgc-Jak2WR-NmnKqV-JaisfA-Nmongx-29m5kod-2c7yC6X-JakgXM-29m6moC-JYNtfi-2aHLBM6-JWxy6d-Nmnqdg-NmnCwz-K3HjZb-JERt4h-29m5BeS-JajJqi-K6JxL2-JahurS-2aHLLzX-Jakdfr-2aHLqpM-JERZwS-JERHWo-JERyMG-PYs4aS-Jak9bV-JakhHK-K3GXBw-K6JtmR-JERX2m-JaifC7-K3Heq5-2c3642Q-K6J7oP-Jaieid-K3H23j-29m5MtE-JERwYb-K6JAWF-K3HnHN-21aVe4N">Fortune Brainstorm TECH</a>, <a class="license" href="http://creativecommons.org/licenses/by/4.0/">CC BY</a></span>
</figcaption>
</figure>
<p>In short, the technology behind blockchain-based payments is relatively straightforward, but the policy and regulation around them are much harder. For PayPal, which relies on the international banking system, there is still no contest. </p>
<h2>The real you</h2>
<p>So why was Schulman much more bullish about the prospects for blockchain around online identity? Interestingly, it relates to one of the technology’s weaknesses in payments: anonymity. Much of the <a href="https://www.britannica.com/topic/cybercrime/Identity-theft-and-invasion-of-privacy">cybercrime</a> that takes place results from the fact that we don’t know who we’re talking to. If we could all encrypt our online identities on blockchains so that we could completely trust who we are dealing with, it could overcome this problem. </p>
<p>This isn’t just theory: there are numerous interesting developments in the offing. Take the UK government, which like many countries is increasingly moving interactions with the public online. This includes benefits, taxation and other services such as passport and driving licence applications. Yet like all governments, it faces a major challenge from the fact that citizens lack a unique online identifier. This <a href="https://www.gov.uk/government/publications/cross-government-fraud-landscape-annual-report-2018">is helping</a> fraudsters to steal nearly £50 billion worth of government services every year in the UK. For this reason, the government is <a href="https://www.computing.co.uk/ctg/analysis/3068359/online-identity-with-the-failure-of-govuk-verify-should-britain-follow-the-nordic-model">currently exploring</a> blockchain as a potential solution. </p>
<figure class="align-right zoomable">
<a href="https://images.theconversation.com/files/304453/original/file-20191129-95250-tb6bao.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/304453/original/file-20191129-95250-tb6bao.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=237&fit=clip" srcset="https://images.theconversation.com/files/304453/original/file-20191129-95250-tb6bao.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=662&fit=crop&dpr=1 600w, https://images.theconversation.com/files/304453/original/file-20191129-95250-tb6bao.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=662&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/304453/original/file-20191129-95250-tb6bao.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=662&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/304453/original/file-20191129-95250-tb6bao.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=832&fit=crop&dpr=1 754w, https://images.theconversation.com/files/304453/original/file-20191129-95250-tb6bao.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=832&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/304453/original/file-20191129-95250-tb6bao.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=832&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">You know it makes sense.</span>
<span class="attribution"><a class="source" href="https://www.flickr.com/photos/fortunebrainstormtech/27672237934/in/photolist-gZF8LV-PYrbTE-JaijKW-2c35Reb-DSjNeR-PYrtFC-JYMwWD-2c7yDgc-Jak2WR-NmnKqV-JaisfA-Nmongx-29m5kod-2c7yC6X-JakgXM-29m6moC-JYNtfi-2aHLBM6-JWxy6d-Nmnqdg-NmnCwz-K3HjZb-JERt4h-29m5BeS-JajJqi-K6JxL2-JahurS-2aHLLzX-Jakdfr-2aHLqpM-JERZwS-JERHWo-JERyMG-PYs4aS-Jak9bV-JakhHK-K3GXBw-K6JtmR-JERX2m-JaifC7-K3Heq5-2c3642Q-K6J7oP-Jaieid-K3H23j-29m5MtE-JERwYb-K6JAWF-K3HnHN-21aVe4N">Marcia Cirillo</a>, <a class="license" href="http://creativecommons.org/licenses/by/4.0/">CC BY</a></span>
</figcaption>
</figure>
<p>Another very interesting application, currently <a href="https://www.computerworld.com/article/3427866/how-is-the-uk-government-using-blockchain.html">being explored</a> by the UK government as part of the same project, is <a href="https://theconversation.com/blockchain-voting-is-vulnerable-to-hackers-software-glitches-and-bad-id-photos-among-other-problems-122521">voting systems</a>. Blockchains could provide a way of guaranteeing that every person queuing up to vote is who they say they are – or allow people to vote online, potentially with big benefits to turnout. </p>
<p>Australia <a href="https://micky.com.au/blockchain-technology-used-by-south-australian-government-to-conduct-election/">is looking</a> at this idea, too, having recently trialled allowing voters in South Australia to identify themselves via blockchain technology for a minor council election. Meanwhile, New South Wales <a href="https://www.computing.co.uk/ctg/news/3033006/state-owned-quantum-computer-break-blockchains-three-years/page/4">conducted</a> a trial earlier in 2019 using a blockchain for identity verification based on people’s driving licences. It enabled participants to prove their identity and age for things like buying alcohol and gambling without the need for a physical ID card. </p>
<p>The trials concluded that the technology is not yet mature enough, and that ID-verification can still be achieved better with current technologies. Nonetheless, New South Wales is <a href="https://www.maxcryptonews.com/digital-drivers-licenses-rolling-out-statewide-in-nsw">going ahead</a> with a new large-scale trial of a driver licence registration system based on a blockchain platform at the end of 2019, covering some 140,000 licence holders. </p>
<p>Another country to watch closely is China. It sees blockchain products as a good mechanism for regulation. Given the huge market that China represents, if the government decides to sanction (or indeed mandate) the use of online identity underpinned by blockchain it will give a phenomenal boost to the technology. It also raises the worrying prospect of the government being able to monitor all the purchasing transactions of its citizens, were it to introduce a system in which it knew the identities of everyone on the blockchain. </p>
<h2>The business opportunity</h2>
<p>Numerous blue-chip tech companies are vying to be part of this blockchain ID future – pointing to the potential for a very big market. IBM is trialling the “alpha” version of <a href="https://docs.info.verify-creds.com/">IBM Verify Credentials</a>, an ID system underpinned by blockchain technology aimed at both businesses and governments. If typical software industry production cycles for large projects are anything to go by, a market-ready product could be two to three years away. Microsoft, too, is <a href="https://www.ledgerinsights.com/microsoft-azure-blockchain-based-digital-identity-ion/">developing</a> blockchain identity systems based on its well established and very successful Azure cloud computing platform. </p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/304454/original/file-20191129-95272-i2gcw.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/304454/original/file-20191129-95272-i2gcw.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/304454/original/file-20191129-95272-i2gcw.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=450&fit=crop&dpr=1 600w, https://images.theconversation.com/files/304454/original/file-20191129-95272-i2gcw.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=450&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/304454/original/file-20191129-95272-i2gcw.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=450&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/304454/original/file-20191129-95272-i2gcw.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=566&fit=crop&dpr=1 754w, https://images.theconversation.com/files/304454/original/file-20191129-95272-i2gcw.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=566&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/304454/original/file-20191129-95272-i2gcw.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=566&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Identity accepted.</span>
<span class="attribution"><a class="source" href="https://www.shutterstock.com/image-illustration/online-identity-digital-illustration-144753490">Andrea Danti</a></span>
</figcaption>
</figure>
<p>Either of these systems could potentially provide the kinds of services that governments are starting to envisage. As for corporate clients, one application could be online retail, which <a href="https://www.scmagazineuk.com/cyber-crime-endangering-future-online-retail-businesses/article/1491110">attracts a lot</a> of cybercrime. These systems might provide a reliable means of identifying a buyer to confirm they are authorised to pay. Presumably this would be underpinned by something akin to transaction fees per payment. Given the hundreds of billions of transactions that take place every day, the potential revenues could clearly be enormous. </p>
<p>This is why companies like PayPal cannot stay on the sidelines. Blockchain hype or not, you don’t want to be <a href="https://www.theatlantic.com/business/archive/2012/01/what-killed-kodak/250925/">the Kodak</a> of this industry, if or when encrypted ID becomes the next killer online technology. </p>
<p>Until governments of major economies around the world are prepared to significantly overhaul financial systems to accept cryptocurrencies as mainstream payment methods, the less glamorous business of taking an ID-verification cut from billions of payment transactions might be the safer bet. And since this is very similar to the fee-taking business model that made PayPal a giant, it will be very interesting to see what Schulman does next.</p>
<p class="fine-print"><em><span>Lucian Tipi does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>PayPal CEO Dan Schulman sees much more potential in blockchain ID than payments at present. He’s absolutely right.Lucian Tipi, Deputy Head of Department (Finance, Accounting and Business Systems), Sheffield Hallam UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1258272019-10-25T13:24:50Z2019-10-25T13:24:50ZGoogle and IBM are at odds over ‘quantum supremacy’ – an expert explains what it really means<figure><img src="https://images.theconversation.com/files/298733/original/file-20191025-173579-1gulgyn.png?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">
</span> <span class="attribution"><span class="source">Google</span></span></figcaption></figure><p>Google claims to have demonstrated something called “quantum supremacy”, in a paper <a href="https://www.nature.com/articles/s41586-019-1666-5">published in Nature</a>. This would mark a significant milestone in the development of a new type of computer, known as a quantum computer, that could perform very difficult calculations much faster than anything possible on conventional “classical” computers. But a team from IBM has published their own <a href="https://arxiv.org/abs/1910.09534">paper</a> claiming they can reproduce the Google result on existing supercomputers.</p>
<p>While Google vs. IBM might make a good story, this disagreement between two of the world’s biggest technology companies rather distracts from the real scientific and technological progress behind both teams’ work. Despite how it might sound, even exceeding the milestone of quantum supremacy wouldn’t mean quantum computers are about to take over. On the other hand, just approaching this point has exciting implications for the future of the technology.</p>
<p>Quantum computers represent a new way of processing data. Instead of storing information in “bits” as 0s or 1s like classical computers do, quantum computers use the principles of quantum physics to store information in “qubits” that can also be in states of 0 and 1 at the same time. In theory, this allows quantum machines to perform certain calculations much faster than classical computers.</p>
<p>In 2012, Professor John Preskill coined the term <a href="https://arxiv.org/abs/1203.5813">“quantum supremacy”</a> to describe the point when quantum computers become powerful enough to perform some computational task that classical computers could not do in a reasonable timeframe. He deliberately didn’t require the computational task to be a useful one. Quantum supremacy is an intermediate milestone, something to aim for long before it is possible to build large, general-purpose quantum computers.</p>
<p>In its quantum supremacy experiment, the Google team performed one of these difficult but useless calculations, sampling the output of randomly chosen quantum circuits. They also carried out computations on the world’s most powerful classical supercomputer, Summit, and estimated it would take 10,000 years to fully simulate this quantum computation. IBM’s team have proposed a method for simulating Google’s experiment on the Summit computer, which they estimated would take only two days rather than 10,000 years.</p>
<p>Random circuit sampling has no known practical use, but there are very good mathematical and empirical reasons to believe it is very hard to replicate on classical computers. More precisely, for every additional qubit the quantum computer uses to perform the calculation, a classical computer would need to double its computation time to do the same.</p>
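<p>That doubling can be made concrete with a short Python sketch. The numbers below are hypothetical, purely to illustrate the claimed scaling, not figures from either paper:</p>

```python
# If a classical machine needs base_seconds to simulate a circuit of
# base_qubits, and each extra qubit doubles the cost, then the time
# for n_qubits grows as 2 ** (n_qubits - base_qubits).
def classical_sim_time(base_qubits: int, base_seconds: float,
                       n_qubits: int) -> float:
    return base_seconds * 2 ** (n_qubits - base_qubits)

# Hypothetical: if 53 qubits took 1 second, 63 qubits would take
# about 17 minutes and 73 qubits over 12 days.
print(classical_sim_time(53, 1.0, 63))  # 1024.0 seconds
print(classical_sim_time(53, 1.0, 73))  # 1048576.0 seconds
```

<p>This is why a handful of additional qubits can push a computation from "barely simulable" to far beyond any classical machine.</p>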
<p>The IBM paper does not challenge this exponential growth. What the IBM team did was find a way of trading increased memory usage for faster computation time. They used this to show how it might be possible to squeeze a simulation of the Google experiment onto the Summit supercomputer, by exploiting the vast memory resources of that machine. (They estimate simulating the Google experiment would require memory equivalent to about 10m regular hard drives.)</p>
<figure class="align-center ">
<img alt="" src="https://images.theconversation.com/files/298735/original/file-20191025-173524-kf3d23.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/298735/original/file-20191025-173524-kf3d23.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=369&fit=crop&dpr=1 600w, https://images.theconversation.com/files/298735/original/file-20191025-173524-kf3d23.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=369&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/298735/original/file-20191025-173524-kf3d23.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=369&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/298735/original/file-20191025-173524-kf3d23.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=464&fit=crop&dpr=1 754w, https://images.theconversation.com/files/298735/original/file-20191025-173524-kf3d23.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=464&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/298735/original/file-20191025-173524-kf3d23.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=464&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">Larger quantum circuits could do the same calculations as huge amounts of classical computing memory.</span>
<span class="attribution"><a class="source" href="https://www.shutterstock.com/image-illustration/futuristic-cpu-quantum-processor-global-computer-1210158169?src=kwTkio1WBdTyeFGIocGfCQ-1-0">Yurchanka Siarhei</a></span>
</figcaption>
</figure>
<p>The 53-qubit Google experiment is right at the limit of what can be simulated classically. IBM’s new algorithm might just bring the calculation within reach of the world’s biggest supercomputer. But add a couple more qubits, and the calculation will be beyond reach again. The Google paper anticipates this, stating: “We expect that lower simulation costs than reported here will eventually be achieved, but we also expect that they will be consistently outpaced by hardware improvements on larger quantum processors.”</p>
<p>Whether this experiment is just within reach of the world’s most powerful classical supercomputer, or just beyond, isn’t really the point. The term “supremacy” is somewhat misleading in that it suggests a point when quantum computers can outperform classical computers at everything. In reality, it just means they can outperform classical computers at something. And that something might be an artificial demonstration with no practical applications. In retrospect, the choice of terminology was perhaps unfortunate (though Preskill recently wrote a <a href="https://www.quantamagazine.org/john-preskill-explains-quantum-supremacy-20191002">reasoned defence</a> of it). </p>
<h2>Impressive science</h2>
<p>Yet Google’s work is a significant milestone. With quantum hardware reaching the limits of what can be matched classically, it opens up the intriguing possibility that these devices – or devices only slightly larger – could have <a href="https://arxiv.org/abs/1801.00862">practical applications</a> that cannot be done on classical supercomputers. On the other hand, we don’t know of any such applications yet, even for devices with a few hundred qubits. It’s a very interesting and challenging scientific question, and an extremely active area of research.</p>
<p>As such, the Google results are an <a href="https://theconversation.com/why-are-scientists-so-excited-about-a-recently-claimed-quantum-computing-milestone-124082">impressive piece of experimental science</a>. They do not imply that quantum computers are about to revolutionise computing overnight (and the Google paper never claims this). Nor are these useless results that achieve nothing new (and the IBM paper doesn’t claim this). The truth is somewhere in between. These new results undoubtedly move the technology forward, just as it has been steadily progressing for the last couple of decades.</p>
<p>As quantum computing technology develops, it is also pushing the design of new classical algorithms to simulate larger quantum systems than were previously possible. IBM’s paper is an example of that. This is also useful science, not only because it ensures quantum computing progress is continually and fairly benchmarked against the best classical techniques, but also because simulating quantum systems is itself an important scientific computing application.</p>
<p>This is how science and technology progress. Not in one dramatic and revolutionary breakthrough, but in a whole series of small breakthroughs, with the academic community carefully scrutinising, criticising and refining each step along the way. Only a few of these advances and debates hit the headlines. The reality is both less dramatic and more interesting.</p><img src="https://counter.theconversation.com/content/125827/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Toby Cubitt receives funding from the Royal Society, and is the co-founder of quantum software spinout company PhaseCraft. Toby Cubitt has previously coauthored papers with members of the IBM Physics of Information group, and is currently a participant in two joint grants with Google.</span></em></p>Quantum computers aren’t about to take over, but this is an important milestone.Toby Cubitt, Reader (Associate Professor) in Quantum Information, UCLLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1223032019-10-15T11:15:45Z2019-10-15T11:15:45ZHow the US census kickstarted America’s computing industry<figure><img src="https://images.theconversation.com/files/292231/original/file-20190912-190021-66ngy7.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">An employee creates punch cards using information from a filled in 1950 Census Population Form.</span> <span class="attribution"><a class="source" href="https://www.census.gov/library/photos/1950_08010.html">U.S. Census Bureau</a></span></figcaption></figure><p><em>An updated version of this article was published on December 1, 2021. <a href="https://theconversation.com/how-the-us-census-led-to-the-first-data-processing-company-125-years-ago-and-kickstarted-americas-computing-industry-172850">Read it here</a>.</em></p>
<p>The U.S. Constitution requires that a population count be conducted at the beginning of every decade. </p>
<p>This census has always been charged with political significance, and continues to be. That’s clear from <a href="https://www.npr.org/2019/06/29/735073641/how-the-fight-over-the-census-citizenship-question-could-rage-on">the controversy over the conduct of the upcoming 2020 census</a>. </p>
<p>But it’s less widely known how important the census has been in developing the U.S. computer industry, a story that I tell in my new book, <a href="https://jhupbooks.press.jhu.edu/title/republic-numbers">“Republic of Numbers: Unexpected Stories of Mathematical Americans through History.”</a> </p>
<h2>Population growth</h2>
<p>The only use of the census clearly specified in the Constitution is to allocate seats in the House of Representatives. More populous states get more seats. </p>
<p>A minimalist interpretation of the census mission would require reporting only the overall population of each state. But the census has never confined itself to this. </p>
<p>A complicating factor emerged right at the beginning, with the Constitution’s distinction between “free persons” and <a href="http://www.digitalhistory.uh.edu/disp_textbook.cfm?smtID=3&psid=163">“three-fifths of all other persons.”</a> This was the Founding Fathers’ infamous mealy-mouthed compromise between those states with a large number of enslaved persons and those states where relatively few lived. </p>
<p><a href="https://www.census.gov/history/www/through_the_decades/index_of_questions/1790_1.html">The first census</a>, in 1790, also made nonconstitutionally mandated distinctions by age and sex. In subsequent decades, many other personal attributes were probed as well: occupational status, marital status, educational status, place of birth and so on.</p>
<p>As the country grew, each census required greater effort than the last, not merely to collect the data but also to compile it into usable form. <a href="https://www.jstor.org/stable/24987147?seq=1#page_scan_tab_contents">The processing of the 1880 census</a> was not completed until 1888. </p>
<p>It had become a mind-numbingly boring, error-prone clerical exercise of a magnitude rarely seen. </p>
<p>Since the population was evidently continuing to grow at a rapid pace, those with sufficient imagination could foresee that processing the 1890 census would be gruesome indeed without some change in procedure. </p>
<p><iframe id="1Onyi" class="tc-infographic-datawrapper" src="https://datawrapper.dwcdn.net/1Onyi/1/" height="400px" width="100%" style="border: none" frameborder="0"></iframe></p>
<h2>A new invention</h2>
<p>John Shaw Billings, a physician assigned to assist the Census Office with compiling health statistics, had closely observed the immense tabulation efforts required to deal with the raw data of 1880. He expressed his concerns to a young mechanical engineer assisting with the census, Herman Hollerith, a recent graduate of the Columbia School of Mines. </p>
<p>On Sept. 23, 1884, the U.S. Patent Office recorded a submission from the 24-year-old Hollerith, titled <a href="https://pdfpiw.uspto.gov/.piw?PageNum=0&docid=00395782&IDKey=73D9506C5930%0D%0A&HomeUrl=http%3A%2F%2Fpatft.uspto.gov%2Fnetacgi%2Fnph-Parser%3FSect1%3DPTO1%2526Sect2%3DHITOFF%2526d%3DPALL%2526p%3D1%2526u%3D%25252Fnetahtml%25252FPTO%25252Fsrchnum.htm%2526r%3D1%2526f%3DG%2526l%3D50%2526s1%3D0395782.PN.%2526OS%3DPN%2F0395782%2526RS%3DPN%2F0395782">“Art of Compiling Statistics.”</a> </p>
<p>By progressively improving the ideas of this initial submission, Hollerith would decisively win an 1889 competition to improve the processing of the 1890 census. </p>
<p>The technological solutions devised by Hollerith involved a suite of mechanical and electrical devices. The first crucial innovation was to translate data on handwritten census tally sheets to patterns of holes punched in cards. As Hollerith phrased it, in the 1889 revision of his patent application,</p>
<blockquote>
<p>“A hole is thus punched corresponding to person, then a hole according as person is a male or female, another recording whether native or foreign born, another either white or colored, &c.”</p>
</blockquote>
<p>This process required developing special machinery to ensure that holes could be punched with accuracy and efficiency. </p>
<p>Hollerith then devised a machine to “read” the card by probing it with pins: only where there was a hole would a pin pass through the card to make an electrical connection, advancing the appropriate counter. </p>
<p>For example, if a card for a white male farmer passed through the machine, a counter for each of these categories would be increased by one. The card was made sturdy enough to allow passage through the card reading machine multiple times, for counting different categories or checking results.</p>
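In modern terms, Hollerith’s tabulator was incrementing counters keyed by category, one card at a time. A minimal sketch of the idea (the field names and records here are invented for illustration, not taken from actual census cards):

```python
from collections import Counter

# Each "card" records the categories punched into it (hypothetical data).
cards = [
    {"race": "white", "sex": "male", "occupation": "farmer"},
    {"race": "white", "sex": "female", "occupation": "teacher"},
    {"race": "white", "sex": "male", "occupation": "farmer"},
]

# One pass through the deck advances a counter for every hole read,
# much as the pins closing electrical circuits advanced Hollerith's dials.
counters = Counter()
for card in cards:
    for field, value in card.items():
        counters[(field, value)] += 1

print(counters[("occupation", "farmer")])  # 2
```

Because each pass simply accumulates counts, the same deck could be run through again with different counters attached, which is exactly how Hollerith’s sturdy cards were reused for different tallies and for checking results.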
<p>The count proceeded so rapidly that the <a href="https://play.google.com/books/reader?id=MGZqAAAAMAAJ&pg=GBS.PA1">state-by-state numbers needed for congressional apportionment</a> were certified before the end of November 1890. </p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/292233/original/file-20190912-190021-1a7j7d1.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/292233/original/file-20190912-190021-1a7j7d1.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/292233/original/file-20190912-190021-1a7j7d1.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=470&fit=crop&dpr=1 600w, https://images.theconversation.com/files/292233/original/file-20190912-190021-1a7j7d1.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=470&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/292233/original/file-20190912-190021-1a7j7d1.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=470&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/292233/original/file-20190912-190021-1a7j7d1.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=590&fit=crop&dpr=1 754w, https://images.theconversation.com/files/292233/original/file-20190912-190021-1a7j7d1.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=590&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/292233/original/file-20190912-190021-1a7j7d1.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=590&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">This ‘mechanical punch card sorter’ was used for the 1950 census.</span>
<span class="attribution"><a class="source" href="https://www.census.gov/library/photos/machinists_technicians_5.html">U.S. Census Bureau</a></span>
</figcaption>
</figure>
<h2>Rise of the punched card</h2>
<p>After his census success, <a href="https://www.worldcat.org/title/computer-a-history-of-the-information-machine/oclc/1110437971?referer=br&ht=edition">Hollerith went into business selling this technology</a>. The company he founded would, after he retired, become International Business Machines – IBM. IBM led the way in perfecting card technology for recording and tabulating large sets of data for a variety of purposes. </p>
<p>By the 1930s, many businesses were using cards for record-keeping procedures, such as payroll and inventory. Some data-intensive scientists, especially astronomers, were also finding the cards convenient. IBM had by then standardized an 80-column card and had developed keypunch machines that would change little for decades. </p>
<p>Card processing became one leg of the mighty computer industry that blossomed after World War II, and IBM for a time would be the third-largest corporation in the world. Card processing served as a scaffolding for vastly more rapid and space-efficient purely electronic computers that now dominate, with little evidence remaining of the old regime. </p>
<figure class="align-right zoomable">
<a href="https://images.theconversation.com/files/292229/original/file-20190912-190061-1af81fk.png?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/292229/original/file-20190912-190061-1af81fk.png?ixlib=rb-1.1.0&q=45&auto=format&w=237&fit=clip" srcset="https://images.theconversation.com/files/292229/original/file-20190912-190061-1af81fk.png?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=1334&fit=crop&dpr=1 600w, https://images.theconversation.com/files/292229/original/file-20190912-190061-1af81fk.png?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=1334&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/292229/original/file-20190912-190061-1af81fk.png?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=1334&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/292229/original/file-20190912-190061-1af81fk.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=1676&fit=crop&dpr=1 754w, https://images.theconversation.com/files/292229/original/file-20190912-190061-1af81fk.png?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=1676&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/292229/original/file-20190912-190061-1af81fk.png?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=1676&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">A blue IBM punch card.</span>
<span class="attribution"><a class="source" href="https://commons.wikimedia.org/wiki/File:Blue-punch-card-front.png">Gwern/Wikimedia Commons</a></span>
</figcaption>
</figure>
<p>Those who have grown up knowing computers only as easily portable devices, to be communicated with by the touch of a finger or even by voice, may be unfamiliar with the room-size computers of the 1950s and ’60s, where the primary means of loading data and instructions was by creating a deck of cards at a keypunch machine, and then feeding that deck into a card reader. This persisted as the default procedure for many computers well into the 1980s. </p>
<p><a href="https://www.worldcat.org/title/grace-hopper-navy-admiral-and-computer-pioneer/oclc/19516564&referer=brief_results">As computer pioneer Grace Murray Hopper recalled</a> about her early career, “Back in those days, everybody was using punched cards, and they thought they’d use punched cards forever.”</p>
<p>Hopper had been an important member of the team that created the first commercially viable general-purpose computer, the Universal Automatic Computer, or UNIVAC, one of the card-reading behemoths. Appropriately enough, the first UNIVAC, delivered in 1951, went to the U.S. Census Bureau, still hungry to improve its data processing capabilities.</p>
<p>No, computer users would not use punched cards forever, but they used them through the Apollo Moon-landing program and the height of the Cold War. Hollerith would likely have recognized the direct descendants of his 1890s census machinery almost 100 years later. </p>
<p>
<section class="inline-content">
<img src="https://images.theconversation.com/files/248894/original/file-20181204-133095-1p2xxs2.png?w=128&h=128">
<div>
<header>David Lindsay Roberts is the author of:</header>
<p><a href="https://jhupbooks.press.jhu.edu/title/republic-numbers">Republic of Numbers: Unexpected Stories of Mathematical Americans through History.</a></p>
<footer>Johns Hopkins University Press provides funding as a member of The Conversation US.</footer>
</div>
</section>
</p>
<img src="https://counter.theconversation.com/content/122303/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>David Lindsay Roberts does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>As the country grew, each census required greater effort than the last. That problem led to the invention of the punched card.David Lindsay Roberts, Adjunct Professor of Mathematics, Prince George's Community CollegeLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1192832019-07-17T11:18:19Z2019-07-17T11:18:19Z3 myths to bust about breaking up ‘big tech’<figure><img src="https://images.theconversation.com/files/284118/original/file-20190715-173342-1ji5sep.jpg?ixlib=rb-1.1.0&rect=45%2C0%2C5069%2C3376&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Before taking on tech giants, shatter a few misconceptions.</span> <span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/iron-hammer-breaking-glass-window-340890053">W. Scott McGill/Shutterstock.com</a></span></figcaption></figure><p>As the public and government regulators around the world discuss <a href="https://www.npr.org/2019/06/09/731044346/big-tech-and-antitrust">whether and how</a> to manage the power of technology companies, one idea that keeps coming up is breaking up these large conglomerate corporations into smaller pieces. Public distrust for tech companies has shifted to talk of <a href="https://www.wsj.com/articles/justice-department-is-preparing-antitrust-investigation-of-google-11559348795">antitrust action</a> against them. Facebook, for instance, might then have to <a href="https://www.mercurynews.com/2018/05/21/facebook-owns-instagram-messenger-whatsapp-now-theres-a-call-to-break-it-all-up/">compete with Instagram for photo-sharing</a> and WhatsApp for messaging – rather than owning both. </p>
<p>The idea has managed to garner support from both <a href="https://www.politico.com/2020-election/candidates-views-on-the-issues/technology/tech-competition-antitrust/">Massachusetts Sen. Elizabeth Warren</a>, a Democrat, and <a href="https://www.nbcnews.com/politics/donald-trump/trump-claims-collusion-between-big-tech-democrats-backs-antitrust-fines-n1015726">Republican President Donald Trump</a>.</p>
<p>However, <a href="https://www.politico.com/2020-election/candidates-views-on-the-issues/technology/tech-competition-antitrust/">advocates</a> and <a href="https://www.weforum.org/agenda/2019/07/these-are-some-of-the-best-quotes-about-technology-monopolies-in-2019/">opponents</a> of breaking up big technology firms are falling prey to some serious misconceptions. I study the effects of digital technologies on lives and livelihoods across 85 countries and lead Tufts Fletcher School’s <a href="https://sites.tufts.edu/digitalplanet/">Digital Planet</a> initiative studying technological innovation around the world. In my opinion, there are three myths worth busting before considering taking on big tech. </p>
<h2>Myth 1: Comparing Standard Oil and Google</h2>
<figure class="align-right zoomable">
<a href="https://images.theconversation.com/files/284102/original/file-20190715-173370-5ovggf.png?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/284102/original/file-20190715-173370-5ovggf.png?ixlib=rb-1.1.0&q=45&auto=format&w=237&fit=clip" srcset="https://images.theconversation.com/files/284102/original/file-20190715-173370-5ovggf.png?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=901&fit=crop&dpr=1 600w, https://images.theconversation.com/files/284102/original/file-20190715-173370-5ovggf.png?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=901&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/284102/original/file-20190715-173370-5ovggf.png?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=901&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/284102/original/file-20190715-173370-5ovggf.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=1132&fit=crop&dpr=1 754w, https://images.theconversation.com/files/284102/original/file-20190715-173370-5ovggf.png?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=1132&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/284102/original/file-20190715-173370-5ovggf.png?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=1132&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">John D. Rockefeller, founder of Standard Oil.</span>
<span class="attribution"><a class="source" href="https://commons.wikimedia.org/wiki/File:John_D_Rockefeller_1872.png">Urbanrenewal/Wikimedia Commons</a></span>
</figcaption>
</figure>
<p>Arguments for and against antitrust action against tech firms rely heavily on the <a href="https://www.nytimes.com/1998/10/19/business/microsoft-trial-precedents-previous-antitrust-cases-leave-room-for-both-sides.html">experiences of earlier cases</a>. The massive <a href="https://theconversation.com/for-tech-giants-a-cautionary-tale-from-19th-century-railroads-on-the-limits-of-competition-91616">19th-century monopoly Standard Oil</a> has, in fact, been referred to as the “<a href="https://www.nytimes.com/2018/02/20/magazine/the-case-against-google.html">Google of its day</a>.” There are also people who are recalling the 1990s <a href="https://www.nytimes.com/2018/05/18/opinion/microsoft-antitrust-case.html">antitrust case against Microsoft’s dominant position</a> in the era of personal computers. </p>
<figure class="align-right zoomable">
<a href="https://images.theconversation.com/files/284103/original/file-20190715-173360-2qxmqd.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/284103/original/file-20190715-173360-2qxmqd.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=237&fit=clip" srcset="https://images.theconversation.com/files/284103/original/file-20190715-173360-2qxmqd.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=490&fit=crop&dpr=1 600w, https://images.theconversation.com/files/284103/original/file-20190715-173360-2qxmqd.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=490&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/284103/original/file-20190715-173360-2qxmqd.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=490&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/284103/original/file-20190715-173360-2qxmqd.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=615&fit=crop&dpr=1 754w, https://images.theconversation.com/files/284103/original/file-20190715-173360-2qxmqd.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=615&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/284103/original/file-20190715-173360-2qxmqd.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=615&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Google co-founders Sergey Brin, left, and Larry Page.</span>
<span class="attribution"><a class="source" href="https://commons.wikimedia.org/wiki/File:Schmidt-Brin-Page-20080520_(cropped).jpg">Joi Ito/Wikimedia Commons</a>, <a class="license" href="http://creativecommons.org/licenses/by/4.0/">CC BY</a></span>
</figcaption>
</figure>
<p>Those cases from the past may seem similar to today’s situation, but this era is different in one crucial way: the global technology marketplace. Currently, there are two parallel “big tech” clusters. One is in the U.S., dominated by <a href="https://theconversation.com/big-tech-isnt-one-big-monopoly-its-5-companies-all-in-different-businesses-92791">Google, Amazon, Facebook and Apple</a>. The other is based in China, dominated by <a href="https://singularityhub.com/2018/08/17/baidu-alibaba-and-tencent-the-rise-of-chinas-tech-giants/">Baidu, Alibaba, Tencent and Huawei</a>. This global market is subject to different political and policy pressures than regulators faced when dealing with Standard Oil and Microsoft.</p>
<p>Both clusters are attempting to add users to <a href="https://hbr.org/2019/01/which-countries-are-leading-the-data-economy">accumulate reservoirs of data</a>, which will fuel the next stage of competitiveness in a future run by artificial intelligence. The Chinese government has blocked most of the U.S. companies from entering the Chinese market, protecting its “<a href="https://www.scmp.com/tech/china-tech/article/2120913/china-recruits-baidu-alibaba-and-tencent-ai-national-team">AI national team</a>.” The <a href="https://www.bloomberg.com/news/articles/2018-06-27/alibaba-pulls-back-in-u-s-amid-trump-crackdown-on-chinese-investment">U.S. government has done likewise</a>, blacklisting some Chinese outfits for a period while discouraging others.</p>
<p>If the U.S. technology giants are broken up, the result would be a vastly uneven global playing field, pitting fragmented U.S. companies against consolidated state-protected Chinese firms.</p>
<p>Geopolitical factors aren’t limited to the U.S.-China rivalry. The European Union, Russia and India are also heavy users of Silicon Valley technologies, and each is <a href="https://www.ft.com/content/3eb00398-9815-11e9-8cfb-30c211dcd229">exploring its own options</a> for legislation and regulation too.</p>
<p>U.S. companies’ size and data accumulation capabilities give the country economic and political influence around the globe. Their power would change if they were broken up – and, in my view, that should be a key consideration in regulators’ decisions.</p>
<h2>Myth 2: Price is right</h2>
<p>There are two main views of antitrust action in these discussions. One focuses on consumer welfare, which has been the prevailing approach federal lawyers have taken <a href="https://www.jstor.org/stable/724991">since the 1960s</a>. The other view suggests that regulators should look at the <a href="https://www.yalelawjournal.org/note/amazons-antitrust-paradox">underlying structure of the market</a> and potential for <a href="https://www.pbwt.com/antitrust-update-blog/a-brief-overview-of-the-new-brandeis-school-of-antitrust-law">powerful players to exploit</a> their positions.</p>
<p>Those two sides seem to agree that price plays a key role. People who argue against breaking up the tech giants point out that Facebook and Google provide services that are <a href="https://slate.com/technology/2019/06/facebook-big-tech-antitrust-breakup-mistake.html">free to the consumer</a>, and that Amazon’s marketplace power drives its products’ costs down. On the other side, though, are those who say that <a href="https://www.yalelawjournal.org/note/amazons-antitrust-paradox">having low or no prices</a> is evidence that these companies are artificially lowering consumer costs to draw users into company-controlled systems that are <a href="https://techcrunch.com/2019/02/04/why-no-one-really-quits-google-or-facebook/">hard to leave</a>.</p>
<p>Both sides are missing the fact that the monetary price is less relevant as a measure of what users pay in the technology industry than it is in other types of business. Users <a href="https://theconversation.com/how-much-is-your-data-worth-to-tech-companies-lawmakers-want-to-tell-you-but-its-not-that-easy-to-calculate-119716">pay for digital products with their data</a>, rather than just money. Regulators shouldn’t focus only on the monetary costs to the users. Rather, they should ask whether users are being asked for more data than is strictly necessary, whether information is being collected in <a href="https://theconversation.com/7-in-10-smartphone-apps-share-your-data-with-third-party-services-72404">intrusive or abusive ways</a> and whether customers are <a href="https://www.axios.com/mark-warner-josh-hawley-dashboard-tech-data-4ee575b4-1706-4d05-83ce-d62621e28ee1.html">getting good value in exchange for their data</a>.</p>
<h2>Myth 3: Trust-busting is all or nothing</h2>
<p>There aren’t just two ways for this debate to end, with either a breakup of one or more technology giants or simply leaving things as they are for the market to develop further. </p>
<p>My own idea of the best outcome would take a page from the history of antitrust litigation: The company that is sued is not broken up, and yet the very fact that there was a lawsuit leads to progress. That has happened in the past, in the cases against the Bell System, IBM and Microsoft.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/284111/original/file-20190715-173376-1k7ro27.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/284111/original/file-20190715-173376-1k7ro27.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/284111/original/file-20190715-173376-1k7ro27.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=538&fit=crop&dpr=1 600w, https://images.theconversation.com/files/284111/original/file-20190715-173376-1k7ro27.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=538&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/284111/original/file-20190715-173376-1k7ro27.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=538&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/284111/original/file-20190715-173376-1k7ro27.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=676&fit=crop&dpr=1 754w, https://images.theconversation.com/files/284111/original/file-20190715-173376-1k7ro27.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=676&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/284111/original/file-20190715-173376-1k7ro27.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=676&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">A replica of the first transistor, developed at AT&T’s Bell Laboratories in 1947.</span>
<span class="attribution"><a class="source" href="https://commons.wikimedia.org/wiki/File:Replica-of-first-transistor.jpg">National Archives</a></span>
</figcaption>
</figure>
<p>In the 1956 federal consent decree against the Bell System, which settled a seven-year legal proceeding against the company, the company wasn’t split up, but Bell was required to <a href="https://economics.yale.edu/sites/default/files/how_antitrust_enforcement.pdf">license all its patents royalty-free</a> to other firms. This meant that some of the most profound technological innovations in history – including the <a href="https://www.computerhistory.org/atchm/who-invented-the-transistor/">transistor</a>, the <a href="https://www.popsci.com/article/science/invention-solar-cell/">solar cell</a> and the <a href="https://www.photonics.com/Articles/A_History_of_the_Laser_1960_-_2019/a42279">laser</a> – became widely available, yielding computers, solar power and other technologies that are crucial to the modern world. When the Bell System was <a href="https://www.cio.com/article/3267826/breaking-up-is-hard-to-do-why-the-bell-system-breakup-isn-t-a-model-for-tech.html">eventually broken up</a> in 1982, it did not do nearly as much to spread <a href="https://si.wsj.net/public/resources/images/BF-AV826_ATT_16U_20171120171814.jpg">innovation and competition</a> as the agreement that kept the Bells together a quarter-century earlier. </p>
<p>The antitrust action against IBM lasted 13 years and didn’t break up the firm. However, as part of its tactics to avoid appearing to be a monopoly, IBM agreed to <a href="https://www.cnet.com/news/ibm-and-microsoft-antitrust-then-and-now/">separate pricing for its hardware and software products</a>, previously sold as an indivisible bundle. This created an opportunity for entrepreneurs Bill Gates and Paul Allen to create a new software-only company, called Microsoft. The surge of software innovations that have followed can clearly trace their origins to the IBM settlement. </p>
<p>Two decades later, Microsoft was itself the target of an antitrust action. In the resulting settlement, <a href="https://www.theverge.com/2018/9/6/17827042/antitrust-1990s-microsoft-google-aol-monopoly-lawsuits-history">Microsoft agreed to ensure its products were compatible</a> with competitors’ software. That made room in the emerging internet marketplace for web browsers, the predecessors of Apple’s Safari, Mozilla’s Firefox and Google Chrome.</p>
<p>Even Margrethe Vestager, the European Union’s top antitrust official and frequent tech-giant nemesis, has said that “<a href="https://www.nytimes.com/2018/02/20/magazine/the-case-against-google.html">Antitrust prosecutions are part of how technology grows</a>.” But that doesn’t mean they all have to achieve their most extreme ends, of breaking up the companies. </p>
<p>Antitrust rules are complicated enough, and plenty of experts will be called on to give their views on what to do with “big tech.” Technology pervades every aspect of modern lives, giving each person a responsibility to weigh in on this issue without misconceptions clouding their judgments. Technology has become a political issue. In a politically overheated climate, public sentiments may matter even more than the opinions of experts.</p><img src="https://counter.theconversation.com/content/119283/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Bhaskar Chakravorti has founded and directs the Institute for Business in the Global Context at Fletcher/Tufts that has received funding from Mastercard, Microsoft, the Gates Foundation, the Rockefeller Foundation and the Onassis Foundation. He is a Non-Resident Senior Fellow at Brookings India and a Senior Advisor on Digital Inclusion at the Mastercard Center for Inclusive Growth.</span></em></p>Advocates and opponents of breaking up Facebook, Google and other technology giants are falling prey to some serious misconceptions.Bhaskar Chakravorti, Dean of Global Business, The Fletcher School, Tufts UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1182342019-06-05T10:41:12Z2019-06-05T10:41:12Z2D spintronics has already transformed computing – now we’re making it work in three dimensions<figure><img src="https://images.theconversation.com/files/277878/original/file-20190604-69075-1s3e7n3.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">
</span> <span class="attribution"><a class="source" href="https://www.shutterstock.com/image-illustration/abstract-background-163743035?src=avU0DY-XMpU4W1RuKZPhfA-1-86">Deniseus</a></span></figcaption></figure><p><a href="https://theconversation.com/shift-from-electronics-to-spintronics-opens-up-possibilities-of-faster-data-45864">Spintronics</a> might not be the sort of word that comes up in everyday discussions, but it has been revolutionising computer technology for years. It’s the branch of physics that involves manipulating the spin of a flow of electrons, which first reached consumers in the late 1990s in the form of magnetic computer hard drives with several hundreds of times the storage capacity of their predecessors. </p>
<figure class="align-right zoomable">
<a href="https://images.theconversation.com/files/277923/original/file-20190604-69059-1rnkyks.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/277923/original/file-20190604-69059-1rnkyks.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=237&fit=clip" srcset="https://images.theconversation.com/files/277923/original/file-20190604-69059-1rnkyks.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=1026&fit=crop&dpr=1 600w, https://images.theconversation.com/files/277923/original/file-20190604-69059-1rnkyks.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=1026&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/277923/original/file-20190604-69059-1rnkyks.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=1026&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/277923/original/file-20190604-69059-1rnkyks.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=1289&fit=crop&dpr=1 754w, https://images.theconversation.com/files/277923/original/file-20190604-69059-1rnkyks.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=1289&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/277923/original/file-20190604-69059-1rnkyks.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=1289&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Remember me?</span>
<span class="attribution"><a class="source" href="https://www.shutterstock.com/image-vector/vector-mp3-player-53699233?src=qYJ2tOgj7jex4cr1-kFz9w-1-23">leviana</a></span>
</figcaption>
</figure>
<p>These and other electronic devices have <a href="https://www.youtube.com/watch?v=3PMbN0PVyy0">since been refined</a> to make computers many times more powerful again, not to mention much cooler and more energy efficient – enabling everything from MP3 players to the smartphones of today. <a href="https://newsroom.intel.com/news/intel-starts-testing-smallest-spin-qubit-chip-quantum-computing/#gs.g5k0jo">Intel</a> and <a href="https://ai.googleblog.com/2018/03/a-preview-of-bristlecone-googles-new.html">Google</a> began unveiling quantum processors last year, and <a href="https://news.samsung.com/global/samsung-electronics-starts-commercial-shipment-of-emram-product-based-on-28nm-fd-soi-process">Samsung</a> and <a href="https://www.everspin.com/news/everspin-ships-world%E2%80%99s-first-pre-production-28-nm-1-gb-stt-mram-customer-samples">Everspin</a> launched MRAM (magnetic random access memory) chips a few months ago. This new technology is expected to substantially improve computing performance – by <a href="https://www.spintronics-info.com/nec-and-tohoku-university-developed-spintronics-text-search-chip-cuts-power-reduction-99">one estimate</a>, for example, the potential reduction in power requirements could be over 99%. </p>
<p>Even so, all these advances have been labouring under a major limitation: the spin manipulation is confined to a single ultra-thin layer of magnetic material. Tens of these layers are typically stacked in a “sandwiched” structure, which interact through complex interfaces and interconnects, but their functionality is fundamentally 2D in nature. </p>
<p>Industry leaders like Stuart Parkin, who created IBM’s original spintronics-driven computer hard drive, the <a href="https://www.ibm.com/ibm/history/ibm100/us/en/icons/spintronics/">Deskstar 16GP Titan</a>, have <a href="https://youtu.be/kB0ixO5lrzQ">been saying</a> for years that one of the biggest challenges in magnetic computing is to shift to a much more flexible and capable 3D version.</p>
<p>This would see information transmitted, stored and processed across any point of the three-dimensional stack of magnetic layers. Recent pioneering <a href="https://www.spintronics-info.com/worlds-first-3d-spintronics-chip-developed-cambridge">advances</a> are starting to bring this paradigm shift <a href="https://www.agenciasinc.es/en/News/Three-Dimensional-Nanomagnets-for-the-computer-of-tomorrow">closer</a>, but we still face great challenges to reach the same degree of control as we have in two dimensions. </p>
<p>In a <a href="https://www.nature.com/articles/s41563-019-0386-4">new paper</a> led by the universities of Glasgow and Cambridge, in collaboration with researchers at the University of Hamburg, the Technical University of Eindhoven and the Aalto University School of Science, we have taken a significant step towards achieving that goal.</p>
<h2>Spins and charges</h2>
<p>Traditional electronics is based on the fact that electrons have electrical charges. In a basic computer, chips and other units transmit information by sending and receiving tiny electrical pulses. They register a “one” for a pulse and a “zero” for no pulse, and, repeated and counted over millions of cycles, these pulses become the basis of a language of instructions. </p>
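<p>As a rough illustration of this pulse-counting idea (my sketch, not the article’s – real hardware clocks billions of pulses per second), each character can be written as an eight-bit pattern of pulses and read back by counting them:</p>

```python
# Toy sketch: encode a character as electrical pulses (1 = pulse, 0 = no pulse),
# then decode it back by reading the pulse pattern as a binary number.

def to_pulses(ch: str) -> list[int]:
    """Turn one character into its 8-bit pulse pattern."""
    return [int(b) for b in format(ord(ch), "08b")]

def from_pulses(pulses: list[int]) -> str:
    """Read the pulse pattern back into a character."""
    return chr(int("".join(str(p) for p in pulses), 2))

pulses = to_pulses("A")
print(pulses)               # [0, 1, 0, 0, 0, 0, 0, 1]
print(from_pulses(pulses))  # A
```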
<p>Traditional magnetic hard drives rely on properties associated with electrical charges too, but they work on a different principle, with each very tiny region of a flat magnetic disk recording a zero or a one via its two possible magnetic orientations. Magnetic drives have the great benefit that data is still there even when the power is switched off, though the information is recorded and retrieved much more slowly than using the transistors that we find in computer circuits. </p>
<p>Spintronics is different: it exploits both the charge and the intrinsic magnetism of electrons – otherwise known as their spin. The difference between spin and charge is sometimes likened to the way that the Earth orbits the sun but also spins on its axis at the same time. But whereas electrons are always negatively charged, they can spin “up” or “down”.</p>
<p>It was <a href="https://www.nobelprize.org/prizes/physics/2007/summary/">discovered</a> in the late 1980s that if an electrical current was conducted through a device formed by a non-magnetic sheet sandwiched between two magnetic sheets, the resistance of this device to the electron flow would change dramatically depending on the orientation of the magnets within the two magnetic sheets. </p>
<p>This effect was readily exploited in hard drives, with these spintronic systems acting as very sensitive sensors that could read many more zeroes and ones of magnetic information within the same area than previous hard drives – thus transforming storage capacity. Known as giant magnetoresistance, this later yielded the <a href="https://www.nobelprize.org/prizes/physics/2007/summary/">Nobel Prize in Physics</a> for Albert Fert and Peter Grünberg, the two scientists who discovered it independently. </p>
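<p>The sensing principle can be sketched as a toy model (mine, not the authors’, with invented numbers): each stored bit leaves the sensor’s magnetic sheets either parallel or antiparallel, and the read head infers the bit from the resistance it measures.</p>

```python
# Toy giant-magnetoresistance read-out. Resistance values are invented for
# illustration: parallel alignment -> low resistance, antiparallel -> high.

R_PARALLEL = 1.0      # arbitrary units
R_ANTIPARALLEL = 1.2  # a resistance jump large enough to detect reliably

def read_bit(resistance: float, threshold: float = 1.1) -> int:
    """Infer a stored bit from the measured sensor resistance."""
    return 1 if resistance > threshold else 0

# A track of stored bits, represented by the resistances the head measures:
track = [R_PARALLEL, R_ANTIPARALLEL, R_ANTIPARALLEL, R_PARALLEL]
print([read_bit(r) for r in track])  # [0, 1, 1, 0]
```

<p>The point of the real effect is that the resistance change is large (“giant”), so ever-smaller magnetic regions can still be read reliably – which is what transformed storage density.</p>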
<h2>Chiral spintronics</h2>
<p>Since the birth of spintronics, there have been many important advances, including some recent exciting ones in an area called chiral spintronics. Whereas we usually think of two magnets as having a “north” and “south” that rotate towards or away from one another along a 180° line – watch the compass towards the end of <a href="https://youtu.be/Mp0Bu75MSj8">this video</a> for example – under particular conditions, tiny magnets at the atomic level also present chiral spin interactions. This means that neighbouring magnets have a preference to orient at angles of 90°. </p>
<p>The existence of these interactions is a key ingredient to create and manipulate pseudo-particles called magnetic skyrmions, which have topological properties that <a href="https://eandt.theiet.org/content/articles/2019/05/could-skyrmions-change-the-future-of-computing/">enable them</a> to perform computing applications more effectively, with huge potential to further improve data storage. </p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/277933/original/file-20190604-69075-940e9s.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/277933/original/file-20190604-69075-940e9s.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/277933/original/file-20190604-69075-940e9s.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=375&fit=crop&dpr=1 600w, https://images.theconversation.com/files/277933/original/file-20190604-69075-940e9s.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=375&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/277933/original/file-20190604-69075-940e9s.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=375&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/277933/original/file-20190604-69075-940e9s.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=471&fit=crop&dpr=1 754w, https://images.theconversation.com/files/277933/original/file-20190604-69075-940e9s.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=471&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/277933/original/file-20190604-69075-940e9s.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=471&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">An attractive notion.</span>
<span class="attribution"><a class="source" href="https://www.shutterstock.com/image-vector/magnet-plant-lines-triangles-point-connecting-753575932?src=H1nOZS9ebp625GHgnLRDlQ-1-22">piick</a></span>
</figcaption>
</figure>
<p>Until now, however, chiral spin interactions had only been observed and exploited in 2D spintronics. In our new paper, we show for the first time that this interaction can also be created between magnets located at two neighbouring magnetic layers separated by an ultra-thin non-magnetic metallic layer. </p>
<p>For this, we created a device with a total of eight layers using a technique called <a href="https://www.youtube.com/watch?v=L6ZIkmIVm6c">sputtering</a> to deposit nanoscale thin films. We had to carefully tune the interfaces of the layers to balance other magnetic interactions, and we studied the behaviour of the system under magnetic fields at room temperature using laser-based measurements. The way the device behaved was confirmed by complementary magnetic simulations performed by our collaborator at the University of Hamburg. </p>
<p>This discovery opens exciting new routes to exploit further 3D spintronic effects, with chiral spin interactions playing a pivotal role in creating more compact and efficient ways to store and move magnetic data throughout the whole 3D space. Future work will focus on finding ways to increase the strength of this interaction and expand the range of devices where the effect is present. We expect our work will attract great interest within the spintronics community and stimulate industry to continue working on magnetic computing devices based on these radically new concepts.</p>
<p>The first impact of spintronics in the computing market was extremely fast – it took just eight years from the discovery of giant magnetoresistance to the launch of IBM’s Deskstar 16GP Titan in 1997. The leap to 3D still needs to overcome multiple obstacles, from precisely fabricating the necessary devices to exploiting magnetic interactions in unconventional computing architectures. Our recent discovery brings us a step closer to achieving this very challenging but exciting objective.</p><img src="https://counter.theconversation.com/content/118234/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Amalio Fernandez-Pacheco receives funding from the UK Engineering and Physical Science Research Council, the Winton Programme for the Physics of Sustainability, and the Royal Society. He is also affiliated with the University of Cambridge.</span></em></p>Manipulating electron spin has heralded everything from iPods to the latest laptops. Stand by for the next paradigm shift.Amalio Fernandez-Pacheco, EPSRC Early Career Fellow, Physics and Astronomy, University of GlasgowLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1117162019-02-25T11:39:19Z2019-02-25T11:39:19ZLessons from IBM for Google, Amazon and Facebook<figure><img src="https://images.theconversation.com/files/259122/original/file-20190214-1726-ovse4m.jpg?ixlib=rb-1.1.0&rect=25%2C84%2C5582%2C3648&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">IBM has experience that will be relevant for the future of technology.</span> <span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/hannover-germany-march-2017-man-vr-710420449">Alexander Tolstykh/Shutterstock.com</a></span></figcaption></figure><p>It’s impressive when companies last for decades – or even more than a century – and especially so when they’re in a fast-changing industry like computer technology. IBM, which <a href="https://www.ibm.com/ibm/history/history/history_intro.html">traces its roots to the 1880s</a>, grew from three small firms to a multi-billion-dollar information technology services company today. Its ups and downs along the way offer some insights into the global technology industry, and may contain some instructive lessons for up-and-coming digital giants like Google, Amazon and Facebook – all of which are far younger than IBM.</p>
<p>In my new book, “<a href="https://mitpress.mit.edu/books/ibm">IBM: The Rise and Fall and Reinvention of a Global Icon</a>,” I explore the company’s history of creating and selling data processing equipment and software. As a former IBM employee and <a href="https://scholar.google.com/citations?user=RjobE18AAAAJ&hl=en">a historian</a>, the most important lesson I found is that many people confuse incremental changes in technology with more fundamental ones that actually shape the course of a company’s destiny. </p>
<p>There is a difference between individual products – successive models of PCs or typewriters – and the underlying technologies that make them work. Over 130 years, IBM released <a href="https://en.wikipedia.org/wiki/List_of_IBM_products">well over 3,600 hardware products and nearly as many software products</a>. But all those items and services were based on just a handful of real technological advances, such as shifting from mechanical machines to those that relied on computer chips and software, and later to networks like the internet. The transitions between those advances took place far more slowly than the steady stream of new products might suggest.</p>
<p>These transitions from the mechanical, to the digital, and now to the networked reflected an ever-growing ability to collect and use greater amounts of information easily and quickly. IBM moved from manipulating statistical data to using technologies that teach themselves what people want and are interested in seeing.</p>
<h2>A focus that can adapt</h2>
<p>Between 1914 and 1918, IBM management decided that the business the company would be in was data processing. In more modern terms, that business has become “big data” and analytics. But it’s still collecting and organizing data, and performing calculations and computations on it.</p>
<p>Since the early 1920s, IBM has taken a disciplined approach to product development and research, focusing on developing the underpinning technologies for its data processing products. Nothing seemed to be done by accident.</p>
<figure class="align-right zoomable">
<a href="https://images.theconversation.com/files/259099/original/file-20190214-1730-rphrv0.png?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/259099/original/file-20190214-1730-rphrv0.png?ixlib=rb-1.1.0&q=45&auto=format&w=237&fit=clip" srcset="https://images.theconversation.com/files/259099/original/file-20190214-1730-rphrv0.png?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=1334&fit=crop&dpr=1 600w, https://images.theconversation.com/files/259099/original/file-20190214-1730-rphrv0.png?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=1334&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/259099/original/file-20190214-1730-rphrv0.png?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=1334&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/259099/original/file-20190214-1730-rphrv0.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=1676&fit=crop&dpr=1 754w, https://images.theconversation.com/files/259099/original/file-20190214-1730-rphrv0.png?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=1676&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/259099/original/file-20190214-1730-rphrv0.png?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=1676&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">A blue IBM punch card.</span>
<span class="attribution"><a class="source" href="https://commons.wikimedia.org/wiki/File:Blue-punch-card-front.png">Gwern/Wikimedia Commons</a></span>
</figcaption>
</figure>
<p>In its first half-century, IBM’s basic technology platform from which many products emerged was the punch-card, yielding tabulators, card sorters, card readers and the famous <a href="https://www.ibm.com/ibm/history/ibm100/us/en/icons/punchcard/">IBM Card</a>. In its second half-century, the basic technology platform was the computer, including mainframes, minicomputers, PCs and laptops. In its most recent 30 years, computer sales have brought in a declining share of the company’s total revenue, as IBM transitions to providing more internet-based services, including software and technical and managerial consulting.</p>
<p>The rise of each succeeding technology happened during the maturity and decline of its predecessor. IBM first started selling computers in the 1950s, but kept selling tabulating equipment that still used punch cards until the early 1960s. As recently as the early 1990s, <a href="https://www.ibm.com/investor/financials/financial-reporting.html">over 90 percent of IBM’s revenues</a> came from selling computers, though it was introducing new services like management and process consulting, information technology management and software sales.</p>
<p>It wasn’t until the end of 2018 that IBM announced that <a href="https://newsroom.ibm.com/2019-01-22-IBM-Reports-2018-Fourth-Quarter-and-Full-Year-Results">50 percent of its business</a> now came from services and software, most of which were new offerings developed in the previous decade.</p>
<p>The news media – and even IBM employees – may have perceived that IBM was <a href="https://www.forbes.com/sites/bobevans1/2017/06/15/inside-ibms-stunning-transformation-to-the-cloud-10-key-insights/">transforming itself quickly and frequently</a>. In fact the company had planted seeds for growth early and carefully tended new technologies until they bore fruit – fortunately, around the same time as earlier systems were ending their useful lives.</p>
<p>This strategic approach is not uncommon – Apple has been selling personal computers for <a href="https://www.loc.gov/rr/business/businesshistory/April/apple.html">more than 40 years</a>. Its management, of course, talks much more about its role in the smartphone business, which is <a href="https://www.theatlantic.com/technology/archive/2019/01/apples-earnings-stumble-could-improve-iphone/579445/">already beginning to level off</a>. Apple may soon need – or already be working on – a new technological focus to remain relevant.</p>
<h2>The future of the giants</h2>
<p>Microsoft, like Apple, evolved away from selling just computer software and operating systems. It started internet-based projects like its Bing search engine and OneDrive cloud storage – as well as providing <a href="https://blogs.microsoft.com/blog/2018/07/19/powering-our-customers-the-innovation-story-behind-microsofts-earnings/">cloud-based computing services</a> for businesses.</p>
<figure class="align-right zoomable">
<a href="https://images.theconversation.com/files/259121/original/file-20190214-1721-1du19ow.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/259121/original/file-20190214-1721-1du19ow.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=237&fit=clip" srcset="https://images.theconversation.com/files/259121/original/file-20190214-1721-1du19ow.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=1130&fit=crop&dpr=1 600w, https://images.theconversation.com/files/259121/original/file-20190214-1721-1du19ow.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=1130&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/259121/original/file-20190214-1721-1du19ow.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=1130&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/259121/original/file-20190214-1721-1du19ow.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=1421&fit=crop&dpr=1 754w, https://images.theconversation.com/files/259121/original/file-20190214-1721-1du19ow.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=1421&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/259121/original/file-20190214-1721-1du19ow.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=1421&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">IBM is already exploring quantum computing, as a new frontier of data processing.</span>
<span class="attribution"><a class="source" href="https://www.flickr.com/photos/ibm_research_zurich/23518086798">IBM Research</a>, <a class="license" href="http://creativecommons.org/licenses/by-nd/4.0/">CC BY-ND</a></span>
</figcaption>
</figure>
<p>Companies that started on the internet may also face similar transitions. Amazon, Google and Facebook at times claim to have transformed themselves, but haven’t yet fully left their original businesses. </p>
<p>Amazon still makes <a href="https://www.businessinsider.com/how-amazon-makes-money-2017-12">most of its money</a> selling physical items online, though its internet-based cloud services division is <a href="https://www.cnbc.com/2018/04/26/amazon-earnings-q1-2018.html">growing rapidly</a>. Amazon has also invested in a <a href="https://www.wired.com/story/why-hard-escape-amazons-long-reach/">wide range of other businesses</a> that might grow in the future, such as health care and entertainment content. </p>
<p>Google and Facebook still make most of their money <a href="https://theconversation.com/big-tech-isnt-one-big-monopoly-its-5-companies-all-in-different-businesses-92791">selling information about how users behave</a> to advertisers and groups that want to attract people to a particular point of view. Both are exploring other avenues, whether it’s Google’s <a href="https://www.cbinsights.com/research/report/google-strategy-teardown/">self-driving cars</a> or Facebook’s <a href="http://panmore.com/facebook-inc-generic-strategy-intensive-growth-strategies">experiments with virtual reality</a>.</p>
<p>But at their core, all three internet giants are still finding new ways to capitalize on the vast quantities of information they accumulate about customers’ activities and interests – just as decades earlier IBM found new ways to use tabulating equipment and computers. If they’re to last decades or centuries into the future, the companies will need to probe, experiment and innovate to find new ways to profit as technologies change.</p>
<p>
<section class="inline-content">
<img src="https://images.theconversation.com/files/248895/original/file-20181204-133100-t34yqm.png?w=128&h=128">
<div>
<header>James Cortada is the author of:</header>
<p><a href="https://mitpress.mit.edu/books/ibm">IBM: The Rise and Fall and Reinvention of a Global Icon</a></p>
<footer>MIT Press provides funding as a member of The Conversation US.</footer>
</div>
</section>
</p><img src="https://counter.theconversation.com/content/111716/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>James Cortada does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>The history of IBM shows how a technology titan can grow and change, while still remaining focused on its core business.James Cortada, Senior Research Fellow, Charles Babbage Institute, University of MinnesotaLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1103312019-01-25T13:08:59Z2019-01-25T13:08:59ZIBM launches commercial quantum computing – we’re not ready for what comes next<figure><img src="https://images.theconversation.com/files/255551/original/file-20190125-108364-1agoxld.jpg?ixlib=rb-1.1.0&rect=0%2C400%2C2700%2C1992&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">IBM's quantum computer, Q System One.</span> <span class="attribution"><span class="source">IBM</span></span></figcaption></figure><p>IBM <a href="https://newsroom.ibm.com/2019-01-08-IBM-Unveils-Worlds-First-Integrated-Quantum-Computing-System-for-Commercial-Use">recently unveiled</a> what it claimed was the world’s first commercial quantum computer. While the announcement of the Q System One wasn’t scientifically groundbreaking, the fact that IBM sees this as a commercial product that organisations (if not individuals) will want to use is an important breakthrough.</p>
<p>IBM has taken a prototype technology that has existed in the lab for over 20 years and launched it in the real world. In doing so, it marks an important step towards the next generation of computing technology becoming ubiquitous, something the world isn’t yet ready for. In fact, <a href="https://www.youtube.com/watch?v=jkcTHhqk_lE">quantum may well</a> prove to be the most disruptive technology of the information age.</p>
<p><a href="https://theconversation.com/get-used-to-it-quantum-computing-will-bring-immense-processing-possibilities-46420">Quantum computers</a> work by exploiting the weird phenomena described by quantum physics, such as the ability of an object to be, in a very real sense, in more than one place at the same time. Doing so enables them to solve problems in seconds that would take the age of the universe to solve on even the most powerful of today’s supercomputers. </p>
<h2>Too expensive?</h2>
<p>The <a href="https://www.ncsc.gov.uk/whitepaper/quantum-key-distribution">one criticism</a> typically levelled against quantum technologies is that they are “too expensive”, and will continue to be so even as they become more readily available. This is certainly the case today. IBM isn’t making its quantum computer available to buy but rather to access over the internet. But this shows the technology is on its way to becoming affordable in the near future.</p>
<p>Quantum computers are very easily disrupted by changes in the environment and take a long time to reset. So IBM has developed a <a href="https://www.theverge.com/2019/1/8/18171732/ibm-quantum-computer-20-qubit-q-system-one-ces-2019">protective system</a> to keep the Q System One stable enough to perform tasks for commercial customers, which are <a href="https://newsroom.ibm.com/2019-01-08-ExxonMobil-and-Worlds-Leading-Research-Labs-Collaborate-with-IBM-to-Accelerate-Joint-Research-in-Quantum-Computing">likely to include</a> large companies, universities and research organisations that want to run complex simulations. As a result, IBM believes it has a commercially viable product, and is putting its money where its mouth is.</p>
<p>History shows us that technologies can experience rapid growth in use and capability once they become viable commercial products. After conventional digital computers became commercially viable, they experienced an exponential explosion referred to commonly as <a href="https://theconversation.com/moores-law-is-50-years-old-but-will-it-continue-44511">Moore’s Law</a>. Roughly every two years, computers have doubled in power while their size and costs have fallen by half. This “law” is really just a trend that has been made possible, in part, by market forces.</p>
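<p>The compounding behind that trend is easy to underestimate; a back-of-the-envelope sketch (my illustration, not the article’s) shows what steady doubling every two years implies:</p>

```python
# Back-of-the-envelope Moore's Law: relative computing power after a given
# number of years, assuming a doubling roughly every two years.

def moore_factor(years: float, doubling_period: float = 2.0) -> float:
    """Growth factor after `years` of steady doubling."""
    return 2 ** (years / doubling_period)

print(round(moore_factor(10)))  # 32 -> about 32 times the power in a decade
print(round(moore_factor(50)))  # 33554432 -> tens of millions of times in 50 years
```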
<p>The IBM announcement does not guarantee that quantum computers will now experience Moore’s Law-style exponential growth of their own. It does, however, make that explosion likelier and sooner.</p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/LAA0-vjTaNY?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
</figure>
<p>In the long run, this means better, more advanced technology overall, for all of us. Quantum measurement devices are <a href="https://phys.org/news/2018-11-probing-quantum-physics-macroscopic-scale.html">more accurate</a>. Quantum imaging devices can produce <a href="http://iopscience.iop.org/article/10.1088/2040-8978/18/7/073002">better pictures</a>. Quantum batteries can <a href="https://physicsworld.com/a/quantum-battery-could-get-a-boost-from-entanglement/">charge faster</a>. Quantum cybersecurity offers <a href="https://www.information-age.com/quantum-cryptography-123477496/">better protection</a>. And quantum computers can <a href="https://www.scottaaronson.com/blog/?p=208">solve problems</a> no classical computer could ever hope to. </p>
<p>These are just the tip of the iceberg. In the short to mid-term, however, this also means we have something of an approaching crisis.</p>
<h2>Skills crisis</h2>
<p>Quantum technologies are disruptive, and more so in cybersecurity than any other field. Once large-scale quantum computers become available (which at the current rate could take another ten to 15 years), they could be used to access pretty much every secret on the internet. Online banking, private emails, passwords and secure chats would all be opened up. You would be able to impersonate any person or web page online.</p>
<p>This is because the information locks we use to secure privacy and authentication online <a href="https://theconversation.com/will-superfast-quantum-computers-mean-the-end-of-unbreakable-encryption-64402">are like butter</a> to a quantum computer’s hot knife. Quantum technology is disruptive in many other areas as well. If your business decides not to “go quantum” and your competitor or adversary does, you may well be at a strong disadvantage. </p>
<p>As the technology landscape realigns itself, it is quite likely that many tech professionals will see their skills become obsolete very quickly. At the same time, companies may find themselves scrambling to hire expertise that barely exists.</p>
<p>When geopolitical and market forces realign, it’s common for people in business to say everyone now has to learn a new language. For example, as China has grown in power and influence, it is not uncommon in business communities to <a href="https://theconversation.com/boris-is-right-its-time-for-us-to-learn-chinese-19354">hear the phrase</a> “we’ll all have to learn Mandarin now”. Perhaps it’s time for all of us to start learning to speak quantum.</p><img src="https://counter.theconversation.com/content/110331/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Carlos Perez-Delgado is a consultant at QandI (<a href="http://www.qandi.co.uk">www.qandi.co.uk</a>). </span></em></p>Quantum computers are set to revolutionise technology, but very few people know how to use them.Carlos Perez-Delgado, Lecturer in Computing, University of KentLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/999062018-07-16T10:39:53Z2018-07-16T10:39:53ZTrade war could chill China’s growing investment in US economy<figure><img src="https://images.theconversation.com/files/227706/original/file-20180715-27042-fhkag9.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">The U.S. is the biggest destination for Chinese foreign investment.</span> <span class="attribution"><span class="source">Jason Lee/Pool Photo via AP</span></span></figcaption></figure><p>The U.S. and China are currently engaged in an <a href="https://www.brookings.edu/blog/unpacked/2018/07/12/unpacked-the-us-china-trade-war/">ever-escalating trade war</a> with no end in sight. While the focus of the dispute has centered on tariffs, the consequences <a href="https://www.independent.co.uk/news/business/analysis-and-features/trade-war-explained-tariffs-donald-trump-us-china-imports-exports-a8434626.html">are expected to spill</a> well beyond imports and exports to other aspects of the countries’ complex relationship. </p>
<p>One such area is what economists call foreign direct investment, in which companies invest in businesses in another country. The United States’ ability to draw investments from around the world has been a <a href="http://www.areadevelopment.com/LocationUSA/2017-US-inward-investment-guide/importance-of-FDI-to-US-economy.shtml">significant driver</a> of its economic growth. Indeed, the U.S. was the <a href="https://ofii.org/sites/default/files/FDIUS%202017.pdf">top destination</a> for foreign investment in 2016, as it usually is. </p>
<p>China’s investments in the U.S., however, remain relatively paltry, despite the country’s growing clout on the world stage. And while most have been small and low-profile, a few bigger deals have made headlines and even been blocked over “national security” concerns. </p>
<p><a href="https://scholar.google.com/citations?user=eubX-aYAAAAJ&hl=en&oi=ao">I research</a> the international political economy of China’s rise. Even though most Chinese investment in the U.S. has little to do with national security, I believe the current tense environment will put a chill on Chinese-American deals – with severe long-term consequences. </p>
<figure class="align-center ">
<img alt="" src="https://images.theconversation.com/files/227686/original/file-20180715-27027-7xzovp.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/227686/original/file-20180715-27027-7xzovp.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/227686/original/file-20180715-27027-7xzovp.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/227686/original/file-20180715-27027-7xzovp.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/227686/original/file-20180715-27027-7xzovp.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/227686/original/file-20180715-27027-7xzovp.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/227686/original/file-20180715-27027-7xzovp.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">Tesla CEO Elon Musk greets new owners of his company’s Model S sedans in Beijing in 2014. China’s Tencent took a 5 percent stake in Tesla in 2017.</span>
<span class="attribution"><span class="source">AP Photo/Ng Han Guan</span></span>
</figcaption>
</figure>
<h2>A snapshot of China FDI in the US</h2>
<p>The reality is that the vast majority of the <a href="https://www.aei.org/wp-content/uploads/2018/01/Chinese-Investment-Jan-2018.pdf">232 investments</a> made by Chinese companies in the United States since 2005 have little to do with national security. </p>
<p>A typical example is <a href="https://www.theguardian.com/technology/2004/dec/08/business.china">Beijing-based Lenovo’s acquisition</a> of IBM’s personal computer business in 2004 for US$1.75 billion, which generated little fanfare or objection. Or consumer electronics company <a href="https://www.scmp.com/business/companies/article/2116486/chinas-haier-has-plan-help-continue-turnaround-ge-appliances">Haier’s purchase</a> of General Electric’s home appliance unit in 2016 for $5.6 billion, again without a fuss. </p>
<p>In more recent years, Chinese companies have taken stakes in some well-known Silicon Valley companies. For example, last year, Chinese tech and media investment firm Tencent <a href="https://www.reuters.com/article/us-snap-tencent-stake/chinas-tencent-takes-12-percent-stake-in-snap-as-shares-plunge-idUSKBN1D81G3">acquired</a> a 12 percent stake in the owner of the messaging app Snapchat and <a href="https://www.bloomberg.com/news/articles/2017-03-28/tencent-buys-1-8-billion-tesla-stake-ahead-of-musk-s-model-3">5 percent</a> of Elon Musk’s Tesla. Also in 2017, China’s sovereign wealth fund invested $100 million in room-sharing service Airbnb. </p>
<p>Overall, China remains a minor U.S. investor – and the data suggest the president’s <a href="https://abcnews.go.com/Politics/10-times-trump-attacked-china-trade-relations-us/story?id=46572567">rhetoric on the campaign trail</a> may have already had a disruptive impact. Last year, China <a href="https://www.aei.org/wp-content/uploads/2018/01/Chinese-Investment-Jan-2018.pdf">invested</a> $24 billion in the U.S., down from $54 billion in 2016, excluding deals under $100 million in size. </p>
<p><iframe id="pnI8Q" class="tc-infographic-datawrapper" src="https://datawrapper.dwcdn.net/pnI8Q/1/" height="400px" width="100%" style="border: none" frameborder="0"></iframe></p>
<p>While that’s a sharp rise from just $5 billion a decade ago, it’s barely a drop in the bucket for the U.S. economy. <a href="https://ofii.org/sites/default/files/FDIUS%202017.pdf">China’s cumulative investments</a> in 2016 made up less than 2 percent of all $3.7 trillion invested in the U.S., ranking it 11th, a fraction of the U.K.’s $598 billion and Canada’s $454 billion, the top two sources of funding.</p>
<p>California and New York alone <a href="https://www.bloomberg.com/news/articles/2017-04-25/chinese-investment-creates-and-protects-u-s-jobs-rhodium-says">received</a> the lion’s share of China’s $171 billion in investments from 2005 to 2017, or 51 percent. All but 14 states received at least one investment in that period. </p>
<p><iframe id="OSbMV" class="tc-infographic-datawrapper" src="https://datawrapper.dwcdn.net/OSbMV/3/" height="400px" width="100%" style="border: none" frameborder="0"></iframe></p>
<p>By sector, the biggest chunk has gone into property investments – such as prime real estate in New York along Park Avenue – which tallied $26 billion, or 15 percent, in the period. Financial firms such as BlackRock took in the next largest share of 14 percent, while 13 percent went to technology businesses like IBM and Motorola. </p>
<h2>National security and politics</h2>
<p>Two of the reasons Chinese investment in the U.S. isn’t higher are national security and politics. A number of high-profile deals have rung alarm bells among U.S. officials and politicians and ended up getting killed as a result. </p>
<p>For example, in 2003, Hong Kong conglomerate Hutchison Whampoa <a href="https://www.wsj.com/articles/SB105168669140493600">withdrew</a> from a joint bid for fiber-optic carrier Global Crossing after the <a href="https://fas.org/sgp/crs/natsec/RL33388.pdf">Committee on Foreign Investment</a> opened an investigation of the deal as some defense officials grew concerned the company’s chairman was too close to Chinese government officials. </p>
<p>Two years later, Chinese oil producer CNOOC <a href="https://www.wsj.com/articles/SB112295744495102393">dropped its effort</a> to buy U.S. rival Unocal for $18.5 billion. In this case, it was lawmakers in Congress who managed to scuttle the deal. CNOOC <a href="https://www.wsj.com/articles/SB112298888643902543">blamed</a> a “political environment.”</p>
<p>The Committee on Foreign Investment, <a href="https://fas.org/sgp/crs/natsec/RL33388.pdf">established</a> by President Gerald Ford in 1975, has the power to veto investments if they might damage U.S. national security. Proposed Chinese investments get reviewed more often than those from any other country. Though the launch of an investigation is often enough to stop a deal – as was the case with Hutchison – the committee has only vetoed five deals, four of which involved China. </p>
<p>One came in 2012, when <a href="https://www.nytimes.com/2012/09/29/us/politics/chinese-company-ordered-to-give-up-stake-in-wind-farms-near-navy-base.html">President Barack Obama cited the committee’s recommendation</a> as he ordered Ralls Corp., a U.S. company owned by Chinese nationals, to divest its interests in wind turbines being built close to a Navy military site in Oregon. It was the first time the power was used since 1990, when President George H.W. Bush blocked the sale of an American aircraft manufacturing company to a Chinese agency. </p>
<p>And last year, President Donald Trump <a href="https://www.reuters.com/article/us-lattice-m-a-canyonbridge-trump/trump-bars-chinese-backed-firm-from-buying-u-s-chipmaker-lattice-idUSKCN1BO2ME">prevented</a> Chinese investment firm Canyon Bridge Capital Partners from acquiring U.S. chipmaker Lattice Semiconductor. </p>
<p>And the president <a href="https://www.nytimes.com/2018/06/27/us/politics/cfius-expansion-trump.html">supports a bipartisan bill</a> in Congress that would grant the Committee on Foreign Investments even more power. </p>
<h2>FDI as foreign policy</h2>
<p>While China may not make up a significant portion of the U.S. total, its spending there makes up the <a href="https://www.aei.org/wp-content/uploads/2018/01/China-Tracker-Jan2018.pdf">largest share</a> of Chinese outbound FDI by country. </p>
<p>Since 2005, China has invested $171 billion of its $1.87 trillion in total foreign investment in the U.S. </p>
<p><a href="https://www.cambridge.org/core/journals/business-and-politics/article/dissuasive-effect-of-us-political-influence-on-chinese-fdi-during-the-going-global-policy-era/34345FFDB008BD612F7469857CBCA10C">My own research</a> into China’s investments shows that state-owned companies are very sensitive to the government’s foreign policy goals. An agency known as the <a href="https://www.bloomberg.com/news/videos/2018-04-12/sasac-s-xiao-on-soe-reform-china-soe-investment-in-u-s-video">State-owned Assets Supervision and Administration Commission</a> of the State Council coordinates all foreign investments by major Chinese businesses. </p>
<p>Any drop in investments to the U.S. will probably be offset by more spending in other destinations, especially those countries that are part of the <a href="https://theconversation.com/us/topics/one-belt-one-road-33049">One Belt, One Road</a> initiative such as Australia, Singapore and Vietnam. </p>
<h2>Going forward</h2>
<p>In fact, the current trade dispute between the U.S. and China will most likely lead to less Chinese investment, as deals encounter increased scrutiny and resistance. </p>
<p>The president has said he <a href="https://www.express.co.uk/news/world/986966/trump-news-trade-war-us-china-tariffs">launched</a> the trade war because Chinese companies <a href="https://www.wsj.com/articles/china-started-the-trade-war-not-trump-1521797401">have a track record</a> of “stealing” Western technology and not respecting intellectual property. Hence, the administration will likely block investments that fit this pattern or threaten national security.</p>
<p>But politics will also play a role as members of Congress and others <a href="https://www.bbc.com/news/av/world-us-canada-42405458/trump-china-and-russia-rivals-in-new-era-of-competition">regard China</a> warily as a growing rival that must be confronted. One risk is that <a href="http://www.pewglobal.org/2013/07/18/chapter-3-attitudes-toward-china/">anti-China sentiment</a> in the U.S. increases and makes it harder for the country to use “soft power” via cultural and economic means to achieve its ends – which is preferable to hard power at the end of a bayonet. </p>
<p>It’s unfortunate because <a href="https://dash.harvard.edu/bitstream/handle/1/3450062/helpman_tradewars.pdf?sequence=4">years of international political economy research</a> suggest trade wars and discouraging investment <a href="http://www3.nccu.edu.tw/%7Elorenzo/Ikenberry%20Rise%20of%20China.pdf">are exactly the wrong ways</a> for the U.S. to deal with China’s rise. The U.S. can find other strategies to challenge any unfair trading or business practices without jeopardizing good economic relations, which <a href="https://www.cambridge.org/core/books/renegotiating-the-world-order/B0878F74F44B1F08F3C7535019FBAEE3">have always been</a> the best way to prevent clashes and even war among great powers. </p>
<p>Beyond that, deeper business ties lead to better relations and stronger economies. Economic interdependence raises the costs of direct confrontation, leading to a more peaceful international system.</p>
<p>What concerns me about the current trade war is that it could make geopolitical clashes between China and the U.S. more intense and more frequent in the long run.</p><img src="https://counter.theconversation.com/content/99906/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Francisco Urdinez does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>Chinese investment in the US has never been high, but the ongoing trade war could dampen it further, with significant long-term repercussions.Francisco Urdinez, Professor of International Political Economy, Universidad Católica de ChileLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/987832018-06-26T12:59:20Z2018-06-26T12:59:20ZIBM’s debating computer: an AI expert’s verdict<figure><img src="https://images.theconversation.com/files/224934/original/file-20180626-112604-1rfmhep.jpg?ixlib=rb-1.1.0&rect=14%2C206%2C1183%2C591&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">
</span> <span class="attribution"><span class="source">IBM</span></span></figcaption></figure><p>The competition got underway when the computer’s female voice, a mix of Amazon’s Alexa and Stephen Hawking’s communicator, spoke to its human opponent: “Hello Noa. We meet again.”</p>
<p>I was the only academic invited into the crowded room of 50 or so journalists to witness the <a href="https://www.ibm.com/blogs/research/2018/06/ai-debate/">recent contest</a> between the artificial intelligence of IBM’s Project Debater and Israeli debate champions Noa Ovadia and Dan Zafrir. The opening gambit produced titters and eye-rolling from the audience. I was more of an eye-roller – I’m not convinced that obvious pre-scripted material really helps the cause of showcasing AI technologies. What followed, though, was undeniably an impressive feat of engineering – but it would be too easy to conclude that sci-fi AI is now just around the corner.</p>
<p>Project Debater follows <a href="https://ai.googleblog.com/2018/05/duplex-ai-system-for-natural-conversation.html">the announcement</a> that Google has developed an AI technology known as Duplex that can conduct natural-sounding phone conversations in order to book appointments and carry out other tasks. Both projects look like they involve AI that is nearing human-level competence, that could pass the <a href="http://www.psych.utoronto.ca/users/reingold/courses/ai/turing.html">Turing test</a> and, perhaps, imminently dominate the world. But this is an illusion born of the careful marketing of these huge corporations. The reality is that we’re still in the earliest days of understanding AI.</p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/UeF_N1r91RQ?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
</figure>
<p>After the initial crowd-pleasing tactics, IBM’s computer produced a four-minute speech, on the fly, on a topic selected at random from a list of 40 on which it hadn’t already been trained to debate. It did this by identifying, classifying, selecting and then stitching together snippets from a library of 300m news articles. The result was largely grammatically correct, semantically on message and more or less coherent. The system was then able to listen and respond to a similar statement from its human opponent. </p>
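<p>A toy sketch of that retrieve-and-stitch idea might look like the following Python fragment. The library, the topic, and the word-overlap scoring are invented for illustration – IBM’s actual classifiers and library are far more sophisticated and are not public – but the shape of the pipeline is the same: score each snippet against the topic, then stitch the best ones together.</p>

```python
# Hypothetical toy version of a retrieve-and-stitch pipeline: score each
# sentence in a tiny "library" by word overlap with the debate topic,
# then stitch the best-scoring snippets into a short statement.

LIBRARY = [
    "Subsidising preschool improves outcomes for disadvantaged children.",
    "The stock market fell sharply on Tuesday.",
    "Studies show preschool attendance boosts later earnings.",
    "A new smartphone was released this week.",
]

def score(sentence, topic_words):
    """Crude relevance score: how many topic words appear in the sentence."""
    words = set(sentence.lower().replace(".", "").split())
    return len(words & topic_words)

def build_statement(topic, k=2):
    """Stitch together the k snippets most relevant to the topic."""
    topic_words = set(topic.lower().split())
    ranked = sorted(LIBRARY, key=lambda s: score(s, topic_words), reverse=True)
    return " ".join(ranked[:k])

print(build_statement("we should subsidise preschool"))
```

<p>Even this crude version shows why the output can drift thematically: each snippet is chosen independently, so nothing enforces a coherent argument across the stitched-together whole.</p>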
<p>It’s maybe worth reflecting on just how difficult these tasks
are. Holding a conversation is enormously challenging once you go beyond
very structured, tightly controlled domains. <a href="https://theconversation.com/deep-learning-and-neural-networks-77259">Deep learning systems</a>, inspired by the human brain, are trying to map whatever the human says to a relatively small number of possible moves with a small number of possible values. Google Duplex still works within a specific domain, like booking dinner, and so can be very robust.</p>
<p>Having an argument is even more demanding. It is remarkably difficult to build an algorithm to reliably determine whether a given sentence supports your position or not. On one level, the IBM team nailed it, with Project Debater producing its coherent and persuasive four-minute statement. I was also very impressed that the computer’s grammatical structure was so good, especially as each sentence may have drawn from multiple articles in the library.</p>
<h2>Technology still limited</h2>
<p>Yet as the speech went on, I got the distinct sense that the thematic structure was breaking down, with the flow flitting between topics. The machine finished bang on the four-minute mark with a nice rhetorical flourish of anticipating and attacking the opponent’s argument (known <a href="http://rhetoric.byu.edu/Figures/P/procatalepsis.htm">as procatalepsis</a>). But later, the computer’s two-minute rebuttal to its human opponent sounded increasingly like mere repetition.</p>
<p>Project Debater has achieved significant new advances in areas such as searching texts for arguments <a href="http://www.i3s.unice.fr/%7Evillata/tutorialIJCAI2016.html">(argument mining)</a>, coupled with technical solutions such as grammatical repair, which glues sentence parts together. But, as an orator, the computer is still making its first tiny squeaks. </p>
<p>The system has only the most rudimentary notion of argument structure and so often deviates from the main theme. It pays no heed to its audience or its opponent, and has no way of adapting its language or exploiting any of the hundreds of clever rhetorical techniques that <a href="http://rhetoric.byu.edu/">help win over audiences</a>.</p>
<figure class="align-center ">
<img alt="" src="https://images.theconversation.com/files/224904/original/file-20180626-112634-1kdgju1.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/224904/original/file-20180626-112634-1kdgju1.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=397&fit=crop&dpr=1 600w, https://images.theconversation.com/files/224904/original/file-20180626-112634-1kdgju1.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=397&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/224904/original/file-20180626-112634-1kdgju1.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=397&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/224904/original/file-20180626-112634-1kdgju1.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=499&fit=crop&dpr=1 754w, https://images.theconversation.com/files/224904/original/file-20180626-112634-1kdgju1.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=499&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/224904/original/file-20180626-112634-1kdgju1.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=499&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">Computers that can really argue are a long way away.</span>
<span class="attribution"><span class="source">Shutterstock</span></span>
</figcaption>
</figure>
<p>Neither IBM nor Google are claiming, or even intimating, that they’ve solved all AI problems, or built machines with human-level performance. In both cases, the programmers have specific goals in mind that more or less lead directly to commercial technology.</p>
<p>The <a href="https://theconversation.com/how-ai-can-make-us-better-at-arguing-85938">real value of argument technology</a> as a whole is going to be delivered not in the debating chamber but in applications in which AI systems can contribute to human decision-making teams. Whether in the <a href="http://www.ai-policing.nl/">police incident room</a>, the <a href="https://cispaces.org/">intelligence analysis bunker</a> or <a href="https://www.bbc.co.uk/taster/pilots/evidence-toolkit-moral-maze">the classroom</a>, it can only be a good thing to increase the robustness of evidence-based decision making by introducing AI systems that can contribute to the conversation. They will be able to add new information or critique human reasoning.</p>
<p>Project Debater is a valuable step forward toward this goal, and the broader aim of building AI that can really understand and respond to us. But we are most certainly not on the verge of seeing AI systems out-debating their human counterparts. Today’s AI technology is as far from these scenarios as the Romans’ experiments with steam power were from the industrial revolution.</p><img src="https://counter.theconversation.com/content/98783/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Chris Reed receives funding from EPSRC and is a collaborator in a research project with IBM in the UK but has no connection with the Debater project or IBM Research Haifa where the Debater work is run. </span></em></p>Project Debater looks like machines are ready to understand humans, but the reality is we’re still in the earliest days of AI.Chris Reed, Professor of Computer Science and Philosophy, University of DundeeLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/944372018-04-05T14:44:14Z2018-04-05T14:44:14ZTo drive AI forward, teach computers to play old-school text adventure games<figure><img src="https://images.theconversation.com/files/214305/original/file-20180411-543-1dcho2z.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Ready player one?</span> <span class="attribution"><a class="source" href="https://commons.wikimedia.org/wiki/File:Colossal_Cave_Adventure_on_VT100_terminal.jpg#/media/File:Colossal_Cave_Adventure_on_VT100_terminal.jpg">Wikimedia</a></span></figcaption></figure><p>Games have long been used as test beds and benchmarks for artificial intelligence, and there has been no shortage of achievements in recent months. Google DeepMind’s <a href="https://theconversation.com/googles-latest-go-victory-shows-machines-are-no-longer-just-learning-theyre-teaching-78410">AlphaGo</a> and <a href="https://www.theregister.co.uk/2017/12/19/poker_bot_libratus_ai/">poker bot Libratus</a> from Carnegie Mellon University have both beaten human experts at games that have traditionally been hard for AI – some 20 years after IBM’s DeepBlue achieved the same feat <a href="https://www.theguardian.com/theguardian/2011/may/12/deep-blue-beats-kasparov-1997">in chess</a>. </p>
<p>Games like these have the attraction of clearly defined rules; they are relatively simple and cheap for AI researchers to work with, and they provide a variety of cognitive challenges at any desired level of difficulty. By inventing algorithms that play them well, researchers hope to gain insights into the mechanisms needed to function autonomously. </p>
<p>With the arrival of the latest techniques in AI and machine learning, attention is <a href="https://project.dke.maastrichtuniversity.nl/cig2018/?page_id=255">now shifting</a> to visually detailed computer games – including the 3D shooter Doom, <a href="https://github.com/mgbellemare/Arcade-Learning-Environment">various 2D Atari games</a> such as Pong and Space Invaders, and the real-time strategy game StarCraft. </p>
<p>This is all certainly progress, but a key part of the bigger AI picture is being overlooked. Research has prioritised games in which all the actions that can be performed are known in advance, be it moving a knight or firing a weapon. The computer is given all the options from the outset and the focus is on how well it chooses between them. The problem is that this disconnects AI research from the task of making computers genuinely autonomous. </p>
<h2>Banana skins</h2>
<p>Getting computers to determine which actions even exist in a given context presents conceptual and practical challenges which games researchers have barely attempted to resolve so far. The “monkey and bananas” problem is one example of a longstanding AI conundrum in which no recent progress has been made. </p>
<figure class="align-right zoomable">
<a href="https://images.theconversation.com/files/213170/original/file-20180404-189807-zzpsqv.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/213170/original/file-20180404-189807-zzpsqv.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=237&fit=clip" srcset="https://images.theconversation.com/files/213170/original/file-20180404-189807-zzpsqv.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=690&fit=crop&dpr=1 600w, https://images.theconversation.com/files/213170/original/file-20180404-189807-zzpsqv.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=690&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/213170/original/file-20180404-189807-zzpsqv.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=690&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/213170/original/file-20180404-189807-zzpsqv.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=867&fit=crop&dpr=1 754w, https://images.theconversation.com/files/213170/original/file-20180404-189807-zzpsqv.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=867&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/213170/original/file-20180404-189807-zzpsqv.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=867&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Headscratcher.</span>
<span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/monkey-man-holding-banana-over-colorful-283183991?src=UaQrgHzv4Gm6OcQmcC2fzA-1-42">Luis Molinero</a></span>
</figcaption>
</figure>
<p>The problem was <a href="https://www.sciencedirect.com/science/article/pii/S0004370210001827">originally posed</a> by John McCarthy, one of the founding fathers of AI, in 1963: there is a room containing a chair, a stick, a monkey and a bunch of bananas hanging on a ceiling hook. The task is for a computer to come up with a sequence of actions to enable the monkey to acquire the bananas. </p>
<p>McCarthy made a key distinction between two aspects of this task for artificial intelligence: physical feasibility – determining whether a particular sequence of actions is physically realisable – and epistemic, or knowledge-related, feasibility – determining which possible actions for the monkey actually exist. </p>
<p>Determining what is physically feasible for the monkey is very easy for a computer if it is told all the possible actions in advance – “climb on chair”, “wave stick” and so forth. A simple program that instructs the computer to go through all the possible sequences of actions one by one will quickly arrive at the best solution. </p>
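<p>As a concrete illustration, here is a minimal Python sketch of that brute-force search, using a toy, hypothetical encoding of the room – the action names and the little world model are invented for illustration:</p>

```python
from itertools import product

# Toy encoding of McCarthy's monkey-and-bananas puzzle. The crucial
# simplification: all possible actions are given in advance, so a
# brute-force search over action sequences is enough.

ACTIONS = ["push chair under bananas", "climb on chair", "grab bananas"]

def succeeds(sequence):
    """Simulate a sequence of actions; True if the monkey gets the bananas."""
    chair_under_bananas = False
    on_chair = False
    for action in sequence:
        if action == "push chair under bananas":
            chair_under_bananas = True
        elif action == "climb on chair":
            # Climbing only helps if the chair is already in place.
            on_chair = chair_under_bananas
        elif action == "grab bananas":
            if on_chair:
                return True
    return False

def solve(max_length=4):
    # Enumerate ever-longer sequences; the first success is a shortest solution.
    for n in range(1, max_length + 1):
        for seq in product(ACTIONS, repeat=n):
            if succeeds(seq):
                return list(seq)
    return None

print(solve())  # a shortest plan: push the chair, climb on it, grab
```

<p>The search is trivial precisely because the three actions were handed to the program up front – which is McCarthy’s point about the harder, epistemic half of the problem.</p>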
<p>If the computer has to first determine which actions are even possible, however, it is a much tougher challenge. It raises questions about how we represent knowledge, the necessary and sufficient conditions of knowing something, and how we know when enough knowledge has been acquired. In highlighting these problems, McCarthy <a href="https://dl.acm.org/citation.cfm?id=216000">said</a>:</p>
<blockquote>
<p>Our ultimate objective is to make programs that learn from their experience as effectively as humans do.</p>
</blockquote>
<p>Until computers can tackle problems without any predetermined description of possible actions, this objective can’t be achieved. It is unfortunate that AI researchers are neglecting this: not only are these problems harder and more interesting, they look like a prerequisite for making further meaningful progress in the field. </p>
<h2>Text appeal</h2>
<p>For a computer operating autonomously in a complex environment, it is impossible to describe in advance how best to manipulate – or even characterise – the objects there. Teaching computers to get around these difficulties immediately leads to deep questions about learning from previous experience.</p>
<p>Rather than focusing on games like Doom or StarCraft, where it is possible to avoid this problem, a more promising test for modern AI could be the humble text adventure from the 1970s and 1980s. </p>
<p>In the days before computers had sophisticated graphics capabilities, games like Colossal Cave and Zork were popular. Players were told about their environment by messages on the screen:</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/213173/original/file-20180404-189816-e49be.png?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/213173/original/file-20180404-189816-e49be.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/213173/original/file-20180404-189816-e49be.png?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=376&fit=crop&dpr=1 600w, https://images.theconversation.com/files/213173/original/file-20180404-189816-e49be.png?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=376&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/213173/original/file-20180404-189816-e49be.png?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=376&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/213173/original/file-20180404-189816-e49be.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=473&fit=crop&dpr=1 754w, https://images.theconversation.com/files/213173/original/file-20180404-189816-e49be.png?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=473&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/213173/original/file-20180404-189816-e49be.png?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=473&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Picture this.</span>
</figcaption>
</figure>
<p>They had to respond with simple instructions, usually in the form of a verb or a verb plus a noun – “look”, “take box” and so on. Part of the challenge was to work out which actions were possible and useful and to respond accordingly. </p>
<p>A good challenge for modern AI would be to take on the role of a player in such an adventure. The computer would have to make sense of the text descriptions on the screen and respond to them with actions, using some predictive mechanism to determine their likely effect. </p>
<p>More sophisticated behaviours on the part of the computer would involve exploring the environment, defining goals, making goal-oriented action choices and solving the various intellectual challenges typically required to progress. </p>
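As a rough illustration of the loop such an agent would run, here is a minimal sketch in Python. The toy one-room game, the stop-word list and the verb set are all hypothetical simplifications for this article, not part of any real competition entry:

```python
# A minimal sketch of a Zork-style agent: it extracts candidate nouns
# from the room description and systematically tries verb-noun commands,
# remembering which ones it has already attempted. (Toy example only.)

STOP_WORDS = {"you", "are", "in", "a", "the", "there", "is", "here", "dark"}

class TinyAdventure:
    """A one-room toy game with the classic verb-noun interface."""
    def __init__(self):
        self.inventory = set()

    def look(self):
        items = "" if "lamp" in self.inventory else " There is a lamp here."
        return "You are in a dark cave." + items

    def do(self, command):
        if command == "take lamp" and "lamp" not in self.inventory:
            self.inventory.add("lamp")
            return "Taken."
        if command == "look":
            return self.look()
        return "I don't understand that."

def candidate_nouns(description):
    # Crude heuristic: any non-stop-word might name a manipulable object.
    return [w.strip(".,") for w in description.lower().split()
            if w.strip(".,") not in STOP_WORDS]

def agent_step(game, tried):
    """Return the next untried verb-noun command and the game's response."""
    for noun in candidate_nouns(game.look()):
        for verb in ("take", "open", "light"):
            cmd = f"{verb} {noun}"
            if cmd not in tried:
                tried.add(cmd)
                return cmd, game.do(cmd)
    return "look", game.look()

game, tried = TinyAdventure(), set()
for _ in range(10):
    cmd, response = agent_step(game, tried)
print("lamp" in game.inventory)  # the agent has found and taken the lamp
```

A real entry would replace the crude noun heuristic with learned language understanding and the blind trial of verbs with a predictive model of each action's effect, which is exactly where the hard research questions lie.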
<p>How well modern AI methods of the kind promoted by tech giants like IBM, Google, Facebook or Microsoft would fare in these text adventures is an open question – as is how much specialist human knowledge they would require for each new scenario. </p>
<p>To measure progress in this area, for the past two years we <a href="http://atkrye.github.io/IEEE-CIG-Text-Adventurer-Competition/2018/01/16/announceThirdYear/">have been running a competition</a> at the IEEE Conference on Computational Intelligence and Games, which <a href="https://project.dke.maastrichtuniversity.nl/cig2018/">this year takes place in Maastricht</a> in the Netherlands in August. Competitors submit entries in advance, and can use the AI technology of their choice to build programs that can play these games by making sense of a text description and outputting appropriate text commands in return. </p>
<p>In short, researchers need to reconsider their priorities if AI is to keep progressing. If unearthing the discipline’s neglected roots turns out to be fruitful, the monkey may finally get his bananas after all.</p><img src="https://counter.theconversation.com/content/94437/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>The authors do not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.</span></em></p>It’s time programmers looked out old computer text adventures like Zork and Colossal Cave from the 1970s and 1980s.Jerry Swan, Senior Research Fellow, University of YorkHendrik Baier, Research Associate for Artificial Intelligence and Data Analytics, University of YorkTimothy Atkinson, Doctoral Researcher, University of YorkLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/858102017-11-16T01:41:16Z2017-11-16T01:41:16ZHow Silicon Valley industry polluted the sylvan California dream<figure><img src="https://images.theconversation.com/files/192293/original/file-20171027-2402-15ejnas.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Aerial view of San Jose, California, 2016.</span> <span class="attribution"><a class="source" href="https://www.flickr.com/photos/gordon-s/29670306746/in/photolist-ZHvaXX-Uo5oeM-RcXnY3-UjXT4J-N7tLMF-T9Tnno-Xea8Ym-McS5uS-Ui2ybJ-qEMBub-PfD9EU-e9RRfi-VWgfbi-QiHUXk-S4wGvz-LzFHTp-S64S7H-VWge8X-ABtoak-qg77S5-URsuhd-SrcUo8-eUPCUc-AePQJj-qzF7PW-Vy2pDG-pjpyPc-BE9Ed4-Rvoc4U-szHCZC-QBgQpX-Hg3Lgy-PtWFnc-Gjc4CG-PJMPp1-Liz43E-TTfx1R-ML7E2u-Sht5zW-eTpurL-TgKPJq-S64THD-8hxSP2-8hBsFA-Jy7ccp-TLXcao-pjb9gm-hskJKv-ACsLw6-rqdGgk">Gordon-Shukwit</a>, <a class="license" href="http://creativecommons.org/licenses/by-nc-nd/4.0/">CC BY-NC-ND</a></span></figcaption></figure><p>On Labor Day 1956, a caravan of moving trucks wound their way into Santa Clara County, just south of San Francisco, carrying the possessions of 600 families and equipment for the missile and space labs of the Lockheed Corporation. One month later, Lockheed’s Sunnyvale campus opened for business. 
Many of the arriving families were relocating to Sunnyvale from the company’s facility in Burbank, in Southern California.</p>
<p>The draw included good jobs in the emerging businesses of electronics research and development, as well as manufacturing of semiconductors and other electronic components for machinery and computers. Affordable housing, a pastoral landscape and a pleasant environment proved very attractive for newcomers. Local boosters, corporate executives and new residents alike <a href="https://books.google.com/books?id=KwvEBAAAQBAJ&pg=PT54&dq=Margaret+O%27Mara+environmental+contradictions&hl=en&sa=X&ved=0ahUKEwjDxbPal7LXAhUS3YMKHbuvBiwQ6AEIKDAA#v=onepage&q&f=false">envisioned a modern future</a> in stark contrast with the declining dirty urban industrial model of the Northeast and Midwest. </p>
<p>This type of industrial work and manufacturing didn’t need smokestacks, large warehouses, or other markers of the industrial age. The Santa Clara Valley’s promise for leading Northern California into a bright economic future quickly brought the area the nickname “Silicon Valley.” But in the book I am writing, I note that if this convergence of natural surroundings, suburban homes and high-tech industrialization represented a facet of the California dream, it also betrayed it.</p>
<h2>A bright illusion of the future</h2>
<figure class="align-right zoomable">
<a href="https://images.theconversation.com/files/191217/original/file-20171020-13995-1qaicjp.png?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/191217/original/file-20171020-13995-1qaicjp.png?ixlib=rb-1.1.0&q=45&auto=format&w=237&fit=clip" srcset="https://images.theconversation.com/files/191217/original/file-20171020-13995-1qaicjp.png?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=1160&fit=crop&dpr=1 600w, https://images.theconversation.com/files/191217/original/file-20171020-13995-1qaicjp.png?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=1160&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/191217/original/file-20171020-13995-1qaicjp.png?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=1160&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/191217/original/file-20171020-13995-1qaicjp.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=1458&fit=crop&dpr=1 754w, https://images.theconversation.com/files/191217/original/file-20171020-13995-1qaicjp.png?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=1458&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/191217/original/file-20171020-13995-1qaicjp.png?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=1458&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">A promising advertisement for homes in San Jose.</span>
<span class="attribution"><span class="source">San Jose Mercury, January 18, 1956</span></span>
</figcaption>
</figure>
<p>In addition to jobs in electronics and aerospace, the emerging suburbs of Silicon Valley promised newcomers a countryside experience. David Beers, whose father worked at the Sunnyvale Lockheed campus, <a href="https://books.google.com/books?id=bVMuLOrHoU8C&pg=PT50&dq=Beers+%22all-year+garden%22&hl=en&sa=X&ved=0ahUKEwjz7KDatP_WAhWB64MKHRy-BGgQ6AEIKDAA#v=onepage&q=Beers%20%22all-year%20garden%22&f=false">remembered</a> the chamber of commerce brochures claiming an “all-year garden” and “the most beautiful valleys in the world.” Such advertisements were common, assuring home buyers “good living,” the “calm of the country” and “a beautiful walnut and cherry orchard” that “the builder is leaving … for your enjoyment.” The white-collar workers of high tech could make their homes in what appeared to be the countryside.</p>
<p>Workplaces, too, were different, with manufacturing happening in places that didn’t look like the old industries of the East. The Stanford Industrial Park, founded in the early 1950s, had <a href="https://www.cityofpaloalto.org/civicax/filebank/documents/58349">strict building guidelines</a> that made it look more like a suburban area than a manufacturing center. Crucially, 60 percent of each lot had to be preserved as open green space, and no smokestacks were allowed. “Everyone thought of smokestacks,” <a href="https://purl.stanford.edu/dv559gn8984">recalled Alf Brandin</a>, Stanford’s business manager in the 1940s and 1950s. “These new people who came out from the East and settled here thought, ‘Don’t change it. We just left all the smoke and all that junk. Don’t change this.’”</p>
<p>The overall feeling was of much more than just a good job and a nice place to live: a new world was opening, based on computing. Promising young engineers could come west, buy a home and work in the future of the nation’s industry. “There’s a sense of being pioneers here,” Mark Leslie, founder of Synapse Computers, <a href="https://www.inc.com/magazine/19820901/3259.html">told a reporter</a> in 1982. “I view myself as the kind of guy who would have been living in Detroit in 1910. The future depends on high technology, and we are spearheading it.”</p>
<p>Recent college graduates and white-collar workers flocked to the valley to work at companies like Fairchild, Intel, Hewlett-Packard, International Business Machines and Lockheed. The county’s population <a href="http://www.bayareacensus.ca.gov/bayarea70.htm">more than quadrupled</a> in 30 years, from 290,547 in 1950 to 1,265,200 in 1980. But the clean, gleaming future they imagined was already being tarnished.</p>
<p><iframe id="GAWiv" class="tc-infographic-datawrapper" src="https://datawrapper.dwcdn.net/GAWiv/2/" height="400px" width="100%" style="border: none" frameborder="0"></iframe></p>
<h2>Fairchild contamination</h2>
<p>Semiconductor manufacturing involves very carefully connecting microscopic electrical components to each other on large plates of silicon. Pieces of dust can block sensitive circuits, and the smallest scratches can render everything useless. So to clean the silicon wafers and the parts joined to them, manufacturers used <a href="https://books.google.com/books?id=-L0ODAAAQBAJ&pg=PA185&dq=semiconductor+chemical+solvents+cleaning+TCE&hl=en&sa=X&ved=0ahUKEwjxra6JuP_WAhXE1IMKHUqAD6cQ6AEINzAD#v=onepage&q=semiconductor%20chemical%20solvents%20cleaning%20TCE&f=false">harsh chemical solvents</a> like <a href="https://www.epa.gov/assessing-and-managing-chemicals-under-tsca/risk-management-trichloroethylene-tce">1,1,1 trichloroethane</a>, <a href="https://www.atsdr.cdc.gov/mmg/mmg.asp?id=291&tid=53">xylene</a> and <a href="https://toxtown.nlm.nih.gov/text_version/chemicals.php?id=77">methanol</a>. These chemicals were stored on-site in containers designed to safely hold them.</p>
<p>But in December 1981, construction workers discovered a leaking chemical solvents tank at Fairchild Semiconductor’s southern San José facility. A cancer-causing chemical, TCE, had found its way into <a href="https://www.inc.com/magazine/19820901/3259.html">nearby drinking-water wells</a>. The water company promptly shut off pumping water from those wells. A month later, the San Jose Mercury broke the story of the chemical leak. TCE accumulated in wells at nearly 20 times the permissible limit established by the Environmental Protection Agency. Over the course of two years, <a href="https://nepis.epa.gov/Exe/ZyNET.exe/9100976C.txt?ZyActionD=ZyDocument&Client=EPA&Index=1986%20Thru%201990&Docs=&Query=&Time=&EndTime=&SearchMethod=1&TocRestrict=n&Toc=&TocEntry=&QField=&QFieldYear=&QFieldMonth=&QFieldDay=&UseQField=&IntQFieldOp=0&ExtQFieldOp=0&XmlQuery=&File=D%3A%5CZYFILES%5CINDEX%20DATA%5C86THRU90%5CTXT%5C00000020%5C9100976C.txt&User=ANONYMOUS&Password=anonymous&SortMethod=h%7C-&MaximumDocuments=1&FuzzyDegree=0&ImageQuality=r75g8/r75g8/x150y150g16/i425&Display=hpfr&DefSeekPage=x&SearchBack=ZyActionL&Back=ZyActionS&BackDesc=Results%20page&MaximumPages=1&ZyEntry=4">more than 60,000 gallons</a> of toxic chemicals had leaked from the tank, spreading underground more than half a mile into the surrounding neighborhood of Los Paseos.</p>
<h2>Neighbors speak up</h2>
<p>For the residents of the Los Paseos neighborhood, just across the street from Fairchild, the news of the chemical leak suddenly explained the stories of birth defects among their neighbors. <a href="https://www.inc.com/magazine/19820901/3259.html">Lorraine Ross</a>, whose daughter had her first open-heart surgery at nine months old, couldn’t help but wonder if the four birth defects, two miscarriages and one stillbirth of Los Paseos in the past two years were <a href="http://www.nytimes.com/1982/05/20/us/leaking-chemicals-in-california-s-silicon-valley-alarm-neighbors.html">connected to water contamination</a>. She organized others in the neighborhood to ask questions, eventually partnering with a young lawyer, Ted Smith, who founded a new advocacy organization called the <a href="http://svtc.org/">Silicon Valley Toxics Coalition</a>. The Silicon Valley Toxics Coalition was designed to advocate for neighborhoods, helping draft new county and city ordinances related to the storage, transportation and disposal of chemicals and gases in Santa Clara County.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/190905/original/file-20171018-32348-9u7c5d.png?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/190905/original/file-20171018-32348-9u7c5d.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/190905/original/file-20171018-32348-9u7c5d.png?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=359&fit=crop&dpr=1 600w, https://images.theconversation.com/files/190905/original/file-20171018-32348-9u7c5d.png?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=359&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/190905/original/file-20171018-32348-9u7c5d.png?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=359&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/190905/original/file-20171018-32348-9u7c5d.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=451&fit=crop&dpr=1 754w, https://images.theconversation.com/files/190905/original/file-20171018-32348-9u7c5d.png?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=451&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/190905/original/file-20171018-32348-9u7c5d.png?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=451&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Silicon Valley Toxics Coalition flyer.</span>
<span class="attribution"><a class="source" href="http://www.oac.cdlib.org/findaid/ark:/13030/kt2b69r7hf/">Folder 3, Box 11, Silicon Valley Toxics Coalition Papers, San Jose State University</a></span>
</figcaption>
</figure>
<p>News of the Fairchild leak captured the attention of the San Francisco Bay Area. The presence of these chemicals and synthetics was a revelation. “There was no doubt in my mind that this was a clean industry,” <a href="http://www.nytimes.com/1982/05/20/us/leaking-chemicals-in-california-s-silicon-valley-alarm-neighbors.html">remarked</a> San José Mayor Janet Gray Hayes. Lorraine Ross echoed this sentiment, telling a reporter that “we thought we were living with a clean industry.” But it wasn’t true.</p>
<h2>Widespread pollution</h2>
<p>Fairchild wasn’t alone in leaking pollution into the vibrant environment and thriving communities around its industrial sites. By 1992, one study found that <a href="https://nyupress.org/books/9780814767092/">57 private and 47 public drinking wells</a> were contaminated. Santa Clara County authorities determined that 65 of the 79 companies they investigated had contaminated the soil beneath their facilities. Several companies were forced to pay millions of dollars for the cleanup of polluted sites, as well as to install new monitoring equipment to prevent leaks from occurring again. Fairchild Semiconductor and other companies in the Los Paseos area found to have contaminated the water agreed to pay a multi-million-dollar settlement to 530 residents in southern San José.</p>
<p>The U.S. Environmental Protection Agency eventually <a href="http://dissertation.jasonheppler.org/visualizations/companies/">determined 29 polluted sites were eligible for Superfund</a> cleanup money over the course of the 1980s – 24 of which resulted from high-tech industries. Under <a href="https://www.epa.gov/superfund">Superfund</a>, polluted sites that particularly threaten wildlife or human health become eligible for federal funding to help clean up hazardous and contaminated sites. By the end of the 1980s, Santa Clara County had <a href="https://qz.com/1017181/silicon-valley-pollution-there-are-more-superfund-sites-in-santa-clara-than-any-other-us-county/">more Superfund sites</a> than any other county in the United States. <a href="https://www.epa.gov/superfund/search-superfund-sites-where-you-live">Twenty-three of the sites</a> remain in remediation today.</p>
<p>By accident and by neglect, the promise of clean industrialization proved elusive. Thousands of people migrated to the Santa Clara Valley hoping to take part in the remarkable convergence of affordable housing and new jobs. And while smokestacks were absent from electronics manufacturing, the presence of highly toxic chemicals – trichloroethane and chlorinated solvents – shattered the illusion behind the tech industry’s green image. The industry permanently altered the land and human bodies.</p><img src="https://counter.theconversation.com/content/85810/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Jason A. Heppler does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>Silicon Valley brought together natural surroundings, suburban homes and futuristic high-tech work. But industrial pollution betrayed the California dream.Jason A. Heppler, Digital Engagement Librarian and Assistant Professor of History, University of Nebraska OmahaLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/768822017-05-11T14:12:50Z2017-05-11T14:12:50ZTwenty years on from Deep Blue vs Kasparov: how a chess match started the big data revolution<p>On the <a href="http://en.chessbase.com/post/komodo-8-deep-blue-revisited-part-three">seventh move of the crucial deciding game</a>, black made what some now consider to have been a critical error. When black mixed up the moves for the <a href="http://www.chessgames.com/perl/chessgame?gid=1070917">Caro-Kann defence</a>, white took advantage and created a new attack by sacrificing a knight. In just 11 more moves, white had built a position so strong that black had no option but to concede defeat. The loser reacted with a cry of foul play – one of the most strident accusations of cheating ever made in a tournament, which ignited an international conspiracy theory that is <a href="http://en.chessbase.com/post/deep-blue-s-cheating-move">still questioned 20 years later</a>.</p>
<p>This was no ordinary game of chess. It’s not uncommon for a defeated player to accuse their opponent of cheating – but in this case the loser was the then world chess champion, Garry Kasparov. The victor was even more unusual: IBM supercomputer, Deep Blue.</p>
<p>In defeating Kasparov on May 11 1997, Deep Blue made history as the first computer to beat a world champion in a six-game match under standard time controls. Kasparov had won the first game, lost the second and then drawn the following three. When Deep Blue took the match by winning the final game, Kasparov refused to believe it. </p>
<p>In an echo of the <a href="http://www.slate.com/blogs/atlas_obscura/2015/08/20/the_turk_an_supposed_chess_playing_robot_was_a_hoax_that_started_an_early.html">chess automaton hoaxes</a> of the 18th and 19th centuries, Kasparov argued that the computer must actually have been controlled by a real grand master. He and his supporters believed that Deep Blue’s playing was too human to be that of a machine. Meanwhile, to many of those in the outside world who were convinced by the computer’s performance, it appeared that artificial intelligence had reached a stage where it could outsmart humanity – at least at a game that had long been considered too complex for a machine.</p>
<hr>
<p><strong><em>Listen to an <a href="https://theconversation.com/twenty-years-on-from-deep-blue-vs-kasparov-how-a-chess-match-started-the-big-data-revolution-podcast-88607">audio version</a> of this article on The Conversation’s <a href="https://theconversation.com/uk/topics/in-depth-out-loud-podcast-46082">In Depth Out Loud</a> podcast.</em></strong></p>
<iframe src="https://player.acast.com/5e29c8205aa745a456af58c8/episodes/5e29c8365aa745a456af58d6?theme=default&cover=1&latest=1" frameborder="0" width="100%" height="110px" allow="autoplay"></iframe>
<hr>
<p>Yet the reality was that Deep Blue’s victory was precisely because of its rigid, unhumanlike commitment to cold, hard logic in the face of Kasparov’s emotional behaviour. This wasn’t artificial (or real) intelligence that demonstrated our own creative style of thinking and learning, but the application of simple rules on a grand scale.</p>
<p>What the match did do, however, was signal the start of a societal shift that is gaining increasing speed and influence today. The kind of vast data processing that Deep Blue relied on is now found in nearly every corner of our lives, from the <a href="http://www.computerweekly.com/feature/How-the-financial-services-sector-uses-big-data-analytics-to-predict-client-behaviour">financial systems</a> that dominate the economy to <a href="http://www.bbc.co.uk/news/business-26613909">online dating apps</a> that try to find us the perfect partner. What started as a student project helped usher in the age of big data.</p>
<h2>A human error</h2>
<p>The basis of Kasparov’s claims went all the way back to a move the computer made in the second game of the match, the first in the competition that Deep Blue won. Kasparov had played to encourage his opponent to take a “poisoned” pawn, a sacrificial piece positioned to entice the machine into making a fateful move. This was a tactic that Kasparov had used <a href="http://www.nytimes.com/1993/09/15/arts/declining-a-draw-short-loses-to-a-kasparov-counterattack.html">against human opponents</a> in the past.</p>
<p>What surprised Kasparov was <a href="http://www.thechessmind.net/blog/2012/7/14/a-look-back-at-deeper-blue-vs-kasparov-1997game-2.html">Deep Blue’s subsequent move</a>. Kasparov called it “human-like”. John Nunn, the English chess grandmaster, described it as <a href="http://en.chessbase.com/post/komodo-8-deep-blue-revisited-part-one">“stunning” and “exceptional”</a>. The move left Kasparov riled and ultimately thrown off his strategy. He was so perturbed that he eventually walked away, forfeiting the game. Worse still, he never recovered, drawing the next three games and then making the error that led to his demise in the final game.</p>
<figure class="align-right ">
<img alt="" src="https://images.theconversation.com/files/168901/original/file-20170511-32596-fgxusq.png?ixlib=rb-1.1.0&q=45&auto=format&w=237&fit=clip" srcset="https://images.theconversation.com/files/168901/original/file-20170511-32596-fgxusq.png?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=598&fit=crop&dpr=1 600w, https://images.theconversation.com/files/168901/original/file-20170511-32596-fgxusq.png?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=598&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/168901/original/file-20170511-32596-fgxusq.png?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=598&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/168901/original/file-20170511-32596-fgxusq.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=751&fit=crop&dpr=1 754w, https://images.theconversation.com/files/168901/original/file-20170511-32596-fgxusq.png?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=751&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/168901/original/file-20170511-32596-fgxusq.png?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=751&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">Open file.</span>
<span class="attribution"><a class="source" href="https://en.wikipedia.org/wiki/Open_file">Wikipedia</a>, <a class="license" href="http://creativecommons.org/licenses/by/4.0/">CC BY</a></span>
</figcaption>
</figure>
<p>The move was based on the strategic advantage that a player can gain from creating an <a href="https://www.chess.com/article/view/open-files3">open file</a>, a column of squares on the board (as viewed from above) that contains no pieces. This can create an attacking route, typically for rooks or queens, free from pawns blocking the way. During <a href="http://archive.computerhistory.org/projects/chess/related_materials/oral-history/hsu.oral_history.2005.102644995/hsu.oral_history_transcript.2005.102644995.pdf">training with the grand master Joel Benjamin</a>, the Deep Blue team had learnt there was sometimes a more strategic option than opening a file and then moving a rook to it. Instead, the tactic involved piling pieces onto the file and then choosing when to open it up.</p>
<p>When the programmers learned this, they rewrote Deep Blue’s code to incorporate the moves. During the game, the computer used the threat of a potential open file to put pressure on Kasparov and force him into defending on every move. That psychological advantage eventually wore Kasparov down. </p>
<p>From the moment that Kasparov lost, <a href="http://www.bbc.co.uk/programmes/p03rq51h">speculation and conspiracy theories</a> started. The conspiracists claimed that IBM had used human intervention during the match. IBM denied this, stating that, in keeping with the rules, the only human intervention came between games to rectify bugs that had been identified during play. They also rejected the claim that the programming had been adapted to Kasparov’s style of play. Instead they had relied on the computer’s ability to search through huge numbers of possible moves.</p>
<p>IBM’s refusal of Kasparov’s request for a rematch and the subsequent dismantling of Deep Blue did nothing to quell suspicions. IBM also delayed the release of the <a href="https://www.research.ibm.com/deepblue/watch/html/c.shtml">computer’s detailed logs</a>, as Kasparov had also requested, until after the decommissioning. But the subsequent <a href="http://en.chessbase.com/post/komodo-8-deep-blue-revisited-part-one">detailed analysis</a> of the logs has added new dimensions to the story, including the understanding that Deep Blue made several big mistakes.</p>
<p>There has since been speculation that Deep Blue only triumphed because <a href="https://www.cnet.com/news/did-a-bug-in-deep-blue-lead-to-kasparovs-defeat/">of a bug in the code</a> during the first game. One of <a href="http://fivethirtyeight.com/features/rage-against-the-machines/">Deep Blue’s designers</a> has said that when a glitch prevented the computer from selecting one of the moves it had analysed, it instead made a random move that Kasparov misinterpreted as a deeper strategy.</p>
<p>He managed to win the game and the bug was fixed for the second round. But the world champion was supposedly so shaken by what he saw as the machine’s superior intelligence that he was unable to recover his composure and played too cautiously from then on. He even missed the chance to come back from the open file tactic when Deep Blue made a “<a href="http://en.chessbase.com/post/komodo-8-deep-blue-revisited-part-one">terrible blunder</a>”.</p>
<p>Whichever of these accounts of Kasparov’s reactions to the match are true, they point to the fact that his defeat was at least partly down to the frailties of human nature. He over-thought some of the machine’s moves and became unnecessarily anxious about its abilities, making errors that ultimately led to his defeat. Deep Blue didn’t possess anything like the artificial intelligence techniques that today have helped computers win at far more complex games, <a href="https://theconversation.com/googles-go-triumph-is-a-milestone-for-artificial-intelligence-research-53762">such as Go</a>.</p>
<p>But even if Kasparov was more intimidated than he needed to be, there is no denying the stunning achievements of the team that created Deep Blue. Its ability to take on the world’s best human chess player was built on some incredible computing power, which launched the IBM supercomputer programme that has paved the way for some of the leading-edge technology available in the world today. What makes this even more amazing is the fact that the project started not as an exuberant project from one of the largest computer manufacturers but as a student thesis in the 1980s. </p>
<h2>Chess race</h2>
<p>When Feng-Hsiung Hsu arrived in the US from Taiwan in 1982, he can’t have imagined that he would become part of an <a href="https://books.google.co.uk/books?id=zV0W4729UqkC&printsec=frontcover&dq=%22Behind+Deep+Blue:+Building+the+Computer+that+Defeated+the+World+Chess+Champion,%22+rivalry">intense rivalry</a> between two teams that spent almost a decade vying to build the world’s best chess computer. Hsu had come to Carnegie Mellon University (CMU) in Pennsylvania to study the design of the integrated circuits that make up microchips, but he also held a longstanding <a href="http://archive.computerhistory.org/projects/chess/related_materials/oral-history/hsu.oral_history.2005.102644995/hsu.oral_history_transcript.2005.102644995.pdf">interest in computer chess</a>. He attracted the attention of the developers of Hitech, the computer that in 1988 would become the <a href="http://www.nytimes.com/1988/09/26/nyregion/for-first-time-a-chess-computer-outwits-grandmaster-in-tournament.html">first to beat a chess grand master</a>, and was asked to assist with hardware design.</p>
<p>But Hsu soon fell out with the Hitech team after discovering what he saw as an architectural flaw in their proposed design. Together with several other PhD students, he began building his own computer known as ChipTest, drawing on the architecture of Bell Laboratory’s <a href="http://link.springer.com/chapter/10.1007%2F978-1-4757-1968-0_28">chess machine, Belle</a>. ChipTest’s custom technology used what’s known as “very large-scale integration” to combine thousands of transistors onto a single chip, allowing the computer to search through 500,000 chess moves each second.</p>
<p>Although the Hitech team had a head start, Hsu and his colleagues would soon overtake them with ChipTest’s successor. Deep Thought – named after the computer in Douglas Adams’ The Hitchhiker’s Guide to the Galaxy built to find the meaning of life – combined two of Hsu’s custom processors and could analyse 720,000 moves a second. This enabled it to win the 1989 World Computer Chess Championship without losing a single game.</p>
<p>But Deep Thought hit a road block later that year when it came up against (<a href="http://www.nytimes.com/1989/10/23/nyregion/kasparov-beats-chess-computer-for-now.html">and lost to</a>) the reigning world chess champion, one Garry Kasparov. To beat the best of humanity, Hsu and his team would need to go much further. Now, however, they had the backing of computing giant IBM. </p>
<p>Chess computers work by attaching a numerical value to the position of each piece on the board using a formula known as an “<a href="https://chessprogramming.wikispaces.com/Evaluation">evaluation function</a>”. These values can then be processed and searched to determine the best move to make. Early chess computers, such as Belle and Hitech, used multiple custom chips to run the evaluation functions and then combine the results together.</p>
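As a rough illustration of the idea, here is a minimal material-counting evaluation function in Python. Real engines, Deep Blue included, weighed thousands of richer positional features (and ran them in custom silicon); the board encoding below is an assumption chosen purely for brevity:

```python
# A toy evaluation function: score a position by material balance only.
# Uppercase letters are White's pieces, lowercase are Black's; positive
# scores favour White. (Illustrative sketch, not Deep Blue's formula.)

PIECE_VALUES = {"P": 1, "N": 3, "B": 3, "R": 5, "Q": 9, "K": 0}

def evaluate(board):
    """board: dict mapping squares like 'e4' to piece letters like 'P'
    (White pawn) or 'p' (Black pawn). Returns the material balance."""
    score = 0
    for piece in board.values():
        value = PIECE_VALUES[piece.upper()]
        score += value if piece.isupper() else -value
    return score

# White has won a rook for nothing in this hypothetical position:
position = {"e1": "K", "e8": "k", "a1": "R", "d4": "P", "d5": "p"}
print(evaluate(position))  # 5
```

A search algorithm then calls a function like this on millions of hypothetical future positions, which is why making the evaluation fast mattered so much to Hsu's chip designs.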
<p>The problem was that the communication between the chips was slow and used up a lot of processing power. What Hsu did with ChipTest was to redesign and repackage the processors into a single chip. This removed a number of processing overheads such as off-chip communication and made possible huge increases in computational speed. Whereas Deep Thought could process 720,000 moves a second, Deep Blue used large numbers of processors running the same set of calculations simultaneously to analyse 100,000,000 moves a second.</p>
<figure class="align-right ">
<img alt="" src="https://images.theconversation.com/files/168911/original/file-20170511-32610-1dfvzpa.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=237&fit=clip" srcset="https://images.theconversation.com/files/168911/original/file-20170511-32610-1dfvzpa.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=901&fit=crop&dpr=1 600w, https://images.theconversation.com/files/168911/original/file-20170511-32610-1dfvzpa.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=901&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/168911/original/file-20170511-32610-1dfvzpa.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=901&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/168911/original/file-20170511-32610-1dfvzpa.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=1132&fit=crop&dpr=1 754w, https://images.theconversation.com/files/168911/original/file-20170511-32610-1dfvzpa.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=1132&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/168911/original/file-20170511-32610-1dfvzpa.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=1132&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">An imposing opponent.</span>
<span class="attribution"><span class="source">Jim Gardner/Flickr</span>, <a class="license" href="http://creativecommons.org/licenses/by/4.0/">CC BY</a></span>
</figcaption>
</figure>
<p>Increasing the number of moves the computer could process was important because chess computers have traditionally used what is known as “brute force” techniques. Human players <a href="http://www.csis.pace.edu/%7Ectappert/dps/pdf/ai-chess-deep.pdf">learn from past experience</a> to instantly rule out certain moves. Chess machines, certainly at that time, did not have that capability and instead had to rely on their ability to look ahead at what could happen for every possible move. They used brute force in analysing very large numbers of moves rather than focusing on certain types of move they already knew were most likely to work. Increasing the number of moves a machine could look at in a second gave it the time to look much further into the future at where different moves would take the game.</p>
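<p>The brute-force look-ahead described here is, at its core, the classic minimax search. Below is a game-agnostic Python sketch (a toy illustration, not chess): the game tree is written out as nested lists, with the numbers at the leaves standing in for evaluation-function scores.</p>

```python
# A minimal brute-force look-ahead (minimax): every branch is explored, with
# no learned pruning. Internal nodes are lists of child subtrees; leaves are
# evaluation scores from the maximising player's point of view.

def minimax(node, maximising):
    if isinstance(node, int):          # a leaf: just return its score
        return node
    scores = [minimax(child, not maximising) for child in node]
    return max(scores) if maximising else min(scores)

# Two plies deep: we pick a move, then the opponent picks the reply that is
# worst for us. The first branch guarantees at least 3; the second only 2.
tree = [[3, 5], [2, 9]]
print(minimax(tree, maximising=True))  # 3
```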
<p>By February 1996, the IBM team were ready to take on Kasparov again, this time with Deep Blue. Although it became the first machine to beat a world champion in a game under regular time controls, Deep Blue <a href="http://content.time.com/time/subscriber/article/0,33009,984304,00.html">lost the overall match</a> 4-2. Its 100,000,000 moves a second still weren’t enough to beat the human ability to strategise.</p>
<p>To up the move count, the team began upgrading the machine by exploring how they could optimise large numbers of processors working in parallel – with great success. The final machine was a 30-processor supercomputer that, more importantly, controlled 480 custom integrated circuits designed specifically to play chess. This custom design was what enabled the team to optimise the parallel computing power across the chips so effectively. The result was a new version of Deep Blue (sometimes referred to as Deeper Blue) capable of searching around <a href="https://www.theguardian.com/theguardian/2011/may/12/deep-blue-beats-kasparov-1997">200,000,000 moves per second</a>. This meant it could explore how each possible strategy would play out <a href="http://www.sciencedirect.com/science/article/pii/S0004370201001291">up to 40 or more moves</a> into the future.</p>
<h2>Parallel revolution</h2>
<p>By the time the rematch took place in New York City in May 1997, public curiosity was huge. Reporters and television cameras swarmed around the board and were rewarded <a href="http://www.telegraph.co.uk/news/matt/9885264/From-the-archive-Chess-computer-beats-Kasparov-in-19-moves.html">with a story</a> when Kasparov stormed off following his defeat and cried foul at a press conference afterwards. But the publicity around the match also helped establish a greater understanding of how far computers had come. What most people still had no idea about was how the technology behind Deep Blue would help spread the influence of computers to almost every aspect of society by transforming the way we use data.</p>
<p>Complex computer models are today used to underpin banks’ financial systems, to design better cars and aeroplanes, and to trial new drugs. Systems that mine large datasets (often known as “<a href="https://theconversation.com/explainer-what-is-big-data-13780">big data</a>”) to look for significant patterns are involved in <a href="https://www.theguardian.com/public-leaders-network/2014/apr/17/big-data-government-public-services-expert-views">planning public services</a> such as transport or healthcare, and enable companies to <a href="https://theconversation.com/the-future-of-online-advertising-is-big-data-and-algorithms-69297">target advertising</a> to specific groups of people. </p>
<p>These are highly complex problems that require rapid processing of large and complex datasets. Deep Blue gave scientists and engineers <a href="http://www-03.ibm.com/ibm/history/ibm100/us/en/icons/deepblue/">significant insight</a> into the massively parallel multi-chip systems that have made this possible. In particular, it demonstrated the capabilities of a general-purpose computer system that controlled a large number of custom chips designed for a specific application.</p>
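<p>That control pattern – a general-purpose coordinator farming work out to many parallel workers and combining their results – can be sketched in a few lines of Python. This illustrates only the coordination, not Deep Blue itself: <code>score_move</code> is a made-up stand-in for searching one move’s subtree.</p>

```python
# Coordination pattern behind a parallel search: candidate moves are farmed
# out to workers that score them independently, and the controller keeps the
# best. Deep Blue did this with custom chess chips; `score_move` here is a
# made-up stand-in for searching one move's subtree.
from concurrent.futures import ThreadPoolExecutor

def score_move(move):
    return -(move - 3) ** 2  # pretend move 3 leads to the best position

candidate_moves = list(range(10))

with ThreadPoolExecutor(max_workers=4) as pool:
    scores = list(pool.map(score_move, candidate_moves))

best = max(candidate_moves, key=lambda m: scores[m])
print(best)  # 3
```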
<p>The science of <a href="http://www-03.ibm.com/ibm/history/ibm100/us/en/icons/deepblue/transform/">molecular dynamics</a>, for example, involves studying the physical movements of molecules and atoms. Custom chip designs have enabled computers to model molecular dynamics to look ahead to see how new drugs might react in the body, just like looking ahead at different chess moves. Molecular dynamic simulations have helped <a href="http://pubs.acs.org/doi/pdf/10.1021/acs.jmedchem.5b01684">speed up the development</a> of successful drugs, such as some of those <a href="https://bmcbiol.biomedcentral.com/articles/10.1186/1741-7007-9-71">used to treat HIV</a>.</p>
<figure class="align-center ">
<img alt="" src="https://images.theconversation.com/files/168963/original/file-20170511-32588-1je0wkh.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/168963/original/file-20170511-32588-1je0wkh.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=338&fit=crop&dpr=1 600w, https://images.theconversation.com/files/168963/original/file-20170511-32588-1je0wkh.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=338&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/168963/original/file-20170511-32588-1je0wkh.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=338&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/168963/original/file-20170511-32588-1je0wkh.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=424&fit=crop&dpr=1 754w, https://images.theconversation.com/files/168963/original/file-20170511-32588-1je0wkh.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=424&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/168963/original/file-20170511-32588-1je0wkh.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=424&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">Molecular modelling.</span>
<span class="attribution"><span class="source">Shutterstock</span></span>
</figcaption>
</figure>
<p>For very broad applications, such as modelling <a href="https://www.research.ibm.com/deepblue/learn/html/e.5.shtml">financial systems</a> and <a href="https://www.research.ibm.com/deepblue/learn/html/e.4.shtml">data mining</a>, designing custom chips for each individual task would be prohibitively expensive. But the Deep Blue project helped develop the techniques to code and manage highly parallelised systems that split a problem over a large number of processors.</p>
<p>Today, many systems for processing large amounts of data rely on graphics processing units (GPUs) instead of custom-designed chips. These were originally designed to produce images on a screen but also handle information using lots of processors in parallel. So now they are often used in <a href="http://www.nvidia.com/object/what-is-gpu-computing.html">high-performance computers</a> running large data sets and to run powerful artificial intelligence tools such as <a href="https://theconversation.com/what-powers-facebook-and-googles-ai-and-how-computers-could-mimic-brains-52232">Facebook’s digital assistant</a>. There are obvious similarities with Deep Blue’s architecture here: custom chips (built for graphics) controlled by general-purpose processors to drive efficiency in complex calculations.</p>
<p>The world of chess playing machines, meanwhile, has evolved since the Deep Blue victory. Despite his experience with Deep Blue, Kasparov agreed in 2003 to take on two of the most prominent chess machines, Deep Fritz and Deep Junior. And both times he managed to avoid a defeat, although he still made errors that forced him <a href="http://www.thechessdrum.net/tournaments/Kasparov-DeepJr/">into a draw</a>. However, both machines convincingly beat their human counterparts in the <a href="https://en.wikipedia.org/wiki/Human%E2%80%93computer_chess_matches#Man_vs_Machine_World_Team_Championship_.282004.E2.80.932005.29">2004 and 2005 Man vs Machine World Team Championships</a>.</p>
<p>Junior and Fritz marked a <a href="https://books.google.co.uk/books?id=KkQBCAAAQBAJ&pg=PA30&dq=chess+machines+junior+fritz&hl=en&sa=X&ved=0ahUKEwiJsfvl3-TTAhWnJcAKHVKtAOgQ6AEILTAB#v=onepage&q=chess%20machines%20junior%20fritz&f=false">change in the approach</a> to developing systems for computer chess. Whereas Deep Blue was a custom-built computer relying on the brute force of its processors to analyse millions of moves, these new chess machines were software programs that used learning techniques to minimise the searches needed. These programs can beat brute-force machines using only a desktop PC.</p>
<p>But despite this advance, we still don’t have chess machines that resemble human intelligence in the way they play the game – they don’t need to. And, if anything, the victories of Junior and Fritz further strengthen the idea that human players lose to computers, at least in part, because of their humanity. The humans made errors, became anxious and feared for their reputations. The machines, on the other hand, relentlessly applied logical calculations to the game in their attempts to win. One day we might have computers that truly replicate human thinking, but the story of the last 20 years has been the rise of systems that are superior precisely because they are machines.</p>
<p class="fine-print"><em><span>Mark Robert Anderson does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>The in depth story of a student project that paved the way for a society-level shift in how we use computers.Mark Robert Anderson, Professor in Computing and Information Systems, Edge Hill UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/681672016-12-21T19:01:02Z2016-12-21T19:01:02ZUse it or lose it: the search for enlightenment in dark data<figure><img src="https://images.theconversation.com/files/150035/original/image-20161214-18910-6y497b.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">How to make sense of it all?</span> <span class="attribution"><span class="source">Shutterstock</span></span></figcaption></figure><p>Big data is <a href="https://theconversation.com/au/topics/big-data-3963">big news</a> these days. But most organisations just end up hoarding vast reams of data, leaving them with a massive repository of unstructured – or “dark” – data that is of little use to anyone.</p>
<p>Given the potential benefits of big data, it’s crucial that we find better ways to gather, store and analyse data in order to make the most of it.</p>
<p>Stories of <a href="http://www.forbes.com/sites/howardbaldwin/2015/06/08/whos-ready-for-some-big-data-success-stories/#63d23c386f38">big data successes</a> have triggered significant investments in big data initiatives. This has prompted many organisations to gather significant volumes of external and internal data into so-called “<a href="http://www.pwc.com/us/en/technology-forecast/2014/cloud-computing/features/data-lakes.html">data lakes</a>”. These are repositories that contain data in any format, whether structured, like databases, or unstructured, like emails or audio and video. </p>
<p>As a result, the growth in the amount of data being generated, collected and stored continues at an exponential rate.</p>
<p>But according to a recent <a href="http://www.informationweek.com/cloud/software-as-a-service/ibm-cognitive-colloquium-spotlights-uncovering-dark-data/d/d-id/1322647">IBM study</a>, more than 80% of all data is inactive, unmanaged, often unstructured, lacking meaningful metadata, and even unknown to the organisation. The proportion of this dark data is expected to reach 93% by 2020. </p>
<p>For example, data generated by vehicle on-board devices is expected to reach 350MB every second. Where does all this data go, and who is using it?</p>
<p>Organisations can also generate significant internal data. For example, a <a href="http://conferences.oreilly.com/strata/strataeu2013/public/schedule/detail/31755">recent study</a> found that a company with 1,500 employees had around 2.5 million spreadsheets, each of which were only used by 12 people on average. </p>
<p>What’s more, there is evidence of a variety of unstructured data such as document versions, project notes and emails that is left behind from organisational processes and subsequently sits dormant in data servers.</p>
<h2>Use it or lose it</h2>
<p>Lessons learnt from years of research in information system use have shown that the assumption that “more is better” when it comes to data is unfounded. </p>
<p>Even in traditional IT projects that follow carefully crafted analysis and design life cycles, the misalignment between perceived and actual value has been a notoriously difficult problem, often leading to poor returns on investment. </p>
<p>In big data projects, the data can often be externally sourced with little or no knowledge of its schemata, quality or expected utility. Thus the risk of making investments that will not deliver is greatly heightened.</p>
<p>The old adage of “use it or lose it” is by no means obsolete, and brings attention back to the purpose of how we use big data. Organisations may retain data for a variety of reasons, including <a href="https://www.ag.gov.au/dataretention">data retention regulations</a>, but perceived future value is typically the main reason. </p>
<p>Although storage is relatively cheap, given the volume of data being assimilated, the maintenance and <a href="http://www.nytimes.com/2012/09/23/technology/data-centers-waste-vast-amounts-of-energy-belying-industry-image.html">energy consumption</a> of data centres is not trivial. Furthermore, there are costs and risks related to the <a href="http://www.cio.com/article/2686755/data-analytics/the-dangers-of-dark-data-and-how-to-minimize-your-exposure.html">security of such unmanaged data</a>. </p>
<p>Thus, defining the purpose is pivotal to ensuring that big data investments are targeted towards meaningful problems, and that data collection and storage are well justified.</p>
<p>Approaches such as <a href="https://en.wikipedia.org/wiki/Design_thinking">design thinking</a>, which encourages people to use creative solution-focused thinking, are proving to be highly successful in genuine problem formulation for big data. </p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/a7sEoEvT8l8?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
<figcaption><span class="caption">What is Design Thinking?</span></figcaption>
</figure>
<p>When appropriately applied, design thinking can equip data scientists to bring together desirability (customer need) and viability (business value) with technological feasibility, and thereby guide them towards developing meaningful solutions.</p>
<h2>Garbage in, garbage out</h2>
<p>The larger the gap between data creation and use, the more likely it is that data quality has deteriorated. This means an organisation will have to expend a lot of effort cleaning old data if it wants to use it today.</p>
<p>According to the <a href="https://www.linkedin.com/pulse/everything-we-wish-wed-known-building-data-products-george-a-rubsam">US Chief Data Scientist</a> <a href="https://www.linkedin.com/in/dpatil">DJ Patil</a>:</p>
<blockquote>
<p>Data is super messy, and data cleanup will always be literally 80% of the work. In other words, data is the problem.</p>
</blockquote>
<p>Earlier this year, a group of global thought leaders from the database research community outlined the <a href="http://wp.sigmod.org/?p=1519">grand challenges in getting value from big data</a>. The key message was the need to develop the capacity to “understand how the quality of that data affects the quality of the insight we derive from it”.</p>
<p>The golden principle of “garbage in, garbage out” is still true in the context of big data. Without scientifically credible knowledge that provides the ability to efficiently evaluate the underlying quality characteristics of the data, there is a significant risk of organisations and governments accumulating large volumes of <a href="https://mikecurr55.wordpress.com/2010/09/14/the-value-density-of-information/">low value density data</a>, or investing in low return-on-investment data products. </p>
<p>Moreover, the lack of knowledge on the underlying data (distributions, semantics and other nuances) could result in <a href="http://gking.harvard.edu/files/gking/files/0314policyforumff.pdf">analytical traps</a>, where the data analysis can lead to erroneous, and possibly dangerous, conclusions.</p>
<p><a href="https://www.cs.helsinki.fi/u/jilu/paper/exploration02.pdf">Data exploration</a> is emerging as a promising approach to empower users with exploratory capabilities to investigate the quality of the data and gain awareness of its shortcomings in terms of the intended use, and do so before they invest in expensive data cleaning and curation tasks.</p>
<p>The search for enlightenment from the data deluge will consume the energy and investments of the data-driven society for the foreseeable future. While there is immense power in the scale of data, data left unattended will propel organisations into the abyss of dark data.</p>
<p>All this underscores the growing need for well-trained data scientists who have the ability to articulate a well-justified business, scientific or social purpose and align it with the technological efforts for data collection, storage, curation and analysis.</p>
<p class="fine-print"><em><span>Shazia Sadiq receives funding from ARC and industry on topics related to data management and information systems. She is the program director of The University of Queensland's Master of Data Science commencing in 2017</span></em></p>More data isn’t necessarily better unless it’s properly collected, curated and analysed.Shazia Sadiq, Professor, Data and Knowledge Engineering, The University of QueenslandLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/631732016-08-23T01:16:15Z2016-08-23T01:16:15ZHarried doctors can make diagnostic errors: They need time to think<figure><img src="https://images.theconversation.com/files/134843/original/image-20160819-30383-dhnhr6.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Thinking too fast?</span> <span class="attribution"><a class="source" href="http://www.shutterstock.com/pic-168769280.html">ER image via www.shutterstock.com.</a></span></figcaption></figure><p>When a person goes to the doctor, there’s usually one thing they want: a diagnosis. Once a diagnosis is made, a path toward wellness can begin.</p>
<p>In some cases, diagnoses are fairly obvious. But in others, they aren’t.</p>
<p>Consider the following: A 50-year-old man with a history of high blood pressure goes to the emergency room with sudden chest pain and difficulty breathing. </p>
<p>Concerned that these are symptoms of a heart attack, the ER physician orders an electrocardiogram and blood tests. The tests are negative, but sometimes heart attacks don’t show up on these tests. Since every minute counts, he prescribes a blood thinner to save the patient’s life. </p>
<p>Unfortunately, the diagnosis and decision were wrong. The patient was not having a heart attack. He had a tear in his aorta (known as an aortic dissection) – a less obvious but equally dangerous condition.</p>
<p>It’s not a far-fetched scenario. </p>
<p>“Three’s Company” star <a href="http://johnritterfoundation.org/ritter-rules/">John Ritter</a> died from an aortic tear that doctors initially <a href="http://articles.latimes.com/2008/mar/15/local/me-ritter15">diagnosed</a> and <a href="http://www.today.com/id/23723123/ns/today-today_news/t/john-ritters-widow-jury-has-spoken/#.V7tucmXhrUk">treated as a heart attack</a>. </p>
<p>With over three decades of combined experience caring for patients in hospital settings, we have faced our share of <a href="http://dx.doi.org/10.1056/NEJM200001063420107">diagnostic dilemmas</a>. Determined to improve our practice and those of other physicians, we are studying ways to prevent diagnostic errors as part of a project funded by the federal government’s <a href="http://www.ahrq.gov">Agency for Healthcare Research and Quality</a>. Below, we describe some of the challenges – and possible solutions – to improving diagnosis.</p>
<h2>The flawed thought processes that result in errors</h2>
<p>When physicians learn to make diagnoses in medical school, they are trained to initiate a mental calculus, analyzing symptoms and considering the possible conditions and illnesses that may cause them. For instance, chest pain could indicate a problem with the cardiovascular or respiratory system. Keeping in mind these systems, students then ask what conditions may cause these problems, focusing first on the most life-threatening ones such as heart attack, pulmonary embolism, collapsed lung or aortic tears.</p>
<p>Once tests rule these out, less dangerous diagnoses such as heartburn or muscle injury are considered. This process of sifting through possibilities to explain a patient’s symptoms is called generating a “differential diagnosis.” </p>
<p>Although the ER physician in our example could have stopped to generate a differential diagnosis, this is easier said than done. With time and experience, mental shortcuts overshadow this time-consuming process and mistakes may result. </p>
<p>One such shortcut is “<a href="http://dx.doi.org/10.1056/NEJMcps052993">anchoring bias</a>.” This is the tendency to rely upon the first piece of information obtained – or the initial diagnosis considered – regardless of subsequent information that might suggest other possibilities. </p>
<p>Anchoring is compounded by availability bias, another mental shortcut in which we overestimate the likelihood of events based on memory or experiences. </p>
<p>Thus, an ER doctor who frequently sees patients with heart attacks <a href="http://dx.doi.org/10.1136/bmj.d4487">might anchor on this diagnosis</a> when evaluating a middle-aged man with cardiac risk factors presenting with chest pain. We doctors also tend to stop exploring something once we’ve reached a tentative conclusion, a bias called premature closure. So, even if a diagnosis doesn’t fit perfectly, we tend not to change our minds to explore other possibilities.</p>
<h2>How can we minimize diagnostic errors?</h2>
<p><a href="http://www.nobelprize.org/nobel_prizes/economic-sciences/laureates/2002/kahneman-bio.html">Daniel Kahneman</a>, who won a Nobel Prize in 2002 for his work on human judgment and decision-making, argues that people have two systems that drive everyday thinking: fast and slow. </p>
<p>The fast thinking, known as System 1, is automatic, effortless and fueled by emotion. The slow system of thinking, or System 2, is deliberative, effortful and logical. Medical students are trained to use both systems: by toggling back and forth, physicians can thus harness their training, experience and intuition to craft a <a href="http://www.ncbi.nlm.nih.gov/pubmed/12915363">logic-driven diagnosis</a>. </p>
<p>So why don’t physicians just do this routinely?</p>
<p>In some cases, System 1 thinking is all that is necessary. For example, a physician who sees a young child with fever and the typical rash of chicken pox can easily make this diagnosis without slowing down or thinking about alternatives.</p>
<p>However, some physicians don’t use System 2 thinking when they need to because their workload makes it hard. Really hard.</p>
<p>In an <a href="http://cbssm.med.umich.edu/what-we-do/research-projects/enhancing-patient-safety-through-cognition-communication-m-safety-lab">ongoing study</a>, we have recorded first-hand how time pressures make it hard for doctors to stop and think. In addition to the incessant pace of work and physical distractions, there is substantial variation in how information is collected, presented and synthesized to inform diagnosis. </p>
<p>It is thus abundantly clear that physicians often do not have the time to do this type of toggling back and forth <a href="http://dx.doi.org/10.1136/bmjqs-2011-000149">during patient care</a>. Rather, they are often multitasking when making diagnoses, work that almost always leads to System 1 thinking. </p>
<figure class="align-center ">
<img alt="" src="https://images.theconversation.com/files/134841/original/image-20160819-30363-icaf73.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/134841/original/image-20160819-30363-icaf73.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/134841/original/image-20160819-30363-icaf73.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/134841/original/image-20160819-30363-icaf73.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/134841/original/image-20160819-30363-icaf73.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/134841/original/image-20160819-30363-icaf73.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/134841/original/image-20160819-30363-icaf73.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">Technology is a help, but not a fix.</span>
<span class="attribution"><a class="source" href="http://www.shutterstock.com/pic-171498107/stock-photo-doctor-working-on-his-computer-and-with-mobile-phone-in-the-office-he-is-wearing-blue-uniform-surgeon-uniform.html?src=Tpx3gGXCWNFgE56-h7nrDg-1-36">Doctor image via www.shutterstock.com.</a></span>
</figcaption>
</figure>
<h2>Can technology help?</h2>
<p>Technology seems like a promising solution to diagnostic errors. After all, computers do not suffer from cognitive traps like humans do.</p>
<p>Software tools that provide a list of potential diagnoses for symptoms and group collaboration platforms that allow physicians to engage with others to discuss cases <a href="http://dx.doi.org/10.1136/bmjqs-2013-001884">appear promising</a> in preventing diagnostic errors.</p>
<p>IBM’s Watson is also helping doctors make <a href="http://www.businessinsider.com/ibms-watson-may-soon-be-the-best-doctor-in-the-world-2014-4">the right diagnosis</a>. There is even an XPrize to create technology that can diagnose 13 health conditions while <a href="http://tricorder.xprize.org">fitting in the palm of a hand</a>. It may not be too long before a computer <a href="http://www.nytimes.com/2012/12/04/health/quest-to-eliminate-diagnostic-lapses.html?_r=0">will make better diagnoses than physicians.</a></p>
<p>But technology won’t solve the organizational and workflow problems physicians face today. Based on 200 hours of observing clinical teams and asking them what could be done to improve diagnosis as part of an ongoing research project, two remedies appear necessary: time and space. </p>
<p>Crafted timeouts from “busy work” with dedicated “thinking time” are a key need. Within this period, a diagnostic checklist may be <a href="http://www.improvediagnosis.org/page/Checklist">useful</a>. Although they vary in scope and content, these checklists encourage physicians to engage System 2 thinking and improve data synthesis and decision-making. One such tool is the <a href="http://c.ymcdn.com/sites/www.improvediagnosis.org/resource/resmgr/Take_2_-_BThink_Do_clinician.pdf">Take 2, Think Do</a> framework, which asks physicians to take two minutes to reflect on the diagnosis, decide if they need to reexamine facts or assumptions and then act accordingly.</p>
<p>Second, physicians need a quiet place to think, somewhere free from distraction. Working with colleagues in architecture, we are examining how best to create such environments. This is no small challenge. Hospitals have limited physical footprints, and medical culture makes it hard for doctors to duck into quiet spaces to think. But redesigning workflow and space could have an important impact on diagnosis. How do we know? The physicians we followed said so. In the words of one:</p>
<blockquote>
<p>“if we had a place where the pager could be silent for a few minutes, where I could review my [patient] list and think through labs, recommendations and plans, I know I could be a better diagnostician.” </p>
</blockquote>
<p>This approach may prove particularly valuable in high-stress, more chaotic environments such as the ER or intensive care unit.</p>
<p>A future with <a href="http://www.nationalacademies.org/hmd/%7E/media/Files/Report%20Files/2015/Improving-Diagnosis/DiagnosticError_ReportBrief.pdf">fewer diagnostic errors</a> – and the negative consequences of them – appears possible. Stopping to think about our thoughts and employing the power of modern technology is a combination that may lead us to the correct diagnosis more frequently. These changes will help physicians deliver better care and save lives – a future we can all look forward to.</p>
<p class="fine-print"><em><span>Vineet Chopra receives funding from the Agency for Healthcare Research and Quality to study diagnostic errors. </span></em></p><p class="fine-print"><em><span>Sanjay Saint receives funding from the Agency for Healthcare Research and Quality to study diagnostic errors. </span></em></p>Cognitive traps can steer doctors away from the right diagnosis.Vineet Chopra, Assistant Professor of Internal Medicine and Research Scientist, University of MichiganSanjay Saint, George Dock Professor of Medicine, University of MichiganLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/622962016-07-13T13:03:25Z2016-07-13T13:03:25ZWhy football, not chess, is the true final frontier for robotic artificial intelligence<p>The perception of what artificial intelligence was capable of began to change when chess grand master and world champion <a href="http://www-03.ibm.com/ibm/history/ibm100/us/en/icons/deepblue/">Garry Kasparov lost to Deep Blue</a>, IBM’s chess-playing program, in 1997. Deep Blue, it was felt, had breached the domain of a cerebral activity considered the exclusive realm of human intellect. This was not because of something technologically new: in the end, chess was felled by the brute force of faster computers and clever heuristics. But if chess is considered the game of kings, then the east Asian board game Go is the game of emperors. </p>
<p>Significantly more complex, requiring even more strategic thinking, and featuring an intricate interweaving of tactical and strategic components, it posed an even greater challenge to artificial intelligence. Go relies much more on pattern recognition and subtle evaluation of the general positions of playing pieces. With a number of possible moves per turn an order of magnitude greater than in chess, any algorithm trying to evaluate all possible future moves was expected to fail. </p>
<p>Until the early 2000s, Go-playing programs progressed slowly and could be beaten by amateurs. But this changed in 2006, with the introduction of two new techniques. The first was the <a href="https://jeffbradberry.com/posts/2015/09/intro-to-monte-carlo-tree-search/">Monte Carlo tree search</a>, an algorithm that, rather than attempting to examine all possible future moves, tests a sparse selection of them, combining their values in a sophisticated way to get a better estimate of a move’s quality. The second was the (re)discovery of deep networks, a contemporary incarnation of the neural networks that had been experimented with since the 1960s, but which were now cheaper to run, more powerful, and equipped with huge amounts of data with which to train the learning algorithms.</p>
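<p>The core loop of Monte Carlo tree search can be sketched in a few dozen lines. The toy below is my own illustration, not code from any Go engine: it applies the standard UCT selection rule to a simple game of Nim, in which players alternately remove one to three stones and whoever takes the last stone wins.</p>

```python
import math, random

# Toy Monte Carlo tree search (UCT) on Nim: players alternately remove
# 1-3 stones; whoever takes the last stone wins. Illustrative sketch only.

class Node:
    def __init__(self, stones, parent=None):
        self.stones = stones      # stones left; it is now someone's turn
        self.parent = parent
        self.children = {}        # move (stones taken) -> child Node
        self.visits = 0
        self.wins = 0.0           # wins for the player who moved INTO this node

def moves(stones):
    return range(1, min(3, stones) + 1)

def uct_child(node, c=1.4):
    # balance exploitation (win rate) against exploration (rarely tried moves)
    return max(node.children.values(),
               key=lambda ch: ch.wins / ch.visits
                              + c * math.sqrt(math.log(node.visits) / ch.visits))

def rollout(stones):
    """Play random moves to the end; True if the player to move wins."""
    to_move_wins = True
    while True:
        stones -= random.choice(list(moves(stones)))
        if stones == 0:
            return to_move_wins
        to_move_wins = not to_move_wins

def mcts(stones, iterations=2000):
    root = Node(stones)
    for _ in range(iterations):
        node = root
        # 1. selection: walk down through fully expanded nodes
        while node.stones > 0 and len(node.children) == len(list(moves(node.stones))):
            node = uct_child(node)
        # 2. expansion: add one untried move (a sparse sample, not all of them)
        if node.stones > 0:
            m = random.choice([m for m in moves(node.stones) if m not in node.children])
            node.children[m] = Node(node.stones - m, parent=node)
            node = node.children[m]
        # 3. simulation: at a terminal node the mover just took the last stone and won
        won = True if node.stones == 0 else not rollout(node.stones)
        # 4. backpropagation: flip the winning perspective at each level up
        while node is not None:
            node.visits += 1
            node.wins += won
            node, won = node.parent, not won
    # recommend the most-visited move
    return max(root.children, key=lambda m: root.children[m].visits)
```

<p>From six stones the optimal move is to take two, leaving the opponent on a losing multiple of four, and with a few thousand iterations the search settles on it. Each iteration grows the tree by a single node and estimates values from random playouts – the same sparse-sampling idea that, at vastly greater scale, powered the new Go programs.</p>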
<p>The combination of these techniques saw a drastic improvement in Go-playing programs, and ultimately <a href="https://theconversation.com/googles-go-triumph-is-a-milestone-for-artificial-intelligence-research-53762">Google DeepMind’s AlphaGo program beat Go world champion Lee Sedol</a> in March 2016. Now that Go has fallen, where do we go from here?</p>
<h2>The future of AI is in physical form</h2>
<p>Following Kasparov’s defeat in 1997, scientists decided that the next challenge for AI should not be the conquest of another cerebral game. Rather, AI needed to be physically embodied in the real world: football.</p>
<p>Football is easy for humans to pick up, but to have a humanoid robot running around a field on two legs, seeing and taking control of the ball, and communicating under pressure with teammates – all mostly without falling over – was considered completely out of the question in 1997. Only a handful of laboratories were able to design a walking humanoid robot. Under the leadership of <a href="http://sbiaustralia.org/systems-biology/kitanoprofile/">Hiroaki Kitano</a> and <a href="https://www.cmu.edu/me/people/veloso.html">Manuela Veloso</a>, an ambitious goal was set that year: to have, by 2050, a team of humanoid robots able to play a game of football against the world champion team according to FIFA rules – and win. And so the RoboCup competition was born.</p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/tAd1IeovyY8?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
</figure>
<p>The <a href="http://www.robocup.org/">RoboCup tournament</a> held its <a href="http://www.robocup2016.org/en/">20th competition in Leipzig this year</a>. Its goal has always been to improve and challenge the capacity of artificial intelligence and robotics, not in the abstract but in the much more challenging form of physical robots that act and interact with others in real time. In the years since, many other organisations have <a href="https://theconversation.com/cybathlon-will-showcase-what-bionics-could-do-for-millions-with-disabilities-54760">recognised how such competitions boost technological progress</a>.</p>
<p>The first RoboCup featured only wheeled robots and simulated 2D football leagues, but soon leagues that permitted Sony’s <a href="http://www.sony-aibo.com/">four-legged AIBO robot dogs</a> were introduced and, since 2003, <a href="http://wiki.robocup.org/wiki/Humanoid_League">humanoid leagues</a>. In the beginning, the humanoids’ game was quite limited, with very shaky robots attempting quivering steps, and where kicking the ball almost invariably caused the robot to fall. In recent years, their ability has significantly improved: many labs now boast <a href="http://robocup.herts.ac.uk/">five or six-a-side humanoid robot teams</a>.</p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/pzYHAp7b7sY?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
</figure>
<h2>No ordinary ballgame</h2>
<p>In order to push competitors on to reach the goal of a real football match by 2050, the conditions are made harder every year. Last year, the green carpet was replaced by artificial turf, and the goalposts and ball were coloured white. This makes it harder for robots to maintain stability, and poses the challenge of recognising the goals and the ball. So while the robots may seem less capable this year than the year before, it’s because the goalposts are moving.</p>
<p>The tasks involved in playing football, although much more intuitive to humans than chess or Go, are a major challenge for robots. Technical problems of hitherto unimaginable complexity have to be solved: timing a kick while running, identifying the ball against a glaring sun, running on wet grass, providing the robot with sufficient energy for 45 minutes’ play, and even finding materials for the robot’s body that won’t disintegrate during a forceful game. Other problems to be solved will define important aspects of our life with robots in the future: when a robot collides with a human player, who may sustain how much damage? If humans commit fouls, may a robot foul back? </p>
<p>RoboCup offers up in miniature the problems we face as we head towards intelligent robots interacting with humans. It is not in the cerebral boardgames of chess or Go, but here on the pitch in the physical game of football that the frontline of life with intelligent robots is being carved out.</p><img src="https://counter.theconversation.com/content/62296/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Daniel Polani has been heading teams participating at the RoboCup competition since 1998. He was a member of the executive, later the trustee board, and is now president-elect of the RoboCup Federation.</span></em></p>Computers must master football if they are to demonstrate that they can be our equal. Daniel Polani, Professor of Artificial Intelligence, University of Hertfordshire. Licensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/582052016-06-13T01:59:21Z2016-06-13T01:59:21ZComputers may be evolving but are they intelligent?<figure><img src="https://images.theconversation.com/files/122979/original/image-20160518-13496-1bp8xq8.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Computers may be smarter than humans at some things, but are they intelligent?</span> <span class="attribution"><span class="source">Shutterstock/Olga Nikonova </span></span></figcaption></figure><p><em>The final in our <a href="https://theconversation.com/au/topics/computing-turns-60">Computing turns 60</a> series, to mark the 60th anniversary of the first computer in an Australian university, looks at how intelligent the technology has become.</em></p>
<hr>
<p>The term “artificial intelligence” (AI) was <a href="https://web.archive.org/web/20080830093710/http://news.cnet.com/Getting-machines-to-think-like-us/2008-11394_3-6090207.html">first used</a> back in 1956 to describe the <a href="http://www.dartmouth.edu/%7Evox/0607/0724/ai50.html">title of a workshop</a> of scientists at Dartmouth, an Ivy League college in the United States.</p>
<p>At that pioneering workshop, attendees discussed how computers would soon perform all human activities requiring intelligence, including playing chess and other games, composing great music and translating text from one language to another language. These pioneers were wildly optimistic, though their aspirations were unsurprising. </p>
<p>Trying to build intelligent machines has <a href="http://www.computerhistory.org/timeline/ai-robotics/">long been a human preoccupation</a>, both with calculating machines and in literature. Early computers from the 1940s were commonly described as electronic brains and thinking machines.</p>
<h2>The Turing test</h2>
<p>The father of computer science, Britain’s Alan Turing, was in no doubt that computers would one day think. His landmark <a href="http://www.loebner.net/Prizef/TuringArticle.html">1950 article</a> introduced the Turing test, a challenge to see if an intelligent machine could convince a human that it wasn’t in fact a machine.</p>
<p>Research into AI from the 1950s through to the 1970s focused on writing programs for computers to perform tasks that required human intelligence. An early example was the American computing pioneer Arthur Samuel’s <a href="http://www-03.ibm.com/ibm/history/ibm100/us/en/icons/ibm700series/impacts/">program for playing checkers</a>. The program improved by analysing winning positions, and rapidly learned to play checkers much better than Samuel himself.</p>
<p>But what worked for checkers failed to produce good programs for more complicated games such as chess and go.</p>
<p>Another early AI research project tackled introductory calculus problems, specifically symbolic integration. Several years later, symbolic integration became a solved problem and programs for it were no longer labelled as AI.</p>
<h2>Speech recognition? Not yet</h2>
<p>In contrast to checkers and integration, programs undertaking language translation and speech recognition made little progress. No method emerged that could effectively use the processing power of computers of the time.</p>
<p>Interest in AI surged in the 1980s through expert systems. Success was reported with programs performing medical diagnosis, analysing geological maps for minerals, and configuring computer orders, for example.</p>
<p>Though useful for narrowly defined problems, the expert systems were neither robust nor general, and required detailed knowledge from experts to develop. The programs did not display general intelligence.</p>
<p>After a surge of AI start up activity, commercial and research interest in AI receded in the 1990s.</p>
<h2>Speech recognition</h2>
<p>In the meantime, as computer processing power grew, computer speech recognition and language processing by computers improved considerably. New algorithms were developed that focused on statistical modelling techniques rather than emulating human processes.</p>
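<p>To give a flavour of that statistical turn: a bigram model estimates which word follows which purely from counts, with no hand-written grammar rules. This toy sketch is my own illustration, with a made-up corpus.</p>

```python
from collections import Counter

# Toy bigram language model: estimate P(next word | current word)
# purely from counted examples, with no hand-written linguistic rules.
corpus = "the cat sat on the mat the cat ate the fish".split()

bigrams = Counter(zip(corpus, corpus[1:]))   # counts of adjacent word pairs
contexts = Counter(corpus[:-1])              # how often each word has a successor

def p_next(word, nxt):
    """Estimated probability that `nxt` follows `word`."""
    return bigrams[(word, nxt)] / contexts[word]

print(p_next("the", "cat"))  # 2 of the 4 occurrences of "the" precede "cat" -> 0.5
```

<p>Real speech and translation systems of the era used far richer statistical models, but the principle is the same: probabilities estimated from data stand in for any attempt to emulate human understanding.</p>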
<p>Progress has continued with voice-controlled personal assistants such as Apple’s <a href="http://www.apple.com/ios/siri/">Siri</a> and <a href="https://support.google.com/websearch/answer/2940021?hl=en">Ok Google</a>. And translation software can give the gist of an article.</p>
<p>But no one believes that the computer truly understands language at present, despite the considerable developments in areas such as <a href="https://theconversation.com/the-future-of-chatbots-is-more-than-just-small-talk-53293">chat-bots</a>. There are definite limits to what Siri and Ok Google can process, and translations lack subtle context. </p>
<p>Another task considered a challenge for AI in the 1970s was face recognition. Programs then were hopeless.</p>
<p>Today, by contrast, Facebook can identify people in photos from <a href="https://www.facebook.com/help/463455293673370/">several tags</a>. And camera software <a href="http://www.popularmechanics.com/technology/gadgets/how-to/a1857/4218937/">recognises faces</a> well. But it is advanced statistical methods, rather than intelligence, that make this possible.</p>
<h2>Clever but not intelligent – yet</h2>
<p>In task after task, detailed analysis has allowed us to develop general algorithms that are efficiently implemented on the computer, rather than the computer learning for itself.</p>
<p>In <a href="http://www-03.ibm.com/ibm/history/ibm100/us/en/icons/deepblue/">chess</a> and, very recently, in <a href="http://www.bbc.com/news/technology-35761246">go</a>, computer programs have beaten champion human players. The feat is impressive and the techniques used are clever, but they have not led to general intelligent capability. </p>
<p>Admittedly, champion chess players are not necessarily champion go players. Perhaps being expert in one type of problem solving is not a good marker of intelligence.</p>
<p>The final example to consider before looking to the future is <a href="http://www-03.ibm.com/ibm/history/ibm100/us/en/icons/watson/">Watson</a>, developed by IBM. Watson famously defeated human champions in the television game show Jeopardy.</p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/P18EdAKuC1U?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
</figure>
<h2>Dr Watson?</h2>
<p>IBM is now applying its <a href="http://www.ibm.com/smarterplanet/us/en/ibmwatson/">Watson</a> technology with claims it will make accurate <a href="http://www.businessinsider.com.au/ibms-watson-may-soon-be-the-best-doctor-in-the-world-2014-4">medical diagnoses</a> by reading all medical research reports.</p>
<p>I am uncomfortable with Watson making medical decisions. I am happy it can correlate evidence, but that is a long way from understanding a medical condition and making a diagnosis.</p>
<p>Similarly, there have been claims a computer will <a href="http://www.usability.gov/get-involved/blog/2010/01/adaptive-web-based-learning-environments.html">improve teaching</a> by matching student errors to known mistakes and misconceptions. But it takes an insightful teacher to understand what is happening with children and what is motivating them, and that is lacking for the moment. </p>
<p>There are many areas in which human judgement should remain in force, such as legal decisions and launching military weapons.</p>
<p>Advances in computing over the past 60 years have hugely increased the tasks computers can perform, that were thought to involve intelligence. But I believe we have a long way to go before we create a computer that can match human intelligence. </p>
<p>On the other hand, I am comfortable with autonomous cars for driving from one place to another. Let us keep working on making computers better and more useful, and not worry about trying to replace us.</p><img src="https://counter.theconversation.com/content/58205/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Leon Sterling receives funding from the Australian Research Council. </span></em></p>Computing has been getting much smarter since the idea of artificial intelligence was first conceived 60 years ago. But are computers intelligent? Leon Sterling, Professor emeritus, Swinburne University of Technology. Licensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/553312016-03-23T11:12:34Z2016-03-23T11:12:34ZDesigners are seizing Wall Street – but can they improve your life?<figure><img src="https://images.theconversation.com/files/115325/original/image-20160316-30247-1idifj.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Much mightier than any sword</span> <span class="attribution"><a class="source" href="http://www.shutterstock.com/cat.mhtml?lang=en&language=en&ref_site=photo&search_source=search_form&version=llv1&anyorall=all&safesearch=1&use_local_boost=1&autocomplete_id=&search_tracking_id=OmwFcCe1ZicL1NYIQSRfGw&searchterm=pencil%20weapon&show_color_wheel=1&orient=&commercial_ok=&media_type=images&search_cat=&searchtermx=&photographer_name=&people_gender=&people_age=&people_ethnicity=&people_number=&color=&page=1&inline=264283169">vexworldwide</a></span></figcaption></figure><p>From unreasonably cool open-plan offices in San Francisco and New York, they are orchestrating an international business revolution. Casually dressed and armed with MacBooks, a new generation of design executives have emerged from their studios to cross into the corporate mainstream. They are aiming to undercut, outperform and ultimately overthrow incumbents across the business world. And they want to improve your life.</p>
<p>An <a href="http://www.designdisruptors.com">upcoming documentary</a>, Design Disruptors: How Design Became the New Language of Business, promises to tell the story of how designers were integral to the success of new-media giants like Google, Facebook, Pinterest, Dropbox, Airbnb, Netflix and Twitter. In recent years, they have also become the darlings of the wider business elite. With executive titles like “vice president, design”, “VP of product” and “director of design”, their brief has been to integrate design philosophy into the biggest multinationals from the boardroom down. Such is their confidence that many believe there is nothing their designs cannot fix. But, as we will see, there is a lot they have to prove first. </p>
<p>The rise of these designers is a tale of two buzzphrases – “design thinking” and “disruptive innovation”. Disruptive innovation is a <a href="http://www.claytonchristensen.com/key-concepts/">concept from</a> Harvard Business School, characterising small businesses that often begin in obscure corners of markets. They don’t initially appear a threat, but begin to offer more mainstream services which are better and cheaper than those offered by incumbents. By the time the incumbents respond, often by mimicking the innovation, the disruptors have already taken over.</p>
<p>Design thinking, meanwhile, is the idea that non-designers can learn to think more creatively using methods based on how designers work – rapidly and repeatedly prototyping ideas and celebrating and embracing failures. A few years ago this began to be adopted by start-ups in Silicon Valley. According to the documentary, the key to their runaway success was combining this philosophy with disruptive innovation, plus the secret ingredient of excellent designs that focused on the experience of the user. </p>
<p>Take Airbnb, which disrupted the holiday-lettings industry by providing a cheaper service that was more enjoyable for users. Founded by two designers, the company has always had design thinking at its core. The success of the website and app is in small design details which allow sceptical travellers to see strangers not as risks but as welcoming hosts. Subtle hints like the size of text boxes for communications between users and prospective hosts encourage messages with just the right level of detail that come across as friendly rather than secretive or over-familiar. This is not simply web design: <a href="http://www.ted.com/talks/joe_gebbia_how_airbnb_designs_for_trust">this is</a> design for human relationships.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/115331/original/image-20160316-30247-sekntg.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/115331/original/image-20160316-30247-sekntg.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/115331/original/image-20160316-30247-sekntg.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/115331/original/image-20160316-30247-sekntg.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/115331/original/image-20160316-30247-sekntg.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/115331/original/image-20160316-30247-sekntg.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/115331/original/image-20160316-30247-sekntg.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/115331/original/image-20160316-30247-sekntg.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Duly disrupted.</span>
<span class="attribution"><a class="source" href="http://www.shutterstock.com/cat.mhtml?lang=en&language=en&ref_site=photo&search_source=search_form&version=llv1&anyorall=all&safesearch=1&use_local_boost=1&autocomplete_id=&search_tracking_id=oCfG7GDEOcMKKCoxPRYYBA&searchterm=airbnb&show_color_wheel=1&orient=&commercial_ok=&media_type=images&search_cat=&searchtermx=&photographer_name=&people_gender=&people_age=&people_ethnicity=&people_number=&color=&page=1&inline=387129184">AmsStudio</a></span>
</figcaption>
</figure>
<p>Many other big businesses watched these successes with great interest. They started buying into the idea that any individual or organisation that learns to think like designers can transform not only their products and services but also their processes, corporate strategies and underlying institutional structures. Through design thinking, went the argument, they would become more creative and more able to become disruptive innovators themselves. </p>
<p>Whether design thinking actually lives up to these promises is contentious, but many heavyweight corporations have been turning themselves into “design-led businesses” with integrated “design cultures” – often backed by serious investment. IBM is a good example. Under Phil Gilbert, the head of design, it has opened a string of design studios worldwide in the past three years. It hired over 1,100 designers and <a href="http://www.nytimes.com/2015/11/15/business/ibms-design-centered-strategy-to-set-free-the-squares.html?_r=1">aims to reach 1,500</a>. Apple’s success is often attributed to Steve Jobs’ belief in the power of design and trust in lead designer <a href="http://www.fastcodesign.com/3042524/fast-feed/22-things-you-need-to-know-about-apples-jony-ive">Jonathan Ive</a>. But recently, less obviously design-oriented businesses such as 3M, Philips, Pepsico, Barclays and Johnson & Johnson have all hired a “chief design officer” too. Where once design was just a service that provided style and functionality to products, now it is a core business value. </p>
<h2>Bow down?</h2>
<p>Design at its best can significantly improve how we interact with the world. When the Design Disruptors film goes live in the coming weeks, it should be commended if it brings this story of the positive impact of design to a broader public. Yet there is a danger in getting slightly carried away, like some of the design executives towards the end of the <a href="http://www.designdisruptors.com">trailer</a>. </p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/W4AViRgrgkU?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
</figure>
<p>As music swells to an uplifting crescendo, Braden Kowitz, design partner at Google Ventures, states:</p>
<blockquote>
<p>The questions now aren’t, can we build it? Cause more and more the answer’s yes, we can build anything. The question is, what is the future that we want to build together? For me that’s the power of design.</p>
</blockquote>
<p>This is indeed a good question: if everything really is possible, what should designers do with this power? Companies like Google, Spotify and Airbnb certainly make our daily lives easier, more efficient and more pleasant by disruptively improving on old designs. But the list of problems facing human society is as long as ever. Injustice, poverty, prejudice, displacement, corruption, conflict, fear, disease – take your pick. Why aren’t the design disruptors tackling some of these issues? </p>
<p>To give just one example of how disruptive design can make a truly meaningful impact, <a href="https://www.mfarm.co.ke">M-Farm</a> is a text-based service for farmers across Africa. It allows them to access accurate real-time information on market prices and weather; share and connect with previously inaccessible experts and the wider farming community; and sell their products at the best price. In an era where mobile-phone ownership has exploded in Africa, the service <a href="https://www.youtube.com/watch?v=K6RLvnLyZ9g">has been</a> very successful in helping farmers that were previously isolated and exploited. M-Farm has been designed to meet an important need, but it has had no significant input from high-flying designers.</p>
<p>Design should be about more than just making comfortable lives more comfortable. If this is a story of how designers won great power and ultimately squandered it, what a pity that would be. Utopian optimism from the likes of Kowitz is one thing, but actions speak louder than words. Design can make a difference in the world, but designers must choose what this difference will be.</p><img src="https://counter.theconversation.com/content/55331/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Peter Buwert does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>A generation of designers broke out of their studios and took the business world by storm. Their skills could also be turned to bigger world problems. Peter Buwert, Lecturer, Graphic Design, Edinburgh Napier University. Licensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/504742015-11-11T10:40:17Z2015-11-11T10:40:17ZSound waves could power hard disk drives of the future<figure><img src="https://images.theconversation.com/files/101455/original/image-20151110-21201-xh7uy.jpg?ixlib=rb-1.1.0&rect=4%2C5%2C693%2C472&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Tiny surface acoustic waves are enough to carry data quickly. </span> <span class="attribution"><a class="source" href="https://commons.wikimedia.org/wiki/File:TeO2SAWs.jpg">Femtoquake</a>, <a class="license" href="http://creativecommons.org/licenses/by-sa/4.0/">CC BY-SA</a></span></figcaption></figure><p>Our need to store data is growing at an astonishing rate. An estimated <a href="http://www.forbes.com/sites/ciocentral/2012/05/01/big-data-the-hidden-opportunity/">2.7 zettabytes (2.7 x 10<sup>21</sup> bytes) of data</a> are currently held worldwide, equivalent to hundreds of billions of bytes for every one of the 7 billion people on Earth. Accessing this data quickly and reliably is essential for us to do useful things with it – the problem is, all our current methods of doing so are far too slow. </p>
<p>Conventional hard-disk drives encode data magnetically on spinning discs, from which the data is read by a sensor that scans over the surface as it rapidly rotates. The moving parts introduce the potential for mechanical failure, and limit the speeds possible. This slows everything down.</p>
<p>Much faster are solid-state storage devices, which have no mechanical parts and store data as tiny electrical charges. Most modern laptops, all modern smartphones and digital cameras, and many other devices use this technology – also known as <a href="http://computer.howstuffworks.com/flash-memory.htm/printable">flash memory</a>. However, while solid-state devices are much faster they have a much shorter lifespan than hard disks before becoming unreliable, and are much more expensive. And despite their speed, they’re still far slower than the speed at which data travels between other components of a computer, and so still act as a brake on the system as a whole. </p>
<p>A solid-state drive that encodes data magnetically would be ideal. IBM is developing one variation, known as <a href="http://www-03.ibm.com/ibm/history/ibm100/us/en/icons/racetrack/">racetrack memory</a>. This uses collections of tiny nanowires hundreds of times thinner than a human hair, along which data is magnetically encoded as strings of ones and zeros. Although racetrack memory can move data far faster than typical hard disks, a key challenge is to find ways to make the data “flow” through the nanowires so that it passes across the sensors that read and write data to the wire. This can be achieved by applying magnetic fields or electric currents, but doing so generates heat and reduces power efficiency, affecting battery life.</p>
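<p>Conceptually, a racetrack behaves like a shift register: the read/write head stays put and the bits themselves move past it. The sketch below is purely my own illustration of that access pattern – the class and method names are invented, not IBM’s design.</p>

```python
from collections import deque

# Illustrative model of racetrack-style access: a fixed head at position 0,
# with the stored bits shifted along the wire to bring data under the head.
class Racetrack:
    def __init__(self, bits):
        self.wire = deque(bits)   # magnetic domains along the nanowire

    def shift(self, steps=1):
        """Move the data along the wire (by current, field or sound wave)."""
        self.wire.rotate(-steps)  # positive steps: domains flow toward the head

    def read(self):
        return self.wire[0]       # the head only ever sees position 0

    def write(self, bit):
        self.wire[0] = bit

track = Racetrack([1, 0, 1, 1, 0])
track.shift(2)          # bring the third domain under the head
print(track.read())     # -> 1
```

<p>The engineering problem the article describes is precisely the <code>shift</code> step: finding a way to make real magnetic domains flow along a real nanowire without wasting power.</p>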
<figure class="align-center ">
<img alt="" src="https://images.theconversation.com/files/101484/original/image-20151110-21190-ojbiwg.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/101484/original/image-20151110-21190-ojbiwg.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=375&fit=crop&dpr=1 600w, https://images.theconversation.com/files/101484/original/image-20151110-21190-ojbiwg.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=375&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/101484/original/image-20151110-21190-ojbiwg.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=375&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/101484/original/image-20151110-21190-ojbiwg.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=472&fit=crop&dpr=1 754w, https://images.theconversation.com/files/101484/original/image-20151110-21190-ojbiwg.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=472&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/101484/original/image-20151110-21190-ojbiwg.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=472&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">Electrical current can provide flow for racetrack memory, but at the cost of heat and inefficiency.</span>
<span class="attribution"><span class="license">Author provided</span></span>
</figcaption>
</figure>
<p>There are other ways of moving magnetic data, however. My group at the University of Sheffield, in conjunction with John Cunningham at the University of Leeds, has been using simulations, <a href="http://scitation.aip.org/content/aip/journal/apl/107/14/10.1063/1.4932057">now published in Applied Physics Letters</a>, to explore ways of making racetrack memory more efficient – and we stumbled upon a surprising solution using sound waves.</p>
<h2>Moved by the sound</h2>
<p>In our simulations we created vibration-sensitive magnetic nanowires on top of layers of piezoelectric materials, which stretch when we apply an electric voltage. By applying a rapidly switching voltage, we make them vibrate, creating a special sort of sound wave known as a <a href="http://www.sp.phy.cam.ac.uk/research/fundamentals-of-low-dimensional-semiconductor-systems/saw">surface acoustic wave</a>.</p>
<p>Using this method we created two sound waves, one flowing forwards along the nanowires and one flowing backwards. These waves combine to create regularly spaced regions of the nanowire that vibrate strongly, separated by regions that don’t vibrate at all. <a href="http://scitation.aip.org/content/aip/journal/apl/107/14/10.1063/1.4932057?utm_source=tech.mazavr.tk&utm_medium=link&utm_compaign=article">Our research shows</a> that the magnetic data bits are attracted to and held in place at the strongly vibrating sections. If we then change the pitch of the two sound waves, so that one “sings” a higher note and one a lower note, we find that the vibrating regions start to flow along the nanowire, pulling the data bits with them just as is required for racetrack memory. If we switch the notes around, the data flows in the opposite direction. Using sound alone, it’s possible to move data in both directions.</p>
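<p>That drifting pattern can be reproduced in a toy numerical model. This is my own illustration with arbitrary units, not the published simulations: superposing a forward wave at frequency f1 and a backward wave at a slightly lower f2 produces a node/antinode pattern whose nodes drift at (ω1 − ω2)/(k1 + k2), reversing direction when the two notes are swapped.</p>

```python
import math

# Two counter-propagating surface waves with slightly different pitches.
# Arbitrary illustrative units, not the paper's parameters.
v = 1.0                  # wave speed
f1, f2 = 1.05, 1.00      # the "higher note" (forward) and "lower note" (backward)
w1, w2 = 2 * math.pi * f1, 2 * math.pi * f2
k1, k2 = w1 / v, w2 / v  # wavenumbers

def displacement(x, t):
    """Forward wave at f1 plus backward wave at f2."""
    return math.sin(k1 * x - w1 * t) + math.sin(k2 * x + w2 * t)

def envelope(x, t0, samples=120):
    """Peak |displacement| at x over one carrier period starting at t0."""
    T = 2.0 / (f1 + f2)
    return max(abs(displacement(x, t0 + i * T / samples)) for i in range(samples))

def node_near(x_guess, t0, half_width=0.2, steps=400):
    """Locate the envelope minimum (a non-vibrating node) near x_guess."""
    xs = [x_guess - half_width + 2 * half_width * i / steps for i in range(steps + 1)]
    return min(xs, key=lambda x: envelope(x, t0))

# Nodes sit where the slowly varying interference factor vanishes, so the whole
# pattern drifts at v_drift = (w1 - w2)/(k1 + k2); swapping f1 and f2 flips it.
v_drift = (w1 - w2) / (k1 + k2)
x0 = node_near(2 * math.pi / (k1 + k2), 0.0)  # a node at t = 0
x4 = node_near(x0, 4.0)                       # the same node, 4 time units later
print(f"node moved {x4 - x0:.3f}; predicted {4 * v_drift:.3f}")
```

<p>With equal pitches (f1 = f2) the drift velocity is zero – an ordinary standing wave – which is why detuning the two notes is what sets the data in motion.</p>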
<figure class="align-center ">
<img alt="" src="https://images.theconversation.com/files/101485/original/image-20151110-21201-1sqavc9.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/101485/original/image-20151110-21201-1sqavc9.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=402&fit=crop&dpr=1 600w, https://images.theconversation.com/files/101485/original/image-20151110-21201-1sqavc9.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=402&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/101485/original/image-20151110-21201-1sqavc9.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=402&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/101485/original/image-20151110-21201-1sqavc9.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=505&fit=crop&dpr=1 754w, https://images.theconversation.com/files/101485/original/image-20151110-21201-1sqavc9.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=505&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/101485/original/image-20151110-21201-1sqavc9.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=505&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">Using sound waves to provide an energy efficient flow for racetrack memory.</span>
<span class="attribution"><span class="license">Author provided</span></span>
</figcaption>
</figure>
<p>At the moment our simulations show data flowing at around 100mph (160kph). This sounds pretty fast, but we’d like it to be ten times faster. However the really exciting implications of this stem from the unique properties of surface acoustic waves. Because they only exist right at a material’s surface they lose energy very slowly, and can travel as much as several centimetres (which is huge when you consider the tiny size of the nanowires). Because nanowires are so small a single pair of waves could be applied to a very large number of wires, and therefore the data within them, at the same time. Potentially this makes it a very power efficient way of moving lots of data around quickly.</p>
<p>There are still a lot of questions that need answering before we’ll know whether this technology is really the solution to the problems holding back racetrack memory. But with these promising initial indications, the next step is to create an experimental prototype to test it for real.</p><img src="https://counter.theconversation.com/content/50474/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Tom Hayward receives funding from the Engineering and Physical Sciences Research Council and the Royal Society.</span></em></p>After electric and magnetic fields, nanoscale sound waves are a new idea for data storage. Tom Hayward, EPSRC Career Acceleration Research Fellow, University of Sheffield. Licensed as Creative Commons – attribution, no derivatives.