tag:theconversation.com,2011:/fr/topics/computer-bug-9897/articlesComputer bug – The Conversation2018-01-05T12:08:43Ztag:theconversation.com,2011:article/896692018-01-05T12:08:43Z2018-01-05T12:08:43ZApple, Android and PC chip problem – why your smartphone and laptop are so at risk<figure><img src="https://images.theconversation.com/files/200933/original/file-20180105-26172-1v8qqry.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">
</span> <span class="attribution"><a class="source" href="https://www.shutterstock.com/image-illustration/illustration-computer-processor-bright-blue-on-513588976?src=DM3a9z-b8bs5l_-416KnBA-1-4">Shutterstock</a></span></figcaption></figure><p>Less than a week into 2018 and we may have already seen the year’s biggest technology story. <a href="https://security.googleblog.com/2018/01/todays-cpu-vulnerability-what-you-need.html">Researchers have</a> identified <a href="https://spectreattack.com/">a security flaw</a> in the computer processors made by three of the world’s biggest chip designers, Intel, AMD and ARM, and a second flaw in Intel chips. This means that almost every smartphone, tablet, laptop and business computer in the world could be vulnerable to having sensitive data including passwords stolen. The cloud servers that store websites and other internet data are also at risk.</p>
<p>This is one of the biggest cyber security vulnerabilities we’ve ever seen in terms of the potential impact on personal, business and infrastructure computer systems. What’s more, because the flaw is located in such a fundamental part of the computer, there’s no way to know whether or not a machine has been targeted and what data might have been accessed. </p>
<p>Both the main flaw (<a href="https://spectreattack.com/spectre.pdf">Spectre</a>) and the Intel-only flaw (<a href="https://meltdownattack.com/meltdown.pdf">Meltdown</a>) stem from a design technique known as “speculative execution”, intended to enhance the chips’ performance. The problem means hackers can access parts of the computer’s memory that should be inaccessible. Sensitive data including passwords, email, documents and photos could all be at risk.</p>
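<p>The shape of the vulnerable pattern can be sketched in a few lines of C. This is an illustration of the “variant 1” bounds-check-bypass gadget described in the Spectre paper, with hypothetical names, not code from any real product. Architecturally the function is safe, but the processor may speculatively run the inner array access before the bounds check resolves, leaving a cache footprint that depends on out-of-bounds memory:</p>

```c
#include <stddef.h>
#include <stdint.h>

/* Illustrative Spectre "variant 1" gadget (bounds-check bypass).
 * All names are hypothetical. During speculative execution the CPU may
 * perform the array2 access even when x is out of bounds, and the cache
 * line it touches depends on the out-of-bounds byte array1[x] --
 * a side channel an attacker can later measure. */
uint8_t array1[16];
size_t  array1_size = 16;
uint8_t array2[16 * 512];

uint8_t victim_function(size_t x) {
    if (x < array1_size) {                /* bounds check...           */
        return array2[array1[x] * 512];   /* ...speculatively bypassed */
    }
    return 0;
}
```

The code returns correct results in normal (architectural) execution; the leak happens only in the processor's transient, rolled-back work, which is why no software log ever records it.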
<p>Most cyber attacks involve finding a flaw in a computer’s software that allows hackers to access the machine’s memory or operating system. For example, in 2017 an attack <a href="https://theconversation.com/heres-how-the-ransomware-attack-was-stopped-and-why-it-could-soon-start-again-77745">known as “WannaCry”</a> exploited a flaw in older versions of Windows. It affected around 300,000 computers in 150 countries and had a devastating effect on businesses and organisations including the UK’s National Health Service (NHS).</p>
<p>But the Spectre and Meltdown flaws could let hackers cut through all the layers of software to violate the very heart of a computer, the processor chip that powers its fundamental workings. Because similar designs are used by all the major chip makers, almost every computer in the world could be affected, from Apple iPhones and Android devices, to MacBooks, large desktop PCs and internet servers.</p>
<p>The process is also so fundamental that it doesn’t create any log of its operations, meaning there is no record of whether a particular chip has been hacked or not. This makes it harder to spot cyber attacks at an early stage in order to prevent them happening again, or to investigate what data might have been accessed or stolen.</p>
<p>Luckily, tech companies have already begun releasing software patches that they say will <a href="http://www.bbc.co.uk/news/technology-42561169">solve the problems</a> without a significant impact on performance. But <a href="https://www.newscientist.com/article/2157704-your-computer-may-run-30-per-cent-slower-due-to-intel-chip-bug/">some have claimed</a> any fix could dramatically slow down computer processing speed. We will have to wait to see the long-term impact.</p>
<h2>Responsible disclosure</h2>
<p>The story also raises an important issue about the responsible disclosure of such security flaws. <a href="http://uk.businessinsider.com/intel-ceo-krzanich-sold-shares-after-company-was-informed-of-chip-flaw-2018-1?r=US&IR=T">Reports suggest</a> the industry has known of the problem for months but only limited details have been disclosed so far. You could argue that consumers have the right to know about such flaws as soon as they are discovered so they can try to protect their data. Of course, the problem is this could end up fuelling cyber attacks by also making hackers aware of the flaw.</p>
<p>In the past, this debate has forced tech companies to use the law to prevent researchers disclosing security problems. For example,
scientists from the University of Birmingham faced a <a href="https://www.theguardian.com/technology/2013/jul/26/scientist-banned-revealing-codes-cars">legal injunction</a> from car manufacturer Volkswagen stopping them publishing details of flaws in car keyless entry systems. </p>
<p>The preferred route is “responsible disclosure”. When researchers discover a problem, they tell a small number of relevant people who can then work on a solution. The manufacturer can then reveal the problem to the public once the solution is ready, minimising the potential for hacking and damage to the company’s share price.</p>
<p>In this case, a researcher at Google who found the flaws seems to have alerted Intel in June 2017, and the two companies had been planning on announcing a fix. But details of the flaw were then published by technology website <a href="https://www.theregister.co.uk/2018/01/02/intel_cpu_design_flaw/">The Register</a>, forcing the firms to reveal what they knew earlier than planned, and hitting <a href="https://www.reuters.com/article/us-cyberintel-stocks/intel-shares-fall-as-investors-worry-about-costs-of-chip-flaw-idUSKBN1ET1NH">Intel’s share price</a>. While this kind of revelation arguably undermines responsible disclosure, the counter argument is that it forces manufacturers to <a href="http://science.sciencemag.org/content/314/5799/610.full">fix the problem faster</a>.</p>
<p class="fine-print"><em><span>Siraj Ahmed Shaikh receives funding from EPSRC. </span></em></p>Chips from the biggest chipmakers – Intel, AMD and ARM – all contain serious security flaws.Siraj Ahmed Shaikh, Professor of Systems Security, Coventry UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/856322017-10-12T14:29:03Z2017-10-12T14:29:03ZComputers will soon be able to fix themselves – are IT departments for the chop?<figure><img src="https://images.theconversation.com/files/190005/original/file-20171012-31440-1tzlhhl.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">They call me the digital lizard. </span> <span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/laptop-computer-hand-coming-through-screen-74147725?src=QhzS2EeTL3o35SNF_68PDg-1-4">Jeffrey B. Banke</a></span></figcaption></figure><p>Robots and AI are replacing workers at an <a href="https://www.theguardian.com/technology/2017/mar/24/millions-uk-workers-risk-replaced-robots-study-warns">alarming rate</a>, from simple manual tasks to making complex legal decisions and medical diagnoses. But the AI itself, and indeed most software, is still largely programmed by humans. </p>
<p>Yet there are signs that this might be changing. Several programming tools are emerging which help to automate software testing, one of which we have been developing ourselves. The prospect looks exciting, but it raises questions about how far this will encroach on the profession. Could we be looking at a world of Terminator-like software writers who consign their human counterparts to the dole queue?</p>
<p>We computer programmers devote an unholy amount of time to testing software and fixing bugs. It’s costly, time consuming and fiddly – yet it’s vital if you want to bring high quality software to market. </p>
<h2>Testing, testing …</h2>
<p>A common method of testing software, known as dynamic analysis, involves running a program, asking it to do certain things and seeing how it copes. Many tools exist to help with this process, usually throwing thousands of random choices at a program and checking all the responses. </p>
<p>Facebook <a href="https://arstechnica.co.uk/information-technology/2017/08/facebook-dynamic-analysis-software-sapienz/">recently unveiled</a> a tool called <a href="https://www.youtube.com/watch?v=j3eV8NiWLg4">Sapienz</a> that is a big leap forward in this area. Originally developed by University College London, Sapienz is able to identify bugs in Android software via automated tests that are far more efficient than the competition – requiring between 100 and 150 choices by the user compared to a norm of nearer 15,000. </p>
<figure class="align-right zoomable">
<a href="https://images.theconversation.com/files/189609/original/file-20171010-17676-10qdlsm.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/189609/original/file-20171010-17676-10qdlsm.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=237&fit=clip" srcset="https://images.theconversation.com/files/189609/original/file-20171010-17676-10qdlsm.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=636&fit=crop&dpr=1 600w, https://images.theconversation.com/files/189609/original/file-20171010-17676-10qdlsm.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=636&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/189609/original/file-20171010-17676-10qdlsm.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=636&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/189609/original/file-20171010-17676-10qdlsm.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=800&fit=crop&dpr=1 754w, https://images.theconversation.com/files/189609/original/file-20171010-17676-10qdlsm.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=800&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/189609/original/file-20171010-17676-10qdlsm.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=800&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Bug on out.</span>
<span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/software-bug-causing-program-error-647034754?src=ShnB1VkW4HlqgkzEbavO3g-1-14">Phichak</a></span>
</figcaption>
</figure>
<p>The difference is that Sapienz contains an evolutionary algorithm that learns from the software’s responses to previous choices. It then makes new choices that aim to find the maximum number of glitches and test the maximum number of kinds of choices, doing everything as efficiently as possible. </p>
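<p>A toy sketch can show the principle, though not Sapienz itself (its real search is evolutionary and multi-objective). Instead of firing purely random inputs at a program, each round uses feedback from earlier runs to move towards inputs that expose a failure. All names here are hypothetical:</p>

```c
/* Toy illustration of feedback-driven test generation.
 * The "program under test" hides a bug on certain inputs. */
static int crashes(int input) {
    return input % 35 == 0 && input != 0;  /* fails on multiples of 35 */
}

/* One search step: keep the current test input, or move to a nearby
 * neighbour that triggers the crash. Feedback from each run of the
 * program guides the next choice, so far fewer tries are needed than
 * with blind random testing. */
static int improve(int input) {
    for (int d = -3; d <= 3; d++)
        if (crashes(input + d))
            return input + d;
    return input;
}
```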
<p>It may soon have competition from DiffBlue, a spin-out from the University of Oxford. Based on an AI engine designed to analyse and understand what a program is doing, the company is developing several automated tools to help programmers. One will find bugs and write software tests; another will find weaknesses that could be exploited by hackers; a third will make improvements to code that could be better expressed or is out of date. DiffBlue recently <a href="https://techcrunch.com/2017/06/27/diffblue/">raised</a> US$22m in investment funding, and claims to be delivering these tools to numerous blue chip companies.</p>
<p>The tool that we have developed is dedicated to bug hunting. Software bugs are often just an innocent slip of the finger, like writing a “+” instead of a “-”; not so different to typos in a Word document. Or they can arise because computer scientists count differently, starting at zero instead of one. This can lead to so-called “off by one” errors. </p>
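<p>Both kinds of slip fit in a couple of lines of C. Because arrays are indexed from zero, an array of length <em>n</em> has valid indexes 0 to <em>n</em> − 1, and writing the bound as <em>n</em> walks one element past the end:</p>

```c
/* The classic "off by one": an array of length n has valid
 * indexes 0 .. n-1, so the last element is a[n - 1]. */
static int last_item(const int *a, int n) {
    return a[n - 1];   /* correct; the common bug is writing a[n] */
}

static int sum(const int *a, int n) {
    int total = 0;
    for (int i = 0; i < n; i++)  /* "<= n" here would read past the end */
        total += a[i];
    return total;
}
```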
<figure class="align-right zoomable">
<a href="https://images.theconversation.com/files/189597/original/file-20171010-17715-1npsivb.png?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/189597/original/file-20171010-17715-1npsivb.png?ixlib=rb-1.1.0&q=45&auto=format&w=237&fit=clip" srcset="https://images.theconversation.com/files/189597/original/file-20171010-17715-1npsivb.png?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=1129&fit=crop&dpr=1 600w, https://images.theconversation.com/files/189597/original/file-20171010-17715-1npsivb.png?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=1129&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/189597/original/file-20171010-17715-1npsivb.png?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=1129&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/189597/original/file-20171010-17715-1npsivb.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=1418&fit=crop&dpr=1 754w, https://images.theconversation.com/files/189597/original/file-20171010-17715-1npsivb.png?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=1418&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/189597/original/file-20171010-17715-1npsivb.png?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=1418&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Here he is!</span>
<span class="attribution"><a class="source" href="https://www.flickr.com/photos/fort-greene/7484436922">Fort Greene Focus</a>, <a class="license" href="http://creativecommons.org/licenses/by-sa/4.0/">CC BY-SA</a></span>
</figcaption>
</figure>
<p>You find these annoying little glitches by making one small change after another – repeatedly testing and tweaking until you make the right one. The answer is often staring you in the face – a bit like the game “<a href="http://findwally.co.uk">Where’s Wally?</a>” (or Waldo if you’re in North America). After hours of trying, you finally get that a-ha moment and wonder why you didn’t spot it sooner. </p>
<p>Our tool <a href="https://dl.acm.org/citation.cfm?id=3082517">works as follows</a>: office workers go about their normal administrative duties in the daytime and report any bugs in software as they find them. Overnight, when everyone is logged off, the system enters a “dream-like” state. It makes small changes to the computer code, checking each time to see if the adjustment has fixed the reported problem. Feedback from each run of the code is used to inform which changes would be best to try next time. </p>
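<p>The night-shift loop can be caricatured in C. This is a hypothetical sketch of generate-and-validate repair, not our actual tool: try a small edit, rerun the failing test reported during the day, and keep the first edit that makes it pass. Here the candidate edits are modelled as swapping one arithmetic operator for another:</p>

```c
/* Hypothetical sketch of overnight generate-and-validate repair. */
typedef int (*op_fn)(int, int);

static int op_add(int a, int b) { return a + b; }  /* the buggy "+"  */
static int op_sub(int a, int b) { return a - b; }  /* candidate edit */

/* The bug report from the day shift: diff(5, 3) should be 2. */
static int test_passes(op_fn candidate) {
    return candidate(5, 3) == 2;
}

/* The night shift: search the space of small edits, guided by test
 * feedback, and leave a suggested fix for the morning. */
static op_fn repair(op_fn original, op_fn *edits, int n_edits) {
    if (test_passes(original))
        return original;              /* nothing to fix            */
    for (int i = 0; i < n_edits; i++)
        if (test_passes(edits[i]))
            return edits[i];          /* first edit that passes    */
    return original;                  /* no fix found; leave as-is */
}
```

The real system explores many more edit kinds and uses feedback from every run to rank which change to try next, but the keep-it-if-the-tests-pass shape is the same.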
<p>We tested it for four months in a Reykjavik organisation with about 200 users. In that time, it reported 22 bugs and all were fixed automatically. Each solution was found on these “night shifts”, meaning that when the programmer arrived at the office in the morning, a list of suggested bug fixes was waiting for them. </p>
<p>The idea is to put the programmer in control and change their job: less routine checking and more time for creativity. It’s roughly comparable to how spell checkers have taken much of the plod out of proof-reading a document. Both tools support the writer, and reduce the amount of time you probably spend swearing at the screen. </p>
<p>We have been able to show that the same system can be applied to other tasks, including making programs run faster and improving the accuracy of software designed to predict things (full disclosure: Saemundur recently co-founded a company to exploit the IP in the system). </p>
<h2>Future shock?</h2>
<p>It is easy enough to see why programs like these might be useful to software developers, but what about the downside? Will companies be able to downsize their IT requirement? Should programmers start fearing that <a href="https://www.theguardian.com/politics/2017/oct/04/the-cough-the-p45-the-falling-f-theresa-mays-speech-calamity">Theresa May moment</a>, when the automators show up with their P45s?</p>
<p>We think not. While automation like this raises the possibility of companies cutting back on certain junior programming roles, we believe that introducing automation into software development will allow programmers to become more innovative. They will be able to spend more time developing rather than maintaining, with the potential for endlessly exciting results.</p>
<p>Careers in computing will not vanish, but some boring tasks probably will. Programmers, software engineers and coders will have more automatic tools to make their job easier and more efficient. But jobs probably won’t be lost so much as changed. As a society we have little choice but to embrace technology; if we don’t, we’ll simply be left behind by the countries that do.</p>
<p class="fine-print"><em><span>Saemundur Haraldsson is a director of Easy Advanced Systems, which has been set up to develop the IP behind the system he developed at University of Stirling.</span></em></p><p class="fine-print"><em><span>Alexander Brownlee receives funding from EPSRC and Microsoft.</span></em></p><p class="fine-print"><em><span>John R. Woodward does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>The automation wave is coming for computer programmers – up to a point.Saemundur Haraldsson, Postdoctoral Research Fellow, University of StirlingAlexander Brownlee, Senior Research Assistant, University of StirlingJohn R. Woodward, Lecturer in Computer Science, Queen Mary University of LondonLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/424962015-05-28T12:16:10Z2015-05-28T12:16:10ZReport into air traffic control failure shows we need a better approach to programming<figure><img src="https://images.theconversation.com/files/83241/original/image-20150528-32187-1fj0vc0.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">The higher they are, the further they have to fall.</span> <span class="attribution"><a class="source" href="http://commons.wikimedia.org/wiki/File:Changi_Airport_Air_Traffic_Control_(141922192).jpg">Ramil Sagum</a>, <a class="license" href="http://creativecommons.org/licenses/by/4.0/">CC BY</a></span></figcaption></figure><p>The causes of the National Air Traffic Services (<a href="http://www.nats.aero/about-us/what-we-do/our-control-centres/">NATS</a>) flight control centre system failure in December 2014 that affected 65,000 passengers directly and up to 230,000 indirectly have been revealed in a recently published report.</p>
<p>The <a href="http://www.caa.co.uk/docs/2942/Independent%20Enquiry%20Final%20Report%202.0.pdf">final report</a> from the UK Civil Aviation Authority’s <a href="http://www.caa.co.uk/application.aspx?appid=7&mode=detail&nid=2411">Independent Inquiry Panel</a> set up after the incident examines the cause of and response to the outage at the Swanwick control centre in Hampshire, one of two sites controlling UK airspace (the other is at Prestwick in Scotland). Safety is key, said the report. I agree. And safety was not compromised in any way. Bravo!</p>
<p>“Independent” is a relative term: after all, the panel includes Joseph Sultana, director of Eurocontrol’s Network Management, and NATS’s operations chief Martin Rolfe, as well as UK Civil Aviation Authority board member and director of safety and airspace regulation Mark Swan – all of whom have skin in the game. (Full disclosure: a panel member, Professor John McDermid, is a valued colleague of many years.)</p>
<p>For a thorough analysis, however, it’s essential to involve people who know the systems intimately. Anyone who has dealt with software knows that often the fastest way to find a fault in a computer program is to ask the programmer who wrote the code. And the NATS analysis and recovery involved the programmers too: the Lockheed Martin engineers who built the system in the 1990s. This is one of two factors behind the “rapid fault detection and system restoration” during the incident on December 12.</p>
<p>The report investigates three things: the system outage, its cause and how the system was restored; NATS’ operational response to the outage; and what this says about how well the findings and recommendations following the last major incident, a year earlier, had been implemented. I look only at the first here, but arguably the other two are more important in the end.</p>
<h2>Cause and effect</h2>
<p>In the NATS control system, real-time traffic data is fed into controller workstations by a system component called the System Flight Server (SFS). The SFS architecture is what is called “hot back-up”. There are two identical components (called “channels”) computing the same data at the same time. Only one is “live” in the running system. If this channel falls over, then the identical back-up becomes the live channel, so the first can be restored to operation while offline. </p>
<p>This works quite well to cope with hardware failures, but is no protection against faults in the system logic, as that logic is running identically on both channels. If a certain input causes the first channel to fall over, then it will cause the second to fall over in exactly the same way. This is what happened in December.</p>
<p>The report describes a “latent software fault” in code written in the 1990s. Workstations in active use by controllers and supervisors, either for control or observation, are called Atomic Functions (AF). Their number should be limited by the SFS software to a maximum of 193, but in fact the limit was set to 151, and the SFS fell over when it reached 153.</p>
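<p>The failure mode can be sketched in a few lines of C, with hypothetical names and using the limit figure from the report purely for illustration. Hot back-up masks hardware faults but not logic faults: both channels run identical code, so the input that makes the live channel fall over fails the standby in exactly the same way:</p>

```c
#include <stdbool.h>

/* Sketch of a hot back-up pair (hypothetical names). */
typedef bool (*channel_fn)(int load);

/* The shared channel logic, with a latent fault: the internal limit
 * was set to 151 rather than the intended 193. */
static bool channel_copes(int active_workstations) {
    return active_workstations <= 151;  /* false == channel falls over */
}

static bool system_survives(channel_fn live, channel_fn standby, int load) {
    if (live(load))
        return true;       /* live channel copes                       */
    return standby(load);  /* fail over -- identical logic fails too   */
}
```

Feeding the pair 153 active workstations brings down first the live channel and then, immediately, its "back-up".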
<h2>Deja vu</h2>
<p>My first thought is that we’ve heard this before. As far back as 1997-98, evidence given to the House of Commons Select Committee on Environment, Transport and Regional Affairs <a href="http://www.parliament.the-stationery-office.co.uk/pa/cm199798/cmselect/cmenvtra/360iv/et0407.htm">reported</a> that the NATS system, then under development, was having trouble scaling from 30 to 100 active workstations. But this recent event was much simpler than that – it’s the kind of fault you see often in first-year university programming classes and which students are trained to avoid through inspection and testing. </p>
<p>There are technical methods known as static analysis to avoid such faults – and static analysis of the 1990s was well able to detect them. But such thorough analysis may have been seen as an impossible task: it was <a href="http://www.parliament.the-stationery-office.co.uk/pa/cm199798/cmselect/cmenvtra/360iv/et0407.htm">reported</a> in 1995 that the system exhibited 21,000 faults, of which 95% had been eliminated by 1997 (hurray!) – leaving 1,050 which hadn’t been (boo!). Not counting, of course, the fault which triggered the December outage. (I wonder how many more are lurking?)</p>
<p>How could an error not tolerated in undergraduate-level programming homework enter software developed by professionals over a decade <a href="http://www.computerweekly.com/feature/A-brief-history-of-an-air-traffic-control-system">at a cost approaching a billion pounds</a>?</p>
<h2>Changing methods</h2>
<p>Practice has changed since the 1990s. Static analysis of code in critical systems is now regarded as necessary. So-called <a href="http://www.eschertech.com/products/correct_by_construction.php">Correct by Construction</a> (CbyC) techniques, in which the software’s intended behaviour is defined in a specification and the code is then developed by stepwise refinement in a way that <a href="http://proteancode.com/keynote.pdf">demonstrably avoids</a> common sources of error, have proved their worth. NATS nowadays successfully uses key systems developed along CbyC principles, such as <a href="http://nats.aero/blog/2013/07/how-technology-is-transforming-air-traffic-management">iFacts</a>.</p>
<p>But change comes only gradually, and old habits are hard to leave behind. For example, <a href="https://nakedsecurity.sophos.com/2014/02/24/anatomy-of-a-goto-fail-apples-ssl-bug-explained-plus-an-unofficial-patch/">Apple’s “goto fail” bug</a> which surfaced in 2014 in many of its systems rendered void an internet security function essential for trust online – validating website authentication certificates. Yet it was caused by a simple syntax error – essentially a programming typo – that could and should have been caught by the most rudimentary static analysis. </p>
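<p>The shape of that bug fits in a few lines. This is a self-contained illustration modelled on, not copied from, Apple’s published code: the accidentally duplicated <code>goto</code> is unconditional, so the second check is never reached and verification “succeeds” for bad input:</p>

```c
/* Illustration of the "goto fail" pattern (names hypothetical).
 * The duplicated goto always jumps, skipping the hash check, so the
 * function returns 0 ("verified") even when hash_ok is false. */
static int verify(int signature_ok, int hash_ok) {
    int err = 0;
    if ((err = !signature_ok) != 0)
        goto fail;
        goto fail;                /* duplicated line -- always taken */
    if ((err = !hash_ok) != 0)    /* dead code: never executed       */
        goto fail;
fail:
    return err;                   /* 0 means "verified"              */
}
```

A static analyser, or even a compiler warning about unreachable code or misleading indentation, flags this immediately.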
<p>Unlike the public enquiry and report undertaken by NATS, Apple has said little about either how the problem came about or the lessons learned – and the same goes for the developers of many other software packages that lie at the heart of the global computerised economy.</p>
<p class="fine-print"><em><span>Peter Bernard Ladkin presented evidence to the UK House of Commons Transportation Sub-committee on the development of the Swanwick system in 1997 and 1998. His tech-transfer company Causalis Limited received consulting payments from BT Systems, as well as from Serco for due-diligence analysis of the Swanwick system, for their bids during the privatisation of NATS near the turn of the millennium.</span></em></p>Software is now too critical to how the world works, so we need to enforce ways to ensure it’s better.Peter Bernard Ladkin, Professor of Computer Networks and Distributed Systems, Bielefeld UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/335222014-11-11T08:59:44Z2014-11-11T08:59:44ZIt’s possible to write flaw-free software, so why don’t we?<figure><img src="https://images.theconversation.com/files/64139/original/x587hfzh-1415630068.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">If Spock would not think it illogical, it's probably good code.</span> <span class="attribution"><a class="source" href="http://commons.wikimedia.org/wiki/File:Agda_proof.jpg">Alexandre Buisse</a>, <a class="license" href="http://creativecommons.org/licenses/by-sa/4.0/">CC BY-SA</a></span></figcaption></figure><p>Legendary Dutch computer scientist <a href="http://www.cs.utexas.edu/users/EWD/">Edsger W Dijkstra</a> famously remarked that “<a href="http://homepages.cs.ncl.ac.uk/brian.randell/NATO/nato1969.PDF">testing shows the presence, not the absence of bugs</a>”. In fact the only definitive way to establish that software is correct and bug-free is through mathematics. </p>
<p>It has long been known that software is hard to get right. Since <a href="http://computer.org/computer-pioneers/pdfs/B/Bauer.pdf">Friedrich L Bauer</a> organised the very first <a href="http://homepages.cs.ncl.ac.uk/brian.randell/NATO/NATOReports/">conference on “software engineering”</a> in 1968, computer scientists have devised methodologies to structure and guide software development. One of these, sometimes called strong software engineering or more usually <a href="http://users.ece.cmu.edu/%7Ekoopman/des_s99/formal_methods/">formal methods</a>, uses mathematics to ensure error-free programming.</p>
<p>As the economy becomes ever more computerised and entwined with the internet, flaws and bugs in software increasingly lead to economic costs from fraud and loss. But despite having heard expert evidence that echoed Dijkstra’s words and emphasised the need for the correct, verified software that formal methods can achieve, the UK government seems not to have got the message.</p>
<h2>Formal software engineering</h2>
<p>The UK has always been big in formal methods. Two British computer scientists, Tony Hoare (<a href="http://www.cs.ox.ac.uk/people/tony.hoare/">Oxford 1977-</a>, <a href="http://research.microsoft.com/en-us/news/features/hoare-080411.aspx">Microsoft Research 1999-</a>) and the late <a href="http://www.cl.cam.ac.uk/archive/rm135/">Robin Milner</a> (Edinburgh 1973-95, Cambridge 1995-2001) were given <a href="http://amturing.acm.org/">Turing Awards</a> – the computing equivalent of the Nobel Prize – for their work in formal methods.</p>
<p>British computer scientist <a href="http://homepages.cs.ncl.ac.uk/cliff.jones/">Cliff B Jones</a> was one of the inventors of the <a href="http://overturetool.org/method/">Vienna Development Method</a> while working for IBM in Vienna, and IBM UK and Oxford University Computing Laboratory, led by Tony Hoare, won a <a href="https://www.gov.uk/queens-awards-for-enterprise">Queen’s Award for Technological Achievement</a> for their work to formalise IBM’s <a href="http://www.bcs.org/upload/pdf/advprog-apr06.pdf">CICS software</a>. In the process they further developed the <a href="http://formalmethods.wikia.com/wiki/Z_notation">Z notation</a> which has become one of the major formal methods. </p>
<p>The formal method process entails describing what the program is supposed to do using logical and mathematical notation, then using <a href="http://math.berkeley.edu/%7Ehutching/teach/proofs.pdf">logical and mathematical proofs</a> to verify that the program indeed does what it should. For example, the following Hoare logic formula describing a program’s function shows how formal methods reduce code to something as irreducibly true or false as 1 + 1 = 2.</p>
<figure class="align-center ">
<img alt="" src="https://images.theconversation.com/files/63570/original/jcbyhhbs-1415035184.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/63570/original/jcbyhhbs-1415035184.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=245&fit=crop&dpr=1 600w, https://images.theconversation.com/files/63570/original/jcbyhhbs-1415035184.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=245&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/63570/original/jcbyhhbs-1415035184.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=245&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/63570/original/jcbyhhbs-1415035184.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=308&fit=crop&dpr=1 754w, https://images.theconversation.com/files/63570/original/jcbyhhbs-1415035184.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=308&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/63570/original/jcbyhhbs-1415035184.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=308&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">Hoare logic formula: if a program S started in a state satisfying P takes us to a state satisfying Q, and program T takes us from Q to R, then first doing S and then T takes us from P to R.</span>
</figcaption>
</figure>
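<p>In standard notation, the rule pictured above is Hoare logic’s rule of sequential composition:</p>

```latex
\frac{\{P\}\;S\;\{Q\} \qquad \{Q\}\;T\;\{R\}}
     {\{P\}\;S;\,T\;\{R\}}
```

<p>Each triple {P} S {Q} is simply a true-or-false claim about the program, which is what allows a whole program’s correctness to be built up, and mechanically checked, from such small steps.</p>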
<p>Taught at most UK universities since the mid-1980s, formal methods have seen considerable use by industry in <a href="http://www.inrialpes.fr/vasy/fmics/">safety-critical systems</a>. Recent advances have reached a point where formal methods’ capacity to check and verify code can be applied at scale with powerful automated tools.</p>
<h2>Government gets the message</h2>
<p>Is there any impetus to see them used more widely, however? When the Home Affairs Committee took evidence in its <a href="http://www.publications.parliament.uk/pa/cm201314/cmselect/cmhaff/70/70.pdf">E-crime enquiry</a> in April 2013, <a href="http://www.profjimnorton.com/">Professor Jim Norton</a>, former chair of the <a href="http://www.bcs.org">British Computer Society</a>, told the committee:</p>
<blockquote>
<p>We need better software, and we know how to write software very much better than we actually do in practice in most cases today… We do not use the formal mathematical methods that we have available, which we have had for 40 years, to produce better software.</p>
</blockquote>
<p>Based on Norton’s evidence, the committee put forward in recommendation 32 “that software for key infrastructure be provably secure, by using mathematical approaches to writing code.”</p>
<p>Two months later in June, the Science and Technology Committee <a href="http://www.publications.parliament.uk/pa/cm201314/cmselect/cmsctech/uc252-i/uc25201.htm">took evidence</a> on the <a href="https://www.gov.uk/service-manual/digital-by-default">Digital by Default</a> programme of internet-delivered public services. One invited expert was <a href="http://www.thomas-associates.co.uk/">Dr Martyn Thomas</a>, founder of <a href="http://www.altran.co.uk/">Praxis</a>, one of the most prominent companies using formal methods for safety-critical systems development. Asked how to achieve the required levels of security, he replied that: </p>
<blockquote>
<p>Heroic amounts of testing won’t give you a high degree of confidence that things are correct or have the properties you expect… it has to be done by analysis. That means the software has to be written in such a way that it can be analysed, and that is a big change to the way the industry currently works.</p>
</blockquote>
<p>The committee <a href="http://www.parliament.uk/documents/commons-committees/science-technology/130709-Chair-to-Francis-Maude.pdf">sent an open letter</a> to Cabinet Office minister Francis Maude, asking whether the government “was confident that software developed meets the highest engineering standards.”</p>
<h2>Trustworthy software is the answer</h2>
<p>The government, in its <a href="http://www.parliament.uk/documents/commons-committees/home-affairs/E-crime-Government-Response-Cm-8734.pdf">response to the E-crime report</a> in October 2013, stated:</p>
<blockquote>
<p>The government supports Home Affairs Committee recommendation 32. To this end the government has invested in the <a href="http://uk-tsi.org.uk">Trustworthy Software Initiative</a>, a public/private partnership initiative to develop guidance and information on secure and trustworthy software development.</p>
</blockquote>
<p>This sounded very hopeful. Maude’s <a href="http://www.parliament.uk/documents/commons-committees/science-technology/Correspondence/131031MaudeDigitalbyDefault.pdf">reply to the Science and Technology committee</a> that month was not published <a href="https://twitter.com/CommonsSTC/status/527074057515446272">until October 2014</a>, but stated much the same thing.</p>
<p>So one might guess that the TSI had been set up specifically to address the committee’s recommendation, but this turns out not to be the case. The TSI was established in 2011, in response to governmental concerns over (cyber) security. Its “<a href="http://www.uk-tsi.org/?page_id=1175">initiation phase</a>”, in which it drew on academic expertise on trustworthy software, ended in August 2014 with the production of a guide entitled the Trustworthy Security Framework, available as British Standards Institution standard <a href="http://shop.bsigroup.com/ProductDetail/?pid=000000000030284608">PAS 754:2014</a>.</p>
<p>This is a very valuable collection of risk-based software engineering practices for designing trustworthy software (and not, incidentally, the “agile, iterative and user-centric” practices described in the <a href="https://www.gov.uk/service-manual/digital-by-default">Digital by Default service manual</a>). But so far formal methods have been given no role in this. In a <a href="http://ssdri-web.s3-website-eu-west-1.amazonaws.com/TSI_2012_165_SQM_2012_Keynote_Web.pdf">keynote address</a> at the 2012 BCS Software Quality Metrics conference, TSI director <a href="http://www2.warwick.ac.uk/fac/sci/wmg/research/csc/people/">Ian Bryant</a> gave formal methods no more than a passing mention as a “technical approach to risk management”.</p>
<p>So the UK government has twice been advised to use mathematics and formal methods to ensure software correctness, and has twice indicated that the TSI is its vehicle for achieving this – yet nothing has happened. Testing times for software correctness, then – something that will continue for as long as it takes for Dijkstra’s message to sink in.</p>
<p class="fine-print"><em><span>Eerke Boiten is a senior lecturer in the School of Computing at the University of Kent, and director of the university’s interdisciplinary Centre for Cyber Security Research. He receives funding from EPSRC for the CryptoForma Network of Excellence on Cryptography and Formal Methods. He is a member of the BCS, a board member of its specialist group on Formal Aspects of Computer Science, and an editorial board member of its journal. Friedrich L. Bauer is his “academic grandfather”: see <a href="http://genealogy.math.ndsu.nodak.edu/id.php?id=76349">http://genealogy.math.ndsu.nodak.edu/id.php?id=76349</a>.</span></em></p>
<p class="fine-print"><em><span>Eerke Boiten, Senior Lecturer, School of Computing and Director of Interdisciplinary Cyber Security Centre, University of Kent. Licensed as Creative Commons – attribution, no derivatives.</span></em></p>
<h1>Shell shocked – but what should you do about the Bash bug? (26 September 2014)</h1>
<figure><img src="https://images.theconversation.com/files/60139/original/7fgpv8pm-1411703073.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Apple computers could be at risk from the latest Bash bug.</span> <span class="attribution"><a class="source" href="https://www.flickr.com/photos/ollily/3703644157">Flickr/Oliver</a>, <a class="license" href="http://creativecommons.org/licenses/by-nc/4.0/">CC BY-NC</a></span></figcaption></figure>
<p>A <a href="http://www.staysmartonline.gov.au/alert_service/message?id=1136622&name=Severe+Bash+vulnerability+affects+Unix-based+systems+including+Linux+and+Mac+OSX#.VCTY7PmSwa8">serious security flaw</a> has been discovered in a ubiquitous utility program present on a wide variety of important computer systems, including many Unix-based servers and Macintosh desktop computers.</p>
<p>“<a href="http://www.abc.net.au/news/2014-09-26/shellshock-bug-leaves-up-to-500-million-computers-at-risk/5770952">Shell shock</a>”, as it has been dubbed, has meant another round of sleepless nights for system administrators around the world as they attempt to protect their systems, and Mac users should be wary until a fix for their systems is available.</p>
<p>The security flaw, discovered by Edinburgh-based programmer Stéphane Chazelas, affects a software tool called Bash.</p>
<h2>Bash – the duct tape of a Unix system</h2>
<p>Bash is a <a href="http://www.tutorialspoint.com/unix/unix-shell.htm">Unix shell</a>, or “command-line interpreter”, which is a tool that people who used a personal computer in the 1980s and early 1990s were all too familiar with, but younger computer users may never have seen directly. </p>
<figure class="align-center ">
<img alt="" src="https://images.theconversation.com/files/60129/original/rx287cs9-1411698647.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/60129/original/rx287cs9-1411698647.png?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=369&fit=crop&dpr=1 600w, https://images.theconversation.com/files/60129/original/rx287cs9-1411698647.png?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=369&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/60129/original/rx287cs9-1411698647.png?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=369&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/60129/original/rx287cs9-1411698647.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=464&fit=crop&dpr=1 754w, https://images.theconversation.com/files/60129/original/rx287cs9-1411698647.png?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=464&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/60129/original/rx287cs9-1411698647.png?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=464&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">A shell being used interactively. This system has Bash installed, but uses an alternative for system administration purposes.</span>
</figcaption>
</figure>
<p>Shells have a similar job to the recently reinstated Start Menu on a Windows PC – they are used to start other applications on a system. Despite the fact that most non-technical users haven’t had to use shells for many years, they are still installed on every Windows or Mac OS X computer, as well as all Linux and Unix systems.</p>
<p>Windows systems use their own unique shell, which is not affected by the current bug. But many (though not all) Unix-based systems, including Mac OS X, by default use Bash.</p>
<p>Bash (which stands for Bourne Again SHell) was first released in 1989 by programmer Brian Fox and is now <a href="http://www.gnu.org/software/bash/">distributed as free</a> (open source) software by the GNU Project. Its design can be directly traced back to the origins of Unix in the late 1960s.</p>
<p>System administrators and programmers still often use shells directly, for a variety of reasons. But the security risk from the current bug primarily relates to another use of shells – as a largely invisible intermediary when one program starts another.</p>
<p>Starting a program may appear simple, but the process of figuring out exactly which program to execute, and providing configuration information, can actually be quite complicated.</p>
<p>Therefore, many systems delegate this process to the shell, rather than tackling it directly, and Bash acts as the duct tape that binds systems together. For instance, the Apache web server can use Bash in this way to invoke other programs to generate dynamic web pages.</p>
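This intermediary role is easy to see from any programming language. In the hypothetical Python sketch below, `shell=True` asks the operating system's default shell (Bash on many Unix-like systems) to interpret the command, rather than running a program directly:

```python
import subprocess

# With shell=True, Python does not run "echo" itself; it hands the whole
# string to the system shell, which parses it and starts the program.
result = subprocess.run("echo hello from the shell",
                        shell=True, capture_output=True, text=True)
print(result.stdout.strip())  # prints "hello from the shell"
```

Any program that delegates command handling in this way inherits the behaviour – and the bugs – of whichever shell the system provides.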
<h2>Mishandling configuration information</h2>
<p>The bug in Bash, present in all versions dating back at least to 1994, relates to the handling of configuration information. (A more technical summary of the bug and its consequences <a href="https://securityblog.redhat.com/2014/09/24/bash-specially-crafted-environment-variables-code-injection-attack/">is available</a> from Unix vendor Red Hat.)</p>
<p>Bash should simply pass such configuration information to the programs it starts on either the user’s or another program’s behalf. But a maliciously formatted configuration “string” can cause Bash to do literally anything the “user” running Bash has permission to do.</p>
<p>When Bash was used as originally designed, by a human at a command prompt, this was no big deal. A user who could enter these configuration strings could issue the same (potentially malicious) commands at a command prompt anyway.</p>
<p>The problem today is that other programs, accessible via a network, pass information received from possibly malicious sources on the internet as configuration strings to Bash. Bash could then misinterpret these as commands to execute. </p>
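To make that data flow concrete, here is a simplified, hypothetical Python sketch of how a network-facing program passes outside data to a child process as configuration strings (environment variables, following CGI naming conventions; the payload here is harmless):

```python
import os
import subprocess

# A web server following CGI conventions copies request data, such as
# headers, into environment variables for the program it launches.
env = dict(os.environ)
env["HTTP_USER_AGENT"] = "text supplied by a remote client"

# The child process inherits that environment. With a vulnerable Bash as
# the intermediary, a specially crafted value would instead be executed
# as commands.
result = subprocess.run(["sh", "-c", 'echo "$HTTP_USER_AGENT"'],
                        env=env, capture_output=True, text=True)
print(result.stdout.strip())  # prints "text supplied by a remote client"
```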
<p>For instance, as previously mentioned, one way the common Unix-based Apache web server can dynamically generate web pages uses Bash in an intermediary role.</p>
<p>If this particular feature is enabled on a specific web server, a remote attacker could send a malicious request for a web page that causes Bash to be invoked, and the malformatted configuration information passed to Bash. Bash will then run the commands the attacker requests on the web server, giving the attacker full control over the server.</p>
<p>While web servers are one of the more obvious ways in which this vulnerability could be exploited, it is not the only one. Many other standard services, accessible from a network, are vulnerable. For instance, the standard tools for configuring network access are a potential entry point, putting not only servers but desktop computers with the vulnerable version of Bash at risk.</p>
<p>Because it can be easily exploited remotely, and potentially gives an attacker full control over a system, this vulnerability is known as a “remote root” exploit – the worst kind.</p>
<p>Given this, and the ubiquity of vulnerable systems, security analysts <a href="http://blog.erratasec.com/2014/09/bash-bug-as-big-as-heartbleed.html#.VCTR6PmSx8E">have described it as comparable</a> to the earlier “<a href="https://theconversation.com/how-the-heartbleed-bug-reveals-a-flaw-in-online-security-25536">Heartbleed</a>” vulnerability. </p>
<h2>Are you at risk from shell shock?</h2>
<p>For system administrators, shell shock has already been, and will continue to be, a headache, particularly as an early attempt to fix the vulnerability does not provide full protection.</p>
<p>At this stage, the general internet-using public will probably need to do less than for Heartbleed, for the following reasons:</p>
<ul>
<li>There is, as yet, no reported evidence of attackers using shell shock to attack systems before the public disclosure of the bug</li>
<li>Bash is not installed by default on Windows PCs and servers, and while it is available, very few systems have it installed. So the vast majority of desktop PCs are not affected directly</li>
<li>Most servers for high-profile websites long ago ceased to use Bash as an intermediary between the web server and content generation programs, both for security and performance reasons. Therefore, vulnerable servers are likely to be less high-profile ones, unlike Heartbleed where some of the world’s largest websites were vulnerable</li>
</ul>
<p>If a website finds that it was vulnerable, its users may be asked to change their passwords; at this point, though, users should wait until individual websites request it.</p>
<p>Periodically changing passwords is a good idea in any case, but doing so right now, unless specifically advised, is not: it is better to wait for administrators to fix the bug first.</p>
<h2>Macs are vulnerable</h2>
<p>No official patches to fix the underlying vulnerability have been released by Apple yet. The unofficial patches circulating are not only very difficult for non-technical users to install, but they also don’t yet fully protect against the vulnerability.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/60149/original/vsqdvzxn-1411705625.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/60149/original/vsqdvzxn-1411705625.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/60149/original/vsqdvzxn-1411705625.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=401&fit=crop&dpr=1 600w, https://images.theconversation.com/files/60149/original/vsqdvzxn-1411705625.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=401&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/60149/original/vsqdvzxn-1411705625.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=401&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/60149/original/vsqdvzxn-1411705625.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=504&fit=crop&dpr=1 754w, https://images.theconversation.com/files/60149/original/vsqdvzxn-1411705625.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=504&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/60149/original/vsqdvzxn-1411705625.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=504&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Apple users should take care and wait for patches.</span>
<span class="attribution"><a class="source" href="https://www.flickr.com/photos/andrewscott/3538968660">Flickr/Andrew Scott</a>, <a class="license" href="http://creativecommons.org/licenses/by-nc-sa/4.0/">CC BY-NC-SA</a></span>
</figcaption>
</figure>
<p>Users who administer their own system should install system updates as soon as they become available from Apple. </p>
<p>Most Mac desktop systems do not have many network-accessible “server” programs running on them, limiting the ways in which the bug could be exploited.</p>
<p>The usual suspects of dodgy email attachments represent one possibility but as yet no “proof of concept attacks” along these lines have been reported.</p>
<p>But connecting to a malicious Wi-Fi hotspot is one way systems – particularly laptops – could be attacked, and the attacker could gain full access to the system.</p>
<p>Again, no such attacks have been reported to date but Mac users should be very careful about what Wi-Fi networks they choose to connect to until this vulnerability is patched.</p>
<p class="fine-print"><em><span>Robert Merkel has previously received Australian Research Council Discovery Project grants in the areas of software testing and reliability.</span></em></p>
<p class="fine-print"><em><span>Robert Merkel, Lecturer in Software Engineering, Monash University. Licensed as Creative Commons – attribution, no derivatives.</span></em></p>
<h1>The Heartbleed bug continues to pose risks for people (3 July 2014)</h1>
<figure><img src="https://images.theconversation.com/files/52916/original/n736wm4y-1404352160.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">You could still be at risk from the Heartbleed bug.</span> <span class="attribution"><span class="source">Igor Stevanovic</span></span></figcaption></figure>
<p>It’s been almost three months since the <a href="http://heartbleed.com/">Heartbleed</a> bug was revealed and many thousands of computer servers still need to be fixed.</p>
<p>The Australian government’s <a href="http://www.staysmartonline.gov.au/alert_service/alerts/heartbleed_update_more_than_300,000_web_servers_are_still_vulnerable#.U7Jpp42Syrw">Stay Smart Online initiative</a> this week points to research by security expert Robert Graham, who identified 600,000 vulnerable servers after the Heartbleed bug was <a href="http://blog.cloudflare.com/staying-ahead-of-openssl-vulnerabilities">made public in April</a>. He says <a href="http://blog.erratasec.com/2014/06/300k-vulnerable-to-heartbleed-two.html#.U7HuDvmSx8H">300,000 servers</a> still remain exposed as of late June.</p>
<p>Managing security problems in complex IT infrastructure is uncannily like managing pests on a farm. If they are handled promptly, problems are minimised.</p>
<p>But if they are neglected, the problems will grow, do more damage and take more work to rectify when they are finally dealt with.</p>
<p>The equivalent of an insect plague arrived on the paddocks of the world’s IT system administrators in April 2014 when the Heartbleed vulnerability was first revealed.</p>
<h2>The Heartbleed risk</h2>
<p>The <a href="https://theconversation.com/topics/heartbleed">Heartbleed</a> bug was <a href="https://theconversation.com/how-the-heartbleed-bug-reveals-a-flaw-in-online-security-25536">a programming mistake</a> in the <a href="http://www.openssl.org/">OpenSSL</a> security library used by a large proportion of the world’s internet software. It left much of the world’s IT infrastructure vulnerable to cybercriminals.</p>
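The essence of that programming mistake was trusting a length field supplied by the remote peer. The hypothetical Python sketch below reproduces the pattern in miniature; it is an analogy for illustration, not OpenSSL's actual C code:

```python
# A heartbeat-style request: the client sends a payload and declares its
# length, and the server echoes the payload back.
def heartbeat_reply(payload: bytes, claimed_len: int) -> bytes:
    # Simulated server memory: the payload sits next to sensitive data.
    memory = payload + b"|SECRET_PRIVATE_KEY|USER_PASSWORD"
    # The bug: trust claimed_len instead of checking len(payload).
    return memory[:claimed_len]

print(heartbeat_reply(b"ping", 4))   # honest request: b'ping'
print(heartbeat_reply(b"ping", 24))  # inflated length leaks adjacent secrets
```

The fix was, in spirit, a bounds check: reject any request whose claimed length exceeds the actual payload.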
<p>Keeping systems secure required system administrators to not only update software, but obtain new “master keys” to re-establish their corporate electronic identity. In many cases they also had to ask their users to change passwords.</p>
<p>It is likely that the global cost of dealing with Heartbleed has already run into the hundreds of millions of dollars.</p>
<p>A <a href="https://theconversation.com/six-more-bugs-found-in-popular-openssl-security-tool-27679">second round of problems</a> in the same software was identified in June 2014, again requiring considerable remedial action by vendors and system administrators.</p>
<h2>But the problem persists</h2>
<p>A few months on from Heartbleed the majority of internet-accessible systems that were vulnerable have been secured, but not all.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/52920/original/pxtwttyq-1404353763.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/52920/original/pxtwttyq-1404353763.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/52920/original/pxtwttyq-1404353763.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=398&fit=crop&dpr=1 600w, https://images.theconversation.com/files/52920/original/pxtwttyq-1404353763.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=398&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/52920/original/pxtwttyq-1404353763.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=398&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/52920/original/pxtwttyq-1404353763.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=500&fit=crop&dpr=1 754w, https://images.theconversation.com/files/52920/original/pxtwttyq-1404353763.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=500&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/52920/original/pxtwttyq-1404353763.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=500&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">The 4.1.1 version of Android’s Jelly Bean operating system remains a risk.</span>
<span class="attribution"><a class="source" href="https://www.flickr.com/photos/frikjan/7988113282">Flickr/Frikjan</a>, <a class="license" href="http://creativecommons.org/licenses/by-nc-nd/4.0/">CC BY-NC-ND</a></span>
</figcaption>
</figure>
<p>For instance, many older Android smartphones have a firmware version (4.1.1) that <a href="http://www.pcmag.com/article2/0,2817,2456507,00.asp">contained the vulnerable code</a>. Protecting these phones required the firmware supplier to either patch the supplied version to fix the bug, or update to a newer version of Android.</p>
<p>While exploiting the bug on a smartphone is much harder than on a server, it remains possible. Therefore vulnerable phones should be updated to protect them.</p>
<p>Google made updates available to the manufacturers of smartphones shortly after discovering the problem but manufacturers then had to apply Google’s fixes to the specific firmware for each of their affected models, and test the fixed version.</p>
<p>Even then, updates for many phones were not made available to consumers, as phones are often sold with customised firmware from carriers.</p>
<p>The major Australian carriers – Telstra, Optus and Vodafone – provide custom firmware in phones sold from their retail outlets. Each carrier would then have had to package and test the update for the customised version for each vulnerable phone model.</p>
<p>Given the relatively limited resources at each individual carrier for such testing, it’s no surprise that this process took a long time. For instance, it took Vodafone Australia <a href="http://support.vodafone.com.au/articles/FAQ/HTC-One-X-software-update">until June 16</a> to supply fixed firmware for one model, the HTC One X.</p>
<p>Other carriers, and other phones running this Android version, may still be vulnerable. Users of Android phones should consider downloading the free <a href="https://blog.lookout.com/blog/2014/04/09/heartbleed-detector/">Lookout Heartbleed Detector</a> from the Google Play store to check.</p>
<h2>Why so slow to fix the bug?</h2>
<p>The issues illustrated by the slow rollout of Android updates are specific examples of the kinds of problems faced by both software vendors and system administrators in dealing with security vulnerabilities.</p>
<p>Fixing the problem in the software is often the easy part. Deploying the fix across the many affected systems, and testing to ensure that the fix doesn’t create additional problems, is where the real work lies, particularly when security updates are bundled with other unrelated fixes that may have side effects.</p>
<p>Information security analyst Marco Ostini, who works at the Australian Computer Emergency Response Team (<a href="https://www.auscert.org.au/">AusCERT</a>), says this leads to “<a href="http://www.itnews.com.au/News/388961,vendors-slow-to-patch-openssl-vulnerabilities.aspx">vulnerability mitigation fatigue</a>” where fixes are not being deployed on many systems.</p>
<h2>The problem with orphans</h2>
<p>The systems and software packages that aren’t being updated are “orphans” – that is, no one is taking responsibility for keeping them updated to protect against security issues.</p>
<figure class="align-right zoomable">
<a href="https://images.theconversation.com/files/52930/original/dbfhwvwz-1404356759.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/52930/original/dbfhwvwz-1404356759.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=237&fit=clip" srcset="https://images.theconversation.com/files/52930/original/dbfhwvwz-1404356759.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=727&fit=crop&dpr=1 600w, https://images.theconversation.com/files/52930/original/dbfhwvwz-1404356759.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=727&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/52930/original/dbfhwvwz-1404356759.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=727&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/52930/original/dbfhwvwz-1404356759.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=913&fit=crop&dpr=1 754w, https://images.theconversation.com/files/52930/original/dbfhwvwz-1404356759.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=913&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/52930/original/dbfhwvwz-1404356759.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=913&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Heartbleed.</span>
</figcaption>
</figure>
<p>Phones running the vulnerable version of Android, 4.1.1, were actually examples of orphan devices, as most suppliers had ceased providing updates for them. Because of the scale of the security risk, an exception was made for Heartbleed. </p>
<p>Orphan servers are often operated by smaller organisations, or smaller divisions within larger ones, that lack the expertise to maintain them.</p>
<p>They may be running old, unsupported software that nevertheless continues to perform some useful but often relatively small task. A common example is a computer in an engineering environment such as a factory that uses vendor-specific software to control some expensive, valuable, but ageing device.</p>
<p>If the vendor has ceased to support the software, there may be no way to fix it. Even if the software is open source the individual customer will often not have the expertise to perform the fix themselves.</p>
<p>But sometimes orphan servers <em>are</em> simply the result of tired system administrators suffering from the so-called “vulnerability mitigation fatigue”. Maintaining servers, particularly those running old and relatively unusual software, is a great deal of work and the rewards are often not clear.</p>
<h2>If it ain’t broke … still fix it</h2>
<p>It’s tempting to simply say “if it ain’t broke, don’t fix it”. Unfortunately, IT security doesn’t work that way. </p>
<p>Aside from the risk of data loss from the specific system, a compromised server within a wider corporate network may leave a gap in the metaphorical fence for further attacks.</p>
<p>Therefore, managing IT infrastructure requires vigilance to ensure even lower-profile systems are kept protected, and careful design to reduce the consequences of a single system being compromised.</p>
<p>Even if the consequences to the organisation of a particular system being compromised are not great, such a system still represents a safe and anonymous electronic haven from which cybercriminals can do further damage. In the farm analogy, it is the equivalent of the neglectful neighbour’s weed-infested paddock.</p>
<p>The internet has become an essential part of our global society but it is vulnerable to criminal activity, and will ever be thus. The continuing aftermath of Heartbleed increases that vulnerability.</p>
<p>That is why we need diligence on the part of those who develop and manage IT systems to not only protect their own little patches, but to help keep the pests under control more generally.</p>
<p class="fine-print"><em><span>Robert Merkel has previously received Australian Research Council grants in the area of software testing and reliability.</span></em></p>
<p class="fine-print"><em><span>Robert Merkel, Lecturer in Software Engineering, Monash University. Licensed as Creative Commons – attribution, no derivatives.</span></em></p>
<h1>Malware is everywhere so watch out for the fake healers (13 May 2014)</h1>
<figure><img src="https://images.theconversation.com/files/48197/original/qynx8tmw-1399654213.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">You could hire an army to protect yourself. Or just do your research.</span> <span class="attribution"><a class="source" href="https://www.flickr.com/photos/han_shot_first/8075955051/in/photolist-diDo5k-az14Zt-4BYp6j-anPJRA-anLXTx-8sNYX-3aDoex-2QfiZW-7Udnx-az14ox-4R86Z3-4R7Phh-kfsHf-c1utWw-2DZ8RP-am7XeV-AAUB2-3aHV4L-kfsxy-7ATbmq-c1uusG-pA2vR-kfsCp-kfstd-AAUB9-7JPirN-7EjAHH-61hnAg-c5N7h9-5TCKMG-akDuJt-akDuRi-5TtHyp-5w4hRp-dEbiR4-BoJgM-9RuyMn-9RxrBL-aUxkxe-5nbBDP-5nbBxa-5nbBQT-dEbhvT-dEgEVL-dEgEYQ-dEgG1w-aUo3W2-dooydD-auE3ur/">Michael Li</a>, <a class="license" href="http://creativecommons.org/licenses/by-nc/4.0/">CC BY-NC</a></span></figcaption></figure>
<p>There is nothing worse than having a fake healer offer a cure that does absolutely nothing. History is full of tales of frauds and quacks offering a cure-all, which eventually turns out to be nothing more than a bitter-tasting facsimile of the real thing.</p>
<p>Google has recently removed an Android <a href="http://www.theregister.co.uk/2014/04/08/google_kills_virus_app_after_decompilation_proves_its_a_fake/">fake anti-malware application</a> called Virus Shield, fearing that it did the exact opposite. Based on the reports, this app was fortunately benign and did not appear to infect the smartphones or tablets of its users. But it could have been worse, potentially opening up their devices to many undesirable exploits.</p>
<p>The problem is widespread and has been for some time. Cybercrime isn’t just about exploiting technology: some of the most successful scams are those that exploit your trust.</p>
<p><a href="http://www.techterms.com/definition/malware">Malware</a> is a term used to cover a wide range of attacks. A virus is just one of many styles of attack, though it is the oldest and the best understood by the majority of computer users. Others include Trojans, where an application you download has hidden code designed to reach out to a remote party; worms, which spread via email or insecure networks; and zombies, which are used by cybercriminals to exploit your computer’s resources.</p>
<p>There is a chance that you could fall for <a href="http://www.techterms.com/definition/popup">pop-ups</a> and operating system windows that look like the <a href="http://arstechnica.com/security/2014/02/what-a-fake-antivirus-attack-on-a-trusted-website-looks-like/">real deal</a>, or download a fake anti-malware application which itself turns out to be malware.</p>
<p>Some websites serve up fakes that mimic popular anti-malware applications such as AVG and Sophos. These look like applications that could help you, but they are counterfeit. The riskiest sites are those associated with illegal software downloads, pirate copies of movies and pornography. Cybercriminals trade on the notion that you are unlikely to admit to what you were doing at the time you made the mistake of clicking on the pop-up and letting their download compromise your system.</p>
<p>Or, as is often the case, they trade on our desire for a good deal. If a deal seems too good to be true, it often is. This is no different with anti-malware applications. The price is often right, you like the promises made and the name of the application may even sound genuine. Checking the source of an application is just as important as checking that it is the right product.</p>
<p>Discovering a fake app in its store is embarrassing for Google. But the reality is that it is your responsibility to double check the credibility of anything you download. In the case of anti-malware applications, checking to see if the creators are well-known is essential. There are many credible anti-malware software houses around the world.</p>
<p>New start-ups are welcomed by the industry but if you are unsure, then you are best advised to do some research before installation, such as by looking at different <a href="http://www.pcadvisor.co.uk/reviews/anti-virus/102/">review sites</a>.</p>
<p>Cybercriminals understand human nature, even when developers make considerable efforts to secure systems. The weakest link is always the human part of the chain, and exploiting it is known as <a href="http://searchsecurity.techtarget.com/definition/social-engineering">social engineering</a>. For you and me, it pays to be vigilant, and we should be cautious whenever we are offered a good deal to secure our devices.</p><img src="https://counter.theconversation.com/content/25419/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Andrew Smith does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>There is nothing worse than having a fake healer offer a cure that does absolutely nothing. History is full of tales of frauds and quacks offering a cure for all, which eventually turn out to be nothing…Andrew Smith, Lecturer in Networking, The Open UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/258872014-04-24T05:14:18Z2014-04-24T05:14:18ZLet’s not panic like it’s 1999 as we clean up after Heartbleed<figure><img src="https://images.theconversation.com/files/46937/original/s448pv38-1398265084.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">The internet didn't fall to pieces at the millennium and it won't now.</span> <span class="attribution"><a class="source" href="https://www.flickr.com/photos/drinksmachine/495352477/in/photolist-8eUzq1-51sByA-51ojop-7scvow-8VE6Db-8VE6CG-8VE6CY-8VE6CL-Nn4Df-4fy2jJ-GFtSi-6NUA7d-73nTXo-hRQMZx-6NQo2V-8dvaw-7LPHj-6NUyHf-6NUzqh-6ngg6-8G6BVF-kSzp6K-6BXyT-sTWWz-cA3qYs-sTDDn-bAgWD3-bPbAcx-bxqEQY-bxmDFm-bxqEMq-bxmDWN-73zxgf-51ojDi-8G9LN5-524Lqu-2vjqj5-bLk1eK-bxqHJE-bLjZpt-bxqhQ9-4mc4Bg-6DsSe8-7J55CN-8twF7P-nTcHV-6Gpiq7-6GpisY-6GpiE1-KLP2X">drinksmachine</a>, <a class="license" href="http://creativecommons.org/licenses/by/4.0/">CC BY</a></span></figcaption></figure><p>Take a moment to jump back in your mental time machine to 31 December 1999. It was the biggest New Year’s Eve for a thousand years. The dawn of a new millennium. But as we prepared to party, the world was also gripped by the fear that digital infrastructure was about to come crashing down around us.</p>
<p>For all we knew, the <a href="http://news.bbc.co.uk/hi/english/static/millennium_bug/countries/default.stm">millennium bug</a> would hit at midnight, causing untold havoc on the computers upon which we had come to depend. Those of us old enough to remember may have felt a similar sense of dread over the past few weeks as we faced the implications of the Heartbleed security flaw.</p>
<p>We were caught in the hype in 1999 and let others dictate what we needed to do. That left us vulnerable to people who wanted to take advantage. We should learn our lesson from that time as we deal with <a href="https://theconversation.com/explainer-should-you-change-your-password-after-heartbleed-25506">Heartbleed</a> and as we approach the next big security glitch. </p>
<h2>The apocalypse that wasn’t</h2>
<p>The millennium bug, also known as the Y2K bug, was a real issue, a throwback to historical programming from the 1960s and 1970s. </p>
<p>For many years, operating systems, hardware, software and many other devices stored the year as just two digits and made their calculations accordingly. The switch from 99 to 00 as the millennium came to an end meant that some systems, such as those used by your bank, could be thrown into immediate chaos. They wouldn’t know whether it was 1900 or 2000.</p>
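The underlying problem is simple to demonstrate. Here is a hypothetical sketch in Python (the dates and record values are illustrative only) showing why a two-digit year breaks down at the rollover:

```python
from datetime import datetime

# With only two digits, "00" is ambiguous: 1900 or 2000?
# Naive arithmetic on two-digit years gives nonsense at the rollover.
opened, closed = 99, 0    # a record opened in '99 and closed in '00
print(closed - opened)    # -99 "years" elapsed, instead of 1

# Modern parsers resolve the ambiguity with an arbitrary pivot:
# Python's %y maps 00-68 to 2000-2068 and 69-99 to 1969-1999.
print(datetime.strptime("01/01/00", "%d/%m/%y").year)  # 2000
print(datetime.strptime("01/01/70", "%d/%m/%y").year)  # 1970
```

A pivot like this is itself only a convention; code written in the 1960s and 1970s had no such convention, which is why systems had to be audited and patched one by one.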
<p>The story went that many critical systems, including air traffic control, security control systems and financial systems, relied on the date and time to complete their automated tasks. If they were confused about the date, human safety and security could have been on the line.</p>
<p>The millennium bug came with considerable hype and scaremongering in the press. Some outlets discussed the potential for planes to simply fall out of the sky. Whether you were around in 1999 or not, you probably know that this didn’t actually happen in the end.</p>
<p>But even though much of the hype was unwarranted, the millennium bug was a realistic concern. By 1999, the internet was popular across the world, even if it wasn’t the backbone of our very existence. Home computers were becoming a standard feature and many societies had become dependent on computer technology to support everyday experiences. Online shopping had already begun and many of us were already printing out tickets for economy airlines.</p>
<p>Cynics would say that some IT experts <a href="http://techie.com/how-the-y2k-scare-made-panic-into-profit/">profited</a> from Y2K, making a killing from the fear, hype and misunderstanding that surrounded it by selling advice and software to protect against the worst.</p>
<p>While Y2K didn’t cause total societal meltdown, there were still some problems. Some <a href="http://news.bbc.co.uk/1/hi/business/582007.stm">cash machines and card readers failed</a>, for example, and were out of action for around two days. But many of the big issues it might have caused were addressed in advance of New Year’s Eve.</p>
<h2>Learning the lesson</h2>
<p>Considering the current media coverage of <a href="https://theconversation.com/explainer-should-you-change-your-password-after-heartbleed-25506">Heartbleed</a>, you could be forgiven for thinking that we have not learnt from history.</p>
<p>Just as in 1999, the general public was heavily affected. Up to 60% of websites were vulnerable to the Heartbleed security flaw, but users of those sites were left with mixed messages. Should they change their passwords? Was their bank, social network or email under threat? Would they be robbed? Would their identity be stolen? Was it the end of the internet as we know it?</p>
<p>As the media spread panic, people all over the world struggled to keep up. But now that we know we should probably change our passwords to be on the safe side, how many people have actually done it? Probably only a tiny fraction. Still, the internet has not crumbled. A security meltdown has not yet been reported. </p>
<p>For both Heartbleed and the millennium bug, the problem was real and issues did occur. But with intervention from technical experts, both were eventually resolved. While Heartbleed may linger for a little while longer, I doubt the millennium bug remains an issue.</p>
<p>Hopefully, Heartbleed has taught us all to be a bit more careful about our passwords, and it should serve to prove that panic helps no one. The disasters averted in 1999 and 2014 should also guide us as we look to <a href="http://2038bug.com/">2038</a>, the next date that could confuse our computers and the year when the next big bug could hit our systems. It is a while yet before anyone need be concerned, but it is an entirely predictable issue.</p>
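The arithmetic behind the 2038 problem is easy to check. Many systems store time as a signed 32-bit count of seconds since 1 January 1970, and a short Python sketch shows where that count runs out:

```python
from datetime import datetime, timezone

# A signed 32-bit integer tops out at 2**31 - 1.
MAX_INT32 = 2**31 - 1  # 2,147,483,647

# Interpreted as seconds since the Unix epoch (1 January 1970 UTC),
# that count runs out early in 2038.
rollover = datetime.fromtimestamp(MAX_INT32, tz=timezone.utc)
print(rollover)  # 2038-01-19 03:14:07+00:00

# One second later, a 32-bit time value wraps around to a large negative
# number, which affected code would read as a date in December 1901.
wrapped = MAX_INT32 + 1 - 2**32
print(datetime.fromtimestamp(wrapped, tz=timezone.utc))  # 1901-12-13 20:45:52+00:00
```

The fix, already under way on most platforms, is to store the count in 64 bits, which pushes the rollover billions of years into the future.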
<p>In all technology reports, when you start seeing every expert saying different things, it can be difficult to know how to act. That is because collectively we do not yet know the extent of the problem. So the best thing is to stay calm, wait, and make an informed decision rather than react to the first piece of advice that comes your way.</p><img src="https://counter.theconversation.com/content/25887/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Andrew Smith does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>Take a moment to jump back in your mental time machine to 31 December 1999. It was the biggest New Year’s Eve for a thousand years. The dawn of a new millennium. But as we prepared to party, the world…Andrew Smith, Lecturer in Networking, The Open UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/255362014-04-11T04:46:32Z2014-04-11T04:46:32ZHow the Heartbleed bug reveals a flaw in online security<figure><img src="https://images.theconversation.com/files/46184/original/kdcrqcyp-1397189103.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Does Heartbleed expose flaws in the way some security-critical software is developed?</span> <span class="attribution"><a class="source" href="https://www.flickr.com/photos/kaleenxian/2912692337">Flickr/Kaleenxian</a>, <a class="license" href="http://creativecommons.org/licenses/by-nc-nd/4.0/">CC BY-NC-ND</a></span></figcaption></figure><p>The <a href="http://www.abc.net.au/news/2014-04-10/heartbleed-bug-password-reset-data-openssl/5379604">Heartbleed bug</a> that’s potentially exposed the personal and financial data of <a href="http://www.news.com.au/technology/heartbleed-bug-in-openssl-renders-internet-insecure/story-e6frfrnr-1226878634218">millions of people</a> stored online has also exposed a hole in the way some security software is developed and used.</p>
<p>The bug is in an extremely widespread piece of software called <a href="https://www.openssl.org/">OpenSSL</a>. OpenSSL allows programmers to write systems that send sensitive data such as financial or medical information over the internet, with confidence that anybody “listening in” will only get indecipherable gibberish.</p>
<p>It also provides a way to prove that a message came from a particular organisation’s computer, so that you can be confident you’re sending your credit card details to Amazon or Apple rather than a criminal.</p>
<h2>How was OpenSSL developed?</h2>
<p>OpenSSL is not the only tool that provides these facilities, but it is by far the most common, due to its free availability and long history.</p>
<p>OpenSSL dates from the late 1990s, and like many other crucial pieces of internet software, is developed by a loosely-organised global bunch of hobbyists, students and volunteers.</p>
<p>It is made available as <a href="https://theconversation.com/topics/open-source">open source</a> software for anyone to use for free on very liberal terms. Most of the world’s internet servers – and every Android smartphone – use a great deal of software developed in this manner, though many such developer teams include paid professionals from companies who use the software.</p>
<h2>The Heartbleed bug</h2>
<p>On New Year’s Eve 2011, German researcher and OpenSSL contributor Robin Seggelmann added <a href="https://github.com/openssl/openssl/commit/96db9023b881d7cd9f379b0c154650d6c108e9a3">code</a> implementing a new feature called “heartbeats”.</p>
<figure class="align-right zoomable">
<a href="https://images.theconversation.com/files/46188/original/q32grzh4-1397189479.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/46188/original/q32grzh4-1397189479.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=237&fit=clip" srcset="https://images.theconversation.com/files/46188/original/q32grzh4-1397189479.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=727&fit=crop&dpr=1 600w, https://images.theconversation.com/files/46188/original/q32grzh4-1397189479.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=727&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/46188/original/q32grzh4-1397189479.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=727&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/46188/original/q32grzh4-1397189479.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=913&fit=crop&dpr=1 754w, https://images.theconversation.com/files/46188/original/q32grzh4-1397189479.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=913&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/46188/original/q32grzh4-1397189479.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=913&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Heartbleed needed a bugfix.</span>
</figcaption>
</figure>
<p>The <a href="https://tools.ietf.org/html/rfc6520">idea</a> was straightforward: if a connection between two computers stays silent for too long, it is disconnected, so periodic “heartbeat” messages can keep the connection going.</p>
<p>As well as a simple “I’m here”, messages contain an arbitrary “payload” which is sent back and forth, a little like this:</p>
<p><strong>Computer 1</strong>: “Hi, I’m still here, the payload is 5 characters long and is ‘12345’.”</p>
<p><strong>Computer 2:</strong> “Hi, great, you’re still there, and your payload was 5 characters long and was ‘12345’.”</p>
<p>Unfortunately, Seggelmann’s code didn’t check that the payload was of the indicated length, so a malicious request could ask for more data than was in the payload:</p>
<p><strong>Computer 1:</strong> “Hi, I’m still here, the payload is 50,000 characters long and is ‘12345’.”</p>
<p>Computer 2 would then send back a message with a payload of the requested length, the first characters of which would be the ‘12345’ that was sent. The rest would be whatever happened to be in the computer’s memory next to the payload.</p>
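The exchange above can be sketched in a few lines. This is a simplified Python model, not OpenSSL’s actual C code; the buffer contents, offsets and function names are illustrative only:

```python
# A toy model of the heartbeat exchange. `memory` stands in for the
# server process's memory: the received payload sits right alongside
# other data, such as another user's secret.
def heartbeat_reply(memory: bytes, payload_start: int, claimed_len: int) -> bytes:
    # Buggy version: trusts claimed_len with no bounds check,
    # just as the pre-fix OpenSSL code trusted the message header.
    return memory[payload_start : payload_start + claimed_len]

def heartbeat_reply_fixed(memory: bytes, payload_start: int,
                          claimed_len: int, actual_len: int) -> bytes:
    # The fix: refuse any request whose claimed length exceeds
    # the payload that actually arrived.
    if claimed_len > actual_len:
        return b""  # silently drop the malformed request
    return memory[payload_start : payload_start + claimed_len]

memory = b"12345" + b"...password=hunter2..."

print(heartbeat_reply(memory, 0, 5))   # b'12345' -- the honest case
print(heartbeat_reply(memory, 0, 27))  # also leaks the secret next to the payload
```

In real C code the over-read is not clamped to a tidy buffer as it is here: the reply simply copies whatever bytes follow the payload in process memory, which is what made the leak so damaging.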
<p>The exact contents sent back varied between systems and over time. But as well as information such as user passwords or private data, it could contain something called the private master key.</p>
<p>With access to this key, an “attacker” can electronically impersonate the organisation that rightfully owns the key, and unscramble all the private messages sent to that organisation – including old ones, if they’ve kept the previously unreadable scrambled versions.</p>
<p>Criminals could, for instance, steal the key of a major bank and then electronically impersonate it. It’s a potential field day for spies, too.</p>
<h2>Discovery and consequences</h2>
<p>The buggy code was incorporated into a June 2012 release of OpenSSL that was widely adopted, and there it stayed until discovered virtually simultaneously by Google’s security team, and <a href="http://www.codenomicon.com/">Codenomicon</a>, an internet security company.</p>
<p>Before <a href="http://heartbleed.com/">informing the public</a>, they informed the OpenSSL developers, who fixed the bug by adding the missing checks.</p>
<p>At this moment, there is no evidence that anybody has maliciously exploited the bug, but system administrators have acted both to prevent exploitation and to reduce the consequences if it has already occurred.</p>
<p>The fix is simple. The task of getting it deployed to the millions of systems using OpenSSL is not.</p>
<p>System administrators across the world have been furiously installing the fix on millions of computers. They’re also scrambling to generate new master keys.</p>
<p>For most end users, the biggest nuisance will come when administrators request <a href="https://theconversation.com/explainer-should-you-change-your-password-after-heartbleed-25506">password changes</a>.</p>
<p>Most users have multiple internet accounts; many of these will be affected by the Heartbleed bug, and their administrators will ask users to change passwords in case they have been stolen.</p>
<p>In addition, many embedded computers in devices such as home network routers may be vulnerable, and updating these is a time-consuming manual task.</p>
<p>Even if there hasn’t been any malicious exploitation of the bug, the costs of people’s time will likely run into the hundreds of millions of dollars.</p>
<h2>A tiny mistake but a major headache</h2>
<p>Contrary to a variety of conspiracy theories, the simplest and most likely explanation for the bug is an accidental mistake. Seggelmann denies doing anything <a href="http://www.smh.com.au/it-pro/security-it/man-who-introduced-serious-heartbleed-security-flaw-denies-he-inserted-it-deliberately-20140410-zqta1.html">deliberately wrong</a>.</p>
<p>Mistakes of the type that caused Heartbleed have led to security problems since the 1970s. OpenSSL is written in a programming language called <a href="http://www.howstuffworks.com/c.htm">C</a>, which also dates from the early 1970s. C is renowned for its speed and flexibility, but the trade-off is that it places all responsibility on programmers to avoid making precisely this kind of mistake.</p>
<p>There are currently two broad streams of thought in the technical community about how to reduce the likelihood of such mistakes:</p>
<ol>
<li><p>use <a href="http://blog.existentialize.com/diagnosis-of-the-openssl-heartbleed-bug.html">technical measures</a>, such as alternative programming languages, that make this type of error less likely</p></li>
<li><p>tighten up the process for making changes to OpenSSL, so that they are subject to much more extensive expert scrutiny before incorporation.</p></li>
</ol>
<h2>Dealing with risk</h2>
<p>My view is that while both of these points have merit, underlying both is a bigger issue: the Heartbleed bug represents a massive failure of risk analysis.</p>
<p>It’s hard to be too critical of those who volunteer to build such a useful tool, but OpenSSL’s design prioritises performance over security, which probably no longer makes sense.</p>
<p>But the bigger failure in risk analysis lies with the organisations who use OpenSSL and other software like it. The development team, language choices and development process of the OpenSSL project are laid bare, in public, for anyone who cares to find out.</p>
<p>The consequences of a serious security flaw in the project are equally obvious. But a huge array of businesses, including very large IT businesses that depend on OpenSSL and have the resources to act, did not take any steps in advance to mitigate the losses.</p>
<p>They could have chosen to fund a replacement using more secure technologies, and they could have chosen to fund better auditing and testing of OpenSSL so that bugs such as this one would be caught before deployment.</p>
<p>They didn’t do either, so they – and now we – wear the consequences, which likely far exceed the costs of mitigation.</p>
<p>And while you shake your head at the IT geeks, I leave you with a question – how are you identifying and managing the risks that your own organisation faces?</p><img src="https://counter.theconversation.com/content/25536/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Robert Merkel has received funding from the Australian Research Council.</span></em></p>The Heartbleed bug that’s potentially exposed the personal and financial data of millions of people stored online has also exposed a hole in the way some security software is developed and used. The bug…Robert Merkel, Lecturer in Software Engineering, Monash UniversityLicensed as Creative Commons – attribution, no derivatives.