
Stuxnet is scary, but human safety should come first

Safety first, but which kind of safety? peretzp

Critical national infrastructure keeps our water and electricity flowing, our payments running and our manufacturing and distribution moving. This infrastructure faces a new threat in the form of cyber-attacks, but in seeking to protect power stations from computer attacks, we may be taking our eye off the ball when it comes to more traditional safety concerns.

Much infrastructure relies on automated technology typically referred to as industrial control systems (ICS). These systems allow our physical environment to be affected by computers. They open valves, generate power, and sort parcels for delivery; all to meet our demands.

The computers controlling these systems are becoming increasingly connected to other computer networks and, more importantly, to the internet. These new connections provide routes for attackers to probe and enter systems, potentially causing large-scale disruption to the services we depend on so heavily.

This situation is made more severe by the long lifespan of industrial equipment. Many legacy devices still in use lack the protections expected of modern systems. The recent discovery of 25 vulnerabilities in the devices that interconnect legacy and modern equipment in power stations is testament to this.

Fortunately, we have not yet seen significant disruption, but one particularly high-profile case has shown all industries that use control systems that they could be a target in the future, and that they need to prepare for potential attacks.

The Stuxnet legacy

In 2010, a piece of malicious software called Stuxnet was used to attack and disrupt the operation of uranium enrichment facilities in Iran, causing millions of pounds worth of damage and delaying the enrichment programme by several years.

It worked by spreading itself between ordinary Microsoft Windows computers as it searched for its target: a specific type of industrial control system component produced only in Iran and Finland. Once it found that target, Stuxnet was able to modify its operating parameters to dangerous conditions, while hiding this behaviour from the operators supervising the system. Stuxnet has never been attributed to a specific attacker, though there are numerous suspicions as to where the malware was developed.

Stuxnet was scary for anyone using control systems. For the first time, significant physical damage had been caused by malware.

Since Stuxnet, industries producing and relying on control systems to automate their business have invested heavily in cyber defence, developing new technologies to protect these important infrastructures. Work so far has almost wholly been directed towards managing risks surrounding the protection of information, in a process known as information assurance. But that has implications for protecting infrastructure in other ways.

Fail-safe vs fail-secure

Industrial control systems have traditionally used a “fail-safe” design. If a system stops operating correctly, it shuts down to minimise damage to the environment and loss of life. Operations at a water treatment facility will shut down when water tanks reach dangerous capacity limits, for example.

Most information assurance approaches, on the other hand, advocate a “fail-secure” design methodology. When a system is attacked, mechanisms spring into action to prevent your information from falling into the wrong hands.

But these security goals are potentially at odds with one another. If a system fails and the first priority is to protect information, the shutdown may leave the system in a dangerous physical state. If a wind turbine begins rotating dangerously fast and an automated system moves to shut it down, a fail-secure system may see this as anomalous or malicious behaviour, preventing the shutdown with potentially catastrophic consequences.
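The wind turbine scenario can be sketched in a few lines of code. This is a hypothetical illustration, not a real control-system API: the function names, the rotor speed limit and the idea of a command allow-list are all assumptions made for the example.

```python
SAFE_RPM_LIMIT = 25  # hypothetical rotor speed limit for the turbine


def fail_safe_action(rotor_rpm):
    """Safety logic: order a shutdown if the rotor spins too fast."""
    return "SHUTDOWN" if rotor_rpm > SAFE_RPM_LIMIT else "RUN"


def fail_secure_filter(command, expected_commands):
    """Security logic: block any command that looks anomalous."""
    return command if command in expected_commands else "BLOCKED"


# Normal operation has never issued SHUTDOWN, so the anomaly filter
# treats it as suspicious -- exactly when it is needed most.
command = fail_safe_action(rotor_rpm=30)       # "SHUTDOWN"
result = fail_secure_filter(command, {"RUN"})  # "BLOCKED"
print(command, result)
```

The conflict is structural: each layer behaves correctly on its own terms, but the security layer's definition of "anomalous" happens to include the safety layer's emergency response.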

While you do need to protect the information held in control systems, in the event of an attack this should always be secondary to the protection of life and the environment.

Safe and secure

A counter-movement is emerging to try to reconcile the two approaches, ensuring that both people and information are protected if an attack occurs. Advocates would like to see industry taking a “functional assurance” approach. In the event of an attack, a system would enter both fail-safe and fail-secure modes.

The functional assurance concept goes beyond a simple notion of “on” or “off” in the face of an attack. Internet-connected systems are under constant attack and must carry on functioning regardless. If an internet-connected control system were to shut down every time it was attacked, it would never be on, so we need to start thinking about how to keep systems running in the face of a concentrated digital onslaught.
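One way to picture a functional assurance policy is a security filter that still blocks anomalous commands, but always lets a small set of safety-critical commands through. Again, this is a hypothetical sketch: the allow-list and command names are illustrative assumptions, not a standardised design.

```python
# Commands that protect life and the environment always pass,
# even if the anomaly detector has never seen them before.
SAFETY_CRITICAL = {"SHUTDOWN", "EMERGENCY_STOP"}


def functional_assurance_filter(command, expected_commands):
    """Fail safe AND fail secure: safety overrides the anomaly check."""
    if command in SAFETY_CRITICAL:
        return command  # protection of life comes first
    if command in expected_commands:
        return command  # normal, expected operation
    return "BLOCKED"    # everything else is rejected


print(functional_assurance_filter("SHUTDOWN", {"RUN"}))    # passes
print(functional_assurance_filter("OPEN_VALVE", {"RUN"}))  # blocked
```

The design choice is simply an ordering of priorities: the safety check is evaluated before the security check, so an emergency shutdown can never be mistaken for an attack, while unexpected non-safety commands are still rejected.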

The aftermath of Stuxnet led to the development of security standards and guidance documents that specifically target industrial control systems. However, a survey has found that these documents are probably inadequate for helping operators to achieve functional assurance. The importance of safety is frequently highlighted, but it is largely treated as a separate issue. Little attention is devoted to the complex interdependency of safety and security, in particular the capability of failing both safely and securely.

Industry and governments have yet to work out how to deliver functional assurance, but they need to make progress. Industrial control system technologies are increasingly found not only in critical infrastructure but in our personal environments, as we move towards living in smart cities. That means anything from traffic lights to home security could be attacked. Personal safety is at stake, yet we still want our infrastructure to work more efficiently, which makes the balance between protecting data and protecting people more important than ever.
