Why invest in purchasing, housing and maintaining a set of computers when you can outsource all that worry to someone else? This has been the oft-used marketing slogan of cloud computing. And it works. It is much easier to offload the data hassle and focus your resources (especially if they’re limited) on your core operations.
During their astounding growth in the middle of the last decade, technology companies such as Amazon and Google built huge infrastructures to power their ever-growing needs. It is estimated that Amazon, for example, has more than 2m servers around the world, while Google is estimated to have 10 exabytes of data storage space. That’s 10 million terabytes, or 10 billion gigabytes.
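The unit arithmetic above is easy to verify. A few lines of Python, using decimal (SI) storage units, confirm that 10 exabytes is 10 million terabytes, or 10 billion gigabytes (the underlying figure is, as stated, only an estimate):

```python
# Decimal (SI) storage units: each step up is a factor of 1,000.
GIGABYTE = 10**9   # bytes
TERABYTE = 10**12  # bytes
EXABYTE  = 10**18  # bytes

estimated_storage = 10 * EXABYTE  # the estimated 10 exabytes

print(estimated_storage // TERABYTE)  # 10000000    -> 10 million terabytes
print(estimated_storage // GIGABYTE)  # 10000000000 -> 10 billion gigabytes
```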
Over time, they learned how to manage all the software and hardware assets in these infrastructures without significantly increasing costs. They also realised that these infrastructures could be leased out to external companies to use as and when they wished. Renting rather than building significantly lowers the capital expenditure such companies need for their own server set-ups, and lets them scale capacity up and down as their needs dictate.
This was the birth of cloud computing, so called because computer specialists commonly use a cloud cartoon in schematic diagrams to refer to parts in the system that are opaque. But while we know that it works – the global cloud computing market is forecast to reach $127 billion in the next two years – we are less sure exactly how it works.
Data in the wind
For example, we know that cloud providers typically store your data in several locations for reliability, but we don’t know exactly where, or how many copies they keep. In fact, identifying the exact location of all your data in the cloud is a near-impossible feat. Only a few cloud providers allow users to choose which countries their data is stored in, although more providers are slowly catering to such needs. We do know that the highest density of cloud servers is in the United States and Ireland.
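To see why pinning down every copy is so hard, here is a minimal sketch, in Python, of how a provider might fan an object out to several regions. The region names, the replica count and the hash-based placement policy are all hypothetical illustrations, not any real provider’s scheme — the point is only that placement can depend on internal details the user never sees:

```python
import hashlib

# Hypothetical pool of provider regions; a real provider's list is far
# larger and changes over time.
REGIONS = ["us-east", "us-west", "ireland", "singapore"]

def replica_regions(object_key: str, copies: int = 3) -> list[str]:
    """Pick `copies` regions deterministically from a hash of the object's key.

    The choice depends on internal details (here, a SHA-256 digest),
    not on where the user lives or what they would prefer.
    """
    digest = int(hashlib.sha256(object_key.encode()).hexdigest(), 16)
    start = digest % len(REGIONS)
    return [REGIONS[(start + i) % len(REGIONS)] for i in range(copies)]

# Similar-looking keys may land in quite different sets of regions.
print(replica_regions("holiday-photos.zip"))
print(replica_regions("holiday-photos-2.zip"))
```

Because the mapping is internal to the provider, a user who wants to know where their data physically sits has no way to recompute it from the outside.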
This means your data is subject to various changing national and international laws. Data held in the EU, for example, is subject to the EU Data Protection Directive, with which companies transferring data in and out of the EU must comply. Until recently, the EU-US “safe harbour” agreement made this straightforward, but Edward Snowden’s revelations about US surveillance led the European Court of Justice to invalidate it.
The matter of who actually owns your data is also quite complicated. The short answer is that you own the data you create, but the cloud service provider has ultimate control over it.
This is reflected in many providers’ terms of service, which state that they can hold on to the data to comply with legal regulations. They can also pass the data on to government organisations if requested (Dropbox, for example). On the upside, providers are responsible for securing the data they hold on your behalf against misuse, especially if it relates to credit card information – although there have been a number of large-scale data breaches.
Whose data is it anyway?
Moreover, many service providers, such as Facebook and Dropbox, say that there may be a delay before your data is deleted upon your request, but they do not specify how long that delay will be. A lot of this is likely to change, though. The European Commission is in the process of updating its regulations to provide more transparent control of personal data in the cloud.
Expanding the cloud model beyond massive data centres and integrating it within the fabric of residential and business buildings can present great opportunities. Some refer to this as fog computing. Battery-operated micro-clouds would also act as important hubs to post and relay information within and between communities that have been cut off due to natural disasters (floods and earthquakes, for example) and security crises (terrorist attacks and riots). But how big the cloud will become and how we will all navigate our way around it remains a murky topic.