Have you stopped to think about how the leading cloud computing providers are able to provide enormous quantities of computing resources at the click of a button? Clearly, they have massive data centres around the world filled with racks of high-performance IT. If you were their only customer that might be the end of the story but, of course, they serve huge numbers of customers, many with enormously volatile workloads, and are taking on more every minute. Now and again you might get a little reminder that this does not all come about through magic (have you noticed the occasional “our servers are busy” message when trying to use Twitter?). So how does it all work and what risks does it pose to your business? How confident can you be that your next ad-hoc request for a virtual machine will be satisfied? Even worse, how do you know that your existing cloud-based services will not be constrained by your cloud provider’s next wave of new customers?
The sky is the limit
The technical answer to this question is deceptively simple - vast spare capacity. The leading cloud providers carry enormous reserves and are building new capacity at an extraordinary rate, comfortably ahead of demand (see “Google pumps $400 million more into Iowa” and “IBM pumps $1.2 Billion into Global Cloud”). For internal corporate IT this would be seen as a problem, and virtualisation technology is used to drive up utilisation and get more work out of the IT estate. In the public cloud virtualisation is also important, but only to speed up and automate service provision - utilisation is almost irrelevant. If you carry 50% to 100% reserve capacity, who cares about utilisation rates?
The financial answer is more difficult. The major cloud providers are world leaders in driving down the unit prices of their IT and passing on those savings to their customers but global, industrial-scale IT is still incredibly expensive (see “Google Has Spent $21 Billion on Data Centers” via @wattersjames but allow for another $2 billion a quarter since then). It would be nice to think that this is just a problem for the providers but this is only true up to a point.
For example, who is actually paying for the reserves in public cloud services? If all the spare capacity is funded from the surplus generated from current customer revenues then the public cloud business would be extremely stable and secure. If it is financed by investors and based on projections of revenue growth then the levels of spare capacity and the resilience of your cloud services are dependent upon investor sentiment. If investors are also subsidising current capacity to hold down prices and buy market share, the business will be even more fragile and start to look like a giant Ponzi scheme. As long as investors think that the cloud business will continue to grow at a spectacular rate, the cloud providers will have plenty of money to build reserves ahead of demand and keep absorbing short-term demand shocks. What would it take to change the investors’ expectations (see “Is there enough cloud biz to go around?” from @GigaOM) and what would happen to cloud services then?
The truth probably lies somewhere between these extremes but, unfortunately, it is quite difficult to find out how cloud services are being financed (see this analysis of Amazon’s cloud finances by @BernardGolden). All of the major cloud providers are business units of much bigger organisations, and are probably the largest customers of their own cloud services, which makes the flow of investment and revenue hard to follow.
Bringing clouds down to earth
This post is not leading up to an argument to avoid public cloud services. Avoiding them would be the financial equivalent of keeping all your cash under your mattress: unwise for individuals and inefficient for organisations. Instead, my plea is for CIOs, media analysts and academics to stay alert, to pay attention to the financial engineering behind the cloud as well as the technical engineering, and to make plans for business continuity. Continuing the financial analogy, much of the economic pain we have experienced recently arose not because our financial systems were fundamentally flawed but because of complacency and carelessness.
There are more useful lessons we can draw from the financial services sector:
- commodities (e.g. shares and debt, computer chips and memory) can be assembled into easy-to-consume packaged services (investment funds, cloud services), but the assembled services no longer behave like commodities and do not have the same low risk profiles
- the competence of the people who assemble and maintain the packaged services is as important as the performance of the underlying commodities
- it can be hard to track risks around these systems and make the right provisions against systemic shocks
- some packaged services (e.g. insurance, bank accounts, computer networks) pool resources for a large number of customers and only work as long as there is little correlation between the patterns of demand from these customers
- if something looks too good to be true - guess what - it probably is!
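The pooling point above is easy to demonstrate with a toy model. Here is a minimal sketch (my own illustration, not from any provider's data; every number is invented) showing how the peak combined demand a provider must cover stays close to the average when customer workloads are independent, but balloons towards the worst case when they are correlated:

```python
import random

random.seed(0)

def pooled_peak(n_customers, correlation, trials=2000):
    """Peak total demand observed across many trials. Each customer's
    demand mixes a shared (correlated) component with an independent one;
    individual demand is always between 0 and 1 'units'."""
    peak = 0.0
    for _ in range(trials):
        shared = random.random()  # a common shock felt by every customer
        total = sum(
            correlation * shared + (1 - correlation) * random.random()
            for _ in range(n_customers)
        )
        peak = max(peak, total)
    return peak

independent = pooled_peak(100, correlation=0.0)
correlated = pooled_peak(100, correlation=0.9)
# 100 customers, mean total demand ~50 units, worst case 100 units.
# Uncorrelated demand peaks near the mean; correlated demand peaks
# near the worst case, eroding the benefit of pooling.
print(f"peak, uncorrelated: {independent:.1f}")
print(f"peak, correlated:   {correlated:.1f}")
```

In this toy model the provider could serve 100 uncorrelated customers with roughly 20% headroom over average demand, but once their workloads move together it must provision for something close to the sum of every customer's maximum - which is exactly when "our servers are busy" messages appear.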
Financial services can also offer some solutions but the cloud service industry still needs to do more work on this. For example, what are the cloud computing equivalents of capital adequacy, value at risk, leverage, liquidity and cash flow? What kinds of disclosure are needed from cloud providers? What are the roles of auditors and ratings agencies?
It is going to be some time before we can answer these questions but it will also be some time before cloud is a large enough share of computing provision to pose a systemic threat. In the meantime, please go ahead and capture your share of the value that cloud services will create but take the time to stay well informed about how the cloud business works (not just the technology) and take prudent steps to protect yourself.
Do you have any insights into cloud finances? Add a comment or use the Twitter button below to let me know what you think.