Over the past year or so, cloud computing has become all the rage. Everyone is talking about it as the next great hope for the IT industry in general and communications in particular. But I have to wonder if those who see cloud computing as a sort of miracle worker really know what it is. If you ask 10 people to supply a definition of cloud computing, you’ll likely get 10 divergent responses.
But cloud computing is not a technology play. In fact, I would be hard-pressed to identify a single piece of new technology that is fundamental to the cloud. Unlike Twitter or Facebook, it is not a social-psychology phenomenon in any real sense; there is no “man-on-the-street” movement driving its uptake. It is one of those rare beasts: a practical, common-sense-driven initiative.
Simply put, cloud computing makes much more efficient use of resources. In the early stages, these resources are essentially processing power and storage, but increasingly the focus of cloud will converge on efficient use of software resources from a bewildering array of sources. The concept of a user gaining access to, and paying for, these resources on a per-use basis makes great economic sense for everyone, from the lone mobile game developer in his garage to the uber-large-scale financial institution. It also happens to be industry-changing. Unless someone spots a fatal flaw with the concept, over the next 10 years we will move from a predominantly distributed computing and storage world to a centralized computing and storage world.
What makes cloud very interesting is that every one of the global vertical industries (telecom, financial, retail, etc.) has to have two conversations about the cloud: first, how do we become a cloud user to enable more efficient operations; second, how do we leverage our existing platform assets to become a cloud provider?
The best way to understand the force behind cloud computing and make it accessible to the masses is to discuss it in purely financial terms. Let’s say you’re a developer in a large communications company, and you want to launch a new project. You’d probably have to sit down and write out a purchase requisition to buy the new hardware and software. That will go through the approval cycle, and once it’s been signed, you’re probably looking at a 6-12-week order timeframe. So, all told, you’ve been delayed 3-4 months before your project can really get off the ground.
By contrast, in the world of cloud computing, you’d log on to something like the Amazon Elastic Compute Cloud (EC2) and purchase capacity for a few cents or dollars an hour. Instead of spending $30,000 or more on hardware and software, to say nothing of being delayed by weeks or even months, you can be up and running almost instantly, with a complete solution, for a few hundred dollars in infrastructure cost. That, at least, is the pitch from the cloud suppliers, and it is a reasonably credible one.
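To make the financial argument concrete, here is a back-of-envelope sketch in Python. Every figure in it (the $30,000 upfront spend is taken from the text; the per-hour rate and instance count are purely illustrative assumptions, not any vendor's actual pricing) is there only to show how the break-even arithmetic works:

```python
# Back-of-envelope comparison: buy hardware up front vs. rent cloud capacity
# by the hour. All numbers are illustrative assumptions, not vendor pricing.

upfront_cost = 30_000.00        # in-house hardware/software spend ($), per the text
cloud_rate_per_hour = 0.10      # assumed per-instance-hour cloud rate ($)
instances = 4                   # assumed number of instances the project needs

hourly_burn = cloud_rate_per_hour * instances  # $ per hour of cloud usage

# Hours (and days) of continuous cloud usage before renting costs as much
# as buying the equivalent infrastructure outright.
break_even_hours = upfront_cost / hourly_burn
break_even_days = break_even_hours / 24

print(f"Cloud burn rate: ${hourly_burn:.2f}/hour")
print(f"Break-even vs. upfront purchase: {break_even_hours:,.0f} hours "
      f"(~{break_even_days:,.0f} days)")
```

Under these assumed numbers the project could run continuously for years before the rental bill caught up with the upfront purchase — which is exactly why the model appeals to a developer who only needs capacity for a few months.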
From the point of view of an IT department within a large service provider, this is a stunningly brilliant idea. It gives them the flexibility they need and takes away the pain of maintaining the infrastructure, which resides entirely with a third party that has huge economies of scale.
Amazon has very publicly run with this concept, and undoubtedly we’ll see many other giants such as Google, Cisco, IBM, Juniper and Sun making a big splash in this world. With their access to hardware and/or data centers, they won’t have any trouble scaling this type of environment and offering it at an attractive price. This is the concept known as Infrastructure as a Service (IaaS).
But the cloud vision also embraces Platform as a Service (PaaS) and the long-established Software as a Service (SaaS). In these two scenarios, companies offer either a comprehensive solution stack as a service (PaaS) or online use of discrete software applications on a per-use basis (SaaS).
So a model like this makes a lot of sense, but there’s always a downside. I’d say one of the biggest concerns people have with cloud computing is security. I’m not questioning the cloud providers’ ability to shore up their infrastructure and platforms; rather, whether potential customers genuinely trust them enough to entrust mission-critical data to a place where they may not be able to reach it when they need it most.
It’s an irrational fear, but a fear nonetheless. To counter this, I expect a lot of companies are eyeing opportunities for security overlays on top of the public cloud. There’s also a movement toward private clouds, where a big corporation builds its own cloud that hundreds or thousands of employees can plug into. This would certainly alleviate fears of losing control of your data or processes.
The communications savior?
Communications companies are questioning how they can shift their own business models to allow them to emerge as one of the winners in the new cloud world. They are looking at everything from opening their own data centers to hosting other third parties, to (more realistically, I believe) opening their own “crown jewel” applications and offering them in a PaaS fashion.
For example, a service provider might expose its billing system as a third-party service in the cloud. Customers would be able to pay a fee on a per-transaction or other revenue-sharing basis and plug into the billing system. Other opportunities surround their service delivery systems, their location systems and so on.
This goes back to the idea of a “two-sided” business model that TM Forum has been talking about for the past year, where traditional communications companies morph from only delivering services to end-users downstream to also opening their core capabilities and offering them to upstream customers. In this case, the services would be offered through the cloud.
Whether or not any service provider is doing this today is another matter entirely. It’s certainly something they are talking about, but it’s not a trivial task to take your formerly internal systems and reconfigure them so they are suitable for consumption by the outside world. Risk and scalability are the two key watchwords here. On one hand, they could make the significant investment to open their typically closed internal systems, only to find that there is no market for such a platform. On the other, they could find the service a resounding success and realize that they just can’t handle the global scale that cloud services imply.
At TM Forum, we’re in a position to help providers who want to reach for the cloud. Our Service Delivery Framework feeds very strongly into enabling providers to manage their environments so they can open their interfaces and expose their services in a cloud setting. We also have our IPsphere program, which creates federated services from multiple providers in a cloud. Our Information Framework (SID) and our SOA-based Solution Frameworks (NGOSS) architecture are the underpinnings for the expansion of communications companies into the cloud.
We may still be a long way from pervasive cloud computing, but I’m confident that the cost savings, faster time to market and other advantages of this kind of infrastructure will lead communications companies to see cloud computing as a mainstream challenge and opportunity for their enterprise. So if you’re not discussing this within your company, now is the time to start.
Martin Creaner is president at TM Forum