Local cloud ops will struggle against Google Compute Engine

Away from the high-profile launch of Android 4.1 Jelly Bean, the Nexus 7 tablet and the Nexus Q media streaming device, one of the biggest announcements at Google I/O 2012 was the opening up of Google's vast computing resources under the moniker of Google Compute Engine.

The sheer scale and engineering finesse of Google must have local cloud providers quaking in their boots.

On one level, Google Compute Engine is just another platform-as-a-service offering, but one whose sheer computing power and geographic reach nobody can hope to match. The only provider that comes close is Amazon, and out of the box GCE promises roughly double the processing power of EC2 for the same money, luring those with high-performance computing needs as early adopters.

Each GCE compute unit is at least a 2.6 GHz Sandy Bridge core, with a mapping of one virtual CPU to one hyperthread. Smaller units for debugging and prototyping will follow.

On the network side, each set of IPs can float between multiple geographies. Google likens using GCE to holding a first-class ticket: once traffic is picked up by the Google network, it is processed quickly no matter where in the world it originates, with caching of public data allowing CDN-like speeds.

Today, Google Compute Engine is offered only in a handful of zones in the central and western United States, and only to customers with large compute needs running to hundreds of cores, but the threat it poses is clear. Just how can a local service provider hope to compete with Google's network, its technical set of offerings and, perhaps most importantly, its vast developer community? This is a question many must be asking.

Many telcos have been pinning their futures on transforming into service providers to avoid the dumb-pipe trap.


“Cloud is nothing more than taking assets most companies already have today and making it consumable,” said Chip Salyards, VP for APAC at BMC Software.

His idea was simple enough. To avoid the death spiral of becoming a dumb pipe and losing customer ownership to over-the-top (OTT) players, even the smaller telcos must become service providers and leverage their strengths: an existing customer base and their local market position. The only way they can do that efficiently is by turning themselves into IT providers with high levels of automation.

Today HR is automated and ERP is automated, yet many service providers still, almost whimsically, provision their IT services manually.

Telstra is one example of a telco that effectively rents out its infrastructure to customers and has turned into a service provider. Macquarie Telecom is another, born in the waves of deregulation and now Australia's second-largest managed service provider, he said.

But the question I posed to him was: why go with a local provider and not Amazon EC2?

Salyards said that performance was one key factor, as Amazon does not have points of presence in every country; others were data-residency laws and the better support offered by a smaller, more nimble player.

When GCE goes worldwide, the first point will be moot, as Google will provide, in its own words, a first-class ticket from anywhere in the world. Support? While some equate a good helpdesk with good support, support today increasingly means an active community and lots of users asking and answering questions on sites such as stackoverflow.com. That, and a robust toolset, which Google App Engine provides.

That leaves only data sovereignty: the need to keep data within the judicial confines of the country (and, quite often, out of the jurisdiction of the US Patriot Act).

The industry needs to think of a better reason than that if it is to survive this Google onslaught that is changing the world. Again.