Editor’s Corner—No quick fixes when it comes to VNF interoperability

Monica Alleven | Editor’s Corner

While operators like AT&T are moving fast down the network virtualization path, it’s not all happening at once. Case in point: VNF interoperability. Desirable, yes, but not necessarily a piece of cake.

At the Layer 123 NFV & Zero Touch World Congress in San Jose, California, earlier this year, a member of the audience who identified herself as being from Verizon asked about NFV interoperability. More than one person chimed in with an answer, which seemed to boil down to this: If you’re the operator and you want your vendors to provide interoperable VNFs, stipulate that in your requirements, and it will be up to them to respond.

Of course, that’s oversimplifying, and it’s not necessarily what operators have in mind. Even if operators are getting together and “ganging up,” so to speak, to tell vendors to shape up or ship out, that doesn’t mean the landscape is going to change overnight.

RELATED: ETSI in midst of figuring out role with open source

“VNF interoperability is very important to us,” Mazin Gilbert, vice president of advanced technology and systems at AT&T Labs and chair of ONAP’s technical steering committee, told me last month. In fact, it was one of the key drivers in AT&T contributing its ECOMP to open source, which led to the formation of ONAP last year. The purpose of the Linux Foundation project is to deliver the capabilities for the design, creation, orchestration, monitoring and life-cycle management of VNFs in a software-defined networking environment.

“What we need is a blueprint that all the operators and vendors follow,” Gilbert said. That blueprint defines how data is collected from a VNF, how alarms are generated, how a VNF does its own closed-loop automation, et cetera. “A lot of these are interfaces and APIs that should not be different among the VNFs.”

Basically, operators want VNFs to stop being snowflakes and become more like Legos. “I don’t need those VNFs to look like snowflakes,” he said, adding that vendors can build a VNF once and not incur further costs. “It’s a win-win. What we want from a VNF is the same as another operator,” and vendors can differentiate themselves on performance, reliability, speed and things of that nature.
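As a rough illustration of what such a blueprint could standardize, here is a minimal Python sketch of a hypothetical common telemetry/alarm format that every vendor’s VNF would emit. The field names and values are invented for illustration; they are not the actual ONAP or VES schema, just a picture of the "same interface, different vendor" idea Gilbert describes.

```python
# Hypothetical sketch of a common VNF event/alarm envelope.
# Field names are illustrative only -- not the actual ONAP/VES schema.
import json
import time
import uuid


def make_vnf_event(vnf_id: str, vendor: str, event_type: str,
                   severity: str, measurements: dict) -> dict:
    """Build a telemetry or alarm event in one shared shape, regardless of vendor."""
    return {
        "event_id": str(uuid.uuid4()),
        "timestamp": time.time(),
        "vnf_id": vnf_id,
        "vendor": vendor,              # vendors differentiate on performance, not on format
        "event_type": event_type,      # e.g. "measurement" or "alarm"
        "severity": severity,          # e.g. "normal", "minor", "critical"
        "measurements": measurements,  # vendor-specific values behind a common envelope
    }


# Two different vendors' firewalls report through the same interface, so the
# orchestrator's closed-loop automation can treat them identically.
event_a = make_vnf_event("fw-vendorA-01", "VendorA", "measurement",
                         "normal", {"cpu_util_pct": 42, "sessions": 12000})
event_b = make_vnf_event("fw-vendorB-07", "VendorB", "alarm",
                         "critical", {"cpu_util_pct": 97, "sessions": 85000})

print(json.dumps(event_a, indent=2))
print(json.dumps(event_b, indent=2))
```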

AT&T likes to point out that ONAP is a collaboration of some of the largest network and cloud operators around the world, representing 65% of the world’s subscribers. It certainly has momentum on its side.

Still, it stands to reason that VNF interoperability can be a quandary for vendors. They have to work out a lot of particulars, and they’ve already put a lot of investment into building their proprietary appliances. It’s a tough proposition to insist on lower-cost solutions while also expecting vendors to walk away from what they’ve invested in for many years.

It also has to do with the state the industry is in right now. Today the major cloud players—Amazon Web Services (AWS), Microsoft Azure and Google—all have onboarding procedures that include testing to ensure that different vendors’ VNFs will operate in their environments.

The cloud vendor provides hardware operating specs, and the VNF vendor can test its VNFs in the cloud vendor’s lab environment (as well as its own), share the results with the cloud provider and achieve certification as milestones are reached. Once the VNF has been certified and deployed in the cloud, then, in theory, it should work in that cloud environment, explained John English, senior solutions marketing manager for service providers at NetScout, a provider of application and network performance management products.

“However, in general, there is no equivalent process to assure different vendor VNF interoperability, and it is up to either trial and error or proactive testing and real-time monitoring of microservices, service chains, and interoperation with a collaborative testing of different vendor VNFs,” English said.
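In the absence of a formal process, that kind of proactive testing can start as simply as scripted checks that every VNF in a service chain answers a shared health interface before live traffic is steered through it. The sketch below is a hypothetical Python example; the chain, addresses and /health endpoint are invented for illustration and are not any vendor’s actual API.

```python
# Hypothetical smoke test for a multi-vendor service chain.
# The chain, hostnames and /health endpoint are invented for illustration.
from urllib.request import urlopen

# Service chain: firewall (Vendor A) -> load balancer (Vendor B) -> DPI (Vendor C)
SERVICE_CHAIN = [
    ("vendorA-firewall", "http://10.0.0.11:8080/health"),
    ("vendorB-loadbalancer", "http://10.0.0.12:8080/health"),
    ("vendorC-dpi", "http://10.0.0.13:8080/health"),
]


def check_chain(chain, timeout_s: float = 2.0) -> bool:
    """Return True only if every VNF in the chain answers its health endpoint."""
    healthy = True
    for name, url in chain:
        try:
            with urlopen(url, timeout=timeout_s) as resp:
                ok = resp.status == 200
        except OSError:  # connection refused, DNS failure, timeout, etc.
            ok = False
        print(f"{name}: {'OK' if ok else 'FAILED'}")
        healthy = healthy and ok
    return healthy


if __name__ == "__main__":
    if not check_chain(SERVICE_CHAIN):
        print("Interoperation issue detected -- do not steer live traffic yet.")
```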

Typically, service providers will go through lab trials and First Office Application (FOA) before rolling out services, he said. In the cloud environment, there’s the potential for a more “fast to fail” situation in which interoperation issues can be spotted and addressed more quickly.

“But to truly achieve that ‘fast to fail’ environment in the cloud, there must be visibility, intelligence and analytics,” he added. “Smart visibility down to the VNF layer, but with the context of the microservice and service chains, and out to the subscriber session, is clearly needed. It is only with this end-to-end visibility that vendor interoperation issues can be quickly and easily diagnosed and triaged.”

Mobile edge computing with 5G is a good example of how manual processes will no longer work in an environment that demands ultra-low-latency communications, according to English. “5G is a new arena and will be the first large-scale virtualized network environment operated by service providers.”

To be sure, operators have a lot on their plates getting 5G out the door. Virtualization is seen as a key enabler for things like autonomous driving. Interoperability has always been a big deal, and not just for VNFs but for physical network functions (PNFs) as well.

It’s not going to happen overnight, but eventually, the industry needs to move to Lego-land. – Monica | @fiercewrlsstech | @malleven33