What is fog computing and why is it important to telecom?

If there’s one sure thing about fog computing, it’s that the concept is still very unclear. And while that might sound like a trite summary, many of the people who are leading the charge to fog computing acknowledge the very same thing.

Of course, to them, it’s clear what fog computing is and why it’s important. But as with edge computing, you can get different answers about the definition depending on whom you’re talking to. At the most basic level, fog computing is the layer between the edge and the cloud, but that doesn’t completely cover it, either. It can also overlap with both of those layers, at least by some accounts, which kind of makes sense if you’re, well, fog.

Part of the reason for the struggle over definitions is similar to what’s happening with open source: The concepts around fog computing are just now being developed. Because it is a revolutionary concept, it takes time for the technology to be rethought and developed, and for business models to be hashed out. “That’s what’s happening” now in the OpenFog Consortium, said Jingyi Zhou, an OpenFog Consortium board member and director of 5G Standards and Business Development at ZTE.

Jingyi Zhou (ZTE)

It’s a process that hasn’t been visible much from the outside, and OpenFog member companies realize it can’t be done in one giant step. “Many baby steps have to be taken” to reach the ultimate goal, Zhou said, noting that today’s edge computing efforts can inform the development of fog as well.

As for why it’s important, Lynne Canavan, executive director of the OpenFog Consortium, recalls a keynote that was given on the last day of the Fog World Congress that took place last fall in Santa Clara, California.

At one point, Mung Chiang, Ph.D.—one of the founders of the OpenFog Consortium—paused and looked out at the audience. He told them that if anyone was sorry to have missed the TCP/IP revolution—a seminal shift in the industry—now was the moment to join the current revolution around fog. “That was sort of an interesting way of positioning it,” she said. “Fog is important. It’s not a fad. It’s really the necessary technology for really advanced use cases” that involve autonomous vehicles and drones, for instance.

RELATED: OpenFog Consortium welcomes more carrier input as it collaborates with ETSI

While a large contingent of the 399 attendees at Fog World had been working in the space for some time, a significant number were simply curious and came to find out what fog is.

Fog: What is it?

Flavio Bonomi, a former Cisco Fellow who is now the founder and CEO of Nebbiolo, gets credit for coining the term “fog.” The OpenFog Consortium defines fog computing as a system-level horizontal architecture that distributes resources and services of computing, storage, control and networking anywhere along the continuum from cloud to things, as in the IoT. Put another way, one could imagine the “fog” layer hanging below the clouds, just as the fog hangs over the Bay Area.

But Canavan is quick to point out that fog is not an either/or proposition with the cloud. “We think about it as a continuum” from the cloud down to the device, and that continuum includes parts of the cloud and the edge. Fog isn’t trying to replace the cloud, as there are some functions that are meant for the cloud and should stay there.

The OpenFog Consortium was an idea that Helder Antunes, chairman of the OpenFog Consortium and senior director for the Corporate Strategic Innovation Group at Cisco, had along with Professor Chiang of Princeton, who is now the dean of Purdue’s College of Engineering, and Tao Zhang, Ph.D., a Cisco Distinguished Engineer and IEEE Fellow. They decided they needed a more horizontal organization to tackle the issues of the day, and the OpenFog Consortium was formed in 2015.

Matt Vasey (Microsoft)

"The idea of fog computing was, we thought, an interesting way of capturing our thinking about there being a continuum of intelligence,” explained Matt Vasey, head of IoT business development at Microsoft and an OpenFog Consortium board member. “Lots of intelligence in the cloud, intelligence in near edge devices, intelligence in devices that are very close to the edge and finally intelligence in the edge devices.”

Both an intelligent cloud and an intelligent edge are part of the mix. “When we think about edge computing, if I was forced to define it, I would say it’s this pattern where you have devices at the edge that can do some work but then require cloud connectivity to deliver a full experience,” Vasey said. “When we think about fog computing and how it should work, it should work where we have intelligence all the way up and down through that continuum from the cloud to the edge,” and that model addresses latency and data privacy issues. At least, that was some of the thinking when the consortium was formed, he said.

Back to the edge

To try to explain fog, “the first thing you’ve got to decide is where is the edge,” said Iain Gillott, founder and president of research firm iGR. “The answer is not easy.” In fact, it’s where “it depends” comes in handy.

To some, the ultimate edge is the device, like an iPhone. The problem is, the closer you get to the true edge, the more specific you need to be in terms of apps and content. (In mobile, the farthest from the edge is probably the packet core, Gillott noted.)

RELATED: ETSI MEC chair says contrary to assumptions, confusion does not reign supreme in edge computing space

For some applications, it makes sense to have them at the base station or macro cell level. Other applications might make sense in the local data center. Some applications will only live at the edge, while others will live only in the packet core, and hybrids will live in both.

Things like remote surgery, where a little bit of jitter can be the difference between life and death, have to be at the edge. Something like video can be farther back in the network. Other things like VoLTE need to be in the packet core because everybody uses it—putting it down at the edge would create a mess, Gillott said.
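As a rough illustration of that placement logic, here is a minimal sketch in Python; the tier names and latency budgets are hypothetical assumptions for the example, not figures from Gillott, any operator or the OpenFog Consortium. The idea is simply to pick the most centralized tier whose latency still fits an application's budget:

# Hypothetical sketch: pick the most centralized network tier that still
# meets an application's latency budget. Tier names and latency figures
# are illustrative only, not measurements from any operator.

TIERS = [
    ("device/edge", 1),         # on the device or at the far edge
    ("base station", 5),        # macro cell or cell-site compute
    ("local data center", 20),  # metro or regional facility
    ("packet core", 50),        # centralized core
    ("public cloud", 100),      # distant cloud region
]  # approximate round-trip latency in milliseconds, sorted from edge to core

def place(app_name, latency_budget_ms):
    """Return the deepest (most centralized) tier that still meets the budget."""
    placement = TIERS[0][0]  # fall back to the edge if nothing else fits
    for tier, latency in TIERS:
        if latency <= latency_budget_ms:
            placement = tier  # keep moving toward the core while the budget allows
    return f"{app_name} -> {placement}"

print(place("remote surgery", 2))   # has to stay at the edge
print(place("video caching", 40))   # can sit in a local data center
print(place("VoLTE", 60))           # lives in the packet core

A real operator would also weigh cost, capacity and mobility, which is the balancing act AT&T describes below, but even a crude latency budget already pushes these examples to very different parts of the network.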

Rhonda Dirvin (ARM)

Most people, when they talk about edge computing, mean the edge of the internet and some sort of gateway device, said Rhonda Dirvin, an OpenFog board member and director of IoT verticals at ARM. One of the problems with fog is, even though it’s a hot subject, there are multiple definitions for it, she added. Edge computing is a subset of fog, she said, pointing to the analogy: The edge is to fog as an apple is to fruit. Edge is part of fog, but it’s not the only thing.

The definitions may vary even within the same segment of an industry. For most people in the IoT, the edge would be the gateway, but some could consider a thermostat to be the edge if it has the ability to do some computations. “The key challenge in the market space right now is education,” she said. “I think people don’t really understand what it is.”

Operators' take on it

Wireless operators are embracing fog computing and working on it, and they were among those that really started enabling it by moving toward more software-defined networks. “Most of them do get this and do understand it,” Dirvin said.

AT&T has been one of the more aggressive operators when it comes to moving to SDN and NFV. “I think the promise of edge is really about being able to deliver on the kinds of services and applications that require very low latency so that you don’t have those delays” that can occur by going to the cloud that could be hundreds of miles away, said Alicia Abella, AVP of the Cloud Technologies and Services Research Organization at AT&T.

“We’re certainly focused on 5G and the edge. That’s our primary focus right now,” she said. Edge can be used as part of the LTE network and assets, and it’s also part of the evolution to 5G. But AT&T is being diligent about where it places edge nodes to minimize cost and maintain good quality of experience. Some of it is determined by use cases. “We know that a lot of these applications and use cases like AR/VR will require very fast compute time,” she said. In fact, if the latency isn’t low enough, people can get nauseous from the VR experience.

“I think all the operators are thinking about how do they participate in this,” Microsoft’s Vasey said. The smart city is a good example where there are simple use cases like an intelligent car traveling through the operator’s network. Perhaps the car is getting information about other cars around it. If the driver can’t see the brake lights of another car two cars ahead of him, local nodes could be used to send information to the right cars about who’s applying brakes and when a traffic signal is about to change. That’s the kind of use case that could improve safety and reduce congestion in cities, with the system generating new routing scenarios to avoid bottlenecks. "Those sorts of scenarios, we potentially see operators playing a big role in it,” he said, noting that it will require investments in cloud and near-edge technologies on the part of operators.
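To make that scenario a little more concrete, here is a minimal sketch of what a roadside fog node could do; the class, distances and message format are hypothetical illustrations, not any operator's or the consortium's actual design. The point is that the braking alert is generated and delivered locally rather than taking a round trip to a distant cloud:

import math
from dataclasses import dataclass

@dataclass
class Vehicle:
    vid: str
    x: float  # position along the road, in meters
    y: float

class RoadsideFogNode:
    """Hypothetical fog node that relays brake events to nearby vehicles."""

    def __init__(self, alert_radius_m=150.0):
        self.alert_radius_m = alert_radius_m
        self.vehicles = {}  # vehicles currently registered with this node

    def register(self, vehicle):
        self.vehicles[vehicle.vid] = vehicle

    def on_brake_event(self, braking_vid):
        """Notify every registered vehicle within the alert radius of the braking car."""
        braking = self.vehicles[braking_vid]
        alerts = []
        for v in self.vehicles.values():
            if v.vid == braking_vid:
                continue
            dist = math.hypot(v.x - braking.x, v.y - braking.y)
            if dist <= self.alert_radius_m:
                alerts.append(f"alert {v.vid}: {braking_vid} is braking {dist:.0f} m ahead")
        return alerts

node = RoadsideFogNode()
node.register(Vehicle("car-A", 0.0, 0.0))    # the car that brakes
node.register(Vehicle("car-B", 60.0, 0.0))   # two cars back, brake lights blocked from view
node.register(Vehicle("car-C", 400.0, 0.0))  # too far away to need the alert
print(node.on_brake_event("car-A"))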

Opportunities for all?

Steve Jennis (Adlink)

Edge computing is already providing opportunities for companies like Adlink. “It’s an exciting time because the combination of edge computing with virtualization is creating huge opportunities for vendors like Adlink into the telecom market,” said Steve Jennis, corporate VP and head of global marketing at Adlink, an industrial IoT company that serves the medical, transportation, defense/government and other verticals.

Of course, he contends, it’s not such a good time for incumbent vendors that see a threat to their business models from the virtualization space. But then again, all incumbent vendors are under some sort of threat, whether in hardware or software, from open systems, virtualization and standardization. “It’s exciting times,” Jennis said.

For the industrial IoT, the edge resonates very well, and on the factory floors, people know what it means, according to Sastry Malladi, CTO at FogHorn, a Silicon Valley startup founded around 2014 that is focused on the industrial IoT space; its target markets include transportation, mining, oil and gas, healthcare, manufacturing and smart cities.

In recent years, more companies decided they wanted to take all the data produced by big machines and put it in the cloud. But the locations of these machines may not be practical for cloud computing—say, if it’s a ship in the middle of the ocean—and even if they have mobile connectivity, the bandwidth required to handle that amount of data is cost-prohibitive. In some cases, they don’t want to be connected to the cloud at all due to cybersecurity risks. So they turn to fog computing.
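That pattern, analyzing the data where it is produced and shipping only what is worth the bandwidth, can be sketched very simply; the thresholds and summary format below are hypothetical illustrations rather than anything FogHorn actually ships:

from statistics import mean

def summarize_window(readings, low=10.0, high=90.0):
    """Reduce a window of raw sensor readings to a compact summary plus any anomalies."""
    anomalies = [r for r in readings if r < low or r > high]
    return {
        "count": len(readings),
        "mean": round(mean(readings), 2),
        "min": min(readings),
        "max": max(readings),
        "anomalies": anomalies,  # only the out-of-range samples travel upstream
    }

# The raw samples stay on the machine (or the ship); only this summary
# would ever be sent over a constrained or intermittent link.
window = [42.1, 43.0, 41.8, 95.3, 42.6, 42.2]
print(summarize_window(window))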

Sastry Malladi (FogHorn)

Similar to how companies will say something is “open” when it’s not, some cloud vendors might want to get into the edge space, so they try to change the definition to suit their strategy. The same could be said for fog.

“The confusion is not in the actual customer. They have a very clear definition of what is the edge for them,” Malladi said. “What we can tell you for sure is that we are providing our solutions what we believe and consider to be an edge/fog solution for the industrial IoT, and I think it resonates very well. We don’t even have to have a second call to explain to them what it is and why they need it. Most of the time we spend on trying to figure out what are their use cases and how to solve their use cases, but not on what is fog and what is edge and how to convince them why they need this.”

Getting to the point where fog is widely understood may take time, as ZTE’s Zhou indicated. But the fog train certainly has left the station, and it may be only a matter of time before everybody gets on board. That’s probably the hope of the OpenFog Consortium, which signed an agreement last year to work with ETSI’s MEC group. “We are branching out,” and the consortium is working with IEEE to advance the fog concept where appropriate, Zhou said. “The wheel is in motion.”