Industry Voices—Paolini: Finding your edge

Implementing and managing edge computing requires more than adding processing and storage resources at edge locations. (Image: Pixabay)
Monica Paolini

A simple way to think about edge computing in wireless is as a way to take functions, content, and processing closer to the subscriber, and to improve quality of experience by reducing latency and improving the use of network resources.

That is sufficient to make a case for edge computing as growth in real-time traffic continues and as enterprises want to deploy services and applications that are local to their sites. Yet, edge computing has progressively acquired a larger scope and promises to have a deeper impact on wireless networks.

Edge computing, alongside network slicing and virtualization, lays the foundation for a new way to plan for and leverage the topology of the core network, in which the location of network functionality becomes central in determining performance, quality of experience and costs.


With edge computing, location becomes supreme—at the edge or at the center. Operators have the freedom to choose where things go, and can extract performance and cost benefits if they choose wisely.

There is no single edge. One of the first learning points of edge computing is that there is no single location that we can call the edge. The RAN is the ultimate edge, but, in most cases, it is not a good candidate either for cost or performance, and a location closer to the core is preferable. The choice requires a careful assessment that depends on the applications and services, network resources and demand profiles. There are no straightforward answers, although operators, as well as content and applications providers, are getting a better understanding of where the edge should be for a given use case. The implication is that there are multiple edges in a network—each catering to different traffic types.

Forcing the edge to a single location may erase most of the benefits of edge computing. If the edge is pushed too close to the RAN, edge computing becomes too expensive and creates unnecessary duplication of resources. If the edge is too close to the center, the performance benefits may be too small to justify the effort. Caching content may require too much storage capacity if done at the RAN, and may not result in a sufficient reduction in latency if done too centrally.
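As a rough illustration of this tradeoff, the placement decision can be sketched as a toy model. All tiers, latencies and costs below are invented for the example, not drawn from any real deployment: given candidate edge tiers ordered from the RAN outward, pick the cheapest tier (in total cost across sites) that still meets the application's latency budget.

```python
# Toy model of edge placement (all numbers illustrative): choose the
# cheapest tier, in total cost across its sites, that still meets the
# application's latency budget. Pushing to every RAN site duplicates
# resources; staying too central misses the latency target.

# Candidate tiers, ordered from most distributed (RAN) to most central.
TIERS = [
    {"name": "RAN site",        "rtt_ms": 2,  "cost_per_site": 100, "sites": 1000},
    {"name": "aggregation hub", "rtt_ms": 8,  "cost_per_site": 300, "sites": 50},
    {"name": "regional core",   "rtt_ms": 25, "cost_per_site": 800, "sites": 5},
]

def place_edge(latency_budget_ms):
    """Return the feasible tier with the lowest total cost, or None."""
    feasible = [t for t in TIERS if t["rtt_ms"] <= latency_budget_ms]
    if not feasible:
        return None
    return min(feasible, key=lambda t: t["cost_per_site"] * t["sites"])

# A tight 10 ms budget rules out the regional core, and the aggregation
# hub is far cheaper in total than duplicating capacity at every RAN site.
print(place_edge(10)["name"])   # aggregation hub
print(place_edge(30)["name"])   # regional core
print(place_edge(1))            # None: no tier is close enough
```

The point of the sketch is that the "best" edge falls out of the interaction between the latency requirement and the cost of duplication, which is why different applications land at different edges.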

A moving target. If only life were that simple! Not only is there no single edge, but the location of each edge is not fixed in time—at least in some cases. When traffic density is high—heavy use by many users in a small area—the edge for bandwidth-hungry applications moves closer to the users. At night, when traffic slows down, the ideal edge may retreat to a more centralized location. Depending on network resources and requirements (e.g., energy consumption), it may or may not make sense to move the edge throughout the day or in real time, and different operators may have different preferences. If the edge location is determined by security or latency considerations, it is less likely to shift.
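A time-varying placement policy of the kind described above can be sketched in a few lines. The thresholds and tier names are illustrative assumptions: latency- or security-pinned traffic keeps a fixed edge, while bandwidth-driven traffic follows demand, pushing out toward the RAN at peak hours and retreating to a central site when the network is quiet.

```python
# Sketch of a time-varying edge-placement policy (thresholds and tier
# names are illustrative assumptions, not operator data).

def edge_for(traffic_class, load):
    """Pick an edge tier for a traffic class at a given load (0.0-1.0)."""
    if traffic_class in ("latency-critical", "security-pinned"):
        return "aggregation hub"          # fixed: requirements pin the edge
    if load > 0.7:                        # busy hour: move toward the users
        return "RAN site"
    if load > 0.3:                        # moderate load: intermediate tier
        return "aggregation hub"
    return "regional core"                # overnight: centralize

print(edge_for("video-caching", 0.9))     # RAN site
print(edge_for("video-caching", 0.1))     # regional core
print(edge_for("latency-critical", 0.1))  # aggregation hub
```

Whether an operator actually shifts the edge this way depends, as noted, on what the moves cost in energy and management overhead relative to what they save.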

Slicing and dicing. Virtualized networks make it possible to move the edge location dynamically with the required time granularity. Network slicing makes it possible to manage traffic flows across shifting edge locations. In combination, edge computing, virtualization and network slicing will transform how operators manage traffic, moving from a static approach to a dynamic one.

Meeting in the middle. The combination of RAN virtualization and edge computing adds further value by establishing the baseband unit (BBU) in a virtual RAN as a good edge candidate: it is closer to the users, and it can serve multiple access points (remote radio units, or RRUs). In this context, edge computing lowers latency more cost-effectively than in a distributed RAN, where the edge has to be co-located at each access point to get the same latency improvement. As the RAN becomes more centralized with virtualization, and the core more distributed with edge computing, RAN and core join in the middle, with a vanishing demarcation line between them.

Managing the edge. Edge computing gives operators the freedom to decide what should be centralized or distributed, and to dynamically allocate edge functionality, but, to benefit from it, they need to orchestrate this process end-to-end to keep a consistent user experience, and seamless application delivery. Kaniz Mahdi at Ciena pointed out that “today’s mobile systems use centralized anchor points. Applications are static and stay in one place as the endpoint moves. Distributing applications (specifically, latency sensitive control) to the edge breaks this paradigm because as the edge moves, the anchors move too. How do you manage moving anchors?”

Implementing and managing edge computing requires more than adding processing and storage resources at edge locations: to benefit from it, operators need to introduce an automated orchestration platform that optimizes the process, assigning network resources based on real-time demand from multiple applications and types of content, and on the interactions among them.
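At its simplest, that orchestration step can be sketched as a controller that reassigns scarce edge capacity across applications as their demand changes, spilling the overflow to a central site. The capacities, demands and greedy policy below are made-up examples, not a description of any real orchestration platform:

```python
# Minimal sketch of automated edge orchestration (capacities, demands
# and the greedy policy are illustrative assumptions): fill limited
# edge capacity with the largest demands first; overflow goes central.

def orchestrate(edge_capacity, demands):
    """Assign each app to 'edge' or 'central' given edge capacity."""
    placement = {}
    remaining = edge_capacity
    # Serve the largest demands at the edge first (simple example policy;
    # a real orchestrator would also weigh latency, priority, interactions).
    for app, demand in sorted(demands.items(), key=lambda kv: -kv[1]):
        if demand <= remaining:
            placement[app] = "edge"
            remaining -= demand
        else:
            placement[app] = "central"
    return placement

demands = {"ar-gaming": 40, "video-cache": 50, "iot-telemetry": 20}
print(orchestrate(100, demands))
```

Here video-cache (50) and ar-gaming (40) fill the edge, leaving only 10 units, so iot-telemetry falls back to the central site; rerunning the same function as demand shifts is what makes the allocation dynamic.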

As the awareness of the value of low latency grows, there is a shift from a centralized cloud to a dynamic, multi-faceted edge, and to networks in which operators have to manage multiple edges in real time. The scope—and value—of edge computing keeps expanding, and so do the learning challenges that the industry faces in defining, managing and optimizing edge computing. But as a reward, edge computing can strengthen the business case for 5G with a more efficient use of network resources and better performance.

Monica Paolini, Ph.D., is the founder and president of Senza Fili. She is an expert in wireless technologies and has helped clients worldwide to understand new technologies and customer requirements, create and assess financial TCO and ROI models, evaluate business plan opportunities, market their services and products, and estimate the market size and revenue opportunity of new and established wireless technologies. She frequently gives presentations at conferences, and writes reports, blog entries and articles on wireless technologies and services, covering end-to-end mobile networks, the operator, enterprise and IoT markets. She has a Ph.D. in cognitive science from the University of California, San Diego (U.S.), an MBA from the University of Oxford (U.K.), and a BA/MA in philosophy from the University of Bologna (Italy). You can reach her at [email protected].

Industry Voices are opinion columns written by outside contributors, often industry experts or analysts, who are invited to the conversation by Fierce staff. They do not represent the opinions of Fierce.
