Extended reality (XR) is already providing revolutionary experiences today. However, taking XR to the next level of immersion within the power and thermal constraints of a sleek mobile device is a critical challenge (Figure 1).
The Wireless Edge transformation, which introduces a new era in distributed computing powered by 5G, on-device processing, and edge cloud processing, can offer an optimized solution.
The best of both worlds
What if we could combine all the goodness of mobile XR with the power of PC-tethered XR? Mobile XR is clearly the future of XR: it offers reliable anywhere, anytime usage and ease of use with no setup or wires. PC-tethered XR, although not the future of XR, has one valuable asset: it is not limited by power and thermal constraints, so it can do more extensive processing. Thanks to low-latency, high-bandwidth 5G networks, our idea is to take the best from both. Distributing the processing over 5G can offer the best of both worlds: boundless mobile XR experiences with photorealistic visuals in a sleek, affordable headset (Figure 2). The experiences are boundless both in where you can have them and in how immersive they can be.
Augmenting the on-device XR processing
The XR visual processing pipeline is both compute intensive and latency sensitive. Splitting the processing correctly requires a system approach — an approach which we have a history of successfully applying. Let me walk you through how the edge cloud could augment on-device processing for boundless photorealistic XR (tune into my webinar for more details).
When an XR user moves their head, the on-device processing determines the head pose and sends it to the edge cloud over a low-latency, high-quality-of-service 5G link (Figure 3). The edge cloud uses the head pose to partially render the next frame, encodes the data, and sends it back to the XR headset. The headset decodes the latest available data and, based on the latest head pose (which is generated at a high frequency), performs any further rendering and adjustment to minimize motion-to-photon latency. As a reminder, motion-to-photon processing happens completely on the device to meet the sub-20 ms latency requirement, the general threshold for avoiding user discomfort.
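The frame cycle above can be sketched in code. This is a minimal, hypothetical illustration of the split: all names (`Pose`, `edge_render`, `reproject`, `frame_cycle`) are invented for this sketch and do not correspond to any actual Qualcomm or headset API; the edge stage stands in for rendering and encoding over the 5G link, and the reprojection stage stands in for the on-device correction toward the newest head pose.

```python
from dataclasses import dataclass

# Hypothetical sketch of the split-rendering loop described above.
# Names and structures are illustrative only, not a real API.

@dataclass
class Pose:
    yaw: float
    pitch: float

def edge_render(pose: Pose) -> dict:
    """Edge cloud: partially render the next frame for the given head
    pose and return it in encoded form (stubbed out here)."""
    return {"rendered_for": pose, "payload": "encoded-frame"}

def reproject(frame: dict, latest_pose: Pose) -> dict:
    """On-device: decode the frame and warp it toward the newest head
    pose (late-stage adjustment) to minimize motion-to-photon latency."""
    return {"displayed_for": latest_pose, "base": frame["rendered_for"]}

def frame_cycle(pose_at_send: Pose, pose_at_display: Pose) -> dict:
    frame = edge_render(pose_at_send)        # travels over the 5G link in practice
    return reproject(frame, pose_at_display)  # this step stays on the device

# Usage: the head moved slightly while the edge cloud was rendering,
# so the displayed frame is corrected to the newest pose.
out = frame_cycle(Pose(10.0, 0.0), Pose(11.5, 0.2))
print(out["displayed_for"])
```

The key design point the sketch captures is that the edge renders against a slightly stale pose, and the device's final adjustment uses the freshest pose available at display time.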
As you can see, making the XR experience truly immersive will require a system solution that has low latency and high reliability — 5G is crucial for this with its low latency, high capacity, and high quality of service. As 5G deployment matures and coverage becomes more ubiquitous, consumers can experience photorealistic XR in more places and can rest assured that premium standalone XR experiences will always be available through on-device processing.
And that’s a key point that I can’t stress enough: on-device processing is essential in all scenarios. In standalone mode, the on-device processing does all the XR processing. When augmented by the edge cloud, on-device processing still must provide power-efficient, high-performance, latency-sensitive rendering and tracking.
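A little arithmetic makes this point concrete. The per-stage latencies below are purely illustrative assumptions, not measurements; they show why even a fast edge round trip can exceed the 20 ms comfort threshold, so the final motion-to-photon step must remain on the device.

```python
# Illustrative latency arithmetic (hypothetical numbers, not measurements)
# showing why the motion-to-photon path must stay entirely on-device.

MOTION_TO_PHOTON_BUDGET_MS = 20.0  # comfort threshold cited in the text

# Assumed per-stage costs for one edge-cloud round trip:
edge_round_trip_ms = {
    "uplink_pose": 2.0,      # send head pose over the 5G link
    "edge_render": 11.0,     # partial render on the edge cloud
    "encode": 3.0,           # compress the rendered frame
    "downlink_frame": 4.0,   # send the encoded frame back
    "decode": 3.0,           # decode on the headset
}

total_edge_ms = sum(edge_round_trip_ms.values())
print(f"edge round trip: {total_edge_ms:.1f} ms")

# Under these assumptions the round trip alone exceeds the budget, so
# the headset re-samples the head pose and adjusts the decoded frame
# locally rather than waiting on another network round trip.
needs_on_device_adjustment = total_edge_ms > MOTION_TO_PHOTON_BUDGET_MS
print("needs on-device adjustment:", needs_on_device_adjustment)
```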
Making boundless XR a reality
Here at Qualcomm, we’re developing the breakthrough technologies that transform how the world connects, computes, and communicates… and boundless XR is no exception. Qualcomm Technologies is already delivering superior mobile XR solutions today for standalone experiences, and we are also leading the world to 5G. However, we can’t make our vision for boundless XR a reality by ourselves. We’re working with the key players across the XR and 5G ecosystem, from OEMs and content developers to operators and infrastructure vendors, since the split-rendering architecture is a system solution (Figure 4).
There is synergy from the XR ecosystem working together, and a big incentive: everyone benefits from increased consumer adoption. Operators, for example, will benefit from the Wireless Edge transformation in general, but let’s focus on the boundless XR opportunity. First, the enhanced mobile broadband that comes with 5G will provide increased capacity, lower latency, and a uniform experience for richer, more interactive XR content. Second, as operators add more processing capabilities at the edge cloud, they can offer a platform for additional services, such as photorealistic mobile XR for the masses.
In the end, we envision that the big benefit will be revolutionary experiences for consumers, such as real-time interactive collaboration, multi-player gaming with photorealistic graphics, next-gen 6-DoF video, immersive education, and personalized shopping like never before. I’m excited for what’s possible and look forward to working with the rest of the ecosystem to make our vision a reality.
By Eduardo Esteves
Vice President, Product Management