Industry Voices—Blaber: Separating hype from reality in smartphone AI

The recent announcement of Qualcomm’s Snapdragon 710 chipset once again puts the spotlight on artificial intelligence in smartphones (and, more broadly, on AI at the network edge). There’s been a tremendous amount of noise on this subject in the last 12 months, with Apple announcing the Neural Engine in the A11 Bionic SoC in the iPhone X, HiSilicon unveiling a “neural processing unit” in its Kirin 970 and MediaTek following with its NeuroPilot AI technology and AI processing unit (APU) in the Helio P60. More recently, Arm also announced Project Trillium.

This raises the question of what these solutions consist of and what they are trying to achieve. There is evidently a need for custom silicon for AI, but workloads vary, and delivering silicon is only half the challenge. Ensuring developers can fully leverage the hardware is arguably the harder part.

With AI firmly at the peak of the hype curve, the industry must collectively ensure that these technologies deliver tangible benefits rather than empty claims of intelligence. This should be easy given that AI is not a new phenomenon. What is new is the way solutions are being marketed expressly under the banner of AI.

The advent of dedicated accelerators for AI workloads is a mixed blessing. Even defining these is difficult given their architectural similarities to digital signal processors (DSPs). AI is becoming pervasive in smartphones, spanning everything from power management to predictive user interfaces, natural language processing, object detection and facial recognition; the list is endless. For these tasks to run efficiently, it is not realistic for them to rely exclusively on the CPU or even the GPU. Equally, developers need the tools to make full use of the resources available.

It’s highly reminiscent of the early days of the smartphone CPU core wars. Adding more cores had little impact beyond marketing hype until developers began writing multi-threaded apps that actually used those cores.

The approach taken by Qualcomm is noteworthy as it contrasts with that of Apple, HiSilicon and MediaTek, all of which are positioning a single, dedicated accelerator for AI. Instead, Qualcomm is emphasizing a heterogeneous approach that encompasses its Hexagon DSP, Adreno GPU and Kryo CPU. The Qualcomm AI Engine uses these cores as the foundation for software frameworks and tools that accelerate AI app development on the platform.

This includes Qualcomm’s own Neural Processing SDK in addition to support for the Android Neural Networks API and the Hexagon Neural Network development kit. Similarly, models trained in Caffe, Caffe2, TensorFlow, TensorFlow Lite and ONNX are all supported. This means developers can deliver optimized experiences with a choice of three core types on Snapdragon without significant additional heavy lifting.
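To make that concrete, the sketch below shows one such path on Android: running a TensorFlow Lite model through the Android Neural Networks API so that the platform, rather than the app, decides whether the work lands on a DSP, GPU or CPU. This is an illustrative sketch only; the model file name, input and output shapes are hypothetical, and Qualcomm’s own Neural Processing SDK offers comparable runtime selection through its own APIs, which are not shown here.

```java
// Illustrative sketch: load a (hypothetical) quantized classification model and
// attach TensorFlow Lite's NNAPI delegate. The Android Neural Networks API then
// routes the work to whatever accelerator the SoC vendor's driver exposes
// (DSP, GPU or CPU), falling back to the CPU if no driver is available.
import android.content.Context;
import android.content.res.AssetFileDescriptor;

import org.tensorflow.lite.Interpreter;
import org.tensorflow.lite.nnapi.NnApiDelegate;

import java.io.FileInputStream;
import java.io.IOException;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;

public class NnapiClassifier {
    private final NnApiDelegate nnApiDelegate = new NnApiDelegate();
    private final Interpreter interpreter;

    public NnapiClassifier(Context context) throws IOException {
        Interpreter.Options options = new Interpreter.Options();
        options.addDelegate(nnApiDelegate); // prefer NNAPI-exposed hardware
        options.setNumThreads(4);           // thread count for any CPU fallback
        interpreter = new Interpreter(loadModel(context, "model.tflite"), options);
    }

    public float[][] classify(float[][][][] inputTensor) {
        // Output shape assumed to be [1, 1001] for an ImageNet-style model.
        float[][] output = new float[1][1001];
        interpreter.run(inputTensor, output);
        return output;
    }

    public void close() {
        interpreter.close();
        nnApiDelegate.close();
    }

    // Memory-maps a model bundled (uncompressed) in the app's assets folder.
    private static MappedByteBuffer loadModel(Context context, String assetName)
            throws IOException {
        AssetFileDescriptor fd = context.getAssets().openFd(assetName);
        try (FileInputStream input = new FileInputStream(fd.getFileDescriptor())) {
            FileChannel channel = input.getChannel();
            return channel.map(FileChannel.MapMode.READ_ONLY,
                    fd.getStartOffset(), fd.getDeclaredLength());
        }
    }
}
```

The delegate pattern makes the column’s point in code: the developer writes to a common API, and the platform maps the work onto whichever core the silicon vendor has exposed for it.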

Whilst Qualcomm hasn’t yet created a custom AI accelerator, its heterogeneous approach is designed to offer a range of options that can address a wide variety of tasks and workload requirements. This is a logical approach given that no two workloads are the same; some will run more effectively on a GPU, others on a DSP. Nor is the strategy a rigid one. As workloads change and demands evolve, it’s logical to assume that Qualcomm and others will adapt their silicon strategy with more custom accelerators.

Consequently, neither approach is right or wrong, particularly as these are still the very early days of AI. It’s logical to assume that the network edge will become characterized by multiple devices and end-points, all of which have varying and increasingly custom requirements when it comes to AI. As workloads proliferate, the resources available will adapt in parallel. No single accelerator will fulfill all functions.

What is key today is that AI silicon has purpose and can be easily leveraged by the developer. Choice, flexibility and performance matter far more than whether a given core is considered “dedicated to AI” or has an attention-grabbing brand name conjuring up visions of other-worldly intelligence. Success will be determined by flexibility, performance, developer commitment and the ability to adapt to the rapidly changing demands of AI.

Geoff Blaber is vice president of research for the Americas at CCS Insight. Based in California, Blaber heads CCS Insight’s Americas business and supports the range of clients located in this territory. His research spans a broad spectrum of mobility and technology, and he leads the firm’s coverage of semiconductors. He is a well-known member of the analyst community and provides regular commentary to leading news organizations such as Reuters, the Financial Times and The Economist. You can follow him on Twitter @geoffblaber.

Industry Voices are opinion columns written by outside contributors—often industry experts or analysts—who are invited to the conversation by Fierce staff. They do not represent the opinions of Fierce.