Apple can ease digital overload

Apple has excelled at identifying an unmet need and serving it; the iPhone and iPad are obvious examples. To understand what Apple may do next, one should start by identifying a major problem with our digital devices and services that the company is positioned to address.
What is that problem? The problem is that we have too many devices, all with too many features and services. They all seem to demand attention, and they all have features we’ve not even discovered. Apple is in a strong position to address this problem of “digital overload.” 
What is digital overload? Our digital systems may include a PC, smartphone, tablet computer, TV/digital-recorder, and automobile infotainment system. And, with the advent of Google Glass, a wearable device is probably next.
Features multiply with each new model and upgrade. It is a time-consuming and frustrating challenge to deal with all that complexity. The unmet need that Apple can address is to simplify the digital experience by making it more intuitive and consistent across devices. Make all our devices feel like one digital service!
A historical perspective is useful in understanding this opportunity. The Graphical User Interface (GUI), with its pointing device, windows, menus, and icons, brought computing to the masses. The GUI's familiarity helped us understand quickly how to use a smartphone or tablet computer. But the GUI is becoming over-burdened. For example, a menu is less efficient if it displays ten items rather than five, particularly if many of those items lead only to another long set of options from which to choose.
We also find an excess of features and options in each device. And we are bombarded with messages of various sorts. Proactive features such as Google Now have their points, but also can add to the constant digital demands for our attention.
So how could Apple address this need? It starts with improvements in Siri. The core breakthrough that will propel the next cycle of growth is natural language understanding and speech recognition technology. Apple’s Siri showed the potential of being able to ask for something the way you might ask an assistant.
The objective, as discussed in my book The Software Society, is the "Personal Assistant Model," an extension of what Siri does now. The Personal Assistant Model is the equivalent of the GUI in terms of its potential to drive another cycle of technology growth. The key is that speech recognition and natural language interpretation technology have reached the point where they can support this model at an acceptable level, and the technology will continue to get better.
How must Siri be improved to meet this goal? First, the core technology supporting Siri must be continually improved, both the speech recognition and the understanding of a request. Both are driven by analysis of data, and Siri has been collecting data on what we say to it in huge quantities. As a research manager deeply involved in speech technology told me recently: "it's all about the data."
Second, Siri must allow text input as well as speech, allowing us to type what we would say. That makes Siri usable when speaking isn’t a good option.
Third, Siri should work across all our devices. This is the key feature: a unified interface that is the same no matter which digital system we are using. Siri can be ubiquitous, remembering what you did on one device when you are using another. For example, you should be able to tell Siri from your tablet computer to text a message from your phone, using the contact list on your phone. The Software Society and my blog entries ("Ubiquitous personal assistants" and "The ubiquitous personal assistant: The battle has begun") discuss this concept in more detail.
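To make the cross-device idea concrete, here is a minimal sketch of an assistant backed by a shared cloud store, so state captured on one device (a contact list) can be used from another. All class and method names are invented for illustration; none of them are Apple APIs.

```python
# Hypothetical sketch: a cloud-backed context store that lets an assistant
# on one device use state (e.g., a contact list) captured on another.
# All names are illustrative, not Apple APIs.

class CloudContext:
    """Stands in for a synced store such as iCloud."""
    def __init__(self):
        self._data = {}

    def publish(self, device, key, value):
        # Record a piece of state along with the device that supplied it.
        self._data[key] = {"value": value, "source": device}

    def fetch(self, key):
        entry = self._data.get(key)
        return entry["value"] if entry else None

class Assistant:
    def __init__(self, device, cloud):
        self.device = device
        self.cloud = cloud

    def remember_contacts(self, contacts):
        # The phone publishes its contact list to the shared context.
        self.cloud.publish(self.device, "contacts", contacts)

    def send_text(self, name, message):
        # Any device can resolve a contact from the shared context.
        contacts = self.cloud.fetch("contacts") or {}
        number = contacts.get(name)
        if number is None:
            return f"No contact named {name}"
        return f"Texting {number} from {self.device}: {message}"

cloud = CloudContext()
phone = Assistant("iPhone", cloud)
tablet = Assistant("iPad", cloud)

phone.remember_contacts({"Alice": "+1-555-0100"})
print(tablet.send_text("Alice", "Running late"))
```

The point of the sketch is the architecture, not the code: the assistant's memory lives above any single device, so the interface stays the same wherever you are.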
Siri could also add features that reduce the aspect of digital overload that comes from too much communication and too much news. Ideally, we could simply say, "Show me emails from people I have replied to in the past" (or from a priority list we compile). Or "Show me relevant news," where relevance is determined from which articles we have perused for significant amounts of time in the past.
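The "people I have replied to" filter is simple to express in code. Here is an illustrative sketch; the data shapes are invented for the example.

```python
# Illustrative sketch of the "emails from people I have replied to" filter.
# The message format (dicts with "from"/"to" keys) is invented for the example.

def priority_inbox(inbox, sent):
    """Keep messages whose sender we have previously replied to."""
    replied_to = {msg["to"] for msg in sent}
    return [msg for msg in inbox if msg["from"] in replied_to]

sent = [{"to": "alice@example.com"}, {"to": "bob@example.com"}]
inbox = [
    {"from": "alice@example.com", "subject": "Lunch?"},
    {"from": "spam@example.com", "subject": "You won!"},
]
print(priority_inbox(inbox, sent))
# Only Alice's message survives the filter.
```

The news-relevance rule would follow the same pattern: score each article against reading-time history, then keep only the high scorers.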
Such features require work to develop, but no deep technology breakthroughs. This is easier for a company like Apple, which develops the products that carry its software. Apple also has iCloud to help provide cross-device consistency. And Apple supplies many of the core applications on its devices, e.g., the calendar app on the iPhone, so that Siri can engage device-specific apps.
How will the Personal Assistant Model drive a unified strategy that could be part of a single major announcement? What will Apple do in TV, for example? The remote control for a TV could be an iPhone with a special app using a WiFi and/or Internet connection to the TV. Given the technical challenge of recognizing speech addressed to a TV or peripheral device across a noisy room, it will be preferable to simply raise the iPhone near one's mouth and ask Siri to "change to ABC News." We could even tell Siri to have the TV record a program while away from the TV.
What about wearable computing? Apple CEO Tim Cook seemed to promote the idea of a wearable device at the D11 conference recently, pointing to a Nike wristband he was wearing. But he also said, commenting on Google Glass, “There’s nothing that’s going to convince a kid who has never worn glasses or a band or a watch to wear one, or at least I haven’t seen it.” Whatever Apple’s concept for a wearable device (an iClip?), power and size considerations are likely to make it a Bluetooth-connected accessory for an iPhone, rather than a standalone device. And the small size would make voice interaction a requirement.
In the automobile, we will simply want the vehicle system to allow us connection to the smartphone we have already mastered, with a supplementary personal assistant for vehicle-specific services. Our personal assistant will have a voice-only option, as Siri currently does.
And Apple can engage outside developers through its App Store for specialized features. Optional third-party specialized personal assistants could register with Siri so that she can engage them when asked.
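The registration idea can be sketched as a simple dispatch mechanism: each downloaded specialist declares the topics it handles, and the primary assistant routes requests accordingly. This is a hypothetical design, not a description of any actual Siri interface; the "WineAdvisor" specialist and its keywords are invented.

```python
# Hypothetical sketch of third-party assistants registering with a primary
# assistant, which routes requests by declared topic keywords.

class PrimaryAssistant:
    def __init__(self):
        self._specialists = []

    def register(self, name, keywords, handler):
        # A downloaded app declares the topics it can handle.
        self._specialists.append((name, set(keywords), handler))

    def handle(self, request):
        # Route to the first specialist whose keywords match the request;
        # otherwise the primary assistant answers itself.
        words = set(request.lower().split())
        for name, keywords, handler in self._specialists:
            if words & keywords:
                return handler(request)
        return "I'll handle that myself."

siri = PrimaryAssistant()
siri.register("WineAdvisor", {"wine", "pairing"},
              lambda req: "WineAdvisor: try a Pinot Noir.")

print(siri.handle("Suggest a wine for salmon"))
```

A real system would need richer intent matching than keyword overlap, but the division of labor is the point: one front-end assistant, many pluggable specialists.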
In summary, the next big thing won't be a single device or incremental features. It will be the unification and expansion of Siri into a major feature of all future Apple devices and services. Apple will address the core problem of too many devices and too many features. Featuring Siri as the primary interface makes the introduction of new devices more seamless.
I don't expect Apple to announce all this at the upcoming Apple Worldwide Developers Conference. I hope they will announce that Siri will accept partners in specialized personal assistants available in the App Store, applications that, once downloaded, Siri can bring up by request.
Perhaps I’m just imagining what I’d like Apple to do. If Apple doesn’t unify our digital experiences, I hope someone else will. Google tries to be a device-agnostic option and has recently expanded search (including natural-language voice search).
Microsoft has the core technology capabilities to move in this direction, already offering some voice search options in Bing. Nuance Communications' CEO Paul Ricci recently expressed his strong belief in the ubiquitous personal assistant at a conference; Nuance provides an option for device manufacturers, and is already teaming with Samsung on the S-Voice personal assistant on Galaxy smartphones.
Test this proposition on yourself. Wouldn’t you like a unified cross-device experience that let you simply say what you want?
William Meisel is an industry analyst and executive director of the Applied Voice Input Output Society (AVIOS). This article originally appeared on The Software Society blog.