As frustrated iPhone owners have known for a few years now, there are some things Siri just can't understand. Like this: "Siri, help me to get functionality like yours into my own mobile apps."
Most developers may not make that kind of request out loud, but that doesn't mean the interest isn't there. With the rise of Siri on iOS, Google Now on Android and Cortana on Microsoft Windows Phone 8, the concept of speech-enabled assistants that can ease tasks or even tackle them before being asked is becoming a more common smartphone experience. Experts call this "anticipatory computing," and though it can take many forms, it's voice-driven content discovery that may be the first to make its way into developers' hands.
Serving it up before you ask
A few months ago, Expect Labs released MindMeld, its proprietary anticipatory computing technology, as an API available to developers across all major OS platforms. MindMeld uses natural language processing to understand a user's context and then serve up relevant information, ideally before the user has even asked for it.
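The pattern Expect Labs describes — track a user's recent context, then rank indexed content against it before an explicit query arrives — can be sketched in a few lines. This is a minimal bag-of-words illustration of the general idea, not the MindMeld API itself; the function names and sample data are invented for this example.

```python
from collections import Counter

def relevance(context_words, doc_words):
    """Score a document by how many context words it shares (bag-of-words)."""
    ctx = Counter(context_words)
    return sum(ctx[w] for w in set(doc_words))

def anticipate(recent_utterances, documents, top_n=2):
    """Rank indexed documents against the user's recent conversation context."""
    context = [w.lower() for u in recent_utterances for w in u.split()]
    scored = sorted(documents,
                    key=lambda d: relevance(context, d["text"].lower().split()),
                    reverse=True)
    return [d["title"] for d in scored[:top_n]]

# Hypothetical context: what the user has recently said or typed.
utterances = ["any good sci-fi movies tonight", "something with space travel"]
docs = [
    {"title": "Interstellar", "text": "sci-fi space travel epic movies"},
    {"title": "Cooking Basics", "text": "kitchen recipes dinner"},
    {"title": "Gravity", "text": "space thriller movies"},
]
print(anticipate(utterances, docs))  # → ['Interstellar', 'Gravity']
```

A production system would replace the word-overlap score with a real relevance model, but the shape is the same: context in, ranked suggestions out, with no explicit query required.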
San Francisco-based Expect Labs makes basic access to the API free; premium plans raise the limits on how many documents MindMeld can index and how many requests users can make each month.
While the MindMeld API is already available, CEO Tim Tuttle told FierceDeveloper that its use cases could come to look a lot more like the virtual assistants mobile users are coming to expect.
"I think we're going to start seeing voice-driven intelligent functionality appear in a large percentage of the apps we use every day," he said. "Whether it's Yelp, Netflix, or something else, it's become a feature that users will expect to have."
Some of the best examples may come from larger companies. A cable TV network, for example, might have a large library of videos, but it needs to be easier for consumers to find and watch them on a smartphone. Or think of a shopping website, where anticipatory computing can allow consumers sitting on a couch to source products using their voice rather than tapping away with their thumbs.
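The couch-shopping scenario boils down to matching a transcribed voice query against a product catalog. A toy sketch follows; it assumes the speech-to-text step has already produced the query string, and the catalog and function name are invented for illustration.

```python
def search_catalog(spoken_query, catalog):
    """Match a transcribed voice query against product names and tags."""
    terms = set(spoken_query.lower().split())
    hits = []
    for product in catalog:
        haystack = set(product["name"].lower().split()) | set(product["tags"])
        overlap = len(terms & haystack)
        if overlap:
            hits.append((overlap, product["name"]))
    # Best matches (highest term overlap) first
    return [name for _, name in sorted(hits, reverse=True)]

catalog = [
    {"name": "Red Running Shoes", "tags": ["shoes", "running", "red"]},
    {"name": "Blue Rain Jacket", "tags": ["jacket", "rain", "blue"]},
    {"name": "Trail Running Socks", "tags": ["socks", "running"]},
]
print(search_catalog("show me red running shoes", catalog))
# → ['Red Running Shoes', 'Trail Running Socks']
```

The point of Tuttle's argument is the interaction cost, not the ranking algorithm: speaking "show me red running shoes" is faster than typing it, and even a slight reduction in friction can shift purchases toward that channel.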
"If you can make it just slightly easier, they will find items, they will buy them, and that will have a huge impact on what gets allocated to that channel," he said.
There are already plenty of apps that are aiming to bring a similar level of personalization and convenience to smartphone users. For Cezary Pietrzak, who predicted anticipatory computing as a key trend in 2014 on his blog, early examples include Aviate, Sunrise, Cue and even major players like Foursquare.
"Anticipatory computing is quite a hard thing to do," said Pietrzak, a startup founder and mobile marketing consultant based in New York who previously worked at Appboy. "It takes a lot of processing power on the back end. It also demands a lot of usage, which on mobile is hard to drive. Then they need to make something of the data that they're collecting."
App developers are becoming more aware of anticipatory computing, Pietrzak said, and one segment is eager to take the MindMeld API and build features as quickly as possible. But another camp may know the concept only through experiences with something like Siri, and as such has "an inherent skepticism," he said.
The "new default"
Developers may need to transcend that attitude, and quickly. In a report on the anticipatory computing space, analyst Thomas van Manen of the Netherlands-based research firm VINT suggested that more and more apps will require traits like context awareness, personalization and adaptiveness.
"This will be the new default," he wrote. "The future of technology is not only about location-based apps; it is about user context and the way technology anticipates the needs of a user."
Tuttle agreed. "Once voice is an input that is supported throughout your application, it will give it significant differentiation," he said, suggesting smart notifications may be another way anticipatory computing makes its way into apps. "The ah-ha moment they need to see is when they can get the information they want much faster than they would have if they just opened the app."
Pietrzak added that as mobile shifts beyond smartphones and tablets to wearable devices, anticipatory computing may become even more advantageous to developers.
"When you think about the phone or personal computing shrinking to the point where you're wearing it on your body so you can barely see it, figuring out the value of each of those tools is on the shoulders of developers and marketers," he said. "The mission is to derive value from data and proactively bring whatever that value is to people without requiring them to do extra work, because screens are small and thumbs are thick."