Purdue University researchers are developing a neural-networking chip whose underlying artificial intelligence technology could be used by leading manufacturers to add image-recognition capabilities to mobile devices.
According to MIT Technology Review, the Purdue group last month demonstrated at a conference in Nevada how a coprocessor linked to a smartphone processor could help the standard processor run what is called "deep learning" software to detect faces or identify street scene components. Smart image detection capabilities could be applied to face tracking, gesture recognition, augmented reality and location-based services.
Numerous labs worldwide are working on deep learning, which is based upon multiple layers of processing, for widespread applications in varied fields. Many of those labs are listed on deeplearning.net.
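To illustrate what "multiple layers of processing" means in practice, here is a minimal, hypothetical sketch of a layered feedforward network in NumPy. The sizes, weights, and class count are invented for illustration and are not drawn from the Purdue or TeraDeep work:

```python
import numpy as np

def relu(x):
    # Simple nonlinearity applied between layers
    return np.maximum(0.0, x)

rng = np.random.default_rng(0)

# Three weight matrices form three layers: each layer transforms the
# previous layer's output, building progressively higher-level features.
# All dimensions here are arbitrary, chosen only for the example.
layers = [rng.standard_normal((8, 16)),
          rng.standard_normal((16, 16)),
          rng.standard_normal((16, 4))]

x = rng.standard_normal(8)  # stand-in for a small flattened image patch
for W in layers:
    x = relu(x @ W)

print(x.shape)  # final layer output, e.g. scores for four classes
```

Real deep-learning vision systems use many more layers (typically convolutional ones) and learned rather than random weights, but the layer-by-layer transformation shown above is the core idea.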
The Purdue researchers' coprocessor approach is notable because of its application to mobile devices, whereas other deep learning experiments have relied upon extensive computing architectures. For example, Google's (NASDAQ:GOOG) widely touted demo, in which a neural network taught itself to recognize images of cats from YouTube videos, required the use of 16,000 processors.
Demo of nn-X neural network technology. (Source: TeraDeep)
Eugenio Culurciello, a professor at Purdue who is involved with the school's project, intends to commercialize the technology via his company TeraDeep. "The idea is that we sell the IP to implement this so that a large manufacturer like Qualcomm (NASDAQ:QCOM) or Samsung or Apple (NASDAQ:AAPL) could add this functionality to their processor so they could process images," Culurciello told MIT Technology Review.
Indeed, players such as Qualcomm have already indicated their interest in neural networking. Qualcomm announced in October that its Zeroth processor, which the company describes as a Neural Processing Unit (NPU), will be released for use by researchers and startups sometime this year.
TeraDeep's nn-X (which stands for "Neural Network next") is described on the company's website as "a vision system based on programmable-logic with embedded mobile processor." TeraDeep added that the coprocessor, which is composed of multiple accelerators and interfaces to ARM cores, "is intended for enabling state-of-the-art vision algorithms" to run on mobile phones, embedded systems and mobile computers.
The company contends that, implemented in a system-on-a-chip (SoC) running at 500 MHz with 10 computational collections, its technology can deliver 1 trillion operations per second for a power budget of 5 watts or less. TeraDeep said it intends to double memory bandwidth and tailor learning algorithms to enable a system capable of delivering 4 tera-ops per second with a power budget of less than 10 watts.
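A quick back-of-the-envelope check shows what those claimed figures imply. The per-collection throughput below is inferred from the stated numbers, not published by the company:

```python
# Implied throughput from TeraDeep's stated figures
clock_hz = 500e6      # SoC clock: 500 MHz
collections = 10      # "computational collections"
total_ops = 1e12      # claimed 1 trillion operations per second
power_w = 5.0         # claimed power budget in watts

# Operations each collection must complete per clock cycle
ops_per_collection_per_cycle = total_ops / (clock_hz * collections)
print(ops_per_collection_per_cycle)       # -> 200.0

# Energy efficiency implied by the claim, in giga-ops per watt
print(total_ops / power_w / 1e9)          # -> 200.0
```

In other words, the claim amounts to roughly 200 operations per collection per cycle and about 200 giga-ops per watt, which is the kind of efficiency that makes deep learning plausible within a mobile power envelope.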
Working with Culurciello on TeraDeep's technology is Berin Martini, a Purdue research associate. One of TeraDeep's consultants is Yann LeCun, who announced late last year that he had been named director of Facebook's (NASDAQ:FB) artificial intelligence lab. He remains a professor at New York University on a part-time basis and will maintain research and teaching activities at NYU.