Intel sets up 3-year program with Carnegie Mellon to improve IoT video analytics

Intel is collaborating with Carnegie Mellon University (CMU) on a $4.125 million research program designed to deepen understanding of visual cloud capabilities, given that the Internet of Things (IoT) and 5G are expected to drive far more video experiences in the future.

Visual data is probably the single largest form of data in terms of bits, and over half of all mobile data in the world is visual, mostly video along with some images. New types of experiences, such as augmented reality, virtual reality and panoramic video, require new capabilities in system architectures and data processing techniques.

In addition, as 5G rolls out over the next 10 years or so, it will enable more interactive experiences. “5G has those great characteristics of low latency and high bandwidth in mobile environments and that will enable a whole new set of applications as well,” said Jim Blakley, general manager of Intel’s Visual Cloud Division.

Blakley explained that as Intel surveyed the space, “we saw a major gap” in addressing large-scale, cloud-related issues. While researchers are working on cloud systems and related areas, “we didn’t see anybody focused on visual computing” in large-scale data center environments, so Intel approached a number of researchers to develop proposals.

In the end, Intel decided to go with CMU, creating the Intel Science and Technology Center (ISTC) for Visual Cloud Systems. Housed at CMU in Pittsburgh, the center includes funding for 11 researchers, 10 of whom are at CMU and one at Stanford University in California. The funding runs for three years, after which it will be re-evaluated, but Blakley said Intel intentionally kept the term short to get a fast turnaround.

He said the center is still working through the specific use cases for the research, but much of the early thinking has centered on smart-city use cases, which would involve both wireless and wired connectivity.

The research conducted at the ISTC for Visual Cloud Systems will use Intel technologies including Intel Xeon processors, edge devices, and imaging and camera technology. Intel will also contribute its data center and IoT expertise, while Carnegie Mellon will apply its expertise in cloud computing, visual computing, computer vision, storage systems, databases and networking. Stanford will contribute expertise in computational photography and domain-specific languages.

Intel timed its ISTC announcement to coincide with this week's 2017 NAB Show in Las Vegas, where its demos include immersive VR experiences, among them virtual reality previews of AR Rahman’s Le Musk. VR clips from the movie are being shown for the first time on three Cinema VR chairs in the Intel booth.

Intel is also showing how workstations powered by Intel Xeon processors can seamlessly stitch 8K footage captured with a GoPro Omni rig into a single 360-degree video that can be viewed with a VR headset.
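For context on what that stitching step involves, here is a minimal sketch in Python using OpenCV's Stitcher API. This is purely illustrative, not Intel's actual 8K pipeline; the file names and the six-frame input (matching the Omni's six-camera design) are assumptions.

    # Illustrative sketch only: stitch one frame from each camera of a
    # multi-camera rig into a single panorama with OpenCV's Stitcher.
    # A real 360-degree video workflow would repeat this for every frame
    # across synchronized streams and project the result onto a sphere.
    import cv2

    # Hypothetical input files: one frame from each of the rig's six cameras.
    frames = [cv2.imread(f"camera_{i}.jpg") for i in range(6)]

    stitcher = cv2.Stitcher_create(cv2.Stitcher_PANORAMA)
    status, panorama = stitcher.stitch(frames)

    if status == cv2.Stitcher_OK:  # 0 means the frames were stitched successfully
        cv2.imwrite("panorama.jpg", panorama)
    else:
        print(f"Stitching failed with status code {status}")

Doing this at 8K resolution in real time is what demands the workstation-class hardware highlighted in the demo.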