Why Intel Is Investing In Neuromorphic Computing

Apr 3, 2020

Intel certainly has a lot of irons in the AI fire, including Xeon CPUs, Movidius computer vision chips, Mobileye chips for autonomous driving and deep neural network training and inference technology from the newly acquired Habana Labs. With all of this underway, one might wonder why Intel is pursuing yet another approach: neuromorphic computing. Neuromorphic computing, for the unfamiliar, mimics the neuron spiking behavior of biological nervous systems. To help us all understand what the company is doing in this fascinating space and why, Intel Labs held a press and analyst event this week. At the event, the company highlighted a new server for researchers and some work the company published with Cornell University, which uses this technology to simulate the human sense of smell.
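To make "neuron spiking" concrete, here is a minimal leaky integrate-and-fire neuron in Python. This is a textbook toy model for illustration only, not Intel's Loihi implementation, and every parameter below (time constant, threshold, input level) is an arbitrary choice:

```python
import numpy as np

def lif_neuron(input_current, dt=1.0, tau=20.0, v_thresh=1.0, v_reset=0.0):
    """Leaky integrate-and-fire: the membrane potential leaks toward rest,
    integrates input current, and emits a spike when it crosses threshold."""
    v = v_reset
    spikes = []
    for i_t in input_current:
        v += (-(v - v_reset) + i_t) * (dt / tau)  # leak plus integration
        if v >= v_thresh:
            spikes.append(1)  # fire, then reset the membrane
            v = v_reset
        else:
            spikes.append(0)
    return np.array(spikes)

# A constant drive yields a regular spike train; information lives in the
# timing of discrete events, not in dense floating-point activations.
train = lif_neuron(np.full(200, 1.5))
print("spikes emitted:", train.sum())
```

Because such a neuron only communicates when it crosses threshold, activity (and therefore energy) scales with events rather than with clock cycles, which is the intuition behind the efficiency claims that follow.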

What did Intel announce?

In 2017, Intel’s research team announced a neuromorphic test chip called Loihi. At the press event this week, the company announced a new server called Pohoiki Springs, now available to its community of collaborating research institutions. Pohoiki Springs is a 5U server that provides roughly the neural capacity of a small mammal’s brain, packing 768 Loihi chips and 100 million neurons into a power envelope of under 500 watts. Note that such systems do not replace other technologies in the computing landscape; rather, they provide unique capabilities for problems that are currently beyond our grasp.
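Those headline numbers hang together on a quick back-of-the-envelope check. The per-chip figure below comes from Intel's published Loihi specifications (128 cores of 1,024 neurons each), not from this announcement:

```python
chips = 768
neurons_per_chip = 128 * 1024   # Loihi spec: 128 cores, 1,024 neurons per core
power_watts = 500               # stated power budget for the full system

total_neurons = chips * neurons_per_chip
print(f"total neurons:    {total_neurons:,}")                  # 100,663,296
print(f"neurons per watt: {total_neurons // power_watts:,}")   # 201,326
```

That works out to just over 100 million neurons, matching Intel's round number, at roughly 200,000 neurons per watt.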

Intel Labs and Cornell University showcased the new system with an interesting application that uses these neuromorphic chips to “smell.” Specifically, the researchers said they had developed a system capable of detecting 10 distinct hazardous substances by odor, and of doing so quickly and accurately even in the presence of noisy data. Doing this work today on a traditional supercomputer would be slow and expensive. In fact, Intel claims that Loihi processes information up to 1,000 times faster and 10,000 times more efficiently than traditional processors. The researchers purportedly used the Loihi chips to achieve higher recognition accuracy than other state-of-the-art methods, including a DNN solution that requires 3,000 times more training samples to reach the same level of accuracy.
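The Intel and Cornell work reportedly models biological olfactory circuitry, but the basic idea of matching odors by spike-timing patterns can be sketched with a toy latency-coding classifier. Everything below is a simplified stand-in: the 72-channel sensor array, the odor names and the nearest-template matching rule are all illustrative assumptions, not the published algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)

def to_latencies(reading, t_max=100.0):
    """Latency coding: stronger sensor responses spike earlier."""
    r = reading / reading.max()
    return t_max * (1.0 - r)

# Hypothetical one-shot training set: one clean reading per odor from an
# imagined 72-channel chemosensor array (names and sizes are illustrative).
odors = {name: rng.random(72) + 0.1 for name in ("ammonia", "acetone", "toluene")}
templates = {name: to_latencies(x) for name, x in odors.items()}

def classify(reading):
    """Assign the odor whose stored spike-latency pattern is nearest."""
    lat = to_latencies(reading)
    return min(templates, key=lambda n: np.linalg.norm(lat - templates[n]))

# A noisy re-presentation of acetone should still match its template.
noisy = np.clip(odors["acetone"] + rng.normal(0.0, 0.2, 72), 0.01, None)
print(classify(noisy))  # expected: acetone
```

The interesting part of the real demonstration is not the matching rule but where it runs: on Loihi, the equivalent computation happens in spike timing in silicon, which is where the speed and sample-efficiency claims come from.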

Why does this matter?

Essentially, this signifies that Intel has fully embraced the ideas behind domain-specific architectures, where silicon is designed to solve a particular class of problems. This is a significant shift from the old days, when the company seemed to regard x86 as the only architecture of merit. These days, each of Intel’s chips solves a specific set of problems, while supporting a common stack of open programming tools to ease development, experimentation and adoption.

The neuromorphic approach is still deep in the research phase and is being investigated by Intel, IBM, HPE, MIT, Purdue, Stanford and others. It will likely be deployed in production solutions within the next three to five years. Like quantum computing, it holds the potential for a future solution that could be 1,000 to 10,000 times more efficient than the digital processing approach currently in vogue. But also like quantum, neuromorphic computing will require a great deal of research to reach fruition, and when it does, it will likely be applied only to a specific set of challenges. I will continue to watch with interest.