Intel Labs believes that brain-like neuromorphic computing could hold the key to AI efficiency and capability.
Intel has announced the availability of its second-generation “Loihi” chip to further research into neuromorphic computing techniques that more closely emulate the behavior of biological cognitive processes. While Intel believes large-scale deployment of neuromorphic AI is still years away, the company has been investing in the hardware, software, and developer community for more than four years, since it announced the first Loihi platform.
What is Neuromorphic Computing and why do we need it?
Artificial intelligence platforms today use digital representations and math to create a rudimentary simulation of how a brain works. But the real world is not digital, and our brains certainly do not perform matrix multiplications to process input and thought. Rather, the brain itself is the circuit; no simulation is required. Neurons in the brain communicate by sending spiking signals to each other across synapses. Our senses create spiking input, which the brain translates into spiking output to our muscles.
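To make the spiking idea concrete, here is a minimal leaky integrate-and-fire (LIF) neuron, the textbook model of the behavior described above. This is an illustrative sketch, not Intel's actual Loihi neuron model, and the parameter values are arbitrary choices for the example.

```python
# A minimal leaky integrate-and-fire (LIF) neuron: an illustrative sketch of
# how spiking neurons communicate. NOT Intel's Loihi neuron model; the
# threshold and leak values below are arbitrary choices for the example.

def lif_neuron(input_current, threshold=1.0, leak=0.9):
    """Integrate input over time, leak charge each step, and emit a
    spike (1) when the membrane potential crosses the threshold,
    then reset the potential to zero."""
    potential = 0.0
    spikes = []
    for i in input_current:
        potential = potential * leak + i   # leaky integration
        if potential >= threshold:
            spikes.append(1)               # fire a spike
            potential = 0.0                # reset after firing
        else:
            spikes.append(0)               # stay silent this step
    return spikes

# A constant sub-threshold drive accumulates charge and fires periodically.
print(lif_neuron([0.4] * 10))
```

The key contrast with conventional deep learning: nothing here is a matrix multiply. A neuron is silent most of the time and only communicates when an event (a spike) occurs, which is where the potential energy savings come from.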
The abstraction gap between the neural simulation at the top and the digital hardware performing the computation creates a massive inefficiency. A human brain packs roughly 100 trillion synapses into a scant three pounds and consumes only about 20 watts of power. Meanwhile, a 100-billion-parameter AI model like GPT-3 (roughly 1,000 times smaller than a human brain by that measure) requires months to train on a thousand-GPU cluster. OpenAI reports that training GPT-3 consumed several thousand petaflop/s-days of computing power. A petaflop/s-day consists of performing 10^15 (one thousand trillion, or a quadrillion) neural-network computations per second for an entire day. Do that for several thousand days, or spread the work across thousands of accelerators, and you can build a network that generates reasonable English text. OpenAI estimates it cost over $4M to train the network once. Now you see why we need a different approach.
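A quick back-of-the-envelope calculation puts those numbers in perspective. The 3,000 petaflop/s-day total below is simply a stand-in for "several thousand"; OpenAI's exact figure differs.

```python
# Back-of-the-envelope arithmetic for the petaflop/s-day figures above.
# 3,000 pfs-days is a placeholder for "several thousand", not OpenAI's
# exact published number.

PFS = 1e15                       # one petaflop/s = 10**15 operations/second
SECONDS_PER_DAY = 86_400

pfs_day = PFS * SECONDS_PER_DAY  # operations in one petaflop/s-day
total_ops = 3_000 * pfs_day      # "several thousand" petaflop/s-days

print(f"One petaflop/s-day     = {pfs_day:.2e} operations")
print(f"Several thousand of it = {total_ops:.2e} operations")
```

That works out to roughly 10^23 total operations, against a brain that runs continuously on the power budget of a dim light bulb.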
Intel Loihi and Loihi 2
In the four years since Intel unveiled Loihi, the company has nurtured a community of some 150 companies and research institutions; IBM has done something similar. That community has developed use cases in gesture recognition, odor recognition, robotic-arm control, optimization problems such as train scheduling, and scene understanding. Early measurements show some workloads running up to thousands of times more efficiently, at dramatically lower power levels.
One limitation of Loihi 1 was that its spikes were simple binary events: not programmable, with no context and no range of values. Loihi 2 addresses both limitations with fully programmable neuron models and graded spikes that carry a payload, while also offering 2-10x faster circuits, eight times more neurons, and 4x more link bandwidth to enable higher scalability.
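The difference between binary and graded spikes can be sketched conceptually. This is not Intel's hardware interface; here a "spike" is just the message a neuron sends when it fires, and the payload encoding (the clipped overshoot above threshold) is an arbitrary choice for illustration.

```python
# Conceptual contrast between Loihi 1-style binary spikes and Loihi 2-style
# graded spikes. An illustrative sketch, NOT Intel's hardware interface.

def binary_spike(potential, threshold):
    """Loihi 1 style: the spike is a bare event carrying no value."""
    return 1 if potential >= threshold else 0

def graded_spike(potential, threshold, payload_bits=8):
    """Loihi 2 style: a firing neuron can attach an integer payload.
    Here (arbitrarily) the payload is the overshoot above threshold,
    clipped to the payload range."""
    if potential < threshold:
        return 0
    overshoot = int(potential - threshold)
    return min(overshoot, 2**payload_bits - 1)  # clip to payload range

print(binary_spike(5.0, 1.0))  # any supra-threshold input looks identical
print(graded_spike(5.0, 1.0))  # the graded spike preserves magnitude
```

The point: with binary spikes, a strong and a barely-supra-threshold input produce indistinguishable messages; graded spikes let each event carry information, which broadens the class of algorithms the chip can run.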
Loihi 2 was built on a pre-production version of the Intel 4 process and benefited from that node's use of EUV lithography. Intel 4's simplified design rules helped bring Loihi 2 to fruition faster than previous process technologies could have, according to a company spokesperson.
The neuromorphic future is promising
Intel shared a vision of possible commercialization. While companies like BrainChip can get products to market for specific use cases, Intel needs to lay the groundwork for billion-dollar markets. That takes time, and it takes really good software. So Intel launched its open-source Lava software development stack, again embracing a community development approach. Lava should attract many more developers, leading to more opportunities and much more learning. As Intel puts it: “As an open, modular, and extensible framework, Lava will allow researchers and application developers to build on each other’s progress and converge on a common set of tools, methods, and libraries.”
Armed with future technology and what one hopes will become a robust development stack, Intel envisions selling chips: first by adding neuromorphic features to Intel CPUs and offering stand-alone specialized designs with other IP. The next step would be intelligent accelerators for edge AI, and eventually scaled-up systems for data-center optimization and acceleration.
On the competitive front, Intel is not alone in believing that the neuromorphic approach merits further research. IBM has the TrueNorth research chip, and startup Rain Neuromorphics is designing an analog learning chip that can a) scale in 3D using 3D RRAM, manufacturable with toolsets similar to those used for 3D flash memory chips, and b) leverage equilibrium propagation, which could be 1,000 times more efficient than today’s best-of-breed digital backpropagation with its expensive global learning rule. Publicly traded BrainChip Holdings Ltd (ASX: BRN, OTCQX: BRCHF) has both an SoC and IP licenses for its Akida neuromorphic technology, which the company says can scale down to milliwatts for edge applications.
Loihi 2 can update its neurons approximately 4 to 22 times faster than IBM’s TrueNorth (every ~45 µs worst case for Loihi 2, versus 200-1,000 µs for TrueNorth). And Loihi 2’s synaptic operations have important differentiators that extend model applicability and precision, such as multiplicative amplitudes, propagation delays, nonlinear synaptic responses that add algorithmic capabilities, and even weight changes (learning). Intel has not yet released power-consumption data.
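The "4 to 22 times" range follows directly from the quoted timings:

```python
# Checking the "4 to 22 times faster" neuron-update claim from the
# timings quoted above: ~45 microseconds worst case for Loihi 2 versus
# 200-1000 microseconds for TrueNorth.

loihi2_us = 45
truenorth_us = (200, 1000)

speedup_low = truenorth_us[0] / loihi2_us   # ~4.4x at TrueNorth's best
speedup_high = truenorth_us[1] / loihi2_us  # ~22.2x at TrueNorth's worst

print(f"Loihi 2 vs TrueNorth: {speedup_low:.1f}x to {speedup_high:.1f}x faster")
```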
A one-million-neuron brain-inspired processor is just the start. Loihi 2 adds new learning features, programmability, and scalability that will enable expanded functionality and applications. A lot of work remains in software, and in future chips with more capacity for larger models, but I am convinced that analog and neuromorphic computing will both have a significant impact on the capability and, more importantly, the affordability of large-scale AI.
The Cambrian AI Explosion continues to get more interesting every day!