In a strategic shift in its datacenter AI plans, Intel has announced that it will acquire Israeli chip startup Habana Labs for $2 billion. Intel had reaffirmed its plans to deliver its Nervana chips as recently as last month, but the acquisition of Habana likely signals that early lighthouse accounts preferred the startup's approach over Intel's second attempt with Nervana. It is hard to imagine a scenario where the Nervana chips play a significant role going forward, but Intel will understandably take a few months to explore its options.
I believe the highly optimized Nervana software stack, however, is likely to be retooled to work with the Habana architecture. Let's look at the probable reasons behind this move, and the implications for Intel and the broader AI market, which Intel expects to grow to over $25 billion in annual revenue by 2024.
What is Habana, and why would Intel pay $2 billion to buy it?
Among the scores of startups readying hardware for AI, Habana Labs stands out as one of the first to deliver working hardware with impressive performance claims for both training and inference processing. Habana Labs launched its Goya chip for inference processing in September 2018, claiming roughly a 3X performance advantage over NVIDIA with lower latency. The company subsequently announced its training chip, Gaudi, in June 2019, claiming record performance and an integrated fabric based on industry standards to enable scaling to process very large AI models.
I suspect the Habana network fabric is one of the key reasons that Intel decided to abandon Nervana in favor of Habana's technology. Nervana's Neural Network Processor (NNP-T) uses a proprietary interconnect for scaling, while Habana's Gaudi can scale to thousands of nodes over standard 100Gb Ethernet. Gaudi even supports Remote Direct Memory Access (RDMA), which enables software to access memory across the fabric without taxing the remote CPU. This fabric can dramatically increase the performance of training very large neural network models, which are doubling in size every 3.5 months to handle ever-more complex AI tasks.
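To put that doubling cadence in perspective, a short sketch of the implied compound growth, using only the 3.5-month figure cited above (the time horizons chosen are illustrative):

```python
# Implied growth in model size if sizes double every 3.5 months,
# the cadence cited in the article. Horizons below are illustrative.

DOUBLING_MONTHS = 3.5

def growth_factor(months: float) -> float:
    """Multiplicative growth in model size over the given number of months."""
    return 2 ** (months / DOUBLING_MONTHS)

print(round(growth_factor(12), 1))  # one year: roughly 10.8x
print(round(growth_factor(24), 1))  # two years: roughly 115.9x
```

At that pace, models grow by more than an order of magnitude per year, which is why a fabric that scales to thousands of nodes matters.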
Note that RDMA over 100Gb Ethernet (called RoCE) is quite expensive today, with a 100GbE switch costing over $5,000 and a Mellanox 100GbE NIC costing over $1,500. By integrating similar capability on each chip, Habana will likely be much faster and more affordable. In a similar vein, although at a much greater scale, NVIDIA is acquiring Mellanox for its networking technology to interconnect NVIDIA GPUs. The AI chip game is becoming increasingly tied to networking, so this move could be a game changer for Intel, at a much lower price point than acquiring Mellanox.
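A back-of-envelope calculation shows how those discrete RoCE costs add up per node. The switch and NIC prices come from the figures above; the 32-port switch size and 128-node cluster are assumptions for illustration only:

```python
# Rough per-node cost of discrete RoCE networking, using the article's
# figures: over $5,000 per 100GbE switch and over $1,500 per 100GbE NIC.
# Port count and cluster size are assumed for illustration.
import math

SWITCH_COST = 5_000       # USD, from the article
NIC_COST = 1_500          # USD, from the article
PORTS_PER_SWITCH = 32     # assumed

def per_node_network_cost(nodes: int) -> float:
    """Amortized networking cost per node (switches shared, one NIC each)."""
    switches = math.ceil(nodes / PORTS_PER_SWITCH)
    return (switches * SWITCH_COST + nodes * NIC_COST) / nodes

print(round(per_node_network_cost(128)))  # ~1656 USD per node under these assumptions
```

Even at this simplified floor, discrete RoCE adds well over $1,500 per node, which is the cost an on-chip fabric like Gaudi's can largely eliminate.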
Key takeaways
Clearly, Intel realizes that it needs breakthrough performance and efficiency to go up against NVIDIA, the 800-pound gorilla in this space, and it can’t be happy with its Nervana efforts to date. And Intel must get this right; I don’t believe it will get a third chance, but it is still early enough in the game to switch horses. To quote the Moor Insights & Strategy founder Patrick Moorhead, “We’re in the first inning of AI and there’s still room to maneuver.”
Interestingly, Intel said that Habana will report to Navin Shenoy, executive vice president and general manager of the Data Platforms Group at Intel, not to Naveen Rao, former CEO of Nervana and head of AI products at Intel. There was no word on Mr. Rao's role.
So, what we know now is that 1) Habana's technology is superior to Nervana's, a view that is likely shared by Intel's largest customers, 2) Habana's technology is probably superior to Graphcore's and that of other startups, or Intel would have opted for one of those, and 3) Intel sees an opportunity to leapfrog NVIDIA in fabric-connected AI training chips, the next big thing.
While Habana's technology looks promising, the large datacenters that might consider adopting it will be much more comfortable dealing with Intel than depending on a small startup, and Intel can dedicate the resources to build an ecosystem around Habana that would be out of reach of even a well-funded startup.
As I said at the beginning of the year, the Cambrian Explosion in AI chips is just getting under way, so stay tuned as more companies get their chips to market, and we learn whether Intel picked the right one this time around.