NVIDIA Data Center Growth Returns

Mar 6, 2020 | AI and Machine Learning, In the News

NVIDIA reported renewed growth overall last week. Its Data Center segment (where most of its AI hardware revenue is reported) grew 43% to a record $968 million in the last quarter, and Gaming rose 56% to $1.49 billion. The stock market appreciated the news, sending NVIDIA shares up over 6%, and many analysts revised their price targets higher. This turnaround in demand could be temporary, but I prefer to think it marks a return to the new normal for AI, a shift that could have significant implications for Intel, Xilinx, Google, Qualcomm and many startups. Let's look at what NVIDIA's return to growth in the data center means for the rest of the industry.

Figure 1: NVIDIA posted a new high-water mark for data center revenue in Q4 FY2020.  image: NVIDIA

NVIDIA: Still in the AI driver's seat

First and foremost, NVIDIA's data center results tell us that demand for AI hardware did not permanently taper off as some had feared. The slowdown last year was painful to watch, but the explanation is simple: the large customers had purchased a great deal of capacity and needed to digest it before resuming their AI hardware purchases. Intel saw a similar pattern in CPU demand from the hyperscalers. Concurrently, NVIDIA improved the performance of its V100 GPU by roughly a factor of four over the last year through software optimizations alone, which let customers extract more work from the GPUs they already owned and further deferred the need for additional hardware.

Secondly, market demand has finally begun to shift from training, historically the primary growth driver for NVIDIA's AI chips, toward acceleration of complex inference workloads such as those found in "conversational AI," where a spoken query is fed through a chain of neural networks to produce a spoken, and even translated, answer. Consequently, NVIDIA's T4 inference processor is now deployed by many cloud service providers, including Amazon and Google, both of whom have developed their own inference chips but still rely on NVIDIA for much of the heavy lifting. That is probably due in part to the breadth and depth of NVIDIA's software and the ecosystem around its platform.
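To make the multi-network nature of conversational AI concrete, here is a minimal, hypothetical sketch in PyTorch. The stage names and dimensions are placeholders of my own invention, not NVIDIA's actual stack; the point is simply that several models must run back to back for every query, which is why inference acceleration matters.

```python
import torch
import torch.nn as nn

class DummyStage(nn.Module):
    """Placeholder for one pipeline stage (ASR, translation, or TTS)."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.layer = nn.Linear(in_dim, out_dim)

    def forward(self, x):
        return torch.relu(self.layer(x))

# Three stages chained back to back: speech recognition, understanding /
# translation, and speech synthesis. Every spoken query must traverse all
# of them within a tight latency budget.
asr = DummyStage(80, 256)         # audio features -> text representation
translate = DummyStage(256, 256)  # understand and translate the query
tts = DummyStage(256, 80)         # text representation -> audio features

device = "cuda" if torch.cuda.is_available() else "cpu"
pipeline = nn.Sequential(asr, translate, tts).to(device).eval()

with torch.no_grad():
    query = torch.randn(1, 80, device=device)  # one spoken query (dummy features)
    answer = pipeline(query)

print(answer.shape)  # torch.Size([1, 80])
```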

Finally, the size and complexity of the deep neural networks themselves are doubling every 3.5 months, according to OpenAI. This further increases the need for training and inference acceleration. Many newer applications, such as conversational AI, actually require multiple neural networks to work together to produce accurate results. Intel's recent acquisition of Habana Labs is another indicator of the need for inference acceleration for these complex models, and the UK startup Graphcore designs its solutions specifically to handle these networks of networks. The computation required by AI applications will continue to grow dramatically.
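A quick back-of-the-envelope calculation shows why that doubling rate matters: assuming simple exponential growth at the stated pace, demand for compute grows by roughly an order of magnitude every year.

```python
# If model compute doubles every 3.5 months (the trend cited above),
# the implied growth over a single year is about 10.8x.
doubling_period_months = 3.5
months_per_year = 12

annual_growth_factor = 2 ** (months_per_year / doubling_period_months)
print(f"~{annual_growth_factor:.1f}x more compute per year")  # ~10.8x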

The outlook: partly sunny

Since Intel and most AI startups are only just beginning to ship their new AI silicon (if they have even gotten that far), these trends will benefit NVIDIA almost exclusively, at least through most of 2020. Still, many startups and their investors can now breathe a sigh of relief, as the growth assumptions in their business cases no longer seem so outlandish given NVIDIA's results. Many of these companies were founded and funded when NVIDIA's AI hardware business was growing at a CAGR of well over 100%.

Looking forward, I expect NVIDIA to announce its next-generation GPU later this year (perhaps as soon as GTC next month, but almost certainly by the time SuperComputing 2020 rolls around in November). NVIDIA will need this product to maintain its momentum, since dozens of companies large and small will begin shipping their domain-specific AI chips as the year progresses. While NVIDIA enjoys clear sailing today, the competitive landscape will become more crowded. NVIDIA CFO Colette Kress guided the Street to expect approximately $3 billion in total revenue in the next quarter (roughly flat from Q4), in spite of an expected $100 million negative impact from the coronavirus.

I will be attending GTC as usual this year; hope to see many of you there!