NVIDIA DPU Smart NIC
NVIDIA led off with a new NIC from the Mellanox division. Setting the stage for a reimagining of the server, NVIDIA added multi-core Arm CPUs and AI acceleration to the staid Network Interface Card (NIC). Smart NICs have been around for a while, but have yet to reach broad-scale adoption outside a few hyperscale cloud providers. However, as networking becomes the new bottleneck for moving massive data sets (and the linchpin for data center security), NVIDIA sees an opportunity to augment the NIC with far more processing power. This could potentially alter the data center landscape significantly. By offloading much of the work usually done by server CPUs, the Data Processing Unit (DPU) can create a new platform for high-performance processing and extend the GPU's AI capabilities into the networking market. If NVIDIA gains traction here, this move could effectively capture some of the revenues currently going to CPU vendors, shifting that spend to more capable, presumably higher-margin Smart NICs. Said another way, NVIDIA is freeing up cycles on the server cores so they can focus on the applications that process the data, instead of spending time unpacking TCP/IP headers and acting as a firewall.
NVIDIA’s roadmap will upgrade the NIC with more CPU and GPU intelligence, from a current performance of 70 SPECINT and 0.7 TOPS per card to 1000 SPECINT and 400 TOPS for 400 Gbps Ethernet/InfiniBand by 2023. To put that into perspective, a high-performance workstation with an Intel Xeon E-2278G processor reaches some 56 SPECINT today, and 400 TOPS is in the ballpark of the new A100 GPU. For now, NVIDIA announced the BlueField-2X, which adds an Ampere GPU to the NIC to handle workloads such as anomaly detection and response, real-time traffic analytics and dynamic security orchestration. The entry-level BlueField-2 DPU features eight 64-bit Arm Cortex-A72 CPU cores to accelerate security, networking and storage.
New EGX Edge AI Platform
While NVIDIA’s successful business focuses on graphics and the data center, the Internet of Things presents a greenfield opportunity for pushing intelligence to the edge. Jensen has a platform for that, too. NVIDIA offers six software stacks to support edge application development, hardware to deploy at the edge and in edge data centers, over 20 partners to create solutions, and NVIDIA Fleet Management software to help service providers keep it all running. Edge AI software startups planning to take a chunk of the edge AI marketplace now have a roadmap, and an idea of the scale of the challenge ahead, if they want to compete in this space with NVIDIA.
One of the timely enhancements to those AI software stacks is the federated learning NVIDIA is adding to its Clara platform. The idea here is bold: enabling physicians and scientists worldwide to predict a Covid-19 patient’s oxygen needs 24 hours in advance, based on intake chest X-rays and vital measurements. This AI-enabled system is up and running at 20 sites in 8 countries, with measured benefits to patients.
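The appeal of federated learning for hospitals is that patient data never leaves each site; only model weights travel to a central server for aggregation. The sketch below illustrates the general technique (federated averaging) with a toy linear model — it is a minimal illustration of the concept, not Clara’s actual implementation or API, and all names and data are made up.

```python
import numpy as np

rng = np.random.default_rng(0)

def local_train(weights, X, y, lr=0.1, epochs=50):
    """One site's local update: plain gradient descent on a linear model.
    Only the resulting weights are shared, never the raw data (X, y)."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

# Simulate three hospitals, each with private data from the same true model.
true_w = np.array([1.5, -2.0])
sites = []
for _ in range(3):
    X = rng.normal(size=(40, 2))
    y = X @ true_w + rng.normal(scale=0.01, size=40)
    sites.append((X, y))

# Server loop: broadcast the global weights, average the returned updates.
global_w = np.zeros(2)
for _ in range(10):
    updates = [local_train(global_w, X, y) for X, y in sites]
    global_w = np.mean(updates, axis=0)  # real systems weight by site size

print(global_w)  # converges toward true_w without pooling any raw data
```

Each round, the global model improves using knowledge from every site, which is what lets 20 hospitals in 8 countries jointly train a model none of them could train alone on its own data.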
Another edge enhancement is the new Jetson Nano 2GB robotics and AI starter kit, which runs all major AI frameworks and models and costs only $59. Nano is another tool in Jensen’s drawer to attract developers, who now number over 700,000 worldwide. Unfortunately, I do not think the Nano comes with a cute Lego Jensen.
Yet another NVIDIA-owned supercomputer: Cambridge-1
NVIDIA already has two supercomputers located in its facilities in Silicon Valley, including the A100-powered Selene, the 7th-fastest computer globally. When Jensen announced the deal to acquire Arm Holdings from SoftBank, he said he would build an AI Center of Excellence in Cambridge, generating a lot of social media and press speculation, and skepticism, about when, where and how. While not directly tied to the acquisition, the new Cambridge-1 supercomputer will undoubtedly put some lead in that pencil, providing UK scientists with a state-of-the-art platform for AI research and development. Cambridge-1 comprises 80 DGX A100 servers and should become the largest supercomputer in the UK by the end of the year.
New collaboration platforms
Lastly, Jensen announced two new software stacks designed to enable collaboration and development of graphics and AI applications. The first is the open beta release of Omniverse, which allows content creators to collaborate on and simulate designs of intricate image and video projects. Omniverse is being used now by companies as diverse as BMW, Volvo and Industrial Light & Magic.
The second collaboration offering is something many of us have wanted since work-from-home became the norm. When conducting video conferences, wouldn’t it be nice if participants’ gaze could appear as if looking at the camera instead of looking at a screen off to the side? Or how about real-time language translation, de-noising kitchen clatter or enhancing resolution? These features and more are now available on NVIDIA’s new Maxine platform, supported on Google Cloud, Amazon AWS, Microsoft Azure, Tencent Cloud and Oracle Cloud Infrastructure.
While Maxine probably will not add a lot to GPU sales, it will add to the GPU envy that many of us feel (I want one!).
Wrapping up
Jensen Huang’s company continues to up its game, extending its technology vertically with optimized software platforms, and horizontally with the new DPU Smart NIC. NVIDIA is truly a juggernaut, with best-in-class hardware, software and ecosystem. While this may be depressing news for startups hoping to take a slice of the AI pie, nobody can deny NVIDIA’s impressive rate of innovation and progress.