
Jensen Huang announces NVLink Fusion at Computex in Taiwan. NVIDIA
While few dispute the incredible performance of Nvidia AI platforms, many complain that they form a closed system. You can’t replace the Arm CPUs with, say, a RISC-V CPU, or take advantage of a new AI ASIC like the Meta MTIA accelerator, without redesigning and building everything from scratch. The key to the gate of this walled garden is NVLink, which no non-Nvidia CPU or GPU/ASIC supports today. That’s about to change, with huge potential ramifications. (Nvidia is a client of Cambrian-AI Research.)
NVLink Fusion
Nvidia CEO and founder Jensen Huang announced NVLink Fusion during his keynote at Computex in Taiwan. NVLink Fusion enables others to integrate custom silicon into the Nvidia ecosystem: Nvidia will provide the IP for its chip-to-chip NVLink technology. Since hyperscalers are already deploying full Nvidia rack solutions, this lets them deploy their own silicon while standardizing on a single scalable hardware infrastructure. And with the rich ecosystem of Nvidia infrastructure partners, NVLink Fusion adopters benefit from the ease of deploying and managing at scale.

NVLink Fusion allows hyperscalers to incorporate their own CPUs or ASICs within the NVL72 infrastructure. NVIDIA
How Does This Impact UALink?
UALink is the open, industry-standard alternative being developed by a consortium to deliver near-NVLink levels of performance for non-Nvidia solutions. AMD and Intel will still need the new UALink standard to build rack-scale solutions, unless they want to cede control to Nvidia. And they don’t. But Meta or perhaps Amazon, for example, may just want to deploy their own CPUs or ASICs for internal workloads. They can now engineer the new NVLink Fusion IP into their next-generation chips and utilize the Nvidia infrastructure. Of course, software is another story; CUDA won’t run on their accelerators, but they have already built the libraries needed to run AI well on those devices.
Who Will Use It?
The list of silicon providers adopting NVLink Fusion right now is short, but it includes Qualcomm and Fujitsu as inaugural partners. Qualcomm (also a client of Cambrian-AI Research) plans to incorporate the NVLink Fusion IP into a future generation of its Oryon CPU, taking Oryon back to the data center as Nuvia had originally intended prior to the Qualcomm acquisition. Talking with Durga Malladi, Qualcomm SVP and GM, Technology Planning, Edge Solutions and Data Center, I learned that a future generation of the Qualcomm Cloud AI 100 will also support NVLink Fusion. The initial focus will be on inference processing. It is highly likely that the deal announced last week with Humain AI in Saudi Arabia will combine Oryon CPUs and Cloud AI accelerators.
Fujitsu will add support for NVLink to its A64FX Arm-based processor line, which powers the Fugaku supercomputer, currently #4 on the Top500 list.
What’s Next?
Clearly, this move enables partners to semi-customize the Nvidia rack-scale architecture with their own semiconductors and get to scale much more easily. Nvidia doesn’t usually build something customers are not asking for. And the sheer scale of AI supercomputer infrastructure is a challenge for anyone to build from scratch; Google has certainly spent many tens of millions of dollars building out the infrastructure for its TPU supercomputers. If other hyperscalers and supercomputer centers decide to forgo that expense and use NVLink as their backbone, UALink will have a more difficult road ahead, and Nvidia will have another competitive differentiator that will be hard to beat.