by Karl Freund | Sep 5, 2025 | In the News
NVLink has become a second, and perhaps deeper, moat for Nvidia, following its CUDA software tools and libraries. There’s a reason the AI community prefers NVLink for connecting multiple GPUs: it is mind-blowingly fast, more than three times the performance of...
by Karl Freund | Aug 22, 2025 | In the News
The annual HotChips conference starts this Sunday, Aug. 24, in San Francisco. Nvidia is scheduled to present six sessions covering topics of interest to AI data center users and operators and will make several key announcements I’ll cover in this article. (Like most...
by Karl Freund | Jul 11, 2025 | In the News
While GPU performance has been the focus in data centers over the last few years, the performance of fabrics has become a key enabler or bottleneck in achieving the throughput and latency required to create and deliver artificial intelligence at scale. Nvidia’s...
by Karl Freund | May 20, 2025 | In the News
While few dispute the incredible performance of Nvidia AI platforms, many complain that it is a closed system. You can’t replace the Arm CPUs with, say, a RISC-V CPU, or take advantage of a new AI ASIC like the Meta MTIA accelerator, without redesigning and...
by Karl Freund | Feb 14, 2025 | In the News
While the DeepSeek moment crashed most semiconductor stocks as investors feared lower demand for data center AI chips, these new, smaller AI models are just the ticket for on-device AI. “DeepSeek R1 and other similar models recently demonstrated that AI models are...