The Latest News in AI

We publish news articles on Forbes, which are copied here for your convenience.  

IBM Doubles Down On Its AI Cloud

IBM Research has doubled the capacity of its Vela AI Supercomputer, part of the IBM Cloud, to handle the strong growth in watsonx models and has aggressive plans to continue to expand and enhance AI inferencing with its own accelerator, the IBM AIU. A year ago, IBM...


Who Wins If The New Biden AI Export Rules Stand?

While Nvidia and the European Union have expressed their displeasure with the latest salvo of AI export restrictions from the Biden administration, a few companies and countries could actually benefit from them. But the industry as a whole will suffer, and so will...

NVIDIA Dominates A Near-Empty Field In AI Benchmarks Again

The Qualcomm AI chip also looks pretty efficient, but where is everyone else? MLCommons, the not-for-profit organization that manages the AI benchmarks collectively known as MLPerf, has just released the V1.0 inference results. While NVIDIA once again dominated the...

NVIDIA Loses The AI Performance Crown, At Least For Now

For the first time, NVIDIA did not sweep the MLPerf table. While the era of its performance dominance may have come to an end, the flexibility of NVIDIA's GPUs and their massive software ecosystem will continue to form a deep and wide moat. Meanwhile, Google, Intel, and Graphcore...

Nvidia Sweeps Benchmarks. AMD Is MIA, Again

It should not surprise anyone: Nvidia is still the fastest AI and HPC accelerator across all MLPerf benchmarks. And while Google submitted results, AMD was a no-show. This blog has been corrected on 11/14 with a fresh TPU Trillium vs. Blackwell comparison. Say what...

Synopsys Launches First 1.6T Ethernet To Accelerate AI Data Centers

Normally, an AI industry analyst like myself would not take notice of a new version of Ethernet; its IP is fairly staid technology these days. But the demands of high-performance AI have changed the game, again. CPU, accelerator, and switch vendors depend on...

NVIDIA Completely Re-Imagines The Data Center For AI

It is all about tighter integration with memory, CPUs, and accelerators for trillion-parameter AI models. For 12 years, NVIDIA has used its Spring GPU Technology Conference (GTC) to amaze its customers and investors with new GPUs for graphics and application...

The Cambrian AI Landscape: Intel

Intel has adopted a "Domain-Specific Architecture" strategy espoused by John L. Hennessy, Alphabet Chairman and former President of Stanford University. Consequently, the company has at least one of everything: CPU, GPU, ASICs, and FPGAs. While this may appear to be a...

NeMo Megatron Reinforces NVIDIA AI Leadership In Large Language Models

LLMs are changing AI, and NVIDIA is changing its platform to excel in this fast-growing field. Alberto Romero, Cambrian-AI Analyst, contributed to this story. Transformer-based large language models (LLMs) are reshaping the AI landscape today. Since OpenAI established...

HPE Adds Support For Qualcomm Cloud AI 100 Inference Accelerator

HPE's endorsement of the Qualcomm Technologies Cloud AI 100 is a huge step for one of the most efficient, highest-performance AI inference engines on the market today. When I was working at AMD to get the first-generation EPYC server SoC added to HPE servers, I learned that the...

NVIDIA Needed A CPU, But Did It Need To Buy Arm To Get One?

I often opine that NVIDIA needs a data center-class CPU to compete with Intel and AMD, both of whom have used tightly-coupled CPU/GPU technology to win the first three U.S. exascale supercomputer deals. Connecting massive GPUs to fast CPUs over a painfully slow PCIe...