The Latest News in AI

We publish news articles on Forbes, which are copied here for your convenience.  

Nvidia Contributes Blackwell And Ethernet Tech To Meta’s Open Compute

Many think of Nvidia as a closed ecosystem. But as AI moves beyond fast chips to become a full-system challenge, Nvidia is helping drive an open industry. Yes, CUDA is closed. Nvidia says it has to be closed in order to develop an optimized software abstraction...

Hyundai And Samsung Lead $100M Investment Round In Tenstorrent

Partners make great investors because their intent is to secure influence and gain access to advanced technology. Hyundai Motor Group and the Samsung Catalyst Fund have co-led a $100M investment in Tenstorrent, and both companies plan to use Tenstorrent's tech. I love...

Xilinx Readies Versal AI Edge For 2022 Availability

Platform includes updated AI Engine with 4- and 8-bit integer math, along with new memory architecture. Xilinx has just launched the first edge model of the flexible Versal ACAP (Adaptive Compute Acceleration Platform) family, the third Versal to be announced in...

Who Is The Leader In AI Hardware?

A few months ago, I published a blog that highlighted Qualcomm's plans to enter the data center market with the Cloud AI100 chip sometime next year. While preparing the blog, our founder and principal analyst, Patrick Moorhead, called to point out that Qualcomm...

Nvidia Improves Performance With 5x Faster AI. Yes, Software Matters.

Nvidia's pre-emptive strike may blunt AMD MI300 news, pointing to the company's key advantage in AI software. AMD will host a big event this week in San Jose, where the company will announce details about its new flagship GPU for generative AI, the MI300...

NVIDIA Again Claims The Title For The Fastest AI; Competitors Disagree

Intel Habana Labs and Graphcore add scale and software optimizations, while Google skips this round, choosing to put a stake in the ground for half-trillion parameter models. Every six months, the AI hardware community gathers virtually to strut its hardware stuff...

How To Run Large AI Models On An Edge Device

It can be done, but it requires the edge device vendor to do the work of optimizing the model. A hybrid approach can also extend the applicability of LLMs by combining cloud and edge processing. When most people think of Artificial Intelligence (AI), they imagine a berserk...
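
As a rough illustration of the hybrid idea mentioned above, the sketch below routes short, simple prompts to a small on-device model and falls back to a cloud endpoint for everything else. The `run_on_edge` and `run_in_cloud` functions, the length threshold, and the routing rule are hypothetical placeholders for illustration only; they are not drawn from the article or any specific vendor's API.

```python
# Minimal sketch of hybrid edge/cloud LLM routing (hypothetical example).
# run_on_edge() and run_in_cloud() are placeholder stubs; in practice they
# would wrap a quantized on-device model and a hosted inference API.

EDGE_PROMPT_LIMIT = 512  # assumed cutoff (in characters) for on-device handling


def run_on_edge(prompt: str) -> str:
    """Placeholder for a small, quantized model running on the device."""
    return f"[edge model] response to: {prompt[:40]}..."


def run_in_cloud(prompt: str) -> str:
    """Placeholder for a call to a large hosted model."""
    return f"[cloud model] response to: {prompt[:40]}..."


def answer(prompt: str) -> str:
    """Route short requests to the edge model; send the rest to the cloud."""
    if len(prompt) <= EDGE_PROMPT_LIMIT:
        try:
            return run_on_edge(prompt)
        except RuntimeError:
            # Fall back to the cloud if the edge model cannot serve the request.
            pass
    return run_in_cloud(prompt)


if __name__ == "__main__":
    print(answer("Summarize today's meeting notes in two sentences."))
```

A real deployment would replace the length check with a more informed router (for example, task type or confidence from the edge model), but the split shown here captures the basic cloud-plus-edge division of labor.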

IBM Announces Next Telum Mainframe Processor With AI Accelerators

IBM has announced the Telum II Processor with shared on-chip AI and, perhaps surprisingly, the Spyre Accelerator, delivered on a PCIe card and designed to accelerate AI models, including generative AI LLMs. The new chips will ship in the yet-to-be-announced...

$110M In Funding Will Help d-Matrix Get Generative AI Inference Platform To Market

The company sees a window in which it can launch its cost-effective solution and gain traction ahead of others' next-gen silicon. d-Matrix has closed $110 million in a Series B funding round led by Singapore-based global investment firm Temasek. The funding should enable...

IBM Announces Two Innovations To Advance Quantum Computing

Error Mitigation and Dynamic Circuits will unlock a new era of exploration on IBM Quantum computers for at least the next five years. In addition to the new Osprey chip, IBM has announced two innovations at the annual Quantum Summit, one around error mitigation and one...