The Latest News in AI

We publish news articles on Forbes, which are copied here for your convenience.  

Speeding AI With Co-Processors

Most chips today are built from a combination of customized logic blocks that deliver some special sauce, and off-the-shelf blocks for commonplace technologies such as I/O, memory controllers, etc. But there is one needed function that has been missing: an AI...

Nvidia’s New HW Alleviates Concerns For Blackwell Transition

I awoke Sunday morning to an article in The Information written to instigate fear, uncertainty and doubt amongst Nvidia investors and users. Don’t worry. Nvidia’s got this. The article circulating this weekend highlighted the thermal challenges some customers face...

MLPerf Shows AMD Catching Up With Nvidia’s Older H200 GPU

As you AI pros know, the 125-member MLCommons organization alternates training and inference benchmarks every three months. This time around, it's all about training, which remains the largest AI hardware market, although not by much as inference drives more growth as...

Is The AMD GPU Better Than We Thought For AI?

MosaicML, just acquired by Databricks for $1.3B, published some interesting benchmarks for training LLMs on the AMD MI250 GPU, and said it is ~80% as fast as an NVIDIA A100. Did the world just change? To be brutally honest, everyone wants to see a fight between AMD...

Qualcomm Becomes A Mobile AI Juggernaut

As we approach Nvidia GTC, it's worth noting that there is another player in town. Or south of town, in San Diego: Qualcomm. The company has been building AI expertise and technology for over a decade, and we believe its lead over mobile rivals in both AI hardware and...

Mythic Launches Industry-First Analog AI Chip

Mythic claims 400 TOPS with 1.28B parameters on a 16-chip PCIe card.

Please welcome new Cambrian-AI Analyst Gary Fritz, who contributed to this article. Artificial Intelligence applications are starting to show up in everything from cell phones to supertankers. But at...

AI Inference Is King; Do You Know Which Chip Is Best?

Everyone is not just talking about AI inference processing; they are doing it. Analyst firm Gartner released a new report this week forecasting that global generative AI spending will hit $644 billion in 2025, growing 76.4% year-over-year. Meanwhile, MarketsandMarkets...

Who Needs Big AI Models? Amazon Web Services Using Cerebras Hardware

The AI world continues to evolve rapidly, especially since the introduction of DeepSeek and its followers. Many have concluded that enterprises don't really need the large, expensive AI models touted by OpenAI, Meta, and Google, and are focusing instead on smaller...

Intel Announces Neuromorphic Loihi 2 AI HW And Lava SW

Intel Research believes that brain-like neuromorphic computing could hold the key to AI efficiency and capabilities. Intel has announced the availability of the second-generation "Loihi" chip to further research into neuromorphic computing techniques that more closely...

$110M In Funding Will Help d-Matrix Get Generative AI Inference Platform To Market

The company sees a window in which it can launch its cost-effective solution and get traction ahead of others' next-gen silicon.

d-Matrix has closed $110 million in a Series-B funding round led by Singapore-based global investment firm Temasek. The funding should enable...