The Latest News in AI

We publish news articles on Forbes, which are copied here for your convenience.  

Qualcomm Ups The Snapdragon AI Game

The leader in premium mobile SoCs has applied AI across the entire platform. At the annual Snapdragon Summit in Hawaii, Qualcomm unveiled the next Snapdragon 8 Gen 2 for premium mobile devices. As usual, it is a tour de force of features that make Qualcomm-powered...


Flex Sees Opportunities In The New AI Data Center

Data center cooling is hot. At least it’s becoming a hot market, with forecasts likely to rise as the Nvidia Blackwell GPU push increases the share of silicon that requires liquid cooling. Goldman Sachs estimates the server cooling market is...

Qualcomm Could Benefit Most From DeepSeek’s New, Smaller AI

While the DeepSeek moment crashed most semiconductor stocks as investors feared lower demand for data center AI chips, these new, smaller AI models are just the ticket for on-device AI. “DeepSeek R1 and other similar models recently demonstrated that AI models are...

IBM Teams With AMD For Cloud AI Acceleration

This could be quite telling, as IBM had previously been using Nvidia for its internal cloud AI research. IBM has selected AMD to provide AI accelerators for the IBM Cloud. This is another milestone for AMD, which needs cloud adoption to achieve its goals. And IBM...

Who Won The Latest AI Drag Race? AWS Or NVIDIA?

Of course the answer depends on who you ask. There is a quiet drama simmering over AI chip benchmarks. This isn’t the first time, and will certainly not be the last. Keeps me busy. This time, it is between Amazon Web Services (AWS) and NVIDIA. It is always NVIDIA. Not...

NVIDIA L40S: A Datacenter GPU For Omniverse And Graphics That Can Also Accelerate AI Training & Inference

I’m getting a lot of inquiries from investors about the potential for this new GPU and for good reasons; it is fast! NVIDIA announced a new passively-cooled GPU at SIGGRAPH, the PCIe-based L40S, and most of us analysts just considered this to be an upgrade to the...

d-Matrix Emerges From Stealth With Strong AI Performance And Efficiency

Startup launches “Corsair” AI platform with Digital In-Memory Computing, using on-chip SRAM memory that can produce 30,000 tokens/second at 2 ms/token latency for Llama3 70B in a single rack. Running generative AI, a task known as inference processing, is memory-intensive...

Eliyan Technology May Rewrite How Chiplets Come Together

Company’s interconnect eliminates the need for expensive interposers, accelerating AI processing; it could double the amount of memory available for AI like ChatGPT, saving millions. Artificial Intelligence is finally having its iPhone moment. The launch of ChatGPT led to waves of...

Perceive AI Launches 2nd Edge AI Chip For Low Power Applications

Company claims Ergo2 is up to four times faster than Perceive’s first-generation Ergo chip, and can handle much larger models, such as those for NLP. Edge AI is coming into its own, with a variety of chips being launched that offer low cost, low power, and high performance....

RISC-V Startup Esperanto Technologies Samples First AI Silicon

Select customers are now evaluating the chip, and early results look promising across a broad range of AI workloads. Esperanto has been talking about its edge AI chips for several years, and now the company can demonstrate working AI acceleration for image, language, and...

Cerebras Partners With Qualcomm, Launches 3rd-Gen Wafer-Scale AI

The newest system from Cerebras can handle multi-trillion-parameter generative AI problems at twice the performance of its predecessor, while the Qualcomm partnership will help cut inference processing costs by 10X. Cerebras Systems, the innovative startup whose...