The Latest News in AI
We publish news articles on Forbes, which are copied here for your convenience.
IBM Teams With AMD For Cloud AI Acceleration
This could be quite telling, as IBM had previously been using Nvidia for its internal cloud AI research. IBM has selected AMD to provide AI accelerators for the IBM Cloud. This is another milestone for AMD, which needs cloud adoption to achieve its goals. And IBM...
d-Matrix Emerges From Stealth With Strong AI Performance And Efficiency
The startup launches its “Corsair” AI platform with digital in-memory computing, using on-chip SRAM that can produce 30,000 tokens/second at 2 ms/token latency for Llama3 70B in a single rack. Running generative AI, known as inference processing, is a memory-intensive...
Cerebras Now The Fastest LLM Inference Processor; It’s Not Even Close
The company tackled inferencing the Llama-3.1 405B foundation model and just crushed it. And for the crowds at SC24 this week in Atlanta, the company also announced it is 700 times faster than Frontier, the world’s fastest supercomputer, on a molecular dynamics...
Nvidia’s New HW Alleviates Concerns For Blackwell Transition
I awoke Sunday morning to an article in The Information written to instigate fear, uncertainty and doubt amongst Nvidia investors and users. Don’t worry. Nvidia’s got this. The article circulating this weekend highlighted the thermal challenges some customers face...
Nvidia Leads The HPC Landscape At SuperComputing ’24
Nvidia has gone from a niche provider of HPC technology to a dominant force in the industry. This year’s SC event reinforces that leadership. The annual SuperComputing event in North America is taking place this week in Atlanta, Georgia, and as usual the show...