The Latest News in AI
We publish news articles on Forbes, which are copied here for your convenience.
d-Matrix Emerges From Stealth With Strong AI Performance And Efficiency
Startup launches “Corsair” AI platform with Digital In-Memory Computing, using on-chip SRAM that can produce 30,000 tokens/second at 2 ms/token latency for Llama3 70B in a single rack. Running generative AI models, known as inference processing, is a memory-intensive...
Cerebras Now The Fastest LLM Inference Processor; It's Not Even Close
The company tackled inferencing the Llama-3.1 405B foundation model and just crushed it. And for the crowds at SC24 this week in Atlanta, the company also announced it is 700 times faster than Frontier, the world's fastest supercomputer, on a molecular dynamics...
Nvidia’s New HW Alleviates Concerns For Blackwell Transition
I awoke Sunday morning to an article in The Information written to instigate fear, uncertainty, and doubt among Nvidia investors and users. Don’t worry. Nvidia’s got this. The article circulating this weekend highlighted the thermal challenges some customers face...
Nvidia Leads The HPC Landscape At SuperComputing ’24
Nvidia has gone from a niche provider of HPC technology to a dominant force in the industry. This year’s SC event reinforces that leadership. The annual SuperComputing event in North America is taking place this week in Atlanta, Georgia, and as usual the show...
Nvidia Sweeps Benchmarks. AMD Is MIA, Again
It should not surprise anyone: Nvidia is still the fastest AI and HPC accelerator across all MLPerf benchmarks. And while Google submitted results, AMD was a no-show. This blog was corrected on 11/14 with a fresh TPU Trillium vs. Blackwell comparison. Say what...