Four short months ago, Cerebras announced the most significant deal any AI startup has yet assembled: a partnership with G42 (Group42), an artificial intelligence and cloud computing company. The planned 256 CS-2 wafer-scale nodes, delivering 36 exaflops of AI performance, will form one of the world's largest AI supercomputers, if not the largest.
Cerebras has now completed the first data center implementation and started on the second. The two companies are moving fast to capitalize on a gold rush, projected to reach $70B by 2028, to stand up large language model services for researchers and enterprises, especially while the NVIDIA H100 remains in short supply, creating an opening for Cerebras. In addition, Cerebras has announced that, with Core42, it has released the largest Arabic language model, Jais 30B, built on the CS-2, a platform designed to make the development of massive AI models accessible by eliminating the need to decompose and distribute the problem across many processors.
In addition to its progress on Condor Galaxy, Cerebras and Argonne National Laboratory have announced a 130-fold speedup over the NVIDIA A100 on a Monte Carlo particle simulation application running on a CS-2 wafer-scale engine. Another CS-2 customer, the supercomputing center KAUST, is a finalist for the prestigious Gordon Bell Prize, to be announced on Tuesday at SC'23.
Conclusions
After eight years of watching AI hardware startups come, fade, and go, it is incredibly satisfying to see one (alas, only one) succeed and land a dozen or so customers, one of them at a commitment level of perhaps $800M or more. Others will surely follow G42 and adopt wafer-scale technology to build the next generation of AI. Momentum matters.