Cambrian AI Research

d-Matrix Emerges From Stealth With Strong AI Performance And Efficiency

by Karl Freund | Nov 19, 2024 | In the News

Startup launches its “Corsair” AI platform with Digital In-Memory Computing, using on-chip SRAM that can produce 30,000 tokens/second at 2 ms/token latency for Llama3 70B in a single rack. Running generative AI models, known as inference processing, is a memory-intensive...
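As a quick back-of-the-envelope check on the quoted Corsair figures, throughput and per-token latency together imply a level of concurrency via Little's law. This sketch assumes the 2 ms figure is per-token generation latency for each stream, which the teaser does not state explicitly:

```python
# Little's law sanity check on the quoted Corsair figures.
# Assumption (not stated in the article): 2 ms/token is the
# per-stream token generation latency.
rack_throughput_tps = 30_000   # tokens/second for Llama3 70B, per rack
per_token_latency_s = 0.002    # 2 ms per token

# Concurrency = throughput x latency: streams generating in parallel
concurrent_streams = rack_throughput_tps * per_token_latency_s
print(concurrent_streams)  # 60.0
```

Under that assumption, the rack would be serving roughly 60 concurrent generation streams at the quoted latency.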

$110M In Funding Will Help d-Matrix Get Generative AI Inference Platform To Market

by Karl Freund | Sep 6, 2023 | In the News

Company sees a window where it can launch its cost-effective solution and get traction ahead of others’ next-gen silicon. d-Matrix has closed $110 million in a Series-B funding round led by Singapore-based global investment firm Temasek. The funding should enable...

D-Matrix AI Chip Promises Efficient Transformer Processing

by Karl Freund | Jul 7, 2022 | In the News

Startup combines Digital In-Memory Compute and chiplet implementations for data-center-grade inferencing. This article was written by Cambrian-AI analysts Alberto Romero and Karl Freund. D-Matrix was founded in 2019 by two veterans in the field of AI hardware, Sid Sheth...
Sid Sheth and Sudeep Bhoja on Startup D-Matrix’s Unique Approach to AI Inference Processing

by Karl Freund | Jun 21, 2022 | Video

Startup D-Matrix is taking a unique approach to AI inference processing: Digital In-Memory Compute on chiplets for Transformers. More Cambrian-AI Visions Video Interviews: CEO Ingolf Held & VP Business Development Mahesh Makhijani discuss Grai Matter Labs’s focus...

More Recent AI News>>

  • Speeding AI With Co-Processors
  • EDA Vendors Help Intel Get The USA Back Into Chip Manufacturing
  • Meta Enters The Token Business, Powered By Nvidia, Cerebras And Groq
  • Can CEO Lip-Bu Tan Save Intel?
  • AI Inference Is King; Do You Know Which Chip is Best?

Companies

AI AMD Arm AWS Blackwell Blaize BrainChip Broadcom Cadence Cerebras ChatGPT Data Center D Matrix Edge AI Esperanto Gaudi2 Google GPU Graphcore Groq IBM INTEL Intel/Habana Labs Llama2 MediaTek Meta Microsoft MLCommons mlPerf NeMo NeuReality NVIDIA Omniverse OpenAI Qiskit Qualcomm Quantum RISC-V Sambanova SiMa.ai Snapdragon Synopsys Tenstorrent Ventana Xilinx

Categories

  • AI and Machine Learning
  • DataCenter AI
  • In the News
  • Research Paper
  • Semiconductor
  • Video
