Cambrian AI Research

How Enfabrica Is Reimagining, And Disrupting, The AI Data Center

by Karl Freund | Sep 25, 2023 | In the News

The AI Hardware and Edge AI Summit offered a treasure trove of insightful presentations from luminaries such as Andrew Ng, Lip-Bu Tan, Marc Tremblay, and many others. I hope to get around to writing about what I learned, but first, I want to share the innovations from a startup...

Intel Gaudi2 Looked To Be A Credible Alternative To Nvidia. Until…

by Karl Freund | Sep 11, 2023 | In the News

In the latest inference-processing MLPerf benchmark contest, Gaudi 2 came surprisingly close to the Nvidia H100. But Nvidia has promised faster software soon, and that picture is constantly changing. In the latest round of AI benchmarks, all eyes were on the new Large Language...

NVIDIA Adds New Software That Can Double H100 Inference Performance

by Karl Freund | Sep 8, 2023 | In the News

TensorRT-LLM adds a slew of new performance-enhancing features to all NVIDIA GPUs. Just ahead of the next round of MLPerf benchmarks, NVIDIA has announced a new TensorRT software for Large Language Models (LLMs) that can dramatically improve performance and efficiency...

NVIDIA L40S: A Datacenter GPU For Omniverse And Graphics That Can Also Accelerate AI Training & Inference

by Karl Freund | Aug 30, 2023 | In the News

I’m getting a lot of inquiries from investors about the potential for this new GPU, and for good reason: it is fast! NVIDIA announced a new passively cooled GPU at SIGGRAPH, the PCIe-based L40S, and most of us analysts initially considered it just an upgrade to the...

Enhanced Memory Grace Hopper Superchip Could Shift Demand To NVIDIA CPU And Away From X86

by Karl Freund | Aug 8, 2023 | In the News

The company’s new high bandwidth memory version is only available with the CPU-GPU Superchip. In addition, a new dual Grace-Hopper MGX Board offers 282GB of fast memory for large model inferencing. The AI landscape continues to change rapidly, and fast memory (HBM)...

More Recent AI News

  • Cerebras AI Lands A Whale As It Prepares To Go Public
  • Nvidia Leapfrogs Google And AMD With Vera Rubin
  • IBM Is Positioned To Lead In Quantum Computing
  • My 2026 AI Predictions Have A Few Surprises
  • Google AI Shot Heard Globally; Another Shoe Is About To Drop

Companies

AI, AMD, Apple, Arm, AWS, Blackwell, Blaize, BrainChip, Cadence, Cerebras, ChatGPT, Data Center, d-Matrix, Edge AI, Enfabrica, Esperanto, Gaudi2, Google, GPU, Graphcore, Groq, IBM, Intel, Intel/Habana Labs, Jensen Huang, Llama 2, MediaTek, Meta, Microsoft, MLCommons, MLPerf, NeMo, NeuReality, NVIDIA, Omniverse, OpenAI, Qualcomm, Quantum, RISC-V, SambaNova, SiMa.ai, Snapdragon, Synopsys, Tenstorrent, Xilinx

Categories

  • AI and Machine Learning
  • DataCenter AI
  • In the News
  • Research Paper
  • Semiconductor
  • Video


