Cambrian AI Research

$110M In Funding Will Help d-Matrix Get Generative AI Inference Platform To Market

by Karl Freund | Sep 6, 2023 | In the News

The company sees a window in which it can launch its cost-effective solution and gain traction ahead of others' next-generation silicon. d-Matrix has closed $110 million in a Series B funding round led by Singapore-based global investment firm Temasek. The funding should enable...

Synopsys Opens The Next Chapter Of AI Tools For Chip Design And Manufacturing

by Karl Freund | Sep 6, 2023 | In the News

The EDA leader has generated over $500M to date in AI tools and technologies. Now a new data analytics solution applies data management, curation, and analysis across the entire pipeline of chip creation. Synopsys was the first EDA company to apply AI to chip design,...

NVIDIA L40S: A Datacenter GPU For Omniverse And Graphics That Can Also Accelerate AI Training & Inference

by Karl Freund | Aug 30, 2023 | In the News

I’m getting a lot of inquiries from investors about the potential of this new GPU, and for good reason: it is fast. NVIDIA announced a new passively cooled GPU at SIGGRAPH, the PCIe-based L40S, and most of us analysts initially considered it just an upgrade to the...

BrainChip Sees Gold In Sequential Data Analysis At The Edge

by Karl Freund | Aug 22, 2023 | In the News

In contrast to image processing and large language models, few AI startups focus on sequential data processing, which includes video processing and time-series analysis. BrainChip is just fine with that. With all the buzz around LLM generative AI, it is understandable...

Enhanced Memory Grace Hopper Superchip Could Shift Demand To NVIDIA CPU And Away From X86

by Karl Freund | Aug 8, 2023 | In the News

The company’s new high bandwidth memory version is only available with the CPU-GPU Superchip. In addition, a new dual Grace-Hopper MGX Board offers 282GB of fast memory for large model inferencing. The AI landscape continues to change rapidly, and fast memory (HBM)...

More Recent AI News

  • Is Nvidia Competing With Its GPU Cloud Partners?
  • AMD Announces MI350 GPU And Future Roadmap Details
  • MLPerf Shows AMD Catching Up With Nvidia’s Older H200 GPU
  • Nvidia Takes Major Step To Leverage Its Rack-Scale Ecosystem
  • Speeding AI With Co-Processors

Companies

AI AMD Arm AWS Blackwell Blaize BrainChip Broadcom Cadence Cerebras ChatGPT Data Center D Matrix Edge AI Esperanto Gaudi2 Google GPU Graphcore Groq IBM INTEL Intel/Habana Labs Jensen Huang Llama2 MediaTek Meta Microsoft MLCommons mlPerf NeMo NeuReality NVIDIA Omniverse OpenAI oracle Qualcomm Quantum RISC-V Sambanova SiMa.ai Snapdragon Synopsys Tenstorrent Xilinx

Categories

  • AI and Machine Learning
  • DataCenter AI
  • In the News
  • Research Paper
  • Semiconductor
  • Video
