Cambrian AI Research

NVIDIA L40S: A Datacenter GPU For Omniverse And Graphics That Can Also Accelerate AI Training & Inference

by Karl Freund | Aug 30, 2023 | In the News

I’m getting a lot of inquiries from investors about the potential for this new GPU, and for good reason: it is fast! NVIDIA announced a new passively cooled GPU at SIGGRAPH, the PCIe-based L40S, and most of us analysts just considered this to be an upgrade to the...

BrainChip Sees Gold In Sequential Data Analysis At The Edge

by Karl Freund | Aug 22, 2023 | In the News

Few AI startups focus on sequential data processing, which includes video processing and time-series analysis, the way they flock to image processing or large language models. BrainChip is just fine with that. With all the buzz around LLM generative AI, it is understandable...

Enhanced Memory Grace Hopper Superchip Could Shift Demand To NVIDIA CPU And Away From X86

by Karl Freund | Aug 8, 2023 | In the News

The company’s new high bandwidth memory version is only available with the CPU-GPU Superchip. In addition, a new dual Grace-Hopper MGX Board offers 282GB of fast memory for large model inferencing. The AI landscape continues to change rapidly, and fast memory (HBM)...

Hyundai And Samsung Lead $100M Investment Round In Tenstorrent

by Karl Freund | Aug 2, 2023 | In the News

Partners make great investors because their intent is to secure influence and gain access to advanced technology. Hyundai Motor Group and the Samsung Catalyst Fund have co-led a $100M investment in Tenstorrent, and both companies plan to use Tenstorrent’s tech. I love...

Micron Looks To Be First To Market With HBM3 Update For Generative AI And HPC

by Karl Freund | Jul 26, 2023 | In the News

According to the company, the new second-generation HBM3 increases memory capacity by 50%, with another bump in the works for 2024. As you may have heard, in addition to NVIDIA GPUs, generative AI eats memory for lunch. And dinner. In fact, running ChatGPT takes 8 or 16 GPUs...

More Recent AI News >>

  • Skipping Nvidia Left Amazon, Apple And Tesla Behind In AI
  • New Fabrics Enable Efficient AI Acceleration
  • Who Needs Big AI Models? Amazon Web Services Using Cerebras Hardware
  • Is Nvidia Competing With Its GPU Cloud Partners?
  • AMD Announces MI350 GPU And Future Roadmap Details

Companies

AI AMD Apple Arm AWS Blackwell Blaize BrainChip Cadence Cerebras ChatGPT Data Center D Matrix Edge AI ENFABRICA Esperanto Gaudi2 Google GPU Graphcore Groq IBM INTEL Intel/Habana Labs Llama2 MediaTek Meta Microsoft MLCommons mlPerf NeMo NeuReality NVIDIA Omniverse OpenAI Qualcomm Quantum RISC-V Sambanova SiMa.ai Snapdragon Synopsys Tenstorrent Ventana Xilinx

Categories

  • AI and Machine Learning
  • DataCenter AI
  • In the News
  • Research Paper
  • Semiconductor
  • Video

