Cambrian AI Research

SiMa.ai Creates Drag-And-Drop Platform For Building AI Workflows

by Karl Freund | Sep 12, 2023 | In the News

On the reasonable assumption that software is what enables, or prevents, the adoption of a new AI chip, SiMa has created a new tool called Palette Edgematic to greatly simplify AI solution creation, and to build demand for its silicon. As everyone says, “It’s the...

Esperanto Joins The Rush To Generative AI With A New Appliance

by Karl Freund | Sep 12, 2023 | In the News

The company is also working on its next-generation chip, which will add HPC features and higher performance with HBM in place of today’s DRAM. Esperanto, the company whose chip contains over 1,000 RISC-V cores, has previewed a new appliance for running...

Intel Gaudi2 Looked To Be A Credible Alternative To Nvidia. Until…

by Karl Freund | Sep 11, 2023 | In the News

In the latest MLPerf inference benchmark contest, Gaudi 2 came surprisingly close to the Nvidia H100. But Nvidia has promised faster software soon, and the software picture is constantly changing. In the latest round of AI benchmarks, all eyes were on the new Large Language...

Cadence Design Is Working With Renesas To Build The World’s First LLM Tool For Up-Front Chip Design

by Karl Freund | Sep 11, 2023 | In the News

The company sees this as an augmentation, not a replacement, for its portfolio of reinforcement learning AI tools that improve the productivity of chip design teams, addressing the most challenging part of chip design. Cadence has been aggressively rolling out...

NVIDIA Adds New Software That Can Double H100 Inference Performance

by Karl Freund | Sep 8, 2023 | In the News

TensorRT-LLM adds a slew of new performance-enhancing features to all NVIDIA GPUs. Just ahead of the next round of MLPerf benchmarks, NVIDIA has announced a new TensorRT software for Large Language Models (LLMs) that can dramatically improve performance and efficiency...

$110M In Funding Will Help d-Matrix Get Generative AI Inference Platform To Market

by Karl Freund | Sep 6, 2023 | In the News

The company sees a window in which it can launch its cost-effective solution and gain traction ahead of others’ next-gen silicon. d-Matrix has closed a $110 million Series B funding round led by Singapore-based global investment firm Temasek. The funding should enable...

Synopsys Opens The Next Chapter Of AI Tools For Chip Design And Manufacturing

by Karl Freund | Sep 6, 2023 | In the News

The EDA leader has generated over $500M to date in AI tools and technologies. Now a new data analytics solution applies data management, curation, and analysis across the entire pipeline of chip creation. Synopsys was the first EDA company to apply AI to chip design,...

NVIDIA L40S: A Datacenter GPU For Omniverse And Graphics That Can Also Accelerate AI Training & Inference

by Karl Freund | Aug 30, 2023 | In the News

I’m getting a lot of inquiries from investors about the potential for this new GPU, and for good reason: it is fast! NVIDIA announced a new passively cooled GPU at SIGGRAPH, the PCIe-based L40S, and most of us analysts just considered it an upgrade to the...

BrainChip Sees Gold In Sequential Data Analysis At The Edge

by Karl Freund | Aug 22, 2023 | In the News

Compared with image processing or large language models, few AI startups are focused on sequential data processing, which includes video processing and time-series analysis. BrainChip is just fine with that. With all the buzz around LLM generative AI, it is understandable...

Enhanced Memory Grace Hopper Superchip Could Shift Demand To NVIDIA CPU And Away From X86

by Karl Freund | Aug 8, 2023 | In the News

The company’s new high-bandwidth-memory version is available only with the CPU-GPU Superchip. In addition, a new dual Grace Hopper MGX board offers 282GB of fast memory for large-model inference. The AI landscape continues to change rapidly, and fast memory (HBM)...

More Recent AI News >>

  • Is Nvidia Competing With Its GPU Cloud Partners?
  • AMD Announces MI350 GPU And Future Roadmap Details
  • MLPerf Shows AMD Catching Up With Nvidia’s Older H200 GPU
  • Nvidia Takes Major Step To Leverage Its Rack-Scale Ecosystem
  • Speeding AI With Co-Processors

Companies

AI AMD Arm AWS Blackwell Blaize BrainChip Broadcom Cadence Cerebras ChatGPT Data Center D Matrix Edge AI Esperanto Gaudi2 Google GPU Graphcore Groq IBM INTEL Intel/Habana Labs Jensen Huang Llama2 MediaTek Meta Microsoft MLCommons mlPerf NeMo NeuReality NVIDIA Omniverse OpenAI oracle Qualcomm Quantum RISC-V Sambanova SiMa.ai Snapdragon Synopsys Tenstorrent Xilinx

Categories

  • AI and Machine Learning
  • DataCenter AI
  • In the News
  • Research Paper
  • Semiconductor
  • Video


