Cambrian AI Research
Mukesh Khare of IBM Research on the Benefits of IBM’s “Full Stack” Approach to Research

by Karl Freund | Oct 10, 2022 | Video

IBM Research constantly explores the intersection of clients, partners, semiconductors, system design, and software to find the best solutions to business problems. In this video, Mukesh Khare, VP of Hybrid Cloud, IBM Research, explores the benefits of this “full...

NVIDIA Launches Lovelace GPU, Cloud Services, Ships H100 GPUs, New Drive Thor, And ….

by Karl Freund | Sep 29, 2022 | In the News

It’s impossible to convey the excitement of an NVIDIA CEO Jensen Huang keynote address at GTC, but here’s what caught my eye this week. NVIDIA made a slew of technology and customer announcements at the Fall GTC this year. Highlights included an H100 update, a new...

New Cerebras Wafer-Scale Cluster Eliminates Months Of Painstaking Work To Build Massive Intelligence

by Karl Freund | Sep 29, 2022 | In the News

The architecture eliminates the need to decompose large models for distributed training. Push-button AI? The hottest trend in AI is the emergence of massive models such as OpenAI’s GPT-3. These models are surprising even their developers with capabilities...

Qualcomm Is Still The Most Efficient AI Accelerator For Image Processing

by Karl Freund | Sep 29, 2022 | In the News

The company’s recent design wins will help Qualcomm turn MLPerf benchmarks into sales. AI energy efficiency matters! Ever since Qualcomm announced its first-generation cloud edge AI processor, the Qualcomm Cloud AI 100, the company has been at the top of the leaderboard...

NVIDIA Keeps The Performance Crown For AI Inference For The 6th Time In A Row

by Karl Freund | Sep 29, 2022 | In the News

In the data center and on the edge, the bottom line is that the H100 (Hopper-based) GPU is up to four times faster than the NVIDIA A100 on the newly released MLPerf v2.1 benchmark suite. The A100 retains leadership in many benchmarks versus other available products...

More Recent AI News

  • Is Nvidia Competing With Its GPU Cloud Partners?
  • AMD Announces MI350 GPU And Future Roadmap Details
  • MLPerf Shows AMD Catching Up With Nvidia’s Older H200 GPU
  • Nvidia Takes Major Step To Leverage Its Rack-Scale Ecosystem
  • Speeding AI With Co-Processors
