Could We Really Design Chips in 24 Weeks Instead of 24 Months?
The semiconductor industry is enjoying renewed growth despite chip shortages plaguing everything from cars to kitchen appliances. But while the chips themselves keep getting faster and smarter, the design process behind them hasn’t changed much in more than 20 years. Taking a chip from idea to fabrication typically requires two to three years, a large engineering team, and tens or hundreds of millions of dollars. Now change is coming in the form of Artificial Intelligence, which has recently demonstrated significant improvements in optimizing layouts for power, area (cost), and performance.
Using a Reinforcement Learning (RL) approach, similar to the one that beat the world’s Go champion back in 2016, Samsung announced that it now has a working chip back from its factory that was optimized by the Synopsys DSO.ai platform we discussed in May 2020. As far as we know, this is the industry’s first working chip whose layout was designed by AI.
We expect a large share of the semiconductor industry to begin using these AI platforms; the impact is too large to ignore. In fact, EDA vendor Cadence Design Systems recently followed Synopsys’ lead and introduced AI tools that can dramatically improve performance, power, cost, and design time. There is a lot of money to be made helping design teams produce better chips in significantly less time.
You can download this exclusive Cambrian AI research paper by clicking below.
Table Of Contents
- The EDA Landscape
- How Far Can One Apply AI in Chip Design?
- A Concurrent Design “Cyclone”
- Implementing the Design Cyclone
- From Software-Defined to Software-Designed Hardware
- Figure 1: The Semiconductor Design Space
- Figure 2: Taking AI Design Optimization to the Next Level
- Figure 3: Traditional “Waterfall” Semiconductor Design
- Figure 4: Synopsys ‘Cyclone’ Agile Design Model
- Figure 5: The path to Software-Designed Hardware
- Cadence Design Systems