Synopsys Laps Competition With Second Generation AI

Aug 23, 2021 | In the News

EDA leader Synopsys paints a path to software-designed silicon

Last year, EDA leader Synopsys announced it had developed an AI chip design tool called DSO.ai that could produce faster, lower-cost, and more power-efficient chips by using AI to work out the best way to place transistors on a piece of silicon. Now, 16 months later, Samsung has confirmed it used Synopsys DSO.ai to design its next-generation Exynos chipset.

Synopsys has now announced a second-generation AI platform that can optimize beyond physical layout alone, giving the company at least a 16-month head start and effectively lapping the competition.

What Did Synopsys Announce?

Synopsys Co-CEO Aart de Geus delivered the keynote address at the annual Hot Chips conference, which took place virtually on August 23-24. Dr. de Geus announced that the company has taken its reinforcement-learning-based design technology two steps further, adding optimization of a chip's architectural structure and of its end-to-end application behavior. He shared data from applying this second-generation AI to real design cases, demonstrating an astonishing 28% power reduction (more than a full manufacturing technology node's worth of scaling), achieved by managing the exploration of the many choice points a chip design team could consider across this massive search space. Design teams could instead opt for lower power, higher performance (frequency), lower cost (area), or a combination of all three, depending on business and market objectives. And like the first release of DSO.ai, this technology saves significant design time: Dr. de Geus envisions a future state of the art that cuts design time from many months to just weeks.

Synopsys believes the industry is on the verge of a revolution in AI-designed semiconductors. Source: Synopsys
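To make the idea concrete, below is a minimal, purely illustrative sketch of the kind of design-space exploration a reinforcement-learning engine automates. The knob names, the toy cost model, the reward weights, and the simple random-search loop are all our own assumptions for illustration, not the actual DSO.ai technology; the point is only that an agent proposes a configuration of chip design choice points, gets scored on the power/performance/area (PPA) trade-off, and iterates.

```python
# Purely illustrative: a toy design-space search standing in for an AI-driven
# engine like DSO.ai. The knobs, cost model, and reward weights are invented
# for this sketch and do not reflect the actual Synopsys technology.
import random

# Design "choice points": each knob has a handful of discrete options.
KNOBS = {
    "target_frequency_ghz": [1.0, 1.5, 2.0, 2.5],
    "placement_density":    [0.6, 0.7, 0.8, 0.9],
    "supply_voltage_v":     [0.7, 0.8, 0.9],
}

def evaluate(config):
    """Stand-in for an expensive synthesis/place-and-route run.
    Returns toy power, performance, and area estimates."""
    f = config["target_frequency_ghz"]
    d = config["placement_density"]
    v = config["supply_voltage_v"]
    power = f * v * v            # dynamic power scales roughly with f * V^2
    perf = f * (1.2 - 0.3 * d)   # congestion from dense placement hurts timing
    area = 1.0 / d               # denser placement shrinks area
    return power, perf, area

def reward(config, w_power=1.0, w_perf=1.5, w_area=0.5):
    """Single score balancing the PPA trade-off; weights reflect project goals."""
    power, perf, area = evaluate(config)
    return w_perf * perf - w_power * power - w_area * area

# Simple random search: mutate one knob at a time, keep improvements.
best = {name: random.choice(options) for name, options in KNOBS.items()}
best_score = reward(best)
for _ in range(500):
    candidate = dict(best)
    knob = random.choice(list(KNOBS))
    candidate[knob] = random.choice(KNOBS[knob])
    score = reward(candidate)
    if score > best_score:
        best, best_score = candidate, score

print("best configuration:", best, "score:", round(best_score, 3))
```

A production system replaces the random mutation with a learned policy and the toy cost model with real implementation runs, but the propose-evaluate-learn loop has the same shape.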

Without AI, it would take a tremendous amount of compute power to evaluate every parameter for each specific chip project. The first generation of AI from Synopsys could search through 10^90,000 possible ways to place and route a design (that is 10 to the power of 90,000!). By comparison, when Google's DeepMind conquered the game of Go in 2016, it searched a space of roughly 10^360 possible moves, an amazing accomplishment at the time. With the second generation expanding into the behavior of the chip, and into the structure of how that behavior is architected, Dr. de Geus expects it will take a great deal of compute power to identify the right trade-offs. That is good news for the Synopsys customers who design the AI accelerators and CPUs needed to crunch those numbers!
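The arithmetic behind those exponents is easy to reproduce. The block size and candidate-site count in the snippet below are hypothetical round numbers, chosen only to show how quickly a placement search space dwarfs the Go comparison.

```python
# Back-of-the-envelope check on the search-space exponents quoted above.
# The cell count and candidate-site count are hypothetical round numbers.
import math

def exponent_of_placements(num_cells, sites_per_cell):
    """If each of num_cells can occupy any of sites_per_cell locations,
    there are sites_per_cell ** num_cells placements; return the base-10 exponent."""
    return num_cells * math.log10(sites_per_cell)

print("Go search space exponent (cited): ~360")
exp = exponent_of_placements(30_000, 1_000)
print(f"30,000 cells x 1,000 candidate sites each -> 10^{round(exp)}")
# -> 10^90000, the "10 to the power of 90,000" scale quoted above
```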

From Software-Defined to Software-Designed Hardware

Software is eating the world, but it is AI that is now eating the software. With more and more software applications becoming data-driven, and with neural networks already crossing the trillion-neuron mark, how is the semiconductor industry going to deliver the petaflop-months of compute that AI requires, from the data center to the edge?

Software-defined hardware has been proposed by industry luminaries as an elegant solution. It is based on the premise that chips could be personalized to the needs of specific applications, putting software in direct control of the instruction set architecture (ISA), the chip structure (microarchitecture), and the implementation method (silicon technology). Personalizing chips could deliver 1,000X better performance and energy efficiency, but here lies the problem: it currently takes two to three years to turn a new idea into an actual socket.
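As a purely hypothetical illustration of what putting software "in direct control" could mean, the sketch below maps an application profile onto ISA, microarchitecture, and process-technology choices with a few hand-written rules. The field names, values, and thresholds are our own inventions, not any Synopsys format; a software-designed flow would let AI explore these choice points rather than hard-code them.

```python
# Hypothetical illustration only: an application profile driving ISA,
# microarchitecture, and process-technology choices. Field names, values,
# and rules are invented for this sketch.
from dataclasses import dataclass

@dataclass
class ApplicationProfile:
    name: str
    dominant_kernels: list[str]   # e.g. ["int8_matmul", "softmax"]
    latency_budget_ms: float
    power_budget_w: float

@dataclass
class HardwareSpec:
    isa_extensions: list[str]     # instruction-set choices
    microarchitecture: dict       # structural choices
    process_node_nm: int          # implementation technology

def derive_spec(profile: ApplicationProfile) -> HardwareSpec:
    """Map an application profile to hardware choices with simple rules;
    a software-designed flow would search these choice points with AI instead."""
    wants_matmul = any("matmul" in k for k in profile.dominant_kernels)
    return HardwareSpec(
        isa_extensions=["int8_dot_product"] if wants_matmul else ["base_only"],
        microarchitecture={
            "mac_array": (64, 64) if wants_matmul else None,
            "on_chip_sram_kb": 4096 if profile.latency_budget_ms < 5 else 1024,
        },
        process_node_nm=5 if profile.power_budget_w < 5 else 12,
    )

profile = ApplicationProfile("vision_inference", ["int8_matmul", "softmax"], 2.0, 3.5)
print(derive_spec(profile))
```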

AI could be the answer. AI-driven design systems like DSO.ai have already delivered the productivity gains to compress months-long design tasks into days. With more global, system-level optimization on the way, we could gain the ability to create new, personalized chips in just weeks. More AI in the design process could indeed expand the software-defined hardware concept into software-designed hardware, making it both possible and economically attractive to deliver many flavors of acceleration to match the needs of the most intricate applications.

Conclusions

We are just beginning to imagine a future where AI designs chips, including other AI chips, that are far more efficient and powerful. By making personalized silicon both technically feasible and economically attractive, software-designed hardware could deliver many flavors of acceleration matched to the most intricate data-driven applications. The impacts will be profound, and all chip designers should take note of the momentum this movement is generating. Google and NVIDIA are researching a similar approach, and Samsung has announced that it already has working silicon back. It looks to us like Synopsys holds the pole position to benefit from this trend.

We take a deeper look into the latest Synopsys innovations in this research paper.