Nvidia uses AI for place and route on its chips

Nvidia just published a paper and blog post describing how its AutoDMP system can accelerate modern chip floorplanning using GPU-accelerated AI/ML optimization, delivering a 30X speedup over previous methods. Hopefully it doesn’t get the treatment Google’s AI place-and-route solution got.

AutoDMP is short for Automated DREAMPlace-based Macro Placement. It is designed to plug into an Electronic Design Automation (EDA) flow used by chip designers to accelerate and optimize the time-consuming process of finding good placements for the building blocks of processors. In one of Nvidia’s examples, AutoDMP was applied to laying out 256 RISC-V cores comprising 2.7 million standard cells and 320 memory macros; it produced an optimized layout in 3.5 hours on a single Nvidia DGX Station A100.
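To give a sense of the optimization problem these placers tackle: DREAMPlace-style tools minimize total wirelength (among other objectives) over cell positions. A standard proxy is half-perimeter wirelength (HPWL), the half-perimeter of the bounding box of each net’s pins. Below is a minimal sketch of HPWL on a toy netlist; the cell names and coordinates are made up for illustration and are not from the AutoDMP paper.

```python
# Sketch: half-perimeter wirelength (HPWL), the classic placement
# cost that tools like DREAMPlace optimize (in differentiable form).
# Cells and nets here are hypothetical toy data.

def hpwl(placement, nets):
    """placement: cell name -> (x, y); nets: lists of connected cells."""
    total = 0.0
    for net in nets:
        xs = [placement[cell][0] for cell in net]
        ys = [placement[cell][1] for cell in net]
        # Half-perimeter of the net's bounding box.
        total += (max(xs) - min(xs)) + (max(ys) - min(ys))
    return total

placement = {"a": (0.0, 0.0), "b": (3.0, 1.0), "c": (1.0, 4.0)}
nets = [["a", "b"], ["a", "c"], ["b", "c"]]
print(hpwl(placement, nets))  # 14.0
```

A real placer optimizes millions of such cell positions simultaneously, subject to density and overlap constraints, which is what makes GPU acceleration so effective here.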

Initial metrics show it does an amazing job, in a fraction of the time. Definitely worth the read.

AutoDMP is open source, with the code published on GitHub. Below is a link to an article about Cadence’s Cerebrus AI place-and-route solution.

Article:
