How much of a typical chip is based on IP reuse? As a percentage, is it going up or down? Here are some figures that may surprise you.
I’m wondering if
he [that would be me] is confusing die area with the number of IP blocks
being used. The chart you used shows the percentage of die area dedicated to each category.
If you think about it, memory is pretty area-efficient, so adding a lot of memory has less impact on die area than adding a lot of analog functions, since analog functions don't scale all that well. However, if you were adding a great deal of memory, then its share of the total die area would still grow accordingly.
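A quick way to see why die-area share and block count can diverge is to compare rough area budgets per function. A minimal sketch, assuming purely illustrative densities (the numbers below are my own assumptions, not figures from the chart):

```python
# Back-of-envelope die-area split (illustrative numbers only).
# Assumed densities: SRAM packs far more transistors per mm^2 than
# synthesized logic, and analog blocks barely shrink with the process node.
MTRANSISTORS_PER_MM2 = {"memory": 60.0, "logic": 15.0, "analog": 2.0}  # assumed

def area_share(mtransistors_by_function):
    """Return each function's share of total die area, given transistor counts (in millions)."""
    areas = {f: n / MTRANSISTORS_PER_MM2[f] for f, n in mtransistors_by_function.items()}
    total = sum(areas.values())
    return {f: a / total for f, a in areas.items()}

# A chip that is overwhelmingly memory by transistor count...
print(area_share({"memory": 4000, "logic": 900, "analog": 20}))
# ...still gives logic and analog a sizeable slice of the die area,
# because memory is so much more area-efficient.
```

With those assumed densities, memory is roughly 80% of the transistors but only about half the die area, which is why counting IP blocks and measuring die area tell different stories.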
Does this impact the
die area dedicated to the increasing number of IP blocks being used? I
think it depends on the types of IP blocks we are talking about. You can
put a lot of different IP blocks on an SoC. However, if we are talking
about a 5-6 billion-transistor SoC, the IP blocks won’t take up all that
much die area. An ARM CPU core at 400K gates comes to mind. You can put
a lot of them on such a chip without having much impact on total die
area. This is some of my thinking behind having such a chart. If I
remember correctly, when the ITRS originally came out with this chart,
they had memory equal to 94% of die area, re-used logic at 4% and new
logic at 2%. Their thinking was that on a 10-billion-transistor chip, no one would have the time to put a lot of new logic on it. 2% of 10B is
200M. Back then no one was building even 100M transistor chips, so I
think they felt that the overwhelming majority of such an SoC was going
to be either memory or re-used logic. It is a lot easier to place memory and replicate it over and over than to spend time designing new logic.
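As a quick sanity check on those numbers, here is a minimal back-of-envelope sketch, assuming roughly four transistors per gate (a rule-of-thumb conversion of my own, not a figure from the chart):

```python
# Rough sanity check on the ARM-core and ITRS figures above.
# Assumption: ~4 transistors per gate (a common rule of thumb for standard-cell logic).
TRANSISTORS_PER_GATE = 4

core_gates = 400_000                                   # the 400K-gate ARM CPU core mentioned above
core_transistors = core_gates * TRANSISTORS_PER_GATE   # ~1.6M transistors per core

soc_transistors = 5_000_000_000                        # a 5-billion-transistor SoC
cores = 100                                            # even a very generous core count
print(cores * core_transistors / soc_transistors)      # ~0.032, i.e. about 3% of the chip

# The ITRS split: 2% new logic on a 10B-transistor chip is still 200M transistors
print(0.02 * 10_000_000_000)                           # 200,000,000
```

Even a hundred such cores amount to only a few percent of the transistor budget, which is why block count and die area lead to such different pictures.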
I always felt this was a very short-sighted view of
how designers were going to design their parts. Essentially it said
that logic designers were going to become memory designers, and that was never going to happen.
So the real question becomes: is true innovation in hardware being constrained by how much logic can be designed, or have we already reached the plateau of what can reasonably be added to a new design without raising risk to unacceptable levels? How big an impact will ESL have on that? Will it allow an increase in novelty without adding risk? Are system architecture and verification becoming the bottleneck?
Brian Bailey – keeping you covered