As a follow-up to my previous column on Magma Design Automation's gain-based synthesis technology, I received the following email from Hirendu Vaishnav, President & CEO of SynApps Software Corp.
"Hi Max, that was an excellent EEDesign.com column on 'Gain-based synthesis.' Given the complexity of the technical problem, you captured and presented it rather well for most of us. However, I would like to point out a couple of things...
When you look at the 'gain-based' mechanism from a broader perspective, it is a dual of the traditional design methodology (an "edges become nodes and nodes become edges" type of equivalence in graph theory!).
Fundamentally, synthesis tools suffer from the 'unknown-load' problem. The prevalent methodology is such that the unknown-load problem results in an 'unknown-timing' problem, because the area (at least cell area) is assumed constant. You can switch this problem around and assume timing is fixed, as in the case of Magma (and others before them -- the concept of a constant-delay methodology has been around for a while and has been used successfully in real high-performance chip designs). But guess what happens? Now the area is unknown.
To put this in really simple terms, Magma's idea is that -- by means of their gain-based technology -- you can assume the delay of a cell to be constant, and further assume that you can achieve this by sizing the cell up and down after synthesis until you get the "assumed" fixed delay. However, this results in unpredictable areas and, sometimes, as in the case of the non-continuous (discrete) cell sizes available in a real library, even unpredictable delays.
The point I am trying to make is that a 'dual problem definition,' while looking very attractive at first glance, is not inherently superior to its counterpart -- it's just that the problems are now of a different type and are pushed into a different domain.
I could further argue that in fact the older methodology was better because timing closure issues affected only 5% of the design and the techniques for addressing them are well established.
The real problem, and the area where Magma seems to have provided a solution, is:
1) Better infrastructure (i.e., faster database/tools)
2) Integrated design database/tool/flow
In fact, if Synopsys or Cadence had to rewrite their entire database (smartly) and their toolset with the goal of handling the mega designs of today -- and had to build a reasonably good integrated toolset on that database -- I believe that Magma may not have a significant advantage over them owing entirely to its "gain-based" methodology. In fact, I suspect that even Magma internally is using a methodology that is a mix of traditional and 'gain-based' mechanisms. Anyway, this is my $0.02 contribution to the discussion."
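Before presenting my reply, it's worth making Hirendu's constant-delay point concrete. The following Python sketch uses a logical-effort-style delay model (delay = p + g * h, where the "gain" h is the ratio of load capacitance to input capacitance). All of the names and numbers here are my own illustrative toys, not anyone's real library or Magma's actual implementation, but the sketch shows the crux of his argument: once you pin a cell's delay, its size (and hence area) must float with the load it drives.

```python
# Logical-effort-style delay model: delay = p + g * h,
# where h ("electrical effort", a.k.a. gain) = C_load / C_in.
# Fixing the delay fixes h, so the cell's input capacitance
# (a rough proxy for its size/area) must scale with the load.
# All parameters are illustrative, not from any real library.

def required_size(c_load, delay_target, g=1.0, p=1.0):
    """Input capacitance needed to hit delay_target while driving c_load."""
    h = (delay_target - p) / g   # allowed electrical effort (gain)
    if h <= 0:
        raise ValueError("delay target below the cell's parasitic delay")
    return g * c_load / h        # C_in = g * C_load / h

# Same fixed delay, three different loads -> three different cell sizes:
for c_load in (2.0, 8.0, 32.0):
    print(c_load, required_size(c_load, delay_target=5.0))
```

In other words, "unknown timing" has indeed been traded for "unknown area", exactly as Hirendu says.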
I emailed Hirendu back as follows:
"Hi there, Hirendu. Thank you very much for your kind words and also for your very interesting and well-considered feedback. You are correct in saying that Magma has indeed transformed a fixed-area, variable-timing problem into a fixed-timing, variable-area problem. You are also correct that, in and of itself, this transformation does not solve the problem. However, as you noted, the problem is now in a different domain, which allows Magma's gain-based synthesis to attack it from a different perspective.
One major advantage of Magma's transformation is that area is additive while timing is not. Of course, placement using Magma's technique is somewhat tricky, since cell areas are modified as placement is performed. However, as some cells become larger, others tend to become smaller, and things tend to average out. Furthermore, according to the folks at Magma, if a feasible solution exists for the given timing constraints, the placement is guaranteed to converge (in fact I think they have applied for a patent on this).
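To make the "area is additive, timing is not" point concrete, here is a toy Python sketch (hypothetical cells and numbers of my own devising, not a real flow). Total area is a simple sum over cells, so resizing one cell changes it by a purely local amount; a cell's delay, by contrast, depends on the input capacitance of the cell it drives, so resizing one cell also perturbs the delay of the cell driving it.

```python
# Toy illustration: area is additive, timing is not.

def total_area(sizes):
    return sum(sizes)            # purely local: a simple sum over cells

def stage_delay(size, load, g=1.0, p=1.0):
    return p + g * load / size   # depends on the NEXT cell's size (its load)

sizes = [1.0, 2.0, 4.0]
loads = sizes[1:] + [8.0]        # each cell drives the next; last drives 8.0
before = [stage_delay(s, l) for s, l in zip(sizes, loads)]

sizes[1] *= 2                    # resize the MIDDLE cell only...
loads = sizes[1:] + [8.0]
after = [stage_delay(s, l) for s, l in zip(sizes, loads)]

# ...and the FIRST cell's delay changed too, because its load changed,
# while the change in total area is just the local size increment.
print(before[0], after[0], total_area(sizes))
```

This non-local ripple in timing is exactly what makes the traditional fixed-area formulation hard to converge, and what the fixed-timing formulation sidesteps.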
Actually, there are a number of considerations here. As you know, the traditional approach is to use timing analysis during (and post) synthesis to give you a list of critical paths. However, in addition to the fact that this analysis is based on estimated (inaccurate) wireload models, traditional tools don't give you any indication as to how hard they had to work to meet the timing constraints. Thus, although you know things are going to change when you move into the physical domain, you have no feel for the probability that you will still be able to meet the timing constraints when you get there.
By comparison, in addition to highlighting critical paths via a timing report, Magma also provides a 'Gain Analysis' report that identifies paths with low gain values (these are the paths that have little room for maneuver). This provides a good feel for the probability that you will still be able to meet timing constraints in the physical domain.
As you know, traditional timing-driven placement is followed by (typically iterative) post-placement techniques to improve the timing. However, no current placement algorithm can guarantee that it will always find a solution that meets timing (even assuming one exists). Placement algorithms apply a set of heuristics to keep the cells/elements on a critical path close together. The problem is that the path will change as placement proceeds. Also, things get really tricky if you have a significant number of 'almost-critical' paths, because these can easily become critical following placement.
Another major consideration with traditional approaches is that there can be any amount of over-designing in the non-critical paths that is never recovered. By comparison, although Magma places timing as the number one constraint, this doesn't mean that area is ignored. During Magma's gain-based optimization phase, aggressive 'gain trimming' is performed to adjust the gain value of every cell in order to meet timing: gain values are lowered to eliminate negative slack and raised to eliminate positive slack. One by-product of this is that signal integrity problems are reduced, because aggressor nets are not overdriven and victim nets are not under-driven.
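A gain-trimming pass might be sketched as follows. To be clear, this is my own drastically simplified illustration (a single cell against a fixed required arrival time, with a linear delay-versus-gain model), not Magma's actual algorithm: the gain is nudged down while slack is negative and up while slack is positive, converging toward zero slack.

```python
# Hypothetical gain-trimming sketch (NOT Magma's actual algorithm).
# Model: one cell, delay = p + g_le * gain, slack = required - delay.
# Negative slack -> lower the gain (bigger, faster cell);
# positive slack -> raise the gain (smaller cell, less area/drive).

def trim_gain(required, gain, g_le=1.0, p=1.0, step=0.1, iters=200):
    for _ in range(iters):
        slack = required - (p + g_le * gain)
        if abs(slack) < 1e-6:
            break
        gain += step * slack     # nudge gain toward zero slack
    return gain

g = trim_gain(required=5.0, gain=10.0)
print(round(g, 3))               # converges toward a gain of 4.0
```

In a real tool the same idea would run over every cell of the netlist simultaneously, which is what squeezes out both the negative slack and the wasteful positive slack.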
As I noted in my original article: "The smallest possible sizes are used for each gate so as to just meet the timing budget. This means the chip occupies the smallest amount of silicon real-estate, which dramatically reduces congestion, power consumption, and noise problems."
Your point was: "I could further argue that in fact the older methodology was better because timing closure issues affected only 5% of the design and the techniques for addressing them are well-established." Although the traditional techniques may be well established, they aren't very predictable. The typical flow is to iterate back and forth between synthesis and physical design trying to meet timing, and sometimes this iteration never converges. Magma's approach of adjusting gain on all cells (as opposed to the 5% considered by traditional flows) not only provides a predictable timing closure path but, as noted above, also results in smaller area and reduced power consumption, congestion, and noise/SI issues.
I agree with you that Magma's infrastructure and full RTL to GDSII integrated database/tool/flow is a major part of their solution. Also I know that every EDA tool vendor has a story they want to shout from the rooftops, and that there's always a lot of hype floating around (like any design engineer I've been on the receiving end of much of it).
Anyway, thanks a lot for your very interesting feedback -- regards, Max"
An energetic dialog
As you can imagine, Hirendu responded energetically, and we ended up having a very interesting dialog that covered a lot of ground. Suffice it to say that, amongst many other points, Hirendu feels we should make a distinction between "traditional" approaches and new approaches that augment traditional methodologies with physical information early in the process (for example, PKS by Cadence or Physical Compiler by Synopsys).
Hirendu's position is that if you have a smart physical synthesis tool where placement and synthesis are truly integrated, a "pure" gain-based methodology may hold marginal, if any, advantage over traditional methodologies.
My response was that although some modern tools do have synthesis and placement integrated, thus far my impression is that Magma holds the high ground regarding the tightness of its integration between synthesis, placement, and routing (this is in addition to their gain-based methodology). And so it goes ... to be honest we could bounce this one back and forth for a long time.
What would be really interesting would be to hear from engineers in the trenches who have worked with both mainstream tools and Magma's gain-based alternatives on the same design, to see what their impressions are. Any takers? (Note that names can be changed to protect the innocent). In the meantime, let's just leave it that if you are poised on the brink of yet another huge, complex, high-performance design, then it can't hurt to at least take a look at Magma's solution to see if it's applicable to your design challenge.
Until next time, have a good one!
Clive (Max) Maxfield is president of Techbites Interactive, a marketing consultancy firm specializing in high-tech. Author of Bebop to the Boolean Boogie (An Unconventional Guide to Electronics) and co-author of EDA: Where Electronics Begins, Max was once referred to as a "semiconductor design expert" by someone famous who wasn't prompted, coerced, or remunerated in any way.