All of the attempts at creating electronic design automation (EDA) tools that are interoperable through the mechanism of a grand-unifying open source database have failed and are likely to continue to fail. Why?
First, a story: I have a good friend whose first full-time job out of school in the early 1980s was with RCA in the layout artwork portion of the design automation group. When he joined, the team was still recovering from an attempt to move all layout data to a common database. Even in the ‘80s, IC implementation was replete with formats—APPL from Applicon, GDS from Calma, etc. The landscape was further complicated by numerous proprietary internal layout formats devised to allow for the digitization of rubylith. Other than the layout stations, most EDA tools came from an in-house design automation group, if only because there were no alternatives.
These in-house teams built all the layout manipulation and analysis tools. They wrote and supported everything from place and route to design rule checking, extraction, plotting, and all of the low-level logical operators necessary to support those functions. The concept of a unifying database was an obvious solution to the problem of tool interoperability. The RCA idea was to write all known formats to one central repository, freeing the team from a situation in which each tool had its own data format and eliminating the time spent on the onerous task of conversions.
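The arithmetic behind the central-repository idea is worth making explicit. The following toy sketch (an illustration of the general argument, not RCA's actual tooling) shows why a hub looks so attractive: direct pairwise conversion between N formats needs on the order of N² converters, while a central repository needs only one importer and one exporter per format.

```python
# Toy illustration (not RCA's actual tooling): compare the number of
# converters needed for direct pairwise translation between N formats
# versus a hub-and-spoke model with one central repository.

def pairwise_converters(n_formats: int) -> int:
    # Every ordered pair of distinct formats needs its own converter.
    return n_formats * (n_formats - 1)

def hub_converters(n_formats: int) -> int:
    # One importer plus one exporter per format.
    return 2 * n_formats

for n in (3, 6, 12):
    print(n, pairwise_converters(n), hub_converters(n))
# 3 formats:  6 pairwise vs  6 hub converters
# 6 formats: 30 pairwise vs 12 hub converters
# 12 formats: 132 pairwise vs 24 hub converters
```

The gap widens quadratically as formats proliferate, which is exactly why the unified database kept looking like an "obvious" solution despite repeated failures in practice.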
A wonderful idea, but one that failed. I remember my friend commenting that, by the time he joined in 1981, the idea of a central database was ridiculed as failed mythology of the past. The actual experience of the “central repository” was that, some two years after the project was conceived and specified, file sizes were much larger than the designers had expected. The database became unreasonably convoluted as each individual application attempted to cram more information into the already stretched schema. The project strained the capacity limits of the IT infrastructure of the day, and read/write accesses became ever slower. A common database for all IC layout design had proven to be impractical.
RCA was certainly not the only organization with the idea of unifying EDA data. Across the intervening decades, many earnest attempts have been made, both within large integrated device manufacturers and by EDA vendors themselves. Still, well into the 1990s there remained no commercially viable success in delivering a grandly unifying EDA database.
Then, Cadence contributed its Genesis database to the growing effort among the user community to create an open database standard. Genesis eventually became the OpenAccess (OA) database, with Cadence retaining control of the source code. The OA concept was birthed during a period of great excitement over the business prospects of “open source” code development and distribution as a way to deliver a product. This new model had captivated some entrepreneurs and investors and was a darling of the dot-com boom of the ‘90s.
Cadence turned distribution of its database over to the Silicon Integration Initiative (Si2) consortium. While OA has brought some standardization benefits, it has also remained problematic. The status today is that everyone who consumes EDA tools knows what OA is, but worldwide adoption of OA as a centralized platform has not materialized. And even though OA source code is available to all members of Si2, a true open-source model for OA has never been implemented. Perhaps to avoid the resource-consuming chaos it might endure if the entire industry were free to modify the OA source, Cadence has kept tight control of the code, guaranteeing a high degree of stability and conformity while retaining control over what is the company’s intellectual property. Therein lies the problem: OA is called an “open standard” capability, but code ownership rights are somewhat fuzzily maintained by Si2 and Cadence, and are definitely not publicly owned.
That said, OA is available, and it is stable and self-consistent. OA has provided a good underlying data control and storage mechanism for Cadence tool users and for the creators of a certain class of EDA tools. However, OA has serious limitations. It is not suitable for multithreaded or distributed, concurrent EDA applications; thus, it may be of increasingly limited usefulness as the next generation of EDA tools evolves to take advantage of these modern architectures.
Can OA overcome its limitations and become an extensible, high-performance, multi-threaded, truly open standard, with the source code owned and controlled by the industry rather than one company? Without this, many EDA vendors view moving to it natively as a risky proposition. After all, would you want the very foundation of your product to be controlled by a company that is, in some or all respects, your competitor?
The few EDA companies that have decided to depend on OA natively are now in the unenviable position of having to maneuver their products around the controls and limitations enacted by the true owners of their core architecture. These business complications are reflected in the technology itself. Unless some kind of parallel data-management technique is developed, tools built on OA will never run faster than OA, will have no higher capacity than OA, and will not be able to implement features that span beyond those of OA.
Sorry to see that the custom implementation group at Mentor is so negative about OpenAccess; that is not an opinion shared by the Calibre group, as a couple of other people have pointed out. I am not sure what "universal acceptance" is, but OA is at least partially adopted by many EDA companies (Synopsys, Magma, SpringSoft, Jedat, Cadence, etc.) and major IC companies (Intel, IBM, ST Micro, TSMC, Samsung, etc.).
As Ed Petrus pointed out, the code available from Si2 is a reference implementation of the OA standard. Any company is free to create its own implementation (as is being done now with scripting language bindings) or to modify the reference implementation source code. Another EDA company I worked for made several major changes to OA and has those changes in its production code. The only requirement is that any code modifications be contributed to Si2. There is no requirement to wait for contributed code to show up in the reference implementation before shipping it in a product.
Last, there are some curious comments in the article about router integrations in OA. Many EDA companies (Magma, Pulsic, Cadence) have router integrations that translate from OA to another data structure and back. Since both the source and target databases are controlled by the particular EDA company, it is much easier to maintain complete data integrity, and performance is excellent. SpringSoft and Pyxis (now owned by Mentor) have a very mature shared runtime memory integration in OA for Pyxis NexusRoute-HP. As for OA not working well for DRC, here is a quote from DeepChip: "Springsoft and Mentor working together to enable full signoff DRC check in a DRD style environment. Calibre run[s] in seconds in the background every time you unselect a polygon in Laker based on layer. When layout is complete it's 100% DRC clean." Full sign-off DRC checks in seconds sounds like pretty good performance to me. It's unfortunate that the Mentor Deep Submicron Division (analog) isn't on the same page as the rest of Mentor.
On multi-threading: Ciranova's device placer, Helix, is fully multi-threaded and works fine with a non-thread-safe OpenAccess. Helix has been in production making ICs for at least two years. A sound architecture and careful implementation are prerequisites.
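A common way to get multi-threaded performance on top of a non-thread-safe library is to confine all library calls to a single dedicated thread while workers run in parallel and hand results over through a queue. The sketch below illustrates that general pattern only; it is not Ciranova's actual architecture, and the dict standing in for the database is a placeholder.

```python
import queue
import threading

# General pattern (not Ciranova's actual design): workers compute in
# parallel; only one dedicated writer thread ever touches the
# non-thread-safe "database" (a plain dict stands in for it here).
results = queue.Queue()

def worker(shapes):
    # Placement/analysis work runs fully in parallel across threads.
    placed = [(name, x * 2) for name, x in shapes]  # stand-in computation
    results.put(placed)

def db_writer(n_batches, db):
    # Serialize every database write onto this single thread.
    for _ in range(n_batches):
        for name, x in results.get():
            db[name] = x

db = {}
workers = [threading.Thread(target=worker, args=([("a", 1), ("b", 2)],)),
           threading.Thread(target=worker, args=([("c", 3)],))]
writer = threading.Thread(target=db_writer, args=(len(workers), db))
for t in workers + [writer]:
    t.start()
for t in workers + [writer]:
    t.join()
print(sorted(db.items()))  # all writes happened on one thread
```

The parallel speedup comes from the compute phase; the single-threaded write phase stays correct because the library is never entered concurrently.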
As of this year's latest release of the reference implementation from Si2 (oa22.41p004), support for multi-threading is explicit (announced at the October 2010 OAC). So Linda's comment on multi-threading is not well informed.
A typical situation we find at our customers involves layouts generated by Helix that are subsequently analyzed by Mentor's DFM tools. Helix populates design databases with layout views, and Mentor's world-class DFM tools analyze the layouts for DRC and LVS correctness and subsequently extract a post-layout netlist for simulation. Such flows are OpenAccess based, and no data translation is required between Ciranova and Mentor's DFM tools. Typically, such designs are also PyCell/iPDK based, which implies the design data is open to all other OpenAccess tools.
This comment concerns a few technically inaccurate statements in the article above.
OpenAccess is a specification of a schema for representing electronics design data.
OpenAccess as such is not a database.
The distribution we all receive through Si2 is a reference implementation of the schema expressed in the OpenAccess specification. It happens to be a very good implementation, probably some of the best-designed and best-implemented software in EDA.
Many tools in production rely on OpenAccess for advanced design.
A design database comes about when design tools populate a design library with the many representations possible in OpenAccess.
OpenAccess also represents and controls the architecture of design libraries.
Many tools can manipulate the design database - sometimes simultaneously, aided by design management tools. Viewed this way, an OpenAccess design database is a “centralized database” and this model has been in play for a long time.
Almost taken for granted.
If by “centralized database” Linda is referring to an in-memory data model accessed by multiple tools, then I'm aware of at least one case where OpenAccess-based tools from different companies work with one in-memory image of OpenAccess design data. That said, this level of tight integration is not always necessary.
The reference implementation is not a requirement to be OpenAccess compliant. It is possible for a company/tool to undertake their own development of an implementation of a certain aspect of the OpenAccess specification. I'm aware of at least one case where this was done successfully.
The OpenAccess effort by Si2 is forging ahead on 3D/stacked chips as well as on standardizing chip power, thermal, and stress models. But if a decade of so-called cooperation hasn't yielded the results one had hoped for, what is the motivation for established as well as startup EDA tool providers to contribute? Are we just throwing more into the mix, fully knowing and expecting an outcome we don't like, I wonder...
Dr. MP Divakar
Here's an article from the latest issue of Electronic Design that gives details on the different ways of using OpenAccess and the success companies have had with it.
Mentor is horribly schizophrenic on OA. Calibre supports it; the analog tools do not (maybe some weak translation). The analog tools continue to limp along on AMPL, a language developed back in the Falcon Framework days.
To build a best-in-class interoperable product, you don't need the OA in-memory database, but you absolutely need to use the OA API.
If you have a product that uses the in-memory OA database but does not add any more value than the incumbent, then you are not going to overcome user inertia to adopt your product. Your product needs to provide value (productivity, quality of design, etc.) while providing interoperability through the OA API. That's how we are making our Titan customers successful with OA.
Accellera has, what, 14 member companies; Si2 has over 100. Which better represents the industry? Just look at the Si2 Board of Directors, and yes, Cadence and Synopsys are both on there.
Gee, here's a nice article from Mentor Graphics' web site which starts out like this:
"Now that almost all of the major custom design tools run on OpenAccess, we often get asked about how well Calibre supports OpenAccess (OA). The truth is that Calibre has supported reading polygonal data from OA since February 2007 and we have kept up with the new releases of OA as they come along." Here's the full link; you'll probably have to cut and paste it, but if there's a problem, just go to the Mentor web site and search for OpenAccess.