All of the attempts at creating electronic design automation (EDA) tools that are interoperable through the mechanism of a grand-unifying open source database have failed and are likely to continue to fail. Why?
First, a story: I have a good friend whose first full-time job out of school in the early 1980s was with RCA in the layout artwork portion of the design automation group. When he joined, the team was still recovering from an attempt to move all layout data to a common database. Even in the ‘80s, IC implementation was replete with formats—APPL from Applicon, GDS from Calma, etc. The landscape was complicated with numerous proprietary internal layout formats devised to allow for the digitization of rubylith. Other than the layout stations, most EDA tools came from an in-house design automation group, if only because there were no alternatives.
These in-house teams built all the layout manipulation and analysis tools. They wrote and supported everything from place and route to design rule checking, extraction, and plotting, along with all of the low-level logical operators necessary to support those functions. The concept of a unifying database was an obvious solution to the problem of tool interoperability. The RCA unified database idea was to write all known formats to one central repository, freeing the team from a situation in which each tool had its own data format and eliminating the time spent on the onerous task of conversions.
A wonderful idea, but one that failed. I remember my friend commenting that, by the time he joined in 1981, the idea of a central database was ridiculed as failed mythology of the past. In practice, some two years after the “central repository” was conceived and specified, its file sizes were far larger than the designers had expected. The database became unreasonably convoluted as each individual application crammed more information into the already stretched schema. The project strained the capacity limits of the IT infrastructure of the day, and read/write accesses became ever slower. A common database for all IC layout design had proven to be impractical.
RCA was certainly not the only organization with the idea of unifying EDA data. Across the intervening decades, many earnest attempts have been made, both within large integrated device manufacturers and by EDA vendors themselves. Still, well into the 1990s, there was no commercially viable success in delivering a grand-unifying EDA database.
Then, Cadence contributed its Genesis database to the growing effort among the user community to create an open database standard. Genesis eventually became the Open Access database (OA) with Cadence retaining control of the source code. The OA concept was birthed during a period of great excitement over the business prospects of “open source” code development and distribution as a way to deliver a product. This new model had captivated some entrepreneurs and investors and was a darling of the dot-com boom of the ‘90s.
Cadence turned distribution of the database over to the Silicon Integration Initiative (Si2) consortium. While OA has brought some standardization benefits, it has also remained problematic. Today, everyone who consumes EDA tools knows what OA is, but worldwide adoption of OA as a centralized platform has never materialized. And even though OA source code is available to all members of Si2, a true open-source model for OA has never been implemented. Perhaps to avoid the resource-consuming chaos that would ensue if the entire industry were free to modify the OA source, Cadence has kept tight control of the code, guaranteeing a high degree of stability and conformity while retaining control over what is, after all, the company’s intellectual property. Therein lies the problem: OA is called an “open standard” capability, but code ownership rights are somewhat fuzzily maintained by Si2 and Cadence, and the code is definitely not publicly owned.
That said, OA is available, and it is stable and self-consistent. OA has provided a good underlying data control and storage mechanism for Cadence tool users and for the creators of a certain class of EDA tools. However, OA has serious limitations. It is not suitable for multithreaded or distributed, concurrent EDA applications; thus, it may be of increasingly limited usefulness as the next generation of EDA tools evolves to take advantage of these modern architectures.
Can OA overcome its limitations and become an extensible, high-performance, multithreaded, truly open standard, with the source code owned and controlled by the industry rather than by one company? Without this, many EDA vendors view moving to it natively as a risky proposition. After all, would you want the very foundation of your product to be controlled by a company that is, in some or all respects, your competitor?
The few EDA companies that have decided to depend on OA natively are now in the unenviable position of having to maneuver their products around the controls and limitations enacted by the true owners of their core architecture. These business complications are reflected in the technology itself. Unless some kind of parallel data-management technique is developed, tools built on OA will certainly never run faster than OA, will have no higher capacity than OA, and will be unable to implement features that span beyond those of OA.