There was a time when humans could keep all needed knowledge in their heads. Writing hadn’t yet been invented, so if it didn’t stay in your head, it was gone forever.
At some point, we ended up with too many things to remember, so someone figured out how to write things down. That was around 3400 BCE, and some of the earliest surviving writing samples keep track of commerce and trade. We carried on that way for nearly five millennia, until someone decided that ad hoc recordkeeping was inadequate: too many mistakes were being made, inadvertently or intentionally. So formal double-entry bookkeeping systems were developed as a way to be doubly sure that everything was properly accounted for.
Chip design has been on that same trajectory: from mental notes to ad hoc notes to a search for a formalized structure for keeping track of the entire process. But that last step – from ad hoc to structured – has proved elusive.
We’re long past the days of simple silicon chips that do a bit of logic or analog. Today’s projects are enormously complex. The silicon portion alone is beset with complications like process variation, mushrooming design rules, and, soon, multiple patterning, to name just a few. And that’s just the “basic” chip. Beyond that, systems-on-chip (SoCs) have software to worry about. 3D packaging means housing multiple chips and interposers within a single package. And we have the integration of micro-electro-mechanical systems (MEMS), either on a chip or alongside a chip in a package.
While each company does the best it can at keeping track of its design process to ensure that money isn’t lost on mask re-spins and missed market windows, until recently there have been no agreed-upon formal tracking mechanisms. Just as double-entry bookkeeping ushered in an era of quality financial reporting, it’s now time to replace the scattershot, error-prone ways that companies track their projects with formal, automated methods that lead to a quality result.
Chasing first-pass success
The most obvious reason for tracking project progress is to have a product that works right the first time. With modern process complexity, there are simply too many things to go wrong. Tracking provides both visibility into possible issues and suggestions for possible solutions when things do go awry.
Complexity notwithstanding, projects still must be completed faster than ever due to shortening product windows. Any kind of hiccup can interrupt that rush to market, meaning execution has to be flawless. And, of course, there are numerous stakeholders – managers, partners, and customers, to name a few – looking over everyone’s shoulders, with expectations of ongoing reporting.
As obvious as these requirements may sound, it’s too easy to associate formalized tracking methods with standardized processes. The problem is, there are no standard ways of creating an SoC or any other electronic component. Each process at each company is unique. This has stymied efforts to standardize tracking.
Making things harder yet is the fact that any process involves numerous disconnected elements. There are things to be done manually; things to be done by sub-contractors; design and verification steps that are automated by tools; and differing disciplines like analog, digital, optical, mechanical, and software, all with different tools, many of which can’t talk to each other.
Further complicating matters is the fact that, from project to project, things change. Tools change, expectations change, customers and partners and managers change. It’s hard to set up a fixed methodology in such a dynamic environment. And, if that weren’t enough, engineers are the ones who end up having to implement most tracking and reporting, and they see it as burdensome and time-consuming, providing little value, and maybe even squelching their creativity. It’s no wonder companies struggle with formalizing process tracking.
Although no two processes are alike, there are high-level components that they share:
- Each process step must be specified for the project. What versions of each tool are to be used? Which blocks will be designed, and which will be purchased as IP? And, for the latter, which IP?
- Completion of each design step must be monitored: what errors or warnings, if any, were generated by the tools? And how were they resolved?
- Progress towards technical objectives must be tracked: how are performance, power, and area converging towards their goals?
- The overall end result must be validated against the initial specification: is the test suite exhaustive, and has it been run and passed?
- Documentation must be complete: are all of the customer materials correct and up-to-date?
- The project itself must be managed: how are we doing against the schedule? How many bugs remain to be fixed?
- You may even track customer engagement: how are customer questions and issues trending during the first six months of sales?
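As a purely hypothetical illustration (none of these class names, step names, or fields come from any particular product), the kinds of tracked items listed above could be modeled as a simple checklist of process steps, each carrying its tool version, issue counts, and a derived pass/fail status:

```python
from dataclasses import dataclass
from enum import Enum

class Verdict(Enum):
    PASS = "pass"
    FAIL = "fail"

@dataclass
class ProcessStep:
    """One tracked step in a chip-design flow (hypothetical schema)."""
    name: str                 # e.g. "RTL lint", "timing signoff"
    owner: str                # engineer, sub-contractor, or tool
    tool_version: str = ""    # which version of the tool was specified
    errors: int = 0           # errors reported by the tool
    warnings: int = 0         # warnings reported by the tool
    resolved: bool = False    # have all reported issues been resolved?

    def verdict(self) -> Verdict:
        # A step passes only if it ran clean or every issue was resolved.
        if self.errors == 0 and self.warnings == 0:
            return Verdict.PASS
        return Verdict.PASS if self.resolved else Verdict.FAIL

steps = [
    ProcessStep("RTL lint", "digital team", "lint-2024.1",
                warnings=3, resolved=True),
    ProcessStep("timing signoff", "backend team", "sta-2024.2", errors=2),
]
for s in steps:
    print(f"{s.name}: {s.verdict().value}")
```

The point of such a schema is not the fields themselves, which would differ at every company, but that each step records both what was specified and how its issues were dispositioned.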
Today these are handled by a broad mix of tracking sheets, spreadsheets, checklists, email archives, and even hallway conversations. Scripts can partially automate the extraction of data from log files and tool reports, but they’re fussy and hard to maintain. None of these pieces ties into any other, making it hard to unify everything. So reporting is manual, meaning that an engineer must begrudgingly set aside design work to pull the data together into a report or presentation.
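The scripts in question typically scan tool logs for error and warning lines. A minimal sketch of the idea follows; the "SEVERITY: message" log format here is an assumption for illustration, and real EDA tool logs vary widely from tool to tool, which is exactly why such scripts are fragile:

```python
import re

# Match lines beginning with a severity keyword (hypothetical log style).
LINE = re.compile(r"^(ERROR|WARNING)\b", re.MULTILINE)

def tally(log_text: str) -> dict:
    """Count error and warning lines in a tool log."""
    counts = {"ERROR": 0, "WARNING": 0}
    for severity in LINE.findall(log_text):
        counts[severity] += 1
    return counts

sample = """INFO: elaborating top
WARNING: unconnected port 'scan_en'
ERROR: latch inferred in always block
"""
print(tally(sample))  # {'ERROR': 1, 'WARNING': 1}
```

Every tool that changes its log format silently breaks a script like this, and nothing ties its output to the spreadsheet tracking the next step, which is the unification problem described above.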
To formalize the quality monitoring of any complex project, a system needs several critical capabilities.
- It must be able to accommodate any process.
- It must be able to monitor progress, understanding where the disparate data sources reside and pulling from them.
- It must be able to measure results, again pulling from different pieces of data.
- It must be able to report on any aspect of the design. For some steps of the process, that means delivering a value, such as the power consumption or the number of transistors. For others, it will be a pass/fail judgment, or a “verdict,” that articulates whether the goals of a particular step or parameter have been met.
Since there are no standard processes, this must work at a meta-process level, pulling together the far-flung components of a heterogeneous flow. Importantly, capturing and formalizing a process can stimulate a vital internal conversation about which rules, practices, and metrics make the most sense, helping to hone and streamline the process. Once the process is captured, then it’s a matter of monitoring and reporting, all of which can be automated in a manner that doesn’t become a burden to the designers.
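The value-versus-verdict distinction above can be made concrete with a small sketch. Every metric name, threshold, and measured number below is invented for illustration; the idea is simply that each goal is either a numeric value checked against a limit or a boolean check, and the report delivers both the value and its verdict:

```python
# Hypothetical goals: ("max", limit) checks value <= limit,
# ("bool", expected) checks for an exact match.
goals = {
    "power_mw":      ("max", 250.0),
    "area_mm2":      ("max", 12.0),
    "timing_met":    ("bool", True),
    "docs_complete": ("bool", True),
}

# Hypothetical measurements pulled from disparate data sources.
measured = {
    "power_mw": 238.5,
    "area_mm2": 12.4,
    "timing_met": True,
    "docs_complete": False,
}

def report(goals, measured):
    """Return (metric, value, verdict) rows for every goal."""
    rows = []
    for name, (kind, target) in goals.items():
        value = measured[name]
        ok = value <= target if kind == "max" else value == target
        rows.append((name, value, "pass" if ok else "FAIL"))
    return rows

for name, value, verdict in report(goals, measured):
    print(f"{name:15} {value!s:8} {verdict}")
```

In a real system the `measured` dictionary would be populated automatically from tool reports and log files rather than written by hand, which is what keeps the reporting burden off the designers.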
We have passed the point of being able to track designs on bits of paper and scattered files. SoC design is overdue for its own version of double-entry bookkeeping. It’s the only way that all project stakeholders can rest assured that they are creating a high-quality product through faithful execution of a high-quality process.

About the author
Dr. Michel Tabusse is CEO and co-founder of Satin Technologies. He was previously with Synopsys, serving as director of worldwide business development for the Telecom IP product line, then in charge of IP and services sales to several semiconductor companies. Before Synopsys, Michel was founder and general manager at Arcad SA until its acquisition by Synopsys in 1994.
Dr. Tabusse has a degree in Engineering (ENSEEIHT, 1983) and a PhD in Microelectronic Design from the Laboratory of Automatics and Microelectronics in Montpellier (Laboratoire d'Automatique et de Microélectronique de Montpellier (LAMM), 1985).
If you found this article to be of interest, visit EDA Designline, where you will find the latest design, technology, product, and news articles covering all aspects of Electronic Design Automation (EDA).