by: Joe Davis, Mentor Graphics
Chip integration is where all of the pieces for a new chip come together. In an ideal world, all of those pieces get done on time, they all go in one batch to the chip group responsible for integration, which combines them in perfect harmony, pushes the button, and voilà! Off goes the full chip to the foundry, where it manufactures perfectly and performs exactly as it should.
Of course, the real world is more complicated, and that “waterfall” style of project is increasingly giving way to more “agile” or “concurrent” projects, where different blocks are on different development schedules and encounter different production issues, so the chip assembly is repeated over and over as pieces are completed or updated. This process requires compiling inputs from multiple groups, feeding back the results of each integration, and then managing updates and adjustments from each of those input streams, all while controlling the overall delivery schedule. Beyond the obvious challenge of managing multiple data streams, significant technical challenges must also be addressed to ensure the integrity of those data and to maximize overall process efficiency.
“Just in Time” chip development
“Just in time” chip development describes a process wherein the constituent blocks of a chip are developed at the same time as the overall chip. This parallel development inevitably means that changes made to any one block, or to the full chip, can affect the performance or layout (or both) of some other component or of the integrated design, which in turn may require further modification, with further impacts, and so on. The result is a constant interweave of changes back and forth between the components and the full-chip design until the team arrives at a unified design that satisfies all the physical verification and performance requirements of both the individual components and the full chip.
As you would expect, there are a few challenges associated with that sort of process flow.
In any sensible production design environment, there are revision controls to ensure consistency throughout the flow. Designs are checked out and checked in with all of their updated collateral, which allows the downstream groups to do their jobs. Even with quality check-ins, though, the recipient often learns the hard way that it is always a good idea to check the consistency of the inputs before including them in the full chip. For instance, what is the most common cause of LVS errors at the place and route (P&R) stage? Differences between the actual layout and the abstract given to the P&R tool. Trust, but verify. Ask a few simple questions:
- Do the abstracts match the GDSII? A simple XOR between the abstract and the GDSII is all it takes.
- What is different between the current and updated blocks? What you check depends on what you care about, but some frequent concerns include: which libraries were used (did they change?), did ports or pins move, did the number of layers change, is the extent of the block different, etc. Any physical parameter that will affect the integration process should be checked.
When there are differences, you want to be able to overlay the comparison results, so you can see them without having to merge the layouts. With the right layout viewer, this is a simple matter of selecting the before, after, and XOR results, then creating an overlay of all three.
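As a rough illustration of what those checks can look like in script form, here is a minimal sketch using the open-source gdstk Python library as a stand-in for a production layout-comparison tool. The file names, cell name, and layer list are all hypothetical, and a real flow would feed the XOR results back into a viewer for exactly the kind of overlay described above.

```python
import gdstk

def xor_check(abstract_gds, layout_gds, top_name, layers=((1, 0), (2, 0))):
    """XOR the abstract against the full layout; an empty result means they match."""
    abs_top = next(c for c in gdstk.read_gds(abstract_gds).cells if c.name == top_name)
    lay_top = next(c for c in gdstk.read_gds(layout_gds).cells if c.name == top_name)

    diff = gdstk.Cell(top_name + "_XOR")
    for layer, datatype in layers:  # illustrative layer list
        a = abs_top.get_polygons(depth=None, layer=layer, datatype=datatype)
        b = lay_top.get_polygons(depth=None, layer=layer, datatype=datatype)
        # Any polygons surviving the XOR mark a mismatch on this layer.
        result = gdstk.boolean(a, b, "xor", layer=layer, datatype=datatype)
        if result:
            diff.add(*result)
    return diff

def quick_diff(old_gds, new_gds, top_name):
    """Compare the physical parameters that matter for integration."""
    old_top = next(c for c in gdstk.read_gds(old_gds).cells if c.name == top_name)
    new_top = next(c for c in gdstk.read_gds(new_gds).cells if c.name == top_name)
    old_layers = {(p.layer, p.datatype) for p in old_top.get_polygons(depth=None)}
    new_layers = {(p.layer, p.datatype) for p in new_top.get_polygons(depth=None)}
    return {
        "extent_changed": old_top.bounding_box() != new_top.bounding_box(),
        "layers_removed": old_layers - new_layers,
        "layers_added": new_layers - old_layers,
    }
```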
In the past, chip designs started and ended in a custom layout environment. Now, the very beginning of chip design is the creation of the standard cells and other IP, and the very end is the chip assembly, where the results of P&R and custom blocks are brought together into the final product. As chip sizes grow in both area and complexity, the full custom layout environment is no longer the most effective way to get the job done. Many design flows now use GDSII and OASIS merging tools to assemble the final chip for final verification and tape-out.
By using a GDSII or OASIS merging flow, customers avoid the time it takes to read all of the different blocks into a custom design database. With different groups responsible for each of the blocks, errors are fed back to the originators for correction, rather than being fixed at the end. Not only is most of the power of the custom layout tool now unnecessary, but the stream-based flow is also orders of magnitude faster.
Once this merging flow is established, it also starts to make its way into the P&R flow, eliminating the penalty of merging libraries into the P&R top-level views for full-layer verification. P&R tools are notoriously poor at merging in library data for stream-out. By streaming out only the top level from the P&R tool, then merging the library data on input to the verification tool, customers can achieve significant savings in cycle time. A full-chip, full-layer stream-out that normally takes two hours in the P&R tool (before the GDSII even reaches the verification tool) can be cut to less than 10 minutes of total input time. Just use the right tools for the right job.
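To make the mechanics concrete, here is a minimal sketch of that kind of stream-based merge, again using the open-source gdstk Python library as a stand-in for a production merge tool. The file names are hypothetical, and a real merge tool also validates references and resolves the name and content conflicts discussed below; this toy version simply keeps the first definition of each cell name it sees.

```python
import gdstk

def merge_streams(input_files, out_file):
    merged = gdstk.Library(name="CHIP_MERGED")
    seen = set()
    for path in input_files:
        # read_rawcells keeps each cell definition as an opaque byte blob,
        # so the geometry is copied through without being re-interpreted.
        for name, rawcell in gdstk.read_rawcells(path).items():
            if name in seen:
                continue  # first definition wins (a policy choice)
            merged.add(rawcell)
            seen.add(name)
    merged.write_gds(out_file)

# Top-level-only stream from P&R plus the block/library streams it references.
merge_streams(
    ["chip_top_only.gds", "stdcell_lib.gds", "sram_macros.gds"],
    "chip_for_verification.gds",
)
```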
Of course, this merging bit isn’t quite as simple as it sounds. In an ideal world, every cell referenced in the design would be unique, and only the cells that are actually used would be present. Once again, reality is a bit more complicated. The merging solution needs to be able to handle name and content conflicts. For instance, two different blocks may be delivered using different versions of the same library. You can’t simply change the contents of the cells during merging and integration, because you don’t know how that will affect electrical performance. Instead, when assembling, you need to pick up library version 1 for block A and version 2 for block B.
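One common way to handle that situation is to uniquify the conflicting names rather than overwrite anything. The sketch below, again assuming gdstk and hypothetical file names, renames block B’s colliding library cells with a version suffix; because references point at cell objects, block B’s hierarchy keeps using its own version-2 cells after the rename.

```python
import gdstk

def merge_with_uniquify(block_a_gds, block_b_gds, out_file, suffix="_v2"):
    lib_a = gdstk.read_gds(block_a_gds)   # block A, built on library version 1
    lib_b = gdstk.read_gds(block_b_gds)   # block B, built on library version 2

    names_in_a = {c.name for c in lib_a.cells}
    for cell in lib_b.cells:
        if cell.name in names_in_a:
            # Same name, potentially different contents: keep both versions
            # instead of silently swapping one for the other.
            cell.name = cell.name + suffix

    for cell in lib_b.cells:
        lib_a.add(cell)
    lib_a.write_gds(out_file)

merge_with_uniquify("block_a.gds", "block_b.gds", "merged_blocks.gds")
```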
Finally, as blocks are assembled into larger blocks, and those blocks into chips, the libraries get passed along with them. If no one is paying attention, unreferenced cells from those libraries get carried into the merged layout. These unreferenced cells accumulate as the process progresses, and they can cause both excess verification time and spurious name conflicts.
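A hedged sketch of the corresponding cleanup step, assuming gdstk and a hypothetical top-cell name: starting from the intended top cell, keep only the cells actually reachable through references and drop everything else before passing the layout along.

```python
import gdstk

def prune_unreferenced(in_file, top_name, out_file):
    lib = gdstk.read_gds(in_file)
    top = next(c for c in lib.cells if c.name == top_name)

    keep = set(top.dependencies(True))   # everything referenced, recursively
    keep.add(top)

    for cell in list(lib.cells):         # copy the list; we mutate it below
        if cell not in keep:
            lib.remove(cell)
    lib.write_gds(out_file)

prune_unreferenced("chip_merged.gds", "CHIP_TOP", "chip_pruned.gds")
```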
Of course, there are solutions out there for all of these issues, but these are the sorts of concerns designers need to care about when using such a flow.
OASIS vs GDSII
Due to the much larger file sizes associated with the just-in-time process, OASIS is increasingly used in place of GDSII in the chip assembly/chip finishing stage, especially for large designs. This is the point in the process where reducing file size can reduce not only the overall infrastructure burden, but also the turnaround time for activities such as physical verification and file merging. By enabling design teams to minimize file sizes, the use of OASIS can reduce processing time and minimize resource requirements. Figure 1 shows the relative sizes of OASIS files compared to GDSII files. Design files can see reductions in file size on the order of 20-30x, depending on the exact layout, and a further 2x reduction can typically be achieved by using CBLOCK compression in the OASIS file.
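For teams that want to experiment before committing to a new flow, the conversion itself can be sketched with the open-source gdstk library; the compression_level argument below is gdstk’s handle for the OASIS CBLOCK compression mentioned above (0 disables it), and the file names are hypothetical.

```python
import os
import gdstk

lib = gdstk.read_gds("chip_top.gds")
lib.write_oas("chip_top.oas", compression_level=6)   # 0 = no CBLOCK compression

for path in ("chip_top.gds", "chip_top.oas"):
    print(path, round(os.path.getsize(path) / 1e6, 1), "MB")
```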
Figure 1. Reduction in file sizes achieved with OASIS for nine designs
Why do you care about file size? Because file size translates directly into read and write time on a network drive, and everything is on a network drive these days. While reducing file size may save only a small percentage of any one pass through the cycle, those savings add up to a significant number over the many iterations these designs require.
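A back-of-the-envelope calculation makes the point. The numbers below are made up but plausible, assuming roughly the 25x reduction shown in Figure 1.

```python
GDS_GB, OASIS_GB = 40.0, 1.6     # assumed file sizes (~25x reduction)
NET_MB_PER_S = 200.0             # assumed effective network throughput
READS_PER_ITERATION = 3          # e.g., merge, DRC, and LVS each read the file
ITERATIONS = 50                  # assembly iterations over the project

def io_hours(size_gb):
    seconds_per_read = size_gb * 1000 / NET_MB_PER_S
    return seconds_per_read * READS_PER_ITERATION * ITERATIONS / 3600

print(f"GDSII: {io_hours(GDS_GB):.1f} hours spent just moving the file")
print(f"OASIS: {io_hours(OASIS_GB):.1f} hours spent just moving the file")
```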
Summing it up
All in all, if you are designing large chips in a just-in-time environment, there are a few tactics you can employ to achieve the best results:
- Have a defined quality check-in process for all components that includes a comparison of the abstract to the layout for each check-in,
- Adopt a merging flow using specialized merging tools that can reduce stream-out time while also accounting for common merging issues,
- Consider using OASIS instead of GDSII to reduce overall processing time and minimize resource usage.
Are you using the just-in-time approach? If so, what was the main motivation for implementing this type of flow, and how is it working for your organization?
About the author
Joe Davis is currently the Product Manager for Calibre interactive and integration products at Mentor Graphics in Wilsonville, Oregon, USA. His career in the IC industry spans over 20 years at high-profile companies such as Analog Devices, Texas Instruments and PDF Solutions, and covers both sides of the EDA industry—designing ICs and developing tools for IC designers and manufacturers. Prior to joining Mentor, he was the senior product manager for yield simulation products at PDF Solutions, where he managed semiconductor process-design technologies and services, including yield simulation and analysis tools. Joe earned his BSEE, MSEE, and Ph.D. in Electrical and Computer Engineering from North Carolina State University. Contact him at email@example.com.