Flexibility and automation. Those two terms can determine a company's profitability in today's competitive marketplace. Chip developers are faced with more complex designs, shrinking design windows and shorter time between process generations. Without increased automation, the pace of innovation will slow dramatically.
Retargeting has become a crucial element in the development process for several reasons. First, the time between process generations has shrunk to as little as nine to twelve months. Each generation, a company accumulates IP it wants to use again, but that IP no longer matches the current process. With libraries consisting of hundreds of IP cells, there's no way to move IP from one process generation to another within a few months using current methodology. For this reason, companies are often forced to consciously skip process generations. If companies could bring the IP over more quickly, they could take advantage of leading-edge processes, resulting in better performance or smaller chip size.
Another key reason for automated retargeting is that products are increasing in complexity, with millions of transistors on one chip, and the lifetime of the products has shrunk. The only way to design a chip with the required complexity in a reasonable amount of time is to achieve a higher degree of reuse. You use things you've already done and work on a block level instead of an individual transistor level every time. But, if you don't automate the retargeting, it's not doable.
In addition, many companies now want more flexibility in production capacity. Building a fab today at 0.18 micron and below is very expensive. Companies that were completely self-sufficient a couple of years ago are now also starting to use capacity from public foundries. Obviously, you're vulnerable and dependent upon someone else providing the capacity you need. For example, suppose you've developed a product using some internal capacity but also external resources. If sudden market trends indicate that you could sell six million units of this device a year instead of two million, you need a way to quickly produce the additional four million. By the time you move the design from one process to another that has capacity, half the year is gone and you've lost the opportunity. If you can move the design very quickly from one supplier to another, you gain a tremendous advantage in seizing market opportunities.
Until now, the retargeting process has been very cumbersome and time consuming. In this paper, we'll first explore problems with existing options for retargeting. Then, we'll propose a faster, more flexible infrastructure that works with existing methodologies, and provide a design example.
Let's keep in mind that we're aiming to reduce the retargeting time by at least 60 percent. By automating the process, we can reduce retargeting from five months to two. That's significant. Automation can help to keep the entire chip on the leading-edge processes, and offer flexibility to take advantage of capacity needs.
Most companies start a project with the intent of reusing at least some portion of the design. In a typical scenario, project specifications are determined, and blocks are defined. Often, a block in the new chip will resemble something from a previous project. The reuse effort is on! However, if the original designer is not available to lead the retargeting effort, the education process begins. The first step is a review of the designer's files. This might be followed by interviews with the designer, or in larger companies, the individual chartered with the continued support of the design. After all that work, it is often decided that the effort just to determine the viability of the design is greater than the effort of starting from scratch.
Why doesn't this process work? Quite often, only the original engineer understands how and why the analysis plans were determined and implemented. Which simulations belong to which configurations? Why did he do things the way he did? There's no clear dependable way to document a design.
So how are people doing retargeting today? Due to a significant lack of tools, the retargeting process is done manually. It's time consuming, costly, inefficient and ineffective. A manual retargeting can take as long as doing a design from scratch. Not a viable option in today's tight market. It's costly, not only in engineering costs, but also in terms of missed market opportunities. Plus, designers prefer to use their skills to generate new designs, not retarget existing ones. Deprived of that stimulation, companies risk losing their top talent to another firm. That just escalates the costs.
Manual retargeting depends on the original designer's memory. In today's tight labor market, the person who designed a chip is not always around or on the same project when it's time to retarget. There needs to be an easy way for someone new to come in, pick up the design and understand what the original designer had done.
Each company, even each designer, has a different style, a different way of attacking the problem. Given these significant issues, manual retargeting is inefficient and ineffective.
The industry needs an infrastructure to capture and retain the complexities of the design process. If everyone has access to all of the data, it is much easier to pick up a block and understand what was done. Then, replacing the process file and re-running the analysis plans would indicate clearly where the design does and doesn't meet the performance requirements. However, it's important to accommodate the idiosyncrasies of individuals, and to promote a process that clearly complements the existing design methodology rather than requiring changes to it. Otherwise, it just won't be useful.
Let's take a look at four key elements for a successful retargeting/reuse infrastructure: plan-based, comprehensive, manageable and effective.
In order for retargeting automation to work, we have to go back to the idea of a common database in which to store design documentation. It's crucial to capture the designer's knowledge for future use. A plan-based methodology records the design flow and procedure so that as you're generating the design, everything you do is recorded and documented. If someone else needs to research what you did, or if you need to go back and reconstruct what you did, you can easily follow the plans and flow from the database. It will tell you exactly what was behind the thinking process when you did the design, and what kind of characterizations you ran to verify that a particular parameter in the spec was met. It will also describe the dependencies between the different parameters, and how that is reflected in the characterization and optimization plans.
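As a concrete (and purely hypothetical) illustration of such a plan-based record, the sketch below logs each characterization step with its test bench, spec line item, and dependencies in a shared, replayable format. The names and structure are invented for illustration; they are not the Antrim-ACV schema.

```python
# Hypothetical sketch of a plan-based design record: each characterization
# step is logged with the test bench, the data-sheet parameter it verifies,
# and its dependencies, so a later engineer (or a retargeting run) can
# replay or audit the plan. All names here are illustrative assumptions.
from dataclasses import dataclass, field, asdict
import json

@dataclass
class PlanStep:
    testbench: str      # which test bench was run
    spec_item: str      # data-sheet parameter it verifies
    depends_on: list    # other spec items this one interacts with
    result: float       # measured value
    passed: bool

@dataclass
class DesignPlan:
    block: str
    process: str
    steps: list = field(default_factory=list)

    def record(self, step: PlanStep):
        self.steps.append(step)

    def to_json(self) -> str:
        # the shared-database entry another designer can pick up later
        return json.dumps(asdict(self), indent=2)

plan = DesignPlan(block="bandgap", process="0.18um")
plan.record(PlanStep("tb_vout_vs_vdd", "voutdiff", ["temperature"], 0.0021, True))
print(plan.to_json())
```

The point is not the data structure itself but that every run is recorded against the spec, so the "why" behind each simulation survives the original designer.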
Even if the infrastructure is plan-based, it needs to be flexible. The engineer should be able to use the tools to generate his plan starting at any point in his design. What's needed is an infrastructure that's flexible enough to address each engineer's unique approaches to problem solving while still providing the reservoir of knowledge.
The idea behind this proposed methodology is to offer the designer advantages in doing the work he needs to do. If the simple, repetitive tasks can be automated and documented, the design itself will be of a higher quality in less time. For example, if setting up the simulation decks and viewing the results can be a simple automated task, the design itself will be more thoroughly analyzed.
Sweeping parameters to verify that performance is still met is essential, and there are numerous ways to do it. Some analog engineers keep meticulous records of their sweeps; many do not, and months later it's impossible to remember what was done and why. If a team uses the same knowledge database, you can see directly which test benches are related to which performance parameters in the specs, and what the dependencies between them are. That is of great value.
Manageability comes from being able to track the sub-blocks and their parameterized characteristics, and to implement them as needed. Once you've designed a block, you try to build it in a parameterized manner. Then, whenever you need a block of that functionality again, but with slightly different performance, you can go in and optimize it to achieve the performance you need. By doing this, you gradually start to work more at the block level.
For example, let's say you need an amplifier with a specific characteristic. The tool will search the library elements available and find the one that fits what you're asking for. It then uses optimization to fit the performance by changing transistor sizes so that you get the performance you need.
This allows you to build a library at a higher level. A parameterized library can then be re-targeted. Topology selection tools will go in and find the best topology for what you want to do, then try to fit it to what you need. Obviously, when you begin this new methodology, there won't be a lot of elements in your library. Therefore, you might only be able to achieve 50 percent automation. However, within a year, you should be able to reach 70-80 percent automation, which is significant.
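The search-then-optimize idea can be sketched in a few lines. The example below is a toy coordinate search that adjusts two "device width" parameters until a stand-in performance model hits a target gain; a real flow would query a circuit simulator rather than the invented gain_model() function.

```python
# Toy sketch of spec-driven sizing: a coordinate search adjusts "transistor
# widths" of a library cell until a stand-in performance model meets the
# target. gain_model() is an invented placeholder for a circuit simulation.
def gain_model(w1, w2):
    # stand-in for a simulation: gain grows with device widths (illustrative)
    return 10.0 * (w1 ** 0.5) + 4.0 * (w2 ** 0.5)

def fit_to_spec(target, w=(1.0, 1.0), step=0.25, iters=200):
    w = list(w)
    for _ in range(iters):
        err = abs(gain_model(*w) - target)
        if err < 1e-3:
            break
        improved = False
        for i in range(len(w)):
            for delta in (step, -step):
                trial = list(w)
                trial[i] = max(0.1, trial[i] + delta)  # keep widths positive
                trial_err = abs(gain_model(*trial) - target)
                if trial_err < err:
                    w, err = trial, trial_err
                    improved = True
        if not improved:
            step /= 2  # refine the search grid, like a shrinking sweep
    return w, gain_model(*w)

sizes, gain = fit_to_spec(target=25.0)
```

Library growth works the same way at scale: as more parameterized cells accumulate, more requests can be satisfied by search-and-resize rather than fresh design, which is why automation can climb from 50 percent toward 70-80 percent.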
Given what we've reviewed in this section, you can see how effective a plan-based approach is. It's cost conscious in that you can reuse your work. It's resource efficient because you're spending your time where it's most needed. And, it's an effective communication tool because anyone can pick up your work and understand your thinking through the process.
Now, let's use a design example to illustrate how this process works.
During the last few years, Antrim Design Systems has developed Antrim-ACV, a plan-based system for design, reuse, and retargeting of mixed-signal intellectual property. We'll use Antrim-ACV to depict the key steps to automating design retargeting/reuse.
This design example describes a method for automating the retargeting of mixed-signal IP blocks, not only between two foundries or two process generations, but also between two different sets of performance requirements. The method is based on an automatic characterization technology. This helps establish the performance target from the existing technology.
At the same time, a set of behavioral models reflecting the target performance is generated. These models can then immediately be used, in a top-down design flow, in the target technology. Once the performance target has been established, the performance delta to the new technology can be acquired. The retargeting of the individual design blocks is accomplished by a plan-driven synthesis tool set based on sophisticated optimization techniques working over multiple domains and multiple test benches. By using a plan-driven approach, the maximum degree of flexibility required for handling the diversity of analog IP blocks is guaranteed. Finally, this design example will further help demonstrate the approach and how it handles the hierarchical nature of the design process.
In general, we differentiate between retargeting without topology changes and retargeting where modifications to the architecture or circuit topology are required. In this particular example we address each type. Both cases are based on a current steering D/A converter. In the first case, we moved the design from 0.18 micron to 0.12 micron using double oxide transistors; the supply voltage remains the same, and no topology changes are required to meet the target performance. In the second case, the same design was moved from 0.18 micron to 0.12 micron using single oxide transistors; here the supply voltage changed from 1.8 V to 1.2 V, and as a result topology changes had to be made in order to meet the targeted performance.
A typical architecture for a current steering DAC is shown in this figure:
The architecture consists of an array of weighted and non-weighted current cells. Typically, an array of unit current cells would be used for the Most Significant Bits (MSBs) and a number of weighted current cells for the Least Significant Bits (LSBs), as indicated on the block diagram. The structure of the individual current cells is shown in the figure above.
For these projects, we used a five-step process to achieve retargeted designs that met specification. We'll define each step and highlight the differences between the two projects.
The overall retargeting flow is described below, along with a block diagram that details the flow. Please keep in mind that this is just one way to accomplish the retargeting objectives using Antrim-ACV. The tool can be inserted into the existing design methodology at any point in the design process.
The retargeting procedure consists of first breaking the design down into its individual components, as outlined in the description above. The individual blocks are then characterized in the current technology. This tells us what performance is required at the cell or sub-block level in order to achieve the current overall system performance. At this stage, the characterization plans and test harnesses for the individual cells and sub-blocks are also created, if they do not already exist.
This procedure would then be repeated, using the same characterization plans and test harnesses, for the new target technology. This would help us establish the performance "delta" not only at the overall chip or system level, but also at the underlying sub-block or cell level. The performance delta will then serve as the basis for creating the Antrim-Mixed Signal Synthesis retargeting plans. The synthesis software will attempt to optimize circuit parameters (device sizes) in order to minimize the performance delta.
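The delta bookkeeping described here can be illustrated with a small sketch: run the same plan in both technologies, diff the results per parameter, and collapse the gaps into a scalar cost an optimizer can minimize. All numbers and names below are invented for illustration.

```python
# Sketch of the "performance delta" idea: the same characterization plan is
# run in both technologies, and the per-parameter delta becomes the cost the
# retargeting optimizer tries to drive to zero. Values are invented.
ref_018um = {"inl_lsb": 0.4, "dnl_lsb": 0.3, "sfdr_db": 62.0}  # proven 0.18 um results
new_012um = {"inl_lsb": 0.9, "dnl_lsb": 0.7, "sfdr_db": 55.0}  # 0.12 um, before resizing

def performance_delta(reference, target):
    """Per-parameter gap between the proven design and the new technology."""
    return {k: target[k] - reference[k] for k in reference}

def delta_cost(delta, weights):
    # scalar objective a plan-driven optimizer could minimize
    return sum(weights[k] * abs(v) for k, v in delta.items())

delta = performance_delta(ref_018um, new_012um)
cost = delta_cost(delta, {"inl_lsb": 1.0, "dnl_lsb": 1.0, "sfdr_db": 0.1})
```

Because the delta is kept per sub-block as well as at the chip level, the optimizer knows exactly which cells are responsible for a top-level miss.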
The overall flow including the retargeting for the actual physical layout is shown in the diagram below.
Step One allows us to put a stake in the ground, so we know what our performance goals are for the retargeted design. Our first action is to set up test benches for the overall DAC as well as for the individual cells and subcircuits as necessary. This allows us to analyze and understand the design, and how the top-level performance specifications propagate down to the underlying blocks that make up the design. The analysis is done in ACV using the 0.18 micron process files. The existing data sheet for the design has also been entered into ACV for reference and cross-checking purposes.
Test benches were developed for the following blocks: full analog DAC, current source cell, logic part, amplifiers and bandgap. The blocks were then characterized over the full process, supply voltage, and temperature range. Since we already had a data sheet for this design, it was easy to verify that we had the correct answer.
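A full characterization of this kind is, in essence, a nested sweep. The sketch below shows the shape of such a process/voltage/temperature sweep; measure() is a stand-in for launching a test-bench simulation, and the corner values are illustrative assumptions, not the project's actual ranges.

```python
# Illustrative PVT sweep: characterize a block over every combination of
# process corner, supply voltage, and temperature. measure() is an invented
# stand-in for a test-bench simulation run.
import itertools

corners = ["tt", "ff", "ss"]      # process corners: typical / fast / slow (assumed)
supplies = [1.62, 1.80, 1.98]     # nominal 1.8 V +/- 10 % (assumed)
temps = [0, 25, 75]               # deg C (assumed range)

def measure(corner, vdd, temp):
    # stand-in model: pretend output swing sags at low supply and high temp
    return 0.75 + 0.1 * (vdd - 1.8) - 0.0005 * temp

results = {
    (c, v, t): measure(c, v, t)
    for c, v, t in itertools.product(corners, supplies, temps)
}
worst = min(results.values())     # worst-case swing across all 27 corners
```

Keeping every corner result keyed by its (corner, supply, temperature) triple is what makes the later data-sheet comparison mechanical rather than a matter of memory.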
In the following section, some of the main tests conducted on the design will be explained. Basically, each of the principal performance parameters in the data sheet corresponds to a test bench and a "line item" within ACV.
The static performance is measured primarily by the Integral and Differential Non-Linearity (INL, DNL). Due to non-idealities in the circuit, the actual transfer characteristic of the converter will deviate from the ideal characteristic, as shown in the figure below. An excessively large DNL would cause the converter to be non-monotonic, i.e., although the digital input is increasing, the analog output voltage would decrease for certain code transitions. Such behavior is, in most cases, not desirable. On top of that, poor linearity would also degrade the dynamic performance of the DAC. The block diagram of the test bench and the definitions of INL and DNL are also shown in the figure below. In addition to the linearity tests, separate experiments were run to verify the offset as well as the output swing of the converter. In video applications, an output swing of 700-800 mV would typically be required.
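For readers unfamiliar with the measures, the end-point INL/DNL computation can be written down directly from a measured transfer characteristic. The sketch below uses invented sample data and standard end-point definitions; it is not the ACV test bench itself.

```python
# Minimal end-point INL/DNL computation from a measured DAC transfer
# characteristic. The sample output levels are invented for illustration.
def inl_dnl(levels):
    """levels: measured analog output per input code, in volts."""
    n = len(levels)
    lsb = (levels[-1] - levels[0]) / (n - 1)           # end-point LSB size
    dnl = [(levels[i + 1] - levels[i]) / lsb - 1.0     # step error, in LSBs
           for i in range(n - 1)]
    ideal = [levels[0] + i * lsb for i in range(n)]    # end-point-fit line
    inl = [(levels[i] - ideal[i]) / lsb for i in range(n)]
    return inl, dnl

# 3-bit example: the code 3 -> 4 step is undersized, so DNL dips there
meas = [0.00, 0.101, 0.199, 0.302, 0.348, 0.502, 0.601, 0.700]
inl, dnl = inl_dnl(meas)
monotonic = all(d > -1.0 for d in dnl)  # DNL <= -1 LSB implies non-monotonicity
```

The monotonicity check at the end is exactly the condition described above: a DNL of -1 LSB or worse means the output falls while the code rises.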
In addition to the static performance parameters, we set up measures for various dynamic design parameters, including signal-to-noise ratio, spurious free dynamic range, total harmonic distortion, glitch size, settling time and power consumption. We set these up at the top level to verify that the silicon matches the data sheet. Once the top level matches the specification on the data sheet, we look at individual blocks to determine what is needed from each one to support the top level. That provides us with a data sheet for each of the individual blocks. Antrim-ACV offers extensive support for dynamic performance measures as well. The advantage to the designer is that he does not need to know all the 'mechanics' of measuring, for example, Spurious Free Dynamic Range (SFDR) or Total Harmonic Distortion (THD). This is all 'pre-programmed' inside the tool. The method used is based on a DFT (Discrete Fourier Transform) performed along with the corresponding transient analysis, all as part of the Antrim-AMS simulator. One advantage of this method is that the impact of interpolation errors is minimized, allowing more accurate results. On top of that, the method offers more efficient run times and lower memory usage. Also, to avoid spectral leakage problems, proper windowing functions are applied to the DFT input.
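The windowed-DFT measurement style can be illustrated in plain Python: apply a Hann window to suppress spectral leakage, take the DFT, and read SFDR as the gap in dB between the fundamental and the largest spur. This pure-Python sketch is for illustration only; it is not the Antrim-AMS implementation.

```python
# Illustrative windowed-DFT SFDR measurement. A Hann window limits spectral
# leakage, then SFDR is the dB gap between the fundamental and the largest
# spur. O(n^2) DFT kept deliberately simple; a real flow would use an FFT.
import cmath, math

def spectrum_db(samples):
    n = len(samples)
    hann = [0.5 - 0.5 * math.cos(2 * math.pi * i / n) for i in range(n)]
    x = [s * w for s, w in zip(samples, hann)]
    mags = []
    for k in range(n // 2):
        acc = sum(x[i] * cmath.exp(-2j * math.pi * k * i / n) for i in range(n))
        mags.append(abs(acc))
    peak = max(mags[1:])  # normalize to the fundamental
    return [20 * math.log10(m / peak) if m > 0 else -200.0 for m in mags]

n = 256
sig = [math.sin(2 * math.pi * 11 * i / n)           # fundamental at bin 11
       + 0.01 * math.sin(2 * math.pi * 33 * i / n)  # -40 dB spur at bin 33
       for i in range(n)]
db = spectrum_db(sig)
fund = max(range(1, len(db)), key=lambda k: db[k])
# exclude the fundamental and its Hann leakage skirt (adjacent bins)
sfdr = -max(d for k, d in enumerate(db)
            if k > 0 and k not in (fund - 1, fund, fund + 1))
```

With a coherently sampled tone, the Hann window confines each tone's energy to three bins, which is why excluding only the fundamental's immediate neighbors is enough here.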
All of these items are reusable for subsequent characterizations and migrations.
Once we've characterized the existing design, we simply remove the process files for the 0.18 micron process, insert the files for the 0.12 micron process, and rerun the same characterization plans using the new process models and, if necessary, a new supply voltage. We then compare the results against the ACV spec sheet generated in the first step. This is the step in the process where reuse is most evident. An example of the ACV spec sheet is shown below. Apart from letting the designer see at a glance whether the block passed the spec, it also provides a large amount of useful data for the design process.
For step three, we identified circuit limits with regard to the new process, new supply voltage (if applicable), new device sizes and the same topology. We selected optimization parameters based on the diagnosis in step two. We wrote optimization plans and generated resized netlists, then ran intensive simulations to recharacterize the circuit. ACV plans were still available for more complete analysis. Then, we generated the final spec sheet with circuit limits. Again, all of this process is reusable for subsequent characterizations and migrations.
As a more detailed example, consider our MSS optimization plan for the bandgap circuit. We optimized six design parameters: the size of resistor 1, the ratio between resistor 1 and resistor 2, the ratio between the bipolar devices, and the sizes of three MOS transistors. We then minimized the variation in output voltage with Vdd swept from -0.2 V to +0.2 V around nominal and temperature swept from 0 to 75 C. All test benches were defined inside ACV. Total MSS runtime was about 30 minutes.
(C) 1998-2000 Antrim Design Systems, Inc. All rights reserved.
Bandgap Synthesis Plan for sizing the bandgap circuit:

# Open a project directory and a design directory
set_design_dir("bandgap_wd"); # wd stands for Working Directory

# Specify a library map file
# Specify include directories needed for the AMS simulator
# Select the lib, cell and view

# Declare the optimization parameters
set_opt_param('w12', 6u, 24u, 60u, 0.6u);
set_opt_param('w34', 6u, 48u, 120u, 0.6u);
set_opt_param('r1value', 5000, 10000, 25000, 100);
set_opt_param('rratio', 6, 10, 16, 1);

# Set the performance characteristics
set_perf_spec('voutdiff', 'min', '-', 0, 0.003, 1.0);
set_perf_spec('area', 'min', '-', 0, 5e-8, 0.3);

# Run the optimization
optimize("-design", "bandgap", "-effort", "high", "-learn", "-lookup", "-algo", "gradient");
First we looked at the results on the top level to see if we met all the specifications. Obviously, we didn't. Now we can go in and look at the block level and compare the current block with the previous one from the prior process. We can fine tune with the synthesis software and optimize the transistor sizes so that the block level meets the same performance as the previous one. Once we've optimized all the blocks, the top level is right and the process retargeting is complete.
This entire flow takes approximately six to eight weeks. At this point, project one, the double oxide design, was complete.
For the single oxide design, project two, we realized we couldn't get the output swing specified on the data sheet with only a 1.2 V supply. Analyzing the design further, it became evident that there would not be enough headroom for a cascoded current source as was being used in the current cell. The solution, as shown in the figure below, was to replace the cascode with a single transistor current source. We put the new current cell in and changed the topology, then reran the whole characterization plan with the new topology to verify that it now meets the spec.
We've heard it all before - shorter time-to-market, increased design complexity - but it's not going away. The industry needs a way to capture design steps in a way that expands the engineer's productivity without forcing a new methodology. Automated design, reuse and retargeting are crucial to exploit foundry flexibility and meet volume market demands. This tool should provide a consistent format for capturing data, but also allow the engineer the flexibility to design in his own unique way with existing tools and infrastructure. This will allow greater design reuse and higher quality designs, and free designers to concentrate on the more critical portions of the chip because the mundane tasks are automated.
Antrim has worked closely with semiconductor companies to develop a means to accomplish unobtrusive automated reuse, particularly in the area of analog and mixed-signal design. Antrim-ACV is the result of that effort, and is being used worldwide for design, reuse and retargeting.
Bendt Sorensen is vice president of Antrim Design Systems, based in Le Vaud, Switzerland. Prior to Antrim, Sorensen was president of Meta-Software, Europe. He was also a founder and member of the MOSCAD consulting group for several years, and served as a design engineer prior to that. Sorensen holds a degree in Business Administration from Horsens, Denmark, and a BSEE/BSc from Aarhus Teknikum in Denmark. He also holds several mixed-signal patents.