Timing analysis, thermal analysis, routability, signal quality, logical correctness, power subsystem quality, testability, and ease of assembly and rework: all of these activities are essential to making sure that a PCB design is stable and ready for manufacture and use. One way to accomplish these goals is to build hardware prototypes, test them, and revise the design as necessary until stable operation is achieved. However, with the ever-increasing speeds of components and the growing complexity of parts and designs, is it still practical to design stable products using the hardware prototyping approach? This article examines the speeds and performance of modern components to determine the viability of this approach.
First, it is useful to look at the environment in which hardware prototyping was developed. From there we will examine changes that have taken place in recent years to ascertain the current state of the art and how the design process must change to assure stable designs in today's design environment.
The "TTL" design environment
The hardware prototyping approach to design has often been referred to as the "TTL" design method. The term meant that the speeds and times involved were slow enough that designers did not have to worry about most of the issues listed in the introduction above. All that was necessary was to make sure the design was logically correct.
Two measures of speed are clock rate and the rise and fall times of signals, often referred to as edge rates. In TTL designs, these were 50 MHz or less and 5 nanoseconds or slower, respectively. If these times are converted to distances using the flight time, or velocity, of signals in PCBs and cables (roughly 6 inches per nanosecond), one clock cycle spans about 120 inches and one edge spans 30 inches. For speed to become an important issue, the physical size of a design needed to approach these lengths. Most designs did not. As a result, it was possible to achieve functionality without spending much engineering time on speed-related issues.
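The conversion from time to electrical length can be sketched in a few lines, using the article's rule of thumb of roughly 6 inches per nanosecond for signal velocity in PCB traces and cables (the exact figure depends on the dielectric):

```python
# Convert clock rate and edge rate to electrical length on a PCB.
# The ~6 in/ns propagation speed is the article's rule of thumb for
# FR-4-class boards; it is an approximation, not an exact constant.

PROPAGATION_IN_PER_NS = 6.0  # approximate signal velocity, inches per ns

def clock_cycle_length_in(clock_mhz):
    """Electrical length of one clock cycle, in inches."""
    period_ns = 1000.0 / clock_mhz      # clock period in nanoseconds
    return period_ns * PROPAGATION_IN_PER_NS

def edge_length_in(edge_ns):
    """Electrical length of a rising or falling edge, in inches."""
    return edge_ns * PROPAGATION_IN_PER_NS

# TTL-era numbers: 50 MHz clock, 5 ns edges
print(clock_cycle_length_in(50))   # 120.0 inches
print(edge_length_in(5))           # 30.0 inches
```

The same functions applied to a 1 GHz clock and 200 ps edges give 6 inches and 1.2 inches, which is why nearly any modern board is "large" in the electrical sense.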
On the thermal front, most components dissipated less than one watt and whole products consumed less than 100 watts. As a result, cooling was rarely an issue.
The modern "MOS" environment
As technology matured and hardware became more sophisticated, we moved from the relatively "slow" environment of the TTL world to more complex, faster technologies such as ECL, MOS and CMOS.
Traditionally, ECL, or emitter-coupled logic, was the only semiconductor technology capable of very fast operation. Because its transistors draw current continuously rather than switching fully on and off, even modestly complex products consumed large amounts of power. This high power consumption, and the cooling problems it created, confined very fast logic to large, expensive supercomputers. When fast operation was needed, ECL was used, and the industry came to describe fast design rules as "ECL rules."
Enter submicron MOS and CMOS. As the semiconductor manufacturing industry mastered finer geometries, it became possible to make MOS circuits operate as fast as, and sometimes faster than, ECL circuits. Because of the inherently low power consumption of MOS circuits, it became possible to put very large, very fast logic blocks on a single IC. Now it is possible to put a supercomputer in a briefcase.
Currently, virtually all ICs, even those intended for relatively slow products, are made from MOS or CMOS circuits and are very fast. The same measures of speed, clock rate and edge rate, have changed dramatically. Clock rates over 1,000 MHz (1 GHz) and edge rates as fast as 200 picoseconds (0.2 nanoseconds) are becoming common. Converting these times to lengths using flight time results in 6 inches and 1.2 inches, respectively. At these dimensions, virtually all products are large enough to experience problems that were once important only in supercomputers.
A common error is to decide that a product is not high-speed because its clock rate is slow. Unfortunately, the edge rates of logic components remain very fast regardless of clock rate; it can safely be said that there are no slow parts available with which to design slow products. On top of being very fast, edge rates also vary from lot to lot during normal production, which complicates the task of ensuring that a design is stable for all combinations of components. Figure 1 shows the range of waveforms that may appear on a single signal line as the edge rates vary.
The set of waveforms depicted in Figure 1 was generated by a signal integrity analysis tool simulating the waveforms that will appear on a single net as the power supply voltage, operating temperature and edge rates were varied over the ranges a product will see during normal operation. To guarantee stable operation, the impedance and other parameters of the signal wire, or transmission line, must be managed with enough care that these waveforms always meet the input conditions of the driven circuit.
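The flavor of such a corner sweep can be illustrated with a deliberately simplified model. This is only a sketch: real signal integrity tools solve full transmission-line models, and the driver impedance, Z0 tolerance and input threshold used here are illustrative values, not figures from the article.

```python
# Minimal sketch of a corner sweep: vary supply voltage and trace
# impedance, and check that the first incident wave arriving at an
# unterminated (high-impedance) load still crosses an assumed Vih.
# All numeric values below are hypothetical for illustration.
import itertools

Z_DRIVER = 25.0   # assumed driver output impedance, ohms
VIH = 2.0         # assumed receiver input-high threshold, volts

def incident_wave_v(vdd, z0):
    """Voltage launched into the line (resistive-divider approximation)."""
    return vdd * z0 / (z0 + Z_DRIVER)

def first_arrival_v(vdd, z0):
    """Voltage at an open-ended load: the incident wave doubles there."""
    return 2.0 * incident_wave_v(vdd, z0)

# Sweep the corners, in the spirit of the Figure 1 simulation
worst = min(first_arrival_v(vdd, z0)
            for vdd, z0 in itertools.product(
                [3.0, 3.3, 3.6],     # supply-voltage corners
                [45.0, 50.0, 55.0])) # Z0 manufacturing tolerance
print(f"worst-case first arrival: {worst:.2f} V, passes Vih: {worst >= VIH}")
```

A real tool sweeps many more variables (temperature, edge rate, driver process corners) and checks overshoot and timing as well, but the structure is the same: enumerate the corners, simulate each, and verify that every resulting waveform meets the receiver's input conditions.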
The example in Figure 1 is typical of what can occur on every signal line in a design, and signal quality is only one of the variables involved. Several others must also be taken into account, among them crosstalk, Vcc and ground bounce, power system decoupling, package parasitics, timing, heat dissipation and logic correctness. Clearly, the number and variety of factors impacting the design require a level of engineering precision that cannot be achieved using hardware prototyping. How, then, does a development group achieve a stable design that accounts for all of these variables?
Correct-by-construction designs using analysis and simulation
When speed and design complexity reach the point where hardware prototyping fails to achieve stable operation, "virtual prototyping" must be used. Virtual prototyping uses a variety of simulation and analysis tools to account for all of the variations that will occur in the product. As analysis and simulation uncover design problems, changes can be made in the virtual circuit to correct them. Once the model, or virtual prototype, operates properly over the conditions the product will see, the design rules are passed on to physical layout to create the final hardware. This approach is often called "correct by construction." It is a very powerful, successful method.
The analytical tools involved in this process are:
- Signal integrity analyzers
- Power system analyzers
- Timing analyzers
- Logic emulators
- Logic modelers
- Thermal modelers
- Schematic capture
- Mechanical simulators
Somehow, all of these tools must be tied together. At a minimum, the placement data, logic model and net list must be made available to each tool. Traditionally, component placement has been done after a design arrives at the PCB layout department, and data has then been passed backward to the engineering groups so that each can perform its piece of the analysis. A better approach is to move the placement activity back into the design operation to facilitate this analysis. Once all of the design conditions have been met in the virtual prototype, the design constraints can be passed on to mechanical design and PCB layout. Enter the floorplanner.
Floorplanners are design tools that tie all of the analytical tools together. Figure 2 illustrates how a floorplanner fits into the design flow. As can be seen from the diagram, the floorplanner is at the very heart of the design process. There are two ways that this design flow can be implemented. Which one is best will depend on the tool sets used to do the analysis.
The ideal solution is to buy an integrated set of tools from one vendor with the floorplanner tying all of the analytical tools together. This solution is being offered by a number of EDA tool suppliers with varying levels of completeness. However, no single vendor has a full suite of tools that satisfies all of the requirements.
A more common solution is to select "best of breed" point tools in each of the areas of interest and tie them together using a floorplanner. There are a number of choices for each of the point tools as well as for the floorplanner.
The ideal arrangement is to tie the floorplanning tool directly to the schematic capture tool right on the design engineer's desk. With this arrangement, data can be exported to all of the other tools in the design flow. Once the design goals have been met, the placement and design rules can be passed on to PCB layout.
Figure 3 shows typical component placement with the proposed net list displayed as "rubber band" connections. From this placement, lengths of routed wires can be estimated and the proposed route can be exported to a signal integrity tool to determine the quality of the signal if routed as proposed.
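The length estimation described above can be sketched simply: approximate each rubber-band connection by the Manhattan distance between its pin locations (a common pre-route estimate, since traces route in orthogonal segments), then convert to flight time. The pin coordinates below are hypothetical.

```python
# Sketch of pre-route length estimation from floorplan placement:
# approximate each rubber-band connection by its Manhattan distance,
# then convert to one-way propagation delay at ~6 in/ns.
# Placement coordinates here are hypothetical examples.

PROPAGATION_IN_PER_NS = 6.0  # approximate signal velocity, inches per ns

def manhattan_in(pin_a, pin_b):
    """Estimated routed length (inches) between two pins at (x, y)."""
    return abs(pin_a[0] - pin_b[0]) + abs(pin_a[1] - pin_b[1])

def flight_time_ns(length_in):
    """One-way propagation delay for an estimated trace length."""
    return length_in / PROPAGATION_IN_PER_NS

# Hypothetical placement: driver at (1.0, 2.0), receiver at (4.0, 6.0) inches
length = manhattan_in((1.0, 2.0), (4.0, 6.0))
print(length, flight_time_ns(length))  # 7.0 inches, ~1.17 ns
```

Estimates like these are what the floorplanner exports to the signal integrity and timing tools, so that proposed placements can be evaluated before any copper is routed.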
Examples of design systems that include all or most of the design tools and a floorplanner are Cadence Specctraquest and Mentor Interconnectix. Examples of floorplanners that can be used to tie together the design tool suite are Innoveda ePlanner and Avanti EDA Navigator.
The increased speed and complexity of modern logic designs have forced a change in design methodology from the traditional hardware prototype (sometimes called "trial and error") design process to one with far more up-front analysis. This new process involves many types of analytical tools that operate on data provided by some form of placement process. It is possible to perform the placement after the design data arrives at the PCB layout station and to iterate back and forth between that activity and design engineering.
A far more efficient solution is to add a floorplanning tool to the design engineering activity. This floorplanning tool ties together all of the other tools and allows rapid "what if" analysis to determine what design rules will result in a successful, stable design. Once this has been done, the design and design rules can be passed to PCB layout with the confidence that the design will work right the first time.
Lee W. Ritchey is owner of Speeding Edge, a company that offers courses in high-speed design, as well as consulting services. He has been designing high-speed products for more than 35 years and is currently working with several leading internet equipment suppliers on Gigabit and higher data paths. Ritchey has a B.S.E.E. from California State University at Sacramento. He regularly teaches courses in high-speed design for the University of California at Berkeley.
© 2001 CMP Media LLC.
9/1/01, Issue # 1809, page 18.