# Techniques for reducing signal-integrity pessimism

**1 Introduction**

As the number of production designs on nanometer geometries has increased, signal integrity (SI) has progressed from being a concern for a few leading-edge designers to being a pervasive nightmare for all designers.

Although several methodologies have evolved to address these SI challenges, the confluence of unintended electrical effects and burgeoning design complexity at sub-130 nanometer geometries is leading to an exponential number of reported SI violations. As a result, designs are suffering from prolonged design schedules and, often, missed market windows.

A key factor in the lengthening of the sub-130 nanometer design cycle is not only the increase in noise sensitivity, but also the excessive pessimism that exists in some SI closure methodologies and, in particular, SI analysis engines. While a reasonable margin is desirable to build the necessary guard bands to help designers tape out with confidence, excessive pessimism greatly increases design cycle time and leads to over-design.

Over-design often results in increased congestion, which hampers yield, and increased power, including leakage — a major concern for designs at 90nm and below.

Much of this excess pessimism comes from the underlying models used to estimate SI and some of the tradeoffs used to simplify the analysis process. Often, however, shortcuts in analysis lead to more design iterations due to over-fixing and over-constraining the design implementation. Hence, when selecting an SI closure or analysis solution, particular attention should be paid to ensuring that there is sufficient filtering of false violations.

**2 Noise-glitch false-failure filtering**

SI analysis will determine the worst-case glitch that can occur on a given victim net due to switching on neighboring attacker nets. There are a number of techniques that can be used to filter pessimism during glitch analysis, such as using logic relationships and timing windows to determine the set of attacker nets that can switch simultaneously.

This type of filtering will typically reduce the number of glitch violations by a factor of two or three. The next level of filtering determines whether the calculated glitch is likely to cause a functional violation.

The most basic glitch check compares the height of the glitch against a pre-defined threshold voltage (typically 40 percent of the supply). This approach will typically generate thousands of violations, the vast majority of them false (see Figure 1).

This is because CMOS logic gates act as low-pass filters and, therefore, most noise glitches are attenuated by the receiving logic gate. If the noise is suppressed before it reaches a storage element such as a flip-flop or latch, it will not cause a functional problem.
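The basic check described above amounts to a single comparison. A minimal sketch, with an assumed supply voltage and illustrative names (nothing here comes from any specific tool):

```python
# Basic glitch check: flag any victim-net glitch whose peak height
# exceeds a fixed fraction of the supply voltage.
VDD = 1.2            # assumed supply voltage in volts
THRESHOLD = 0.40     # pre-defined threshold: 40 percent of supply

def check_glitch_peak(glitch_peak_v: float) -> bool:
    """Return True if the glitch violates the basic peak-height check."""
    return glitch_peak_v > THRESHOLD * VDD
```

Because the check ignores the low-pass filtering of the receiving gate, a 0.6 V glitch is flagged even when the receiver would suppress it long before it could reach any flip-flop or latch.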

Figure 1 — Basic noise-glitch filtering

The second level of failure filtering goes beyond the simple noise-peak threshold filtering. It is cell-based rejection and relies on how the glitch will behave on propagation through one stage of logic. Cell-based rejection determines if noise at the receiver input will propagate to the output.

There are two approaches to checking for cell rejection. One is to use pre-characterized rejection curves, and the other is to perform an on-the-fly simulation of the receiver with associated parasitics to see how it behaves in the presence of the calculated noise.

The latter approach is more realistic and will filter more violations because the former approach must use a pessimistic approximation of the glitch as an isosceles triangle. However, while cell-based rejection will reduce noise-glitch violations typically by 5-10 times over input peak checking, this approach runs out of steam at sub-90 nanometer geometries and still reports a lot of pessimistic glitches that do not necessarily pose any design problem (see Figure 1).
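Cell-based rejection with pre-characterized curves can be pictured as a table lookup: for a given glitch width, the curve gives the largest input peak the receiver will still suppress. The curve data and the isosceles-triangle approximation (a glitch reduced to just a peak and a width) below are purely illustrative:

```python
import bisect

# (width_ns, max_rejectable_peak_v) pairs for one hypothetical receiver:
# narrow glitches are filtered even at large peaks; wide ones are not.
REJECTION_CURVE = [(0.05, 0.90), (0.10, 0.60), (0.20, 0.45), (0.50, 0.35)]

def rejected_by_cell(peak_v: float, width_ns: float) -> bool:
    """True if the triangle-approximated glitch is suppressed by the cell."""
    widths = [w for w, _ in REJECTION_CURVE]
    # round the width UP to the next characterized entry (pessimistic)
    i = min(bisect.bisect_left(widths, width_ns), len(REJECTION_CURVE) - 1)
    return peak_v <= REJECTION_CURVE[i][1]
```

A 0.5 V glitch 50 ps wide is rejected; the same peak spread over 500 ps is not. On-the-fly simulation of the receiver avoids both the table granularity and the triangle approximation.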

**3 Advanced glitch filtering**

Noise propagation extends the one-step cell-based rejection approach and propagates SI glitches across multiple logic gates to register endpoints. These glitches are allowed to combine with other crosstalk-induced glitches along the path.

In effect, only those glitches that are large enough to cause a functional failure at the sequential elements (latch/flip-flop) are flagged. Several real customer designs have demonstrated a dramatic reduction in the number of reported SI glitches using noise propagation versus cell-based rejection, typically by one or two orders of magnitude (see Figure 2).
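The idea can be sketched as repeated attenuation plus injection: each gate filters the incoming glitch, fresh crosstalk along the path can add to it, and only the amplitude that survives at the register is checked. The attenuation factors and register noise margin below are assumed values, not characterized data:

```python
def propagate_to_register(initial_peak_v, stages, register_margin_v):
    """Each stage is (attenuation_factor, injected_crosstalk_v).
    Returns (final peak at the register, whether it violates)."""
    peak = initial_peak_v
    for attenuation, injected in stages:
        peak = peak * attenuation + injected  # gate filters, new noise adds
    return peak, peak > register_margin_v

# A 0.6 V glitch through three filtering stages of logic:
stages = [(0.5, 0.05), (0.4, 0.02), (0.3, 0.0)]
final, violation = propagate_to_register(0.6, stages, register_margin_v=0.3)
```

The same glitch that fails a 40-percent input-peak check arrives at the register at well under 0.1 V, so no violation is reported.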

Figure 2 — Noise propagation and glitch pessimism reduction

Noise propagation ensures functional validity by propagating glitches through logic gates and checking that the register is not driven unstable. Noise propagation to the register endpoints reduces the number of false violations by an order of magnitude compared to traditional methods.

Noise propagation is best achieved through on-the-fly transistor-level simulation rather than with pre-characterized propagation tables. Propagation tables cannot accurately account for a combination of noise sources, such as cells with glitches at multiple inputs, or a combination of propagated noise, power supply noise (IR drop or ground bounce), and crosstalk.

As a glitch propagates through a receiver, it dynamically reduces the receiver's holding strength, making it much more sensitive to crosstalk at its output. Furthermore, characterization of propagation tables is tedious, often taking many weeks to create the necessary data to cover all the possible input glitch scenarios.

**4 SI-on-delay pessimism reduction**

To calculate noise impact on delay, neighboring attacker nets are switched in conjunction with the victim net, either in the opposite direction for the maximum delay change or in the same direction for the minimum delay change. The exact time (alignment) that the attackers and victim nets switch is critical to determining the worst-case impact.

While the use of logic and timing windows will reduce pessimism of SI delay calculation, it is not uncommon to have a few nanoseconds of additional negative slack due to SI. Consequently, closing timing in the presence of SI is very challenging. Much of SI delay pessimism is due to pessimistic timing window iteration, the representation and propagation of noisy transitions (slews), and the way attacker and victim nets are aligned.

**5 Timing window iteration**

Since SI delay change is dependent on timing windows and timing windows are dependent on SI, accurate analysis requires iteration between SI delay calculation and static timing analysis. One common technique to speed up SI analysis is to first calculate SI delay effects (ignoring timing windows) and then to iterate only on the critical paths. This speedup, however, increases pessimism as the timing windows and slews for non-critical nets will be overestimated. Consequently, the impact of non-critical nets attacking critical nets will also be pessimistic and result in excessive delay change.

A better approach is to start with nominal timing windows and perform fast iteration on all nets, incrementally re-calculating nets whose timing windows change. This way all nets used for the final SI delay change calculations use realistic timing windows and slews.
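One way to structure that iteration is as a fixed-point loop: compute SI deltas for all nets, update the windows, and revisit only the nets whose windows moved by more than a tolerance. The data structures, callbacks, and tolerance here are illustrative:

```python
def iterate_si_timing(nets, compute_si_delta, update_windows,
                      tol_ps=1.0, max_iters=10):
    """compute_si_delta(net, windows) -> delay change for one net;
    update_windows(windows, deltas) -> new {net: (early, late)} map."""
    windows = {n: (0.0, 0.0) for n in nets}    # start from nominal windows
    dirty = set(nets)                          # everything needs a first pass
    for _ in range(max_iters):
        deltas = {n: compute_si_delta(n, windows) for n in dirty}
        new_windows = update_windows(windows, deltas)
        dirty = {n for n in nets
                 if abs(new_windows[n][0] - windows[n][0]) > tol_ps
                 or abs(new_windows[n][1] - windows[n][1]) > tol_ps}
        windows = new_windows
        if not dirty:                          # fixed point: windows stable
            break
    return windows
```

Because every net carries a realistic window when the loop exits, non-critical attackers no longer contribute overestimated delay changes to critical victims.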

**6 SI slew propagation**

Most timing and SI analyzers use a linear ramp to model slew, but this is not a good approximation for noisy nets that have bumpy non-linear transitions. Linear ramps tend to be more pessimistic, especially if they measure slew in the same fashion for noisy and non-noisy transitions. Using traditional measurements, such as 20-80% of supply, will greatly overestimate the impact of the noisy transition on the downstream logic path.

For example, in Figure 3, attacker A1 switches low at the same time as the victim switches high, causing an SI-induced delay increase for the path. The resulting bumpy rising waveform (red) is difficult to model accurately with a linear slew model.

If slew thresholds are 20-80% of Vdd, then a large slew degradation is reported, resulting in considerable delay pessimism on the downstream path. What is required is an SI delay method that accurately accounts for the effective slew impact of the receiver's output.
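The overestimate is easy to reproduce with a first-crossing slew measurement. Both sampled waveforms below are synthetic; the point is only that an early crosstalk bump drags the 20-percent crossing forward:

```python
def slew_20_80(times, volts, vdd):
    """Slew measured between the FIRST 20% and 80% supply crossings."""
    t20 = next(t for t, v in zip(times, volts) if v >= 0.2 * vdd)
    t80 = next(t for t, v in zip(times, volts) if v >= 0.8 * vdd)
    return t80 - t20

# The same late transition, without and with an early crosstalk bump:
VDD = 1.0
clean = slew_20_80([0, 1, 2, 3, 4, 5, 6], [0, 0.0, 0.1, 0.4, 0.6, 0.8, 1.0], VDD)
noisy = slew_20_80([0, 1, 2, 3, 4, 5, 6], [0, 0.3, 0.1, 0.4, 0.6, 0.8, 1.0], VDD)
```

The bump doubles the measured slew (from 2 to 4 time units) even though the receiving gate would filter it, and that inflated slew then pessimizes every downstream stage.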

Figure 3 — Pessimism due to inadequate linear slew modeling

Another source of error is the driver model of the victim net. Typically the driver is modeled using 2D tables characterized with a set of input slews and output loads.

However, the mapping from switching waveforms to their corresponding slew values is many-to-one: many different waveforms can share the same slew value. Crosstalk-impacted switching waveforms present exactly such a situation, resulting in inaccurate crosstalk path delays. To model such waveforms accurately, a more accurate driver model is required, based on a current source or, in the case of very non-linear waveforms, on the actual transistors of the driver.

As shown in Figure 4, a current-based delay model that is independent of characterized, "well-behaved" slew values helps eliminate the inaccuracies of the traditional delay table. It accurately computes the delay at the receiver gate output, given any arbitrary input waveform.

Figure 4 — Delay measurement and the current-based delay model

The accurate computation of slews and delays at both the receiver input and output can be accomplished by a current-based delay model, which characterizes output current as a function of input and output voltages as well as internal capacitances, including Miller capacitance.
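A toy version of such a model treats the driver as a voltage-controlled current source and integrates the output node numerically. The saturating current function, device constants, and forward-Euler step below are assumptions chosen only to make the sketch runnable:

```python
def simulate_output(vin_samples, dt, c_load, g=1e-3, i_max=2e-4):
    """Integrate c_load * dVout/dt = i(vin, vout) with forward Euler.
    i is a crude transconductance model that saturates at +/- i_max."""
    vout, waveform = 0.0, []
    for vin in vin_samples:
        i = max(min(g * (vin - vout), i_max), -i_max)  # current source
        vout += i * dt / c_load                        # charge the load
        waveform.append(vout)
    return waveform
```

Because the model consumes the actual input waveform sample by sample, a bumpy crosstalk-distorted input produces the corresponding output response directly; no "well-behaved" characterized slew value is ever needed.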

**7 Alignment of attackers**

The delay on a victim net can vary quite dramatically depending on the relative switching times of the attackers. SI delay analysis will determine the worst-case delay change caused by aligning the attackers while honoring the constraints imposed by the victim and attacker timing windows.

There are, however, two sources of pessimism in this approach. First, the worst-case attacker/victim alignment may occur in the middle of the victim's timing window and consequently may not change its leading- or trailing-edge arrival times. Second, the worst-case delay for the victim net may occur due to noise at the trailing edge of the transition, too late to impact the receiving gate.

To address the first source of pessimism, the attacker alignment that maximizes the expansion of the victim's timing window should be found, while other alignments should be rejected — even those that create a larger delay change but have less impact on the timing window.

To address the second source of pessimism, SI delay measurements should be made at the receiver output rather than the input to ensure that the alignment of the attackers will indeed have the maximum impact on the downstream logic. This technique, called path-based alignment (see Figure 5), eliminates the localized net-based, worst-case result (local maxima) and, instead, provides a global worst-case result that takes the downstream path into account (global maxima).
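The difference between the two alignment policies comes down to what each one maximizes. A small sketch, with toy per-offset delay tables standing in for real simulation results:

```python
def worst_alignment(offsets, receiver_output_delay):
    """Pick the attacker alignment that maximizes delay measured at the
    RECEIVER OUTPUT (path-based), not at the victim net itself."""
    return max(((off, receiver_output_delay(off)) for off in offsets),
               key=lambda pair: pair[1])

# Hypothetical delay results per alignment offset (in ps):
net_delay  = {0: 10, 1: 40, 2: 25}   # net-based worst case: offset 1
path_delay = {0: 10, 1: 12, 2: 25}   # receiver filters offset 1's tail bump

net_based_worst = max(net_delay, key=net_delay.get)               # local maxima
path_based_worst, d = worst_alignment([0, 1, 2], path_delay.get)  # global maxima
```

Net-based alignment picks offset 1, whose noise lands on the waveform tail and never reaches the receiver output; path-based alignment correctly picks offset 2, the true downstream worst case.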

Figure 5 — Path-based alignment and SI-delay pessimism reduction

As shown in Figure 5, Plot A, traditional alignment results in a noise bump on the waveform tail of the receiver input (red), but this has no impact on the receiver output waveform (blue). However, the linear slew model interprets the receiver input waveform as the dotted-red waveform, which results in the pessimistic receiver output dotted-blue waveform.

In Plot B, the path-based alignment results in the worst-case receiver output waveform (blue), caused by a much earlier aggressor alignment on the victim net. Notice that there is no perceivable slew degradation in the receiver output waveform. The end result is a path-delay pessimism reduction of 700ps over the linear slew model.

Path-based alignment has demonstrated a dramatic reduction in SI-on-delay pessimism in several industrial designs (see Table 1).

Table 1 — Path-based alignment pessimism reduction results

Table 1 compares the use of path-based alignment to worst-case net-based alignment for three 130nm designs. The worst-case negative slack decreases significantly when using path-based alignment, making it much easier to achieve SI closure.

Another benefit of path-based alignment is its ability to leverage the inherent filtering of the receiver so that slew and delay measurements are performed on a smooth receiver output that is well-defined. This is true even for very bumpy waveforms where the glitch magnitude is greater than half of the supply voltage. Consequently, the slew measurement on the receiver output is also less pessimistic — very significant since it impacts the delay of the next logic stage.

**8 Conclusion**

As process technologies shrink, the number of potential SI problems increases non-linearly. This makes design closure almost unattainable unless engineers employ accurate SI analysis that uses advanced filtering techniques such as noise propagation and path-based alignment. These techniques will empower designers to tackle the handful of real SI violations, keep their design projects on schedule, and deliver on time.

Looking forward, 65nm processes and below will require even further advances in reducing SI analysis pessimism. In particular, statistical and probability-based techniques will be required to realistically handle on-chip process variation and the improbable accumulation of the worst-case SI delay increase along long critical paths.

*Rahul Deokar is senior product marketing manager for timing and signal integrity at Cadence Design Systems, Inc. He has over 10 years of experience in R&D, marketing, and business development in the areas of static timing analysis, logic and physical synthesis.*

