High-speed I/O
All of this means that adding analogue functionality to a large digital chip adds significant risk to the project. That risk usually translates into a re-spin, delaying time to market and potentially missing the window of opportunity entirely.
So while there is a clear and continuing drive to further integrate functionality into a single ASIC, there are problems and disadvantages. The obvious alternative is to keep high-performance interface functionality, such as equalization, separate. This will have a number of advantages, as well as drawbacks.
The key advantage of a separate device approach is that the digital logic can be designed in the most advanced CMOS process without the risk of the analogue parts not working. Expensive, high-end CMOS wafer real estate is used only for digital gates, enabling a smaller die or more advanced functionality, and therefore a more cost-effective implementation of the chip.
Regarding power dissipation, and therefore heat generation, the discrete approach also spreads the heat sources across the board, enabling better thermal design. Figure 5 illustrates very simply how the thermal properties of the SoC and disintegration approaches differ.
Figure 5. The two thermal scenarios – SoC vs. disintegration.
In terms of drawbacks, the key concern is the additional system power associated with the extra interface between the equalizer and the ASIC. The ASIC interface can obviously be kept much simpler (and lower power) when it only has to connect to an equalizer in close proximity on the same PCB, rather than include significant signal-conditioning functionality itself, but it still consumes power. Table 2 gives an analysis of the power dissipation in the two different scenarios.
Table 2. Comparison of high-speed interface power consumption scenarios
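The trade-off can be framed with a simple back-of-envelope model: the discrete partition simplifies the ASIC-side SerDes but adds a short-reach chip-to-chip link per lane. The sketch below is illustrative only; the lane count and all per-lane power figures are hypothetical assumptions chosen for the example, not values taken from Table 2.

```python
# Back-of-envelope comparison of interface power for the two partitions.
# All numbers are hypothetical assumptions, not values from Table 2.

def total_io_power_mw(lanes, backplane_mw, asic_serdes_mw, extra_link_mw=0.0):
    """Total interface power in mW for one direction of the data path."""
    return lanes * (backplane_mw + asic_serdes_mw + extra_link_mw)

LANES = 8  # e.g. an 8-lane backplane interface (assumption)

# Integrated SoC: full-featured SerDes with equalization on the ASIC itself.
soc = total_io_power_mw(LANES, backplane_mw=120, asic_serdes_mw=80)

# Discrete equalizer: simpler ASIC SerDes, plus a short-reach chip-to-chip
# link between the ASIC and the nearby equalizer.
discrete = total_io_power_mw(LANES, backplane_mw=120, asic_serdes_mw=60,
                             extra_link_mw=50)

print(f"SoC partition:      {soc:.0f} mW")
print(f"Discrete partition: {discrete:.0f} mW")
```

With these assumed figures the discrete partition costs extra power overall, even though the ASIC-side SerDes itself is cheaper; the real balance depends on how much of the signal-conditioning burden the equalizer removes from the ASIC.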
The additional PCB real estate is also an important consideration. While some of this can be mitigated by using glueless chip-to-chip interfaces and high-density chip solutions (multi-lane equalizers with 4, 8 or more lanes), the stand-alone equalizer approach will inevitably take more space. In some systems this will be acceptable, while in others there plainly is not enough space.
There are other benefits to separating the digital processing functions from the signal-conditioning interface functions. Time to market is clearly a key metric in the success of any semiconductor company. It is put at high risk when highly complex circuits that push the maturity of the design process are required in areas of low design competency.
Smaller process geometries, at least while the technology matures, carry high engineering costs per device. This dictates that the product must either ship in high volumes or command high prices in order to recover those costs. This is a barrier to deploying the technology in many analogue applications, whose ASPs and volumes are lower than consumer or mobile applications can support.
There is a cost, risk and performance-balancing act to be performed. The response in low to medium volume applications is to remain with older processes where costs and risks are lower and where power savings and performance gains due to the analogue characteristics of new processes are unproven.
In many cases the drive towards higher integration is the only factor pushing the use of small geometries, along with the cost savings this is perceived to provide. That perception is false: any extra feature adds risk, and for high-risk, poorly characterized features the increase in time to market, and the associated loss of revenue and market position, are often too high to bear.
System in Package
One way to reduce the additional power associated with the equalizer-to-ASIC interface is a so-called Multi-Chip Module (MCM) or System in Package (SiP) approach, in which multiple die are placed in a single package. The die can use different process geometries, or even different technologies, each suited to its function. They can be connected via the package substrate, bond wires, or more advanced technologies such as Through-Silicon Vias (TSVs). The latter enables direct stacking of die using bumps for interconnect (instead of bond wires), as illustrated in Figure 6.
Figure 6. Using multiple die in an MCM or SiP.
The reduction in distance between the different chips enables reduced power interfaces, with small signal swings and even simplified approaches such as forward clocking.
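The saving from shorter distances and smaller swings follows, to first order, from the dynamic-power relation P = C · V² · f for a voltage-mode driver. The sketch below uses hypothetical capacitance, swing and data-rate values (assumptions for illustration, not measured figures) to compare a long PCB trace against a short die-to-die bump connection.

```python
# First-order switching power of a voltage-mode link driver: P = C * V^2 * f.
# All parameter values below are illustrative assumptions.

def switching_power_mw(c_load_pf, v_swing, bitrate_gbps, activity=0.5):
    """Dynamic power in mW, with average toggle rate f = activity * bitrate."""
    c = c_load_pf * 1e-12              # load capacitance in farads
    f = activity * bitrate_gbps * 1e9  # average toggle rate in Hz
    return c * v_swing**2 * f * 1e3    # watts -> milliwatts

# Long PCB trace at full swing vs. a short in-package die-to-die connection
# with a much smaller load capacitance and a reduced signal swing.
pcb_trace = switching_power_mw(c_load_pf=5.0, v_swing=1.0, bitrate_gbps=10)
die_to_die = switching_power_mw(c_load_pf=0.5, v_swing=0.4, bitrate_gbps=10)

print(f"PCB-trace driver:  {pcb_trace:.1f} mW")
print(f"Die-to-die driver: {die_to_die:.2f} mW")
```

Because power scales with the square of the swing and linearly with capacitance, even modest reductions in both compound quickly, which is why in-package links can also adopt simplified schemes such as forward clocking.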
While SiP has its obvious benefits in offering cost effective semiconductor systems, this approach will still not address the thermal considerations mentioned earlier. Where heat generation is of paramount concern, the only viable option will be to implement the processing and backplane connectivity functionality in separate chips. It seems that for the foreseeable future there will be a space for each of these different solutions.
About the Author
Allard van der Horst is Senior Scientist at Maxim Integrated Products. Allard joined Maxim in September 2010 through the acquisition of Phyworks, where he held the position of CTO. His current interests include signal integrity solutions, next-generation PON, collaborations and standards development. Allard has 20 years' experience in the industry and holds an MSc in Electrical Engineering and Computer Science from the University of Twente (Netherlands).