The concept of "interconnect components" is as old as the concepts of busses and data transfer between components. Interconnect components enable data transfer between processing elements that do not have compatible interfaces. They are also used to extend a system bus that lacks the fanout or bandwidth needed to meet data transfer requirements.
Depending on the application, different types of interconnect components are used with different specifications. Some of the popular interconnect components are dual-port buffers, SERDES, and PCI bridge chips. Following are several examples where these interconnect components are used effectively:
- Connecting one processing element to another, e.g., using a dual-port buffer to connect a DSP to a RISC processor.
- Connecting a processor to a backplane, e.g., using a SERDES chip to connect a parallel interface on a processor to a serial backplane.
- Connecting a processor to a bus, e.g., using a PCI bridge chip to connect a host processor to a PCI bus.
- Expanding a bus, e.g., using a PCI bridge to expand the PCI bus.
This article explores the applications, requirements and parameters of chip-to-chip interconnects, such as dual-port buffers, that are used to connect one processor to another. Thanks to its simple memory interface, lack of a complex protocol and wide variety of options, the dual-port buffer is one of the most popular chip-to-chip interconnects available. With a wide portfolio of components, dual-port buffers effectively support a range of requirements, from basic physical-layer needs such as electrical signaling and data transfer bandwidth to more sophisticated needs such as multiple virtual channels.
The following sections delve into the interconnect standard evolution process, dual-port as the default interconnect, and how dual-port meets the generic and specific needs of a variety of applications.
Figure 1: Interconnect Component Examples.
Standards: Relevance, Risk and a Safer Approach
Standards play a significant role in defining interfaces. Using a standards-based interface offers multiple advantages over a proprietary one: because multiple components are available with a standard interface, it brings ease of design, low cost and ready component availability. Designs using a standard interface also scale and upgrade easily. Application-specific standards can further boost performance because they are optimized for a given application.
However, standards evolution takes time and effort. As the example timeline from an IEEE standard below shows (Figure 2), standard formation can easily take two years before approval, and possibly another year before components become widely available.
The fact that multiple parties are involved for several years makes it risky to adopt an upcoming standard interface before it is ratified and widely adopted.
Figure 2: Timeline from an IEEE Standard Process for Illustration.
A safer, practical alternative is to use a default standard. A default standard is not necessarily developed by an industry body for the given application; rather, it is one that is already used in many devices with clearly defined specs. Using a default standard yields predictable performance with limited business risk.
SRAM Interface: A Default Standard
The SRAM interface, with its bidirectional data bus, unidirectional address bus and relevant control signals, is one of the most common interfaces found on a variety of microprocessors, digital signal processors and other processing elements. The standard SRAM interface is also the simplest bus that can be implemented in programmable logic devices such as CPLDs and FPGAs.
Today's SRAM interface provides the flexibility to choose bus width and high-speed I/O standards such as HSTL and LVTTL. It can provide a data transfer rate from a few Kbps up to 36 Gbps with a 72-bit bus width and 250-MHz access frequency. This covers the performance needs of a wide range of chip-to-chip communication applications.
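As a worked check on those numbers, a minimal sketch of the peak-bandwidth arithmetic (note that reaching 36 Gbps from a 72-bit bus at 250 MHz assumes data is clocked on both edges, i.e., a double-data-rate transfer; that assumption is mine, not stated above):

```c
/* Peak SRAM-interface bandwidth in Mbps: bus width (bits) x clock
 * (MHz) x data-rate multiplier (1 = single data rate, 2 = DDR). */
unsigned long sram_bw_mbps(unsigned width_bits, unsigned clk_mhz, unsigned rate)
{
    return (unsigned long)width_bits * clk_mhz * rate;
}
```

With a 72-bit bus at 250 MHz, this gives 18 Gbps at single data rate and 36 Gbps when both clock edges carry data.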
The standard SRAM interface is also common across most processor platforms: RISC and CISC CPUs, DSPs and ASSPs. Figure 3 shows examples of CPUs with an SRAM interface. A key point to note is the variation in interface speed and bus width among these processors.
Figure 3: Several Popular DSPs and Processors with a Memory Interface.
Dual-Port Buffer as an Interconnect Component
Given that a memory interface is quite common and provides the required performance, such as bandwidth, it can serve as a "default" interface for interconnect functions. However, a memory interface is bidirectional for data and unidirectional for control, and peer-to-peer data transfer requires bidirectional control. A dual-port buffer enables both processors to behave as masters, effectively creating bidirectional control.
In addition, a dual-port buffer provides full flexibility to ease system design and achieve performance goals, without adding the complexity of multilayer protocols and processing delays associated with them. Therefore, the dual-port buffer is commonly used as an interconnect component.
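The both-ports-as-master idea can be illustrated with a minimal behavioral sketch of a true dual-port buffer (the depth and function names are illustrative, not a specific part's interface):

```c
#define DP_WORDS 256   /* illustrative buffer depth */

/* Behavioral model of a true dual-port buffer: both ports can read
 * and write the shared array, so either processor can initiate a
 * transfer, i.e., act as a master. */
typedef struct { unsigned short mem[DP_WORDS]; } dpram;

/* Port A side, driven by processor A on its own clock */
void a_write(dpram *d, unsigned addr, unsigned short v) { d->mem[addr % DP_WORDS] = v; }
unsigned short a_read(const dpram *d, unsigned addr)    { return d->mem[addr % DP_WORDS]; }

/* Port B side, driven by processor B on its own clock */
void b_write(dpram *d, unsigned addr, unsigned short v) { d->mem[addr % DP_WORDS] = v; }
unsigned short b_read(const dpram *d, unsigned addr)    { return d->mem[addr % DP_WORDS]; }
```

Because each side has its own access functions (standing in for its own physical port and clock), data can flow A-to-B or B-to-A without either processor arbitrating for a shared bus.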
To meet different system requirements, dual port buffers are available in a wide variety of features and parameters. The next section describes how dual-port buffers address the generic as well as application-specific demands of various systems.
For chip-to-chip data transfer, an interconnect component, such as the dual-port buffer, effectively bridges the gap between different system components, addressing several key interconnect issues:
Clock Frequency: The clock frequency difference between two processing elements creates a serious challenge for transferring data from one processing element to another. For example, in video infrastructure equipment, an image processing element could be operating at one clock rate while an MPEG encoder is operating at another.
In such a scenario, a true dual-port memory can connect the two processing elements, each using its own clock. Because both ports of the device are fully independent, the clocks can run at different rates.
Transfer Rate: A transfer-rate mismatch between two processing elements is quite common in a multiprocessing environment. A built-in buffer memory in a dual-port interconnect addresses this effectively.
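The rate-smoothing role of that buffer memory can be sketched as a small ring-buffer FIFO (depth and names are illustrative): the fast side bursts writes while the slow side drains at its own pace, with the depth absorbing the mismatch.

```c
#define FIFO_DEPTH 8   /* illustrative depth */

/* Ring-buffer FIFO standing in for the dual-port's buffer memory. */
typedef struct {
    unsigned buf[FIFO_DEPTH];
    unsigned head, tail, count;
} fifo;

int fifo_put(fifo *f, unsigned v)        /* returns 0 when full */
{
    if (f->count == FIFO_DEPTH) return 0;
    f->buf[f->head] = v;
    f->head = (f->head + 1) % FIFO_DEPTH;
    f->count++;
    return 1;
}

int fifo_get(fifo *f, unsigned *v)       /* returns 0 when empty */
{
    if (f->count == 0) return 0;
    *v = f->buf[f->tail];
    f->tail = (f->tail + 1) % FIFO_DEPTH;
    f->count--;
    return 1;
}
```

A real part adds flags (empty, full, almost-full) so each side can throttle itself; the full/empty return codes above play that role in the sketch.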
Figure 4: Dual-Port Solutions Meet Physical Layer Interconnect Requirements.
Bus Mismatch: A mismatch of bus width and electrical interface can be either a constraint or a desirable situation. For example, in a design upgrade, a proven ASSP with 100-MHz LVTTL I/O needs to connect to an FPGA. The ASSP pumps out data at a constant rate, but the FPGA design requires data to be pumped in, in bursts and at a much higher transfer rate. If the FPGA can use its 200-MHz HSTL I/O, the design upgrade becomes much simpler.
Even if the FPGA could pump in data at a constant rate, using 200-MHz HSTL I/O with only half the bus width could significantly simplify the PCB routing of the board. More flexible dual ports provide a variety of electrical interfaces on either port, addressing this situation most effectively.
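The half-width-at-double-rate trade can be sketched as a simple width "gearbox" (the function names and low-half-first beat order are my conventions for illustration): a 32-bit word at f MHz becomes two 16-bit beats at 2f MHz, so the same payload bandwidth moves over half the pins.

```c
/* Split one 32-bit word into two 16-bit beats; low half first. */
void split_32to16(unsigned long w, unsigned short beat[2])
{
    beat[0] = (unsigned short)(w & 0xFFFFu);
    beat[1] = (unsigned short)((w >> 16) & 0xFFFFu);
}

/* Reassemble two 16-bit beats back into the original 32-bit word. */
unsigned long merge_16to32(const unsigned short beat[2])
{
    return (unsigned long)beat[0] | ((unsigned long)beat[1] << 16);
}
```

The bandwidth is unchanged (32 bits x 100 MHz = 16 bits x 200 MHz = 3.2 Gbps); only the pin count and signaling change.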
Common Control and Data Interface: A common physical interface for the control plane and data plane is highly desirable in a complex multiprocessing environment. For example, in a wireless basestation where multiple DSPs transfer chip-rate and symbol-rate data among themselves, they also need to communicate control information, such as antenna number, signal strength and overload status, to another DSP. Ideally, this is achieved through the same high-speed interface used for the data plane, to keep board-design complexity under control. That is possible only if the interface supports multiple virtual channels.
A dual-port buffer supports this requirement without any protocol overhead. Through mailboxes, dual-port buffers can even send interrupt signals to the receiving processor for high-priority control plane communication.
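The mailbox mechanism can be sketched behaviorally as follows (names are illustrative; real parts dedicate a buffer location per direction and route the flag to an interrupt pin): posting a message latches it and raises an interrupt flag to the other side; reading the mailbox clears the flag.

```c
/* Behavioral sketch of a dual-port mailbox. */
typedef struct {
    unsigned short msg;
    int irq_pending;   /* models the INT pin to the receiving CPU */
} mailbox;

void mbox_post(mailbox *m, unsigned short msg)
{
    m->msg = msg;
    m->irq_pending = 1;    /* write raises the interrupt */
}

int mbox_take(mailbox *m, unsigned short *msg)   /* 0 if nothing posted */
{
    if (!m->irq_pending) return 0;
    *msg = m->msg;
    m->irq_pending = 0;    /* read clears the interrupt */
    return 1;
}
```

This lets a high-priority control message bypass the polling a processor would otherwise do on the shared buffer.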
Virtual Channels: Virtual channels that carry multiple independent data streams between processors can offload significant protocol overhead from them.
The dual-port buffer enables flexible management of multiple virtual flows by providing mechanisms that let a processor address each virtual flow as if it were a specific memory location.
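One simple way to realize that addressing scheme (the channel count and window size here are illustrative assumptions) is to give each virtual channel a fixed window of the shared buffer, so a processor reaches stream n just by reading or writing within window n, with no framing or protocol overhead:

```c
#define VC_COUNT 4    /* illustrative: four virtual channels */
#define VC_WORDS 64   /* buffer words per channel window */

/* Map (channel, offset) to a word address in the shared buffer:
 * channel n owns words [n*VC_WORDS, (n+1)*VC_WORDS). */
unsigned vc_addr(unsigned chan, unsigned offset)
{
    return (chan % VC_COUNT) * VC_WORDS + (offset % VC_WORDS);
}
```

Because the windows never overlap, two streams can be serviced concurrently from either port without interfering with each other.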
Applications Requirements for Interconnect
In addition to the generic system design problems solved by dual-port buffers, as described above, different application requirements define parameters of dual-port buffers for that particular application.
For example, wireless infrastructure is highly performance-driven. Chip-to-chip data transfer in a 3G basestation can run into multiple Gbps, so interconnect components must not only support high data transfer rates but also provide sufficient buffer depth and architectural flexibility.
Similarly, for a high-end storage system, the ability to support multiple virtual channels is useful for supporting credit buffers for different data streams. Such an ability in an interconnect component can add much more value to the system design.
The new generation of cell phones uses separate baseband and application processors. A dual-port buffer connecting these two processors in a cell phone needs to be extremely small and use low power.
Finally, consider that a video infrastructure environment typically processes multiple video feeds at the same time. Multiple virtual channels through interconnect components greatly simplify the design.
Figure 5: Dual Port Meets Application-Specific Interconnect Requirements.
Inter-processor interconnect requirements usually evolve with applications. Until application architectures and optimized interface standards mature, using a "default" interface for chip-to-chip interconnect, such as the memory interface, provides the best performance without additional risk.