The recent appearance of multicore system-on-chip (SoC) devices has rearranged the boundaries among silicon devices, boards and subsystems. This trend has led to significant changes in chip-to-chip and board-to-board interconnect requirements. Are existing standards-based interconnects ready for this transition?
With the introduction of the microprocessor in the 1970s, simple computing systems were constructed on a single board using a discrete processor, memory controller and I/O interface device. Board-level buses connected the devices; when higher performance was desired, multiple boards were assembled together. Backplanes provided communication between cards using a system-level bus.
These board- and system-level interconnect protocols were proprietary. Over time, closed protocols gave way to standardized protocols such as Ethernet, PCI Express and RapidIO.
Concurrently, integrated circuit technology followed Moore's law and increased both the number and speed of available transistors at a given price. Together these trends dramatically boosted processor performance.
Many generations of silicon devices have leveraged this virtuous cycle. Unfortunately, the rate of increase in single-processor performance has leveled off substantially, and the most important contributing factor has been power. Smaller transistors switch faster, but shrinking them also makes them leakier, which increases static power; and the faster transistors switch, the more dynamic power they dissipate.
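The two power terms can be made concrete with the standard first-order CMOS power model (the symbols below are the conventional textbook ones, not taken from this article):

```latex
P_{total} = P_{dynamic} + P_{static}
          \approx \underbrace{\alpha\, C\, V_{dd}^{2}\, f}_{\text{switching}}
          \;+\; \underbrace{V_{dd}\, I_{leak}}_{\text{leakage}}
```

Here \(\alpha\) is the activity factor, \(C\) the switched capacitance, \(V_{dd}\) the supply voltage, \(f\) the clock frequency and \(I_{leak}\) the leakage current. The dynamic term grows directly with clock rate, while process shrinks raise \(I_{leak}\) and hence the static term, which is why both faster clocks and smaller transistors push total power upward.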
This spiraling power increase highlighted several realities driven by the physics of current silicon process technology. First, individual processor performance will be limited by how much power can be supplied and dissipated in a system. Second, transistor budgets will continue to increase but achievable clock rates will not.
With still-growing transistor budgets, the industry has moved quickly to devices with multiple processor cores that also integrate memory controllers, application accelerators and I/O interfaces to form a multicore SoC. Multicore devices promise significantly increased system performance.
The rise of SoC devices has blurred the boundaries between individual components and the system architectures they implement. Where a board once held one complete computing system, today many such systems reside on a single device.
The transition to SoC devices changes the requirements for the interconnects that link SoCs to other devices and networks. Board- and system-level interconnects were initially based on shared buses. As with processors, the demand for more interconnect performance was first met by raising the clock rate and widening the bus. And as with processors, physics eventually intervened, forcing the number of devices on the bus to be reduced. This led to bus segmentation, hierarchical topologies and, ultimately, point-to-point, switch-based networks.
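The scaling argument behind that evolution can be sketched with a simple back-of-the-envelope model. The sketch below (all bandwidth figures are made up for illustration) contrasts a shared bus, whose total bandwidth is fixed no matter how many devices contend for it, with a non-blocking switched fabric, whose aggregate bandwidth grows with the number of endpoints:

```python
def shared_bus_aggregate(link_gbps: float, n_endpoints: int) -> float:
    """A shared bus is a single medium: aggregate bandwidth stays flat as
    devices are added (in practice it falls, since electrical loading
    forces the clock rate down)."""
    return link_gbps

def switched_aggregate(link_gbps: float, n_endpoints: int) -> float:
    """A non-blocking switch gives each endpoint its own point-to-point
    link, so aggregate bandwidth scales with the endpoint count."""
    return link_gbps * n_endpoints

# Illustrative comparison at a hypothetical 10 Gbps per link
for n in (2, 4, 8, 16):
    print(f"{n:2d} endpoints: bus {shared_bus_aggregate(10.0, n):6.1f} Gbps, "
          f"switched {switched_aggregate(10.0, n):6.1f} Gbps")
```

The gap widens linearly with endpoint count, which is why segmentation and switch-based topologies win once device counts grow.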
Embedded systems are often partitioned into three subsystem functions: control plane, data plane and system management. When a board held a single computing system, the number of system-level traffic streams was small. This was fortunate, because a bus-based interconnect, by definition, carries only one traffic stream at a time.