Over the past 10 years, advances in silicon technology have enabled many applications to process more data and produce higher-quality results than was once thought possible. Today, advances in microprocessor technology are opening the door for even more applications to handle richer data types and produce results far more quickly than was historically possible.
With this new ability to process more data, electronic systems in areas such as telecommunications, industrial automation, video, avionics and many others now need to move data to and from their processing units efficiently. The rate at which data moves within these systems can now approach several gigabits per second.
Historically, gigabit data rates were confined to telecommunications and data communication systems. Today, however, systems such as medical imaging, machine vision, and many others also need gigabit serial links to move data efficiently between sources, such as high-speed imaging cameras and high-definition imaging systems, and their data processing units.
Gigabit serial link implementation is not new to the electrical
engineering community. However, gigabit serial links have been
historically implemented using fiber optic cable and optical modules
that convert electrical signals to optical and vice versa. The optical medium is well suited to very high bit rates because it is immune to the impediments that typically degrade electrical signals, such as EMI, the skin effect, and crosstalk. When high-bit-rate electrical signals are transmitted over copper media, these impediments often cause bit errors at levels unacceptable for many applications.
Ten to 15 years ago, if an engineer wanted to implement multi-gigabit
serial links within an application, an optical link was likely to be
the only choice available given the state of semiconductor technology
at the time. Despite their benefits, optical links have several key drawbacks that make them prohibitive for many applications. The first is cost: an optical implementation can cost several times as much as an equivalent copper cable implementation.
There are other drawbacks as well: working in the optical domain requires a specialized skill set, and optical signals require special test equipment. Engineers who have not worked with optical links face a steep learning curve when implementing them. These drawbacks often lead to increased development and production costs that many applications cannot bear, given that most products must hit specific price points to gain market acceptance.
Over the past 10 years, advancements in silicon technology have enabled multi-gigabit serial links over copper media such as twisted-pair cables and backplane traces. Semiconductor techniques such as receive equalization and transmit pre-emphasis now enable engineers to implement multi-gigabit serial links that can span 20 to 40 meters of twisted-pair copper. Further, these techniques are now readily found in integrated circuit devices such as gigabit SerDes (Serializer/De-serializers) that make implementing cable- and backplane-based serial links much easier for the system designer. These advances in semiconductor technology are enabling system designers to cost-effectively implement gigabit serial links within their systems.
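The intuition behind transmit pre-emphasis can be sketched in a few lines: the transmitter drives the line harder on bit transitions than on runs of identical bits, pre-compensating for the channel's high-frequency loss (receive equalization applies a complementary boost at the far end). The sketch below models pre-emphasis as a simple two-tap FIR filter in Python; the tap weight is illustrative only and is not taken from any particular SerDes device.

```python
# Minimal sketch of transmit pre-emphasis as a 2-tap FIR filter.
# Bits map to symbols +1/-1; a negative post-cursor tap makes the
# output larger on transitions than on repeated bits, boosting the
# high-frequency content that copper attenuates most.
# The tap weight below is illustrative, not from a real datasheet.

def pre_emphasize(bits, post_tap=-0.25):
    symbols = [1.0 if b else -1.0 for b in bits]
    out = []
    prev = 0.0
    for s in symbols:
        # current symbol plus a weighted copy of the previous one
        out.append(s + post_tap * prev)
        prev = s
    return out

# A 0->1 transition is driven at 1.25, while a repeated 1 is
# de-emphasized to 0.75:
levels = pre_emphasize([0, 1, 1, 0])
```

Real SerDes transmitters implement the same idea with a handful of digitally programmable current-mode taps, but the filtering principle is the one shown here.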