TORONTO – As Ethernet speeds get faster, Rambus is looking to make sure memory and interfaces can keep up with the recent launch of its 56G SerDes PHY.
The analog-to-digital converter (ADC) and digital signal processing (DSP) architecture of the 56G SerDes PHY is designed to meet the long-reach backplane requirements of the industry's transition to 400 Gigabit Ethernet (400 GbE) applications, said Mohit Gupta, senior director of product marketing at Rambus. This means it can support scaling to speeds as fast as 112G, which are required in the networking and enterprise segments, such as enterprise server racks that are moving from 100G to 400G.
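The lane math behind these port speeds is straightforward. As a rough sketch (assuming roughly 50G and 100G of payload per 56G and 112G electrical lane, respectively, after coding overhead; the exact rates are defined by the IEEE 802.3 standards, not stated in this article):

```python
import math

def lanes_needed(port_gbps, lane_gbps):
    """Number of electrical lanes required to carry a port at a given per-lane payload rate."""
    return math.ceil(port_gbps / lane_gbps)

# A 400 GbE port built from ~50G payload lanes (56G-class SerDes)
print(lanes_needed(400, 50))   # 8 lanes
# Doubling the lane rate to ~100G (112G-class SerDes) halves the lane count
print(lanes_needed(400, 100))  # 4 lanes
```

This is why each SerDes generation matters: moving from 56G to 112G lanes lets the same 400G port be built with half the lanes, pins, and board routing.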
“Ethernet is moving faster than ever,” Gupta said. “The pace has picked up substantially due to big data, the Internet of Things (IoT) and other trends putting high demands on communication channels. There is already a forum for 112G SerDes speed which will drive the 800G standard.”
One clear use case, said Gupta, is data center deployment by the “big four” — Facebook, Microsoft, Amazon and Google.
Wireline and wireless communications are also guiding Rambus' memory and interface development. The move from 4G to 5G is driving new architectures such as C-RAN, which further pushes the SerDes requirements for communication between the remote radio head (RRH) and the baseband unit (BBU) from 12G to as high as 48G.
Generally speaking, Gupta said, data center networking speeds are keeping pace with processing power and storage capabilities, and many architectures are being discussed. “Power is the big element of every decision being made as that equates to money,” he said. “New memory architectures are also being looked at for the same reason.”
Rambus will use long-term partner Samsung's 10nm Low-Power Plus (LPP) process technology for the 56G SerDes PHY, which Gupta said provides higher performance and lower power than first-generation FinFET nodes. Rambus has enjoyed success with Samsung's 14nm process for its 28G SerDes, he said. Networking applications have been one of Samsung's key segments of focus and a cornerstone of its 10LPP offering.
Gupta said the evolution of memory interfaces for networking applications is being driven by high-bandwidth and low-latency requirements. High-bandwidth memory (HBM) technology that was originally targeted at graphics companies such as NVIDIA and AMD is now being deployed in networking chips, and the technology is becoming more accessible as 2.5D integration reaches maturity.
On-chip memory continues to be dominated by SRAMs and TCAMs for communication chips, he said. “SRAM performance still matters a lot for buffers being able to communicate at higher speed as logic is being driven faster.”
DDRx remains prevalent for off-chip memory due to ease of use and cost, but HBM is initially finding its way into applications that can justify the added cost.
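The bandwidth gap that justifies HBM's cost premium comes largely from interface width. A minimal sketch, using nominal published figures (a 1024-bit interface at 2 Gbps per pin for an HBM2 stack, a 64-bit channel at 3.2 Gbps per pin for DDR4-3200; neither number comes from this article):

```python
def peak_bandwidth_gbs(bus_width_bits, pin_rate_gbps):
    """Peak interface bandwidth in GB/s: total bits per second divided by 8."""
    return bus_width_bits * pin_rate_gbps / 8

# One HBM2 stack: 1024-bit interface at 2 Gbps per pin
print(peak_bandwidth_gbs(1024, 2.0))  # 256.0 GB/s
# One DDR4-3200 channel: 64-bit interface at 3.2 Gbps per pin
print(peak_bandwidth_gbs(64, 3.2))    # 25.6 GB/s
```

Under these assumptions a single HBM stack delivers roughly ten times the peak bandwidth of a DDR4 channel, which is why it appeals to networking chips despite the added cost of 2.5D integration.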
Gupta said SerDes PHY IP is one of Rambus' flagship products, and that its Snowbush IP acquisition is enabling it to broaden its offerings. Last summer, it announced it had developed the first production-ready 3200 Mbps DDR4 PHY available on GlobalFoundries Inc.'s FX-14 ASIC platform using its power-performance-optimized 14nm LPP process. The Rambus R+ DDR4 PHY intellectual property uses Rambus' proprietary R+ architecture, based on the DDR industry standard, and is also part of the company's suite of memory and SerDes interface offerings for networking and data center applications.
Rambus' 56 Gbps Multi-Protocol SerDes (MPS) PHYs are PAM-4- and NRZ-compliant IP solutions designed to provide reliable performance across challenging long-reach data center environments.
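The difference between the two signaling schemes is the number of bits per symbol: NRZ uses two voltage levels (1 bit per symbol), while PAM-4 uses four levels (2 bits per symbol), doubling the data rate at the same symbol (baud) rate. An illustrative sketch (the Gray-coded level mapping and the level values -3/-1/+1/+3 are conventional textbook choices, not details from this article or from Rambus' implementation):

```python
# Gray-coded bit-pair to PAM-4 level mapping: adjacent levels differ by one bit,
# so a one-level receiver error corrupts only a single bit.
PAM4_LEVELS = {(0, 0): -3, (0, 1): -1, (1, 1): 1, (1, 0): 3}

def pam4_encode(bits):
    """Map a bit sequence (even length) to PAM-4 symbol levels, two bits per symbol."""
    assert len(bits) % 2 == 0, "PAM-4 consumes bits in pairs"
    return [PAM4_LEVELS[(bits[i], bits[i + 1])] for i in range(0, len(bits), 2)]

def data_rate_gbps(baud_gbd, bits_per_symbol):
    """Data rate is symbol rate times bits carried per symbol."""
    return baud_gbd * bits_per_symbol

print(pam4_encode([0, 0, 1, 0, 1, 1]))  # [-3, 3, 1]: three symbols for six bits
print(data_rate_gbps(28, 1))  # NRZ at 28 GBd   -> 28 Gbps
print(data_rate_gbps(28, 2))  # PAM-4 at 28 GBd -> 56 Gbps
```

The trade-off is that PAM-4's four levels leave one-third of NRZ's voltage margin between levels, which is part of what makes long-reach channels challenging at these rates.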
As high-speed networking tracks upwards of 400G, it's not just about speeding up the interfaces between network nodes; it's also about being able to distribute network nodes, said Jim McGregor, principal analyst at TIRIAS Research. “We're talking about speeds we only dreamed about a couple of years ago.” However, existing network architectures aren't built for the emerging applications that are driving the need for speed, he said.
Those applications include wireless communications, IoT, artificial intelligence and deep learning applications, as well as autonomous vehicles, which will be gathering a great deal of data that will be processed in the cloud, said McGregor. “Any way you look at it, the amount of data these networks have to handle is going to grow exponentially over the next 20 years,” he said. “Being able to go from 40G to 100G to 400G and beyond is really critical.”
With semiconductors, it has gotten to the point where everything is being put on a single chip to bring it all as close together as possible. For a data center, with all of its storage, processing and compute elements, that's just not feasible, said McGregor. “You have to have these high-speed interfaces,” he added.
He said the key to Rambus' strategy has not been memory, but the memory interface. “They've tried to take that IP and extend it, not just to memory but to any interface.” Partnering with Samsung gives Rambus the advanced manufacturing capabilities for these types of high-speed interfaces.
Long term, applications such as AI are going to push us to new architectures, said McGregor, as present ones are limited by physics and Moore's Law. This includes Rambus' work on memory subsystems that operate at cryogenic temperatures to support quantum computing.
—Gary Hilson is a general contributing editor with a focus on memory and flash technologies for EE Times.