
Datacenters Drive 25GE Effort

Ethernet group includes Google, M'soft
7/3/2014 09:40 AM EDT
TanjB (Rookie)
a practical goal
7/3/2014 7:04:24 PM
25Gbps is where the transducers are landing for simple, single-fiber capacity. 10Gbps underutilizes the hardware. 40Gbps does not look feasible without 2 or more fibers; it is an implausible step up for transducers any time soon.

In a data center with millions of connectors it makes sense to aim at the optimal feasible point which seems to be 25G for the next few years.  It is a good idea to get all the parties interested in buying and selling at this performance point to agree on compatibility.

 

Nicholas.Lee (Rookie)
10G, 40G, 100G vs. 25G, 50G, 100G
7/4/2014 4:12:48 AM
Good article.

At first glance it might not seem obvious to readers why you would have a 40G standard and a 50G standard when they are so similar in speed, but this all needs to be seen through the lens of the SERDES transceiver rates on the chips.

When a 10G SERDES was the fastest transceiver available, it made perfect sense to use bundles of fibres, each of which terminated at a 10G transceiver, and so we ended up with standards based on 1, 4 and 10 fibres aggregating to bandwidths of 10G, 40G and 100G respectively.

Now, chips are available with 25G SERDES transceivers, so companies are right to revisit the standards and update them to be based on multiples of 25G. Hence bundles of 1, 2 and 4 fibres aggregating to bandwidths of 25G, 50G and 100G Ethernet respectively.
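As a back-of-the-envelope sketch of that lane arithmetic (nominal rates only; the actual standards lose a little more to line coding and FEC on the wire, and the per-lane figures below are just the round numbers used in this thread):

    # Aggregate Ethernet rate as lane_count x per-lane SERDES rate (Gb/s).
    # Illustrative only: real links also pay 64b/66b encoding and FEC overhead.
    def aggregate_gbps(lanes, lane_rate_gbps):
        return lanes * lane_rate_gbps

    # 10G-SERDES era: 1, 4 and 10 lanes -> 10G, 40G, 100G
    for lanes in (1, 4, 10):
        print(f"{lanes} x 10G -> {aggregate_gbps(lanes, 10)}G Ethernet")

    # 25G-SERDES era: 1, 2 and 4 lanes -> 25G, 50G, 100G
    for lanes in (1, 2, 4):
        print(f"{lanes} x 25G -> {aggregate_gbps(lanes, 25)}G Ethernet")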

 

rick merritt (Blogger)
What's after 25G serial?
7/4/2014 4:46:18 AM
Thanks, Nicholas, for cutting to the heart of the issue--the rise of the 25G serial link after so many years of hard work.

It's easy to predict quite a big wave of products will ride this technology.

So what are engineers (serdes experts) turning their energies to next? I know there was a 100G serial workshop sponsored by the Ethernet Alliance this month, but methinks that's pretty far-future stuff, yes?

markhahn0 (Rookie)
IB?
7/4/2014 6:15:02 PM
I always find it strange how the eth world bumbles along, apparently oblivious to the IB world, in spite of IB being almost sole-sourced by Mellanox and that company appearing in this effort.

56Gb IB (which is admittedly 4-lane, thus 14Gb/lane) has been around for years, and is pretty much the entry level in the computational datacenter. And it's copper. So this whole thing is puzzling on two counts: if the demand is there, why are these eth/optical efforts lagging, and why are they insisting on optical? Power can't possibly be the issue, since we're not talking about high-density applications (even a bundle of 25Gb coming from a rack is never going to compare to the power dissipated by the *compute* contents of the rack, or even the disks). Cable *length* instead?

GSMD (Manager)
SRIO at 25Gbit
7/4/2014 10:20:35 PM
Now it looks like Ethernet is tracking 25G SRIO! The most logical upgrade path is always to track commodity SERDES and also to support single-lane configurations. For some reason the IEEE Eth group could not garner enough votes for subsets of the 100G spec's 4x25G variant.

Since the RapidIO consortium is advocating replacing Eth with SRIO in datacenters, this makes an apples-to-apples comparison easier. While SRIO supports 16 lanes, the common config at 25G will be 1-4 lanes.

Not sure why IB went to the intermediate 14G lane before going to the 26G lane. 25/28G is getting standardized and it would have made sense to wait for it. I guess they wanted to keep the tag as the fastest link to gain market share. Not sure it is worth it though, since using non-standard (relatively speaking) strategies pushes up cost. IB already has a cost issue.

PCIe has enough volume so it can afford to go down its own path for lane speeds. Also they have to cater to low cost designs and hence cannot easily go 25G. 

But it would be nice if all protocols standardized on lane speeds, connectors and cabling and differentiated only at the protocol level. I do not think any of the protocols use lane speed or encoding differences as a marketing ploy. I guess that is too much to ask, but we are kind of getting there.

Ethernet, HMC and SRIO are at 25G per lane. IB is at 26G. I forget where Interlaken is going. The only holdout is PCIe. But lane encoding standardization will also help in creating common SERDES parts. Eth is at 64/66b, PCIe is at 128/130b. But I prefer the Interlaken/SRIO 64/67b since it limits 1/0 disparity and helps maintain DC balance. This is crucial in making line interface design simpler and keeping costs lower. It will definitely make the lives of the FPGA makers simpler. Currently the SERDES configurations in FPGAs are a trifle complex!
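For reference, a quick sketch of what those line codes cost in raw efficiency (the numbers follow directly from the payload/codeword sizes quoted above; the DC-balance behaviour of 64b/67b is only noted in a comment, not modelled):

    # Raw line-code efficiency = payload bits / codeword bits on the wire.
    # 64b/67b (Interlaken/SRIO) spends one extra bit versus 64b/66b to carry an
    # inversion flag that bounds running disparity and keeps the line DC-balanced.
    line_codes = {
        "64b/66b   (Ethernet)":   (64, 66),
        "64b/67b   (Interlaken)": (64, 67),
        "128b/130b (PCIe Gen3+)": (128, 130),
    }

    for name, (payload, codeword) in line_codes.items():
        efficiency = payload / codeword
        print(f"{name}: efficiency {efficiency:.4f}, overhead {100 * (1 - efficiency):.2f}%")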

 

 

DrFPGA (Blogger)
Google and Msoft
7/5/2014 6:32:57 PM
Google and Msoft crafting standards for Ethernet. I can't wait to see what 'extras' they load us down with. Maybe corrosion protection on the bottom of the connection...

TanjB (Rookie)
Re: IB?
7/5/2014 11:46:41 PM
Mark, IB does not scale to the data center.  It is nice for building a supercomputer in a rack with RDMA and a few dozen number crunchers, but not designed to connect 100,000 or more servers in a data center.  That is where the interest in 25G serdes based links comes into play.  Maximum data in minimum counts of connectors and fibers.

Mellanox is playing with Ethernet using RRoCE to try to get some of the IB benefits on an Ethernet fabric, but it is unclear if the PFC mechanism used to replace the tokenized pacing in IB will really work with an interesting number of machines and the short bursts of randomly sourced traffic which characterize the DC.
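To make the contrast concrete, here is a minimal sketch (assumed behaviour, not any vendor's implementation) of the IB-style credit pacing mentioned above: the sender may only transmit while it holds credits, and the receiver returns a credit as each buffer slot drains, so the buffer can never overflow. Ethernet PFC instead lets the sender transmit freely and emits pause frames once a queue crosses a threshold, which is the part that is harder to tune at data-center scale.

    # Minimal sketch of credit-based (token) pacing on one link.
    from collections import deque

    RX_BUFFER_SLOTS = 4

    credits = RX_BUFFER_SLOTS      # credits held by the sender
    rx_queue = deque()             # packets sitting in the receiver buffer

    def try_send(packet):
        global credits
        if credits == 0:
            return False           # sender stalls: no credit, no transmission
        credits -= 1
        rx_queue.append(packet)
        return True

    def receiver_drain():
        global credits
        if rx_queue:
            rx_queue.popleft()
            credits += 1           # credit returned to the sender

    # Sender bursts 8 packets while the receiver drains every other cycle.
    sent = stalled = 0
    for cycle in range(16):
        if sent < 8:
            if try_send(f"pkt{sent}"):
                sent += 1
            else:
                stalled += 1
        if cycle % 2 == 1:
            receiver_drain()

    print(f"sent={sent}, cycles stalled={stalled}, buffer never exceeded {RX_BUFFER_SLOTS} slots")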

interconnect Guy (Manager)
Re: SRIO at 25Gbit
7/6/2014 8:06:22 AM
In the end the winner is Ethernet: it has the backers and the ecosystem. Count the tier-one vendors and customers supporting the effort, and it's clear.

rick merritt (Blogger)
Re: IB?
7/6/2014 10:21:28 AM
@TanjB: Are there any hard numbers on the number of cluster connections IB can support vs. Ethernet? And SRIO?

GSMD (Manager)
Re: IB?
7/7/2014 12:58:02 AM
1. Basic IB, I think, is 16-bit addressing, but there is extended addressing support available. Cost is an issue with IB, but where performance matters, IB is used in storage and RDBMS interconnects.

 

2. SRIO does not even have the addressing limitations, and the new 10xN spec allows 25km links. SRIO did not have a standard SW ecosystem for large clusters, but that is being remedied. Once that spec is out, I suspect you will see SRIO adoption increase in data centers. I am a member of a couple of SRIO WGs, so I am necessarily biased!

3. Ethernet is surviving in the data center only because of legacy reasons. It is a horrible anachronism in this day and age of fast, low-latency interconnects. In fact I question the very need for networking in a data center. IP packets are a horribly inefficient way to communicate in a closed data center. RDMA with proper capability-based security at OS level is vastly more efficient. Just imagine the wasted bandwidth due to the IP stack and the processing overhead. And when you start to deploy large storage networks like NVMe or my own lightstor, Ethernet is not even an option that can be considered. Does someone seriously think that when I connect two CPU cards or two boxes in a rack at 100G per link (backplane lane or fiber), I have to give up a max of 30% capacity to protocol overheads? (A rough sketch right after point 4 puts numbers on that overhead.)

4. And when you go to extra-large clusters of CC-NUMA machines, Ethernet is even more of a killer since packets tend to be of 64KB size.
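To put rough numbers on the "capacity lost to the stack" point in item 3 above, here is a sketch of wire efficiency for plain TCP/IPv4 over Ethernet (header sizes are the standard minimums; the payload sizes are just illustrative, and RDMA-style fabrics avoid most of this per-packet cost):

    # Per-packet wire overhead for TCP/IPv4 over Ethernet, in bytes.
    PREAMBLE_IFG = 8 + 12   # preamble/SFD + minimum inter-frame gap
    ETH_HDR_FCS  = 14 + 4   # Ethernet header + CRC
    IPV4_HDR     = 20       # IPv4 header, no options
    TCP_HDR      = 20       # TCP header, no options

    def wire_efficiency(payload_bytes):
        total = payload_bytes + TCP_HDR + IPV4_HDR + ETH_HDR_FCS + PREAMBLE_IFG
        return payload_bytes / total

    # The smallest payloads lose over half the raw link rate; only near-MTU
    # payloads get close to the nominal bandwidth.
    for payload in (64, 256, 1460):
        print(f"{payload:5d} B payload -> {100 * wire_efficiency(payload):.1f}% of raw link rate")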

 

Fundamentally the computing model in a data center is changing, and Ethernet frankly does not have a place in it. But there are dyed-in-the-wool diehards who cannot conceptualize a non-Ethernet world, and we are paying the price for it!

Forget the non-technical arguments for a while and let anyone prove that Ethernet is better in any of the following respects:

 

- usable bandwidth  (protocol efficiency)

- cost per 10G port

- energy per 10G port

- latency

- error resiliency at HW level

- cost of cable, connectors (washout since all use the same)

- efficiency in tunneling other protocols

- cost of switch IC

- error resiliency in lane failover
