
Startup's Cluster Trumps InfiniBand

2/25/2014 03:38 PM EST
15 comments
rick merritt
User Rank
Author
Not your father's PCIe
rick merritt   2/25/2014 5:32:10 PM
Can a non-standard PCIe system interconnect gain much traction?

rob18767
User Rank
Manager
Re: Not your father's PCIe
rob18767   3/4/2014 10:07:11 AM
A non-standard PCIe system may gain traction in specific markets. However, if you wish to make end-point cards for off-the-shelf systems in standard form factors (ITX and ATX), then no, it won't.

PCI Express is going nowhere in this respect. Wishing it would 'go away' is a non-starter.

interconnect Guy
User Rank
Manager
Re: Not your father's PCIe
interconnect Guy   2/26/2014 5:58:32 AM
Sadly, RIO (RapidIO) is several signaling generations behind, at 10 Gb/s while others are at 27 Gb/s (per pair), and RapidIO switches are single-sourced from IDT.

interconnect Guy
User Rank
Manager
Re: Not your father's PCIe
interconnect Guy   2/26/2014 7:54:53 AM
 

Per the RapidIO website:

"The 10xN specification, backward compatible with RapidIO Gen1 and Gen2 systems, supports 10.3125 Gbaud per serial lane"

The 25 Gbaud story is:


"RapidIO specifications are under development to support 25...."


Other interconnects are shipping 25 Gbaud now. Yes, IP has been available for years, but never full cores for RapidIO (just thin PHY layers); complete cores are available for FPGAs for interconnects like PCIe and Ethernet.


You may have access to information about roadmaps for Gen 3 RapidIO parts, but the reality is that InfiniBand, PCIe, and Ethernet are shipping these speeds, and have been for some time in volume.


In niche applications second sourcing may not be an issue, but in volume it is.


Freescale and TI support both PCIe and Ethernet (and the older sRIO Gen2).


Per the RapidIO product showcase, the last Freescale product for sRIO was in 2008.
 

rick merritt
User Rank
Author
Re: Not your father's PCIe
rick merritt   2/26/2014 10:44:35 AM
@GSMD: Thanks for sharing your expertise.

As I understand it, A3Cube is claiming advances in latency, not necessarily throughput. Their goal is to enable a single system image over a variant of PCIe.

What do you think about their latency claims?

tb100
User Rank
CEO
Re: Not your father's PCIe
tb100   3/5/2014 11:27:19 AM
It is hard to comment on A3Cube's latency claims, since their implementation is secret.

There's one latency and throughput number that I don't see being mentioned here. If you are transmitting small (<64-byte) frames, then the packet overhead adds significantly to the latency and throughput, and the fact is that PCIe packets have a bigger overhead than RapidIO packets. For small packets, there's no question which one is faster.
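To make the small-frame point concrete, the arithmetic below computes wire efficiency for a 64-byte payload under assumed per-packet overheads of roughly 24 bytes for a PCIe TLP and roughly 12 bytes for a RapidIO packet. These byte counts are illustrative assumptions, not figures taken from either specification, but they show how a fixed per-packet overhead dominates at small payload sizes.

/* Rough wire-efficiency arithmetic for small frames.
 * The overhead figures are illustrative assumptions, not spec-derived:
 * ~24 bytes per PCIe TLP and ~12 bytes per RapidIO packet. */
#include <stdio.h>

static double efficiency(double payload, double overhead)
{
    return payload / (payload + overhead);
}

int main(void)
{
    const double payload = 64.0;        /* bytes per small frame            */
    const double pcie_overhead = 24.0;  /* assumed PCIe per-TLP overhead    */
    const double srio_overhead = 12.0;  /* assumed RapidIO per-packet bytes */

    printf("PCIe    efficiency at 64B: %.0f%%\n",
           100.0 * efficiency(payload, pcie_overhead));
    printf("RapidIO efficiency at 64B: %.0f%%\n",
           100.0 * efficiency(payload, srio_overhead));
    return 0;
}

With these assumed numbers the 64-byte payload uses about 73% of the wire for PCIe versus about 84% for RapidIO, which is the effect the comment above is pointing to.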

 

antoanto
User Rank
Rookie
Re: Not your father's PCIe
antoanto   3/5/2014 7:42:31 PM
This is correct at the hardware level, but when you use sockets (RIONet) the kernel latency kills your performance, so A3Cube's approach is to bypass the operating system for communication.
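As a rough illustration of the kernel-path cost being referred to, the toy program below ping-pongs one byte through a Unix-domain socketpair and reports the average round trip. The socketpair is only a stand-in for a RIONet-style socket and the absolute number is machine-dependent; the point is that every send/receive pays a system-call and kernel-stack toll that a memory-mapped user-space path avoids.

/* Toy measurement of the kernel socket path: one-byte ping-pong
 * over a local socketpair, average round trip reported in ns. */
#include <stdio.h>
#include <time.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/socket.h>

int main(void)
{
    int sv[2];
    if (socketpair(AF_UNIX, SOCK_STREAM, 0, sv) < 0)
        return 1;

    if (fork() == 0) {                 /* child: echo each byte back */
        char b;
        close(sv[0]);
        while (read(sv[1], &b, 1) == 1)
            write(sv[1], &b, 1);
        _exit(0);
    }
    close(sv[1]);

    const int iters = 100000;
    char b = 'x';
    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (int i = 0; i < iters; i++) {  /* each pass crosses the kernel twice */
        write(sv[0], &b, 1);
        read(sv[0], &b, 1);
    }
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double ns = (t1.tv_sec - t0.tv_sec) * 1e9 + (t1.tv_nsec - t0.tv_nsec);
    printf("kernel socket round trip: %.0f ns\n", ns / iters);
    close(sv[0]);
    return 0;
}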

rick merritt
User Rank
Author
Re: Not your father's PCIe
rick merritt   2/26/2014 11:09:53 PM
@GSMD: So A3Cube's latency of 100 nanoseconds is the same as SRIO--so why do they bother? Game over for them?

ANON1245102408113
User Rank
Rookie
GSMD
ANON1245102408113   2/27/2014 4:30:47 PM
GSMD, We would love a chance to talk with you to go over all of this and dive deeper into the technology so you have the right understanding of what it is. Where can we contact you?

antoanto
User Rank
Rookie
Re: Not your father's PCIe
antoanto   2/27/2014 4:41:11 PM
The article got the latency number wrong: we have a memory-to-memory latency of about 750 nanoseconds (including software latency).

The goal of the system is to extend PCIe's memory mapping across the fabric so there is no protocol encapsulation (such as Ethernet), keeping latency as low as possible and bypassing the operating system kernel stack.

In an x86 server there is no native RapidIO interface, so a RapidIO bridge must plug into a PCIe slot, and the same is true for InfiniBand; the minimum latency you can experience in an x86 server is therefore the root complex latency. That is why we extend the memory mapping of PCIe.

RONNIEE Express is a low-latency interconnect, or data plane, that uses shared memory mapping for communication between nodes, extending the memory mapping used by PCIe.

RONNIEE Express implements support for a low-latency 3D torus topology with flow control and traffic congestion management.

The most interesting aspect of this interconnect is that it permits implementing a TCP socket in memory that uses the memory mapping for communication, bypassing the kernel, so unmodified TCP applications run with 1-2 microsecond latency.

The latency of our memory-mapped socket is 10 times lower than RapidIO RIONet, so our memory approach is really powerful; it is not limited to PCIe, but opens a new way to use distributed memory for communication.

 

To understand it better, watch this video: http://www.youtube.com/watch?v=YIGKks78Cq8
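For readers unfamiliar with the memory-mapping approach being described, here is a minimal, generic sketch of user-space memory-mapped I/O over PCIe on Linux: it maps a device BAR through the sysfs resourceN file and performs plain stores, so no kernel networking stack sits on the data path. The device path is a placeholder and this is not A3Cube's actual RONNIEE Express code, only the general mechanism.

/* Generic sketch: map a PCIe BAR into user space via sysfs and
 * write to it with ordinary stores (no syscalls on the data path).
 * The device path is a placeholder; requires suitable privileges. */
#include <stdio.h>
#include <stdint.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/mman.h>

#define BAR_PATH "/sys/bus/pci/devices/0000:01:00.0/resource0" /* placeholder */
#define WIN_SIZE 4096

int main(void)
{
    int fd = open(BAR_PATH, O_RDWR | O_SYNC);
    if (fd < 0) { perror("open"); return 1; }

    /* After the mapping, writes land on the device window directly
     * from user space. */
    volatile uint32_t *win = mmap(NULL, WIN_SIZE, PROT_READ | PROT_WRITE,
                                  MAP_SHARED, fd, 0);
    if (win == MAP_FAILED) { perror("mmap"); return 1; }

    win[0] = 0xdeadbeef;   /* plain store: no syscall, no kernel stack */

    munmap((void *)win, WIN_SIZE);
    close(fd);
    return 0;
}

In a fabric of the kind described above, the mapped window would correspond to memory exposed by a remote node, which is what lets the data path skip the kernel entirely.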

 

PCIeFabricNow
User Rank
Rookie
Re: Not your father's PCIe
PCIeFabricNow   2/27/2014 4:43:19 PM
The discussion in this thread seems to have migrated off topic, since the original question pertained to the viability of a PCIe fabric.  But since this has turned into a discussion of the merits of SRIO, it makes sense to compare PCIe and SRIO for use as a general-purpose fabric.  And that comparison comes out significantly in favor of PCIe.

For purposes of full disclosure here, I am an advocate of using PCIe as a rack-level fabric, and a PLX employee.

The major advantage of PCIe as a fabric is that almost every device -- CPUs, NICs, HBAs, HCAs, FPGAs, you name it -- has a PCIe port as at least one of its connection points, meaning you can eliminate all of the bridging devices necessary in other fabric types.  This reduces power and cost, since you are eliminating a lot of bridges, and for that same reason it also reduces latency, so it seems obvious that this is the right approach.

There seems to be a series of assumptions here about how PCIe can be used as a fabric, and much of it is outdated.  A PCIe-based fabric that connects directly to existing devices can be constructed, and it doesn't need to use non-transparency (which seems to be an implicit assumption in this thread).   PLX is doing just that, and you can go to www.plxtech.com/expressfabric for a more in-depth explanation.

SRIO has the same drawbacks as Ethernet and InfiniBand in regards to needing additional bridges in most cases, since the devices that have a direct SRIO interface can be easily counted – and that's not many.  And SRIO doesn't even have the advantage of the nearly universal usage that Ethernet has, or the incumbency for HPCs that InfiniBand has.  So it has all of the disadvantages of the other alternatives, and none of their advantages.

This has nothing to do with the technical merits of SRIO at the SerDes or protocol level.  It is a well-defined, high-performance interconnect.  But it lost out to PCIe as the general-purpose connection back when it mattered, and the virtuous cycle that developed for PCIe made this a non-competition. You can argue about the speeds and feeds as much as you want, but as none other than the philosopher/actor Bill Murray once said, "It just doesn't matter."

TonyTib
User Rank
CEO
ARM Impact
TonyTib   2/27/2014 5:29:48 PM
What about the impact of ARM?  How many ARM MPUs have direct PCIe interfaces?  What happens if ARM vendors adopt SRIO?  (Probably not likely, but could happen).


PCIe has achieved its success primarily because of Intel's dominance. If Intel loses its dominance (or, arguably, has already lost it outside of the traditional PC desktop/laptop world), then we could see changes.

antoanto
User Rank
Rookie
Re: ARM Impact
antoanto   2/27/2014 7:29:24 PM
That would be perfect, because our architecture can be ported to RapidIO or Hypershare.

betajet
User Rank
CEO
Re: ARM Impact
betajet   2/27/2014 7:50:35 PM
I can think of two ARM SoCs with PCIe off the top of my head: Freescale i.MX6 and Xilinx Zynq.  There are probably others, but most ARM SoCs are designed for mobile devices so they don't need high-speed wired connectivity.  As more ARMs get designed for servers, I bet you'll see plenty of PCIe.

PCIe is quite common in the PowerPC space, such as Freescale PowerQUICC 3 and QorIQ, and AMCC (now Applied Micro).

PCIeFabricNow
User Rank
Rookie
Re: Not your father's PCIe
PCIeFabricNow   2/28/2014 10:13:43 AM
My post on PCIe vs SRIO (or others) was not comparing the business side to the technical – I was explaining that PCIe was technically the best solution based on key metrics.  The fact is that PCIe provides a universal connection – this is not really open to reasonable debate – and that this offers advantages – technical advantages – that no other interconnect can offer.

PCIe provides the largest universe of components with direct connection, which offers a clear advantage in power, cost, and latency, and this leads to a potential performance edge as well.  And it allows the use of most devices – since they almost all have PCIe as a connection – so that you can build up a system that is tuned to your need with off-the-shelf components.  These are not business advantages, but technical advantages.  And it is these technical advantages that lead to the business success, not the other way around.

I would encourage people to go to the page that I noted before (www.plxtech.com/expressfabric) to see that the PCIe solution that I was referencing goes beyond a simple PCIe network.  It offers both DMA and RDMA, and in each case they are compatible with the vast number of applications that have been written for Ethernet and InfiniBand.  You benefit from the advantages I mentioned, and you can use the same applications.

And it allows sharing of I/O devices among multiple hosts with existing devices and drivers.  These features can all be had with a single, converged fabric.
