Embedded Systems Conference
Breaking News
Re: Not your father's PCIe
antoanto   3/5/2014 7:42:31 PM
This is correct at the hardware level, but when you use sockets (e.g. RIONET), kernel latency kills your performance. That is why A3CUBE's approach bypasses the operating system for communication.

Re: Not your father's PCIe
tb100   3/5/2014 11:27:19 AM
It is hard to comment on A3Cube's latency claims, since their implementation is secret.

There's one latency and throughput number that I don't see being mentioned here. If you are transmitting small (<64 byte) frames, then the packet overhead adds significantly to latency and cuts into throughput, and the fact is that PCIe packets carry a bigger overhead than RapidIO packets. For small packets, there's no question which one is faster.
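To make the small-packet point concrete, here is a rough wire-efficiency calculation. The per-packet overhead byte counts are illustrative assumptions for the sake of the arithmetic, not exact figures from either specification:

```python
# Illustrative comparison of payload efficiency for small packets.
# The overhead byte counts are assumptions for illustration only,
# not exact values from the PCIe or RapidIO specifications.

def wire_efficiency(payload_bytes, overhead_bytes):
    """Fraction of bytes on the wire that are actual payload."""
    return payload_bytes / (payload_bytes + overhead_bytes)

PCIE_OVERHEAD = 24   # assumed: TLP header + seq num + LCRC + framing
SRIO_OVERHEAD = 12   # assumed: typical RapidIO per-packet overhead

for payload in (16, 64, 256):
    pcie = wire_efficiency(payload, PCIE_OVERHEAD)
    srio = wire_efficiency(payload, SRIO_OVERHEAD)
    print(f"{payload:4d} B payload: PCIe {pcie:.0%}  SRIO {srio:.0%}")
```

The gap between the two narrows as the payload grows, which is why the overhead argument matters mainly for small frames.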


Re: Not your father's PCIe
rob18767   3/4/2014 10:07:11 AM
A non-standard PCIe system may gain traction in specific markets. However, if you wish to make endpoint cards for off-the-shelf systems in standard form factors (ITX and ATX), then no, it won't.

PCI Express is going nowhere in this respect. Wishing it would 'go away' is a non-starter.

Re: Not your father's PCIe
PCIeFabricNow   2/28/2014 10:13:43 AM
My post on PCIe vs SRIO (or others) was not comparing the business side to the technical – I was explaining that PCIe was technically the best solution based on key metrics.  The fact is that PCIe provides a universal connection – this is not really open to reasonable debate – and that this offers advantages – technical advantages – that no other interconnect can offer.

PCIe provides the largest universe of components with direct connection, which offers a clear advantage in power, cost, and latency, and this leads to a potential performance edge as well.  And it allows the use of most devices – since they almost all have PCIe as a connection – so that you can build up a system that is tuned to your need with off-the-shelf components.  These are not business advantages, but technical advantages.  And it is these technical advantages that lead to the business success, not the other way around.

I would encourage people to go to the page that I noted before (www.plxtech.com/expressfabric) to see that the PCIe solution that I was referencing goes beyond a simple PCIe network.  It offers both DMA and RDMA, and in each case they are compatible with the vast number of applications that have been written for Ethernet and InfiniBand.  You benefit from the advantages I mentioned, and you can use the same applications.

And it allows sharing of I/O devices among multiple hosts with existing devices and drivers.  These features can all be had with a single, converged fabric.

Re: ARM Impact
betajet   2/27/2014 7:50:35 PM
I can think of two ARM SoCs with PCIe off the top of my head: Freescale i.MX6 and Xilinx Zynq.  There are probably others, but most ARM SoCs are designed for mobile devices so they don't need high-speed wired connectivity.  As more ARMs get designed for servers, I bet you'll see plenty of PCIe.

PCIe is quite common in the PowerPC space, such as Freescale PowerQUICC 3 and QorIQ, and AMCC (now Applied Micro).

Re: ARM Impact
antoanto   2/27/2014 7:29:24 PM
That would be perfect for us, because our architecture can be ported to RapidIO or Hypershare.

ARM Impact
TonyTib   2/27/2014 5:29:48 PM
What about the impact of ARM?  How many ARM MPUs have direct PCIe interfaces?  What happens if ARM vendors adopt SRIO?  (Probably not likely, but could happen).

PCIe has achieved its success primarily because of Intel's dominance.  If Intel loses its dominance (or, arguably, has already lost it outside of the traditional PC desktop/laptop world), then we could see changes.

Re: Not your father's PCIe
PCIeFabricNow   2/27/2014 4:43:19 PM
The discussion in this thread seems to have migrated off topic, since the original question pertained to the viability of a PCIe fabric.  But since this has turned into a discussion of the merits of SRIO, it makes sense to compare PCIe and SRIO for use as a general-purpose fabric.  And that comparison comes out significantly in favor of PCIe.

For purposes of full disclosure here, I am an advocate of using PCIe as a rack-level fabric, and a PLX employee.

The major advantage of PCIe as a fabric is that almost every device -- CPUs, NICs, HBAs, HCAs, FPGAs, you name it -- has a PCIe port as at least one of its connection points, meaning you can eliminate all of the bridging devices necessary in other fabric types.  This reduces power and cost, since you are eliminating a lot of bridges, and for that same reason it also reduces latency, so it seems obvious that this is the right approach.
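The bridge-elimination argument can be put in back-of-envelope form: a path's latency is the sum of the devices it traverses, so removing bridges removes terms from the sum. The per-hop figures below are purely illustrative assumptions, not measured values from any product:

```python
# Back-of-envelope hop comparison for a fabric path.
# Per-hop latencies are illustrative assumptions only.
NIC_BRIDGE_NS = 500    # assumed: Ethernet/InfiniBand adapter traversal
PCIE_SWITCH_NS = 150   # assumed: PCIe switch traversal

# Bridged fabric: host bridge -> switch -> target bridge
bridged_path = [NIC_BRIDGE_NS, PCIE_SWITCH_NS, NIC_BRIDGE_NS]
# Native PCIe fabric: devices connect straight to the switch
native_path = [PCIE_SWITCH_NS]

print(f"bridged: {sum(bridged_path)} ns, native PCIe: {sum(native_path)} ns")
```

Whatever the exact numbers, the structural point holds: every bridge removed is a latency (and power, and cost) term that simply disappears.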

There seems to be a series of assumptions here about how PCIe can be used as a fabric, and much of it is outdated.  A PCIe-based fabric that connects directly to existing devices can be constructed, and it doesn't need to use non-transparency (which seems to be an implicit assumption in this thread).   PLX is doing just that, and you can go to www.plxtech.com/expressfabric for a more in-depth explanation.

SRIO has the same drawbacks as Ethernet and InfiniBand in regards to needing additional bridges in most cases, since the devices that have a direct SRIO interface can be easily counted – and that's not many.  And SRIO doesn't even have the advantage of the nearly universal usage that Ethernet has, or the incumbency for HPCs that InfiniBand has.  So it has all of the disadvantages of the other alternatives, and none of their advantages.

This has nothing to do with the technical merits of SRIO at the SerDes or protocol level.  It is a well-defined, high-performance interconnect.  But it lost out to PCIe as the general-purpose connection back when it mattered, and the virtuous cycle that developed for PCIe made this a non-competition. You can argue about the speeds and feeds as much as you want, but as none other than the philosopher/actor Bill Murray once said, "It just doesn't matter."

Re: Not your father's PCIe
antoanto   2/27/2014 4:41:11 PM
The article's author got the latency number wrong: we have a memory-to-memory latency of about 750 nanoseconds (including software latency).

The goal of the system is to use PCIe's memory-mapping capability across the fabric to avoid any protocol encapsulation (such as Ethernet), keeping latency as low as possible and bypassing the operating system's kernel stack.

x86 servers have no native RapidIO interface, so a RapidIO bridge must plug into a PCIe slot; the same is true for InfiniBand. So in an x86 server, the minimum latency you can experience is the root complex latency. That is why we extend PCIe's memory mapping.

RONNIEE Express is a low-latency interconnect, or data plane, that uses shared memory mapping for communication between nodes, extending the memory mapping used by PCIe.

RONNIEE Express implements support for a low-latency 3D torus topology with flow control and traffic congestion management.

The most interesting thing about this interconnect is that it permits implementing a TCP socket in memory that uses memory mapping for communication, bypassing the kernel, so unmodified TCP applications run with 1-2 microsecond latency.

The latency of our memory-mapped socket is 10 times lower than RapidIO's RIONET, so our memory approach is really powerful. And it is not limited to PCIe; it opens a new way to use distributed memory for communication.
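The general memory-mapped communication idea can be sketched as follows. This is a toy single-process illustration of the technique under my own assumptions (flag word, length word, payload region), not A3CUBE's implementation:

```python
# Minimal sketch of communication over a shared memory mapping:
# two endpoints exchange a message by polling a flag word in the
# mapped region, with no per-message system call. Layout (flag,
# length, data offsets) is an assumption for illustration; this
# is not A3CUBE's actual protocol.
import mmap
import struct

BUF_SIZE = 4096
FLAG_OFF, LEN_OFF, DATA_OFF = 0, 4, 8

# In a real fabric this region would be a PCIe BAR mapped into
# both nodes' address spaces; here it is an anonymous mapping.
region = mmap.mmap(-1, BUF_SIZE)

def send(buf, payload: bytes) -> None:
    """Write the payload, then raise the 'message ready' flag."""
    buf[DATA_OFF:DATA_OFF + len(payload)] = payload
    buf[LEN_OFF:LEN_OFF + 4] = struct.pack("<I", len(payload))
    buf[FLAG_OFF:FLAG_OFF + 4] = struct.pack("<I", 1)  # ready

def recv(buf) -> bytes:
    """Poll the flag, read the payload, and clear the flag."""
    while struct.unpack("<I", buf[FLAG_OFF:FLAG_OFF + 4])[0] == 0:
        pass  # busy-wait; a real implementation would back off
    n = struct.unpack("<I", buf[LEN_OFF:LEN_OFF + 4])[0]
    payload = bytes(buf[DATA_OFF:DATA_OFF + n])
    buf[FLAG_OFF:FLAG_OFF + 4] = struct.pack("<I", 0)  # consumed
    return payload

send(region, b"hello over shared memory")
print(recv(region))
```

The point of the pattern is that once the mapping exists, a message costs only loads and stores; the kernel is involved only in setting up the mapping, not per message.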


To understand this better, watch this video: http://www.youtube.com/watch?v=YIGKks78Cq8


ANON1245102408113   2/27/2014 4:30:47 PM
GSMD, we would love a chance to talk with you, go over all of this, and dive deeper into the technology so that you have the right understanding of what it is. Where can we contact you?
