To add to your point: there is a well-known and widening divergence between Moore's Law and the progression of SerDes technology. We keep packing more transistors into processors, but we can't keep up with feeding them data. Moreover, applications that were formerly "uniprocessor" now span multiple processors. Intel has made on the order of $500M in interconnect acquisitions because it sees the interconnect as a limiter to extracting the full performance of its processors in data centers and high-performance computing. RapidIO simply takes mainstream SerDes technology and uses the "highway" more efficiently, treating each SerDes lane like a well-used high-occupancy lane that can connect any exit with any exit. Anyway, you are correct that there is a big opportunity for RapidIO in the data center and supercomputing, and the innovators in the market are already proving it, winning with today's technology from the embedded and wireless markets.
As the interconnect of choice within telecommunications systems, we believe RapidIO can be used in many more, and more diverse, applications, especially by leveraging existing RapidIO-enabled compute and switching modules in data center and networking products. Because RapidIO answers the need for low jitter, scalability, and robustness in a variety of applications, our drive is to prove this within the RTA DDCN task group with reference designs.
A key value proposition of RapidIO is low latency. RapidIO switches typically offer 100 ns packet latency, and end-to-end latencies for communications are typically under 1 microsecond with very low jitter. Typical Ethernet packet latencies are measured in tens of microseconds with very high jitter. Embedded applications have long valued the deterministic low latency (as well as the reliability) of RapidIO. More mainstream server applications are also starting to recognize the value of low latency in the services they provide; witness the investment in flash to reduce storage latency. While RapidIO will never replace Ethernet as the LAN or WAN for computing systems, it can still play a valuable role by providing an open and well-supported interconnect for peered processors. Neither PCI Express nor Ethernet does this very well, and all other approaches to the problem are proprietary.
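To make those figures concrete, here is a back-of-the-envelope latency budget. The ~100 ns per-switch figure comes from the comment above; the hop count and per-link wire/SerDes delay are illustrative assumptions, not measured values.

```python
# Rough one-way latency budget for a switched RapidIO fabric.
# SWITCH_LATENCY_NS is the ~100 ns figure quoted above; the other
# numbers are assumptions for illustration only.

SWITCH_LATENCY_NS = 100   # per-hop switch latency (from the text)
WIRE_DELAY_NS = 25        # assumed propagation + SerDes delay per link
HOPS = 3                  # assumed number of switch hops between endpoints

def end_to_end_ns(hops: int) -> int:
    """Total one-way latency: each hop adds switch plus link delay."""
    return hops * (SWITCH_LATENCY_NS + WIRE_DELAY_NS)

print(end_to_end_ns(HOPS), "ns")  # 375 ns, well under the ~1 us cited
```

Even with a few extra hops, the total stays an order of magnitude below typical Ethernet packet latencies in the tens of microseconds.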
Sam, it is good that you bring up this angle. Embedded systems have had a lot of any-to-any processing for years. Yesterday I was discussing this with a data center hardware architect, and he was adamant that the networking function (Ethernet) needed to be brought down to the compute node, because all the functions were limited to one node and the nodes didn't need to talk to each other. That might apply in his data center, but it probably is not the case for most compute- or analytics-oriented servers.