1. SRIO 3.0, which is the current standard, specifies lane speeds of 10 Gbit/s and 25 Gbit/s and a maximum of 16 lanes per port. You should see parts this year. The lower speed is faster than PCIe and the same as 10GbE. SRIO basically tracks Ethernet SERDES standards and is hence assured of market-competitive lane speeds; the higher speed is competitive with other standards. So I think that takes care of your obsolete-line-speed comment.
The encoding is also 64b/67b, which makes it more reliable than PCIe or 10GbE on longer PCB traces.
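To make the line-rate and encoding-overhead point concrete, here is a rough back-of-the-envelope comparison. This is only a sketch: the lane rates are the nominal figures for each standard, and packet/framing overhead above the line code is ignored.

```python
# Back-of-the-envelope per-lane payload rates for common SERDES line codes.
# Only line-code overhead is counted; protocol framing overhead is ignored.
# Lane rates (Gbaud) are nominal per-standard figures, used illustratively.

ENCODINGS = {
    "PCIe Gen2 (8b/10b)":    (5.0,     8 / 10),
    "PCIe Gen3 (128b/130b)": (8.0,     128 / 130),
    "10GbE (64b/66b)":       (10.3125, 64 / 66),
    "SRIO 3.0 (64b/67b)":    (10.3125, 64 / 67),
}

def payload_gbps(lane_rate_gbaud, efficiency, lanes=1):
    """Effective payload bit rate for a port with the given lane count."""
    return lane_rate_gbaud * efficiency * lanes

for name, (rate, eff) in ENCODINGS.items():
    print(f"{name:24s} 1 lane: {payload_gbps(rate, eff):6.3f} Gbit/s, "
          f"16 lanes: {payload_gbps(rate, eff, lanes=16):7.2f} Gbit/s")
```

The point of the arithmetic: the extra bit in 64b/67b costs only about 1.5% of raw efficiency versus 64b/66b, and that bit is what bounds running disparity (DC balance), which is what helps signal integrity on longer traces.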
Speeds higher than 25G are problematic on PCBs, and it is not clear how the industry is going to proceed. We are going optical beyond 25G for longer lengths, but may continue to use electrical up to 50G for ultra-short lengths.
2. All interconnects other than GigE have sourcing issues. PCIe switches are also available only from IDT and PLX, and IDT nearly acquired PLX last year. So I am not sure what your point is, since this is a problem for all interconnects, not just SRIO.
I had also pointed out that my institution, IIT-Madras, is releasing a commercial-grade SRIO 3.0 IP (10G first and then 25G) under a BSD license. We will also jointly release with Xilinx dev kits using Xilinx 10G/25G SERDES, so that users have a ready-to-use FPGA platform. The open source kit will contain all digital portions, including the PCS/PMA components. It will take some time, but the non-PHY components are already online (bitbucket.org/casl). I would like to release the SERDES as well, but that involves coordinating with foundries, so we will use 3rd party SERDES for now.
The kit will also include a complete verification IP, again completely free under a BSD license. More than 20 man-years of effort will go into this.
Commercial entities have already started evaluating this IP.
That should take care of any second-sourcing concern, since no other interconnect will have this wide an availability.
In any case, you are wrong about IDT being the only IP source. Xilinx, Altera, Praesum, Mobiveil, and possibly others provide FPGA and silicon IP.
PCIe as a fabric is a lost cause! I wish efforts to make it one would stop. It is a good interconnect for what it was intended to do: connect master devices to I/O subsystems. Of course, its technical deficiencies will not stand in the way of its becoming a success, since marketing trumps technology any day!
Full disclosure: I am a member of the RapidIO trade association but have no commercial interests since I work for a University. Our SRIO controller is also open source.
Hence I would recommend SRIO, which works today; you can also buy PCIe HBAs to do the same. Of course, using PCIe increases latency, and only Freescale and TI CPUs have native SRIO controllers. IB offers similar performance and latency but is pretty expensive, since vendors are few.
And there is no push to make it a CPU-to-CPU interconnect.