SerDes signals sent differentially across two nets have a fundamental communications limit. But, if my understanding of this technology is correct, one of the limiting factors is that you are only ever sending differential signals across those nets. You are limiting yourself to an encoding that isn't necessarily very efficient, yet it is the only one all of us have been using up until now.
Instead of considering the pair a single channel, consider the two nets as two channels that influence each other. One way of taking advantage of the crosstalk between the two channels is to send only differential signals. But is that the most efficient approach?
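To make that picture concrete, here's a minimal sketch (NumPy) of the "two coupled nets" view. The coupling coefficient and symbol values are hypothetical, purely for illustration, not anything from the actual device: differential signaling uses only one dimension of the 2-D channel, while treating the pair as a small MIMO channel lets you send two independent symbols and separate them at the receiver.

    # Hypothetical 2x2 channel: each net drives its own receiver plus some
    # crosstalk into the neighbouring net.
    import numpy as np

    k = 0.3                       # assumed crosstalk coefficient
    H = np.array([[1.0, k],
                  [k, 1.0]])

    # Differential signalling: one data symbol d drives the pair as (+d, -d),
    # so only the "difference" dimension of the two-net channel carries data.
    d = 1.0
    tx_diff = np.array([+d, -d])
    rx_diff = H @ tx_diff
    print("differential rx:", rx_diff,
          "-> recovered:", (rx_diff[0] - rx_diff[1]) / 2)   # (1 - k) * d

    # MIMO view: two independent symbols, one per net; the receiver inverts H
    # (zero-forcing) and recovers both, i.e. two dimensions instead of one.
    tx_mimo = np.array([0.7, -0.4])
    rx_mimo = H @ tx_mimo
    print("MIMO recovered:", np.linalg.solve(H, rx_mimo))

Of course the zero-forcing inverse here ignores noise enhancement and all the real signal-integrity effects; it's only meant to show the extra degree of freedom you give up by restricting yourself to differential encoding.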
Of course, if you send non-differential signals across these nets, you'd better have a good understanding of the signal integrity effects of the alternatively encoded signals, and of how pre/post filtering and equalization affect the error rate. It looks like this is the meat of their invention.
Had to laugh at the request that they open source it all. Yeah, and I wish Dell would send me free PCs, and I'll take a couple of Teslas too, while we're asking.
There isn't enough information here to make any predictions. What is described is a common serializer-deserializer (SERDES) approach. Whatever special sauce there might be isn't hinted at.
The typical case, though, is that the more bits per second you send down the link, within a given channel width, the less robust the link becomes. For example, existing xDSL links are very much distance dependent. If you restrict the bit rate to 6.1 Mb/s, you can go up to 4 km over the copper twisted pair. If you want 12 Mb/s, now you're limited to 1.5 km. And so on. Using more twisted pairs in parallel, so that slower individual pairs aggregate to a higher total bit rate, also allows longer distances, at the expense of needing more pairs (including having to worry about synchronization among them).
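As a toy illustration of that rate/distance trade-off and of pair bonding, here's a short sketch. The per-pair figures are just the two points quoted above; the linear interpolation and the bonding/synchronization overhead are made-up assumptions, not measured behavior.

    # (distance_km, per_pair_rate_mbps) points quoted in the post
    RATE_POINTS = [(1.5, 12.0), (4.0, 6.1)]

    def per_pair_rate(distance_km):
        """Crude linear interpolation between the two quoted points."""
        (d0, r0), (d1, r1) = RATE_POINTS
        if distance_km <= d0:
            return r0
        if distance_km >= d1:
            return r1
        frac = (distance_km - d0) / (d1 - d0)
        return r0 + frac * (r1 - r0)

    def bonded_rate(distance_km, n_pairs, sync_overhead=0.02):
        """Aggregate rate over n bonded pairs, minus an assumed
        synchronization/bonding overhead."""
        return n_pairs * per_pair_rate(distance_km) * (1.0 - sync_overhead)

    # Four slow pairs at 4 km instead of one fast pair at 1.5 km:
    print(bonded_rate(4.0, n_pairs=4))   # ~23.9 Mb/s aggregate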
The only meaningful question to ask is, once again, does this SERDES promise to get closer to the Shannon limit than previous techniques? Or does it promise to violate Shannon's equation? That's always the bottom line. Modern coding techniques can come within a couple of dB of the Shannon limit. That's the question that needs to be addressed.
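For anyone who wants to plug in numbers, here is a minimal sketch of that "gap to Shannon" question: the AWGN capacity C = B * log2(1 + SNR), and the SNR margin a hypothetical coded link has over the minimum SNR Shannon requires for its achieved rate. The bandwidth, SNR, and rate below are invented for illustration only.

    import math

    def shannon_capacity_bps(bandwidth_hz, snr_db):
        """C = B * log2(1 + SNR) for an AWGN channel."""
        snr_lin = 10 ** (snr_db / 10.0)
        return bandwidth_hz * math.log2(1.0 + snr_lin)

    def snr_gap_db(bandwidth_hz, achieved_bps, snr_db):
        """SNR (dB) actually available minus the minimum SNR Shannon
        requires to support the achieved rate -- the gap to capacity."""
        snr_needed = 2 ** (achieved_bps / bandwidth_hz) - 1.0
        return snr_db - 10.0 * math.log10(snr_needed)

    # Hypothetical numbers, purely for illustration:
    B, snr = 1.1e6, 35.0                     # ~1.1 MHz, 35 dB SNR
    print(f"capacity ~ {shannon_capacity_bps(B, snr)/1e6:.1f} Mb/s")
    print(f"gap at 10 Mb/s ~ {snr_gap_db(B, 10e6, snr):.1f} dB")

Any claimed improvement has to show up as a smaller gap in a calculation like this one; a scheme that claims a rate above the capacity line is claiming to violate Shannon.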
We'd need a much better understanding of what is being done, whether there are any limitations on the "ensemble", and what (if any) assumptions they require about the data. In addition, I think we'd want to see correctly operating IP as a requirement before deciding to use it.