This new coding technique uses the bandwidth more efficiently than differential signaling. The closest way of describing it is that Kandou is using spatial coding, whereas traditional FEC uses temporal coding. That is, Kandou introduces dependencies across the wires of an interface, whereas channel coding introduces dependencies across time (from one clock cycle to the next). While temporal coding would be possible as well, it comes at the price of higher latency. Spatial coding, when properly designed and implemented, has close to no latency.
Back to the coding part: the goal is to pack more information onto the wires than is possible with differential signaling, while retaining the properties of differential signaling. The things that make differential signaling robust are common-mode resistance (the receiver rejects noise of equal phase and amplitude on the wires, and the signals on the wires sum to zero), absence of simultaneous switching output noise (the current drawn from the source does not depend on the particular bits sent), reference-less receivers, and low EMI. All of these can be captured as mathematical conditions on the codebook used by the communication system, meaning the set of all values that are simultaneously transmitted on the wires. This part requires mathematical analysis. But what is really important and unique about Kandou's coding techniques is that the concept of an efficient detector is embedded in the definition of the code itself, and is implemented by a (one-shot) network of generalized comparators -- components that are the bread and butter of any SerDes and can be robustly implemented in any CMOS technology. This puts the design and analysis of the coding system squarely in the mathematical domain, without any compromise in implementation or efficiency.
The comparators reject common-mode noise and are reference-less. The signals put on the wires sum to zero and draw the same current regardless of which particular codeword is transmitted. Moreover, correct design of the codebook reduces EMI compared to differential signaling over the same number of wires at equal throughput.
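These codebook conditions are easy to check mechanically. The sketch below uses a hypothetical toy code (all permutations of +1, 0, -1 over three wires -- an illustration of the idea, not Kandou's actual codebook) and verifies two of the stated properties: every codeword sums to zero, and detection by pairwise comparators is unaffected by a common-mode offset added to all wires.

```python
from itertools import permutations

# Hypothetical 3-wire codebook: all permutations of (+1, 0, -1).
# Illustrative toy code only, not Kandou's actual codebook.
codebook = sorted(set(permutations((1, 0, -1))))

# Condition 1: every codeword sums to zero (no net current, low EMI).
assert all(sum(cw) == 0 for cw in codebook)

# Detection via pairwise comparators: each comparator outputs the sign
# of the difference between two wires.  A shift applied equally to all
# wires drops out of every difference.
def detect(wires):
    n = len(wires)
    return tuple((wires[i] > wires[j]) - (wires[i] < wires[j])
                 for i in range(n) for j in range(i + 1, n))

# Condition 2: comparator outputs are invariant under common-mode noise.
offset = 0.37
for cw in codebook:
    shifted = [v + offset for v in cw]
    assert detect(cw) == detect(shifted)

print(len(codebook), "codewords, all zero-sum, common-mode rejected")
```

Six codewords over three wires carry log2(6) ≈ 2.58 bits per transfer, versus 1 bit for a differential pair on two wires, which is the "more information on the wires" point in a nutshell.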
Regarding the Shannon limit: I am not 100% sure which noise types to take into account to compute the mutual information, and from that the Shannon capacity, since Shannon's definitions don't account for the efficiency of the implementation or the power the detector uses. Channels in this environment are largely deterministic, so given enough processing power they can be inverted (i.e., fully equalized). This leads to a very large throughput, and the capacity can be calculated easily. In practice, however, that inversion is hardly feasible, so Shannon bounds obtained this way are far better than what is actually achievable.
I think the only "differential" involved here is a differential signal on two WIRES, not two nets. Here's the quote:
"I asked him what's a differential pair, and he said it's a way of using complementary signals across two wires. I thought this was so inefficient," says Shokrollahi, a professor at the Swiss Polytechnic in Lausanne.
Differential signals are used to reduce sensitivity to interference along the link. For example, a noise spike creates a short-term voltage spike on a wire. The same voltage glitch will occur on each wire of the twisted pair. But if the twisted pair is used as a differential pair, where each wire carries the same signal as the other, but 180 degrees out of phase, then that "common mode" noise spike will be cancelled. The receiver won't detect any signal that is in phase on the two wires of the pair. The receiver becomes more immune to ambient EMI.
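The cancellation described above can be shown in a few lines. This is a minimal numerical sketch (idealized: the noise couples identically onto both wires, which real pairs only approximate): the signal is sent as complementary voltages, the same noise lands on both wires, and the receiver's subtraction removes it exactly.

```python
import random

# Toy demonstration of common-mode rejection on a differential pair.
# A bit s is sent as (+s, -s); identical noise n couples onto both
# wires; the receiver takes half the difference, which cancels n.
random.seed(0)
for _ in range(5):
    s = random.choice([-1.0, 1.0])   # transmitted bit as +/-1 V
    n = random.uniform(-0.5, 0.5)    # common-mode noise spike
    wire_p = s + n                   # both wires see the same n
    wire_n = -s + n
    received = (wire_p - wire_n) / 2 # differential receiver
    assert abs(received - s) < 1e-12 # noise fully cancelled
print("common-mode noise rejected")
```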
Any number of modulation schemes already exist that provide greater bandwidth at the expense of higher vulnerability to noise. It's always a balancing act. So there's nothing unique in principle described in the article. The only question should be: how much closer to the Shannon limit is this getting us?
SerDes signals sent differentially across two nets will have a communications limit. But, if my understanding of this technology is correct, then one of the limiting factors is that you are only sending differential signals across these nets. You are limiting yourself to an encoding that isn't necessarily very efficient, yet it is the only one all of us have been using up till now.
Instead of considering it one channel, consider the two nets as two channels that influence each other. One way of taking advantage of the crosstalk between the two channels is to only send differential signals. But is that the most efficient approach?
Of course if you send non-differential signals across these nets, you better have a good understanding of the signal integrity effects of the alternately encoded signals, and how pre/post filtering and equalization affects the error rate. It looks like this is the meat of their invention.
Had to laugh at the request that they open source it all. Yeah, and I wish Dell would send me free PCs, and I'll take a couple of Teslas too, while we're asking.
There isn't enough information here to make any predictions. What is described is a common serializer-deserializer (SERDES) approach. Whatever special sauce there might be isn't hinted at.
The typical case, though, is that the more bits per second you send down the link, within a given channel width, the less robust the link becomes. So for example, existing xDSL links are very much distance dependent. If you restrict the bit rate to 6.1 Mb/s, you can go up to 4 km over the copper twisted pair. If you want 12 Mb/s, now you're limited to 1.5 km. And so on. Using more copper twisted pairs in parallel, to create slower individual pairs which aggregate to a higher total bit rate, also allows longer distance over the copper cable, at the expense of needing more twisted pairs (including having to worry about synchronization among the twisted pairs).
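The rate-versus-distance trade-off follows directly from the Shannon-Hartley formula: a longer line attenuates the signal more, lowering the received SNR and with it the achievable rate. A quick sketch (the bandwidth and SNR figures below are illustrative stand-ins, not measured xDSL parameters):

```python
import math

def awgn_capacity_bps(bandwidth_hz, snr_db):
    """Shannon-Hartley capacity C = B * log2(1 + SNR) for an AWGN channel."""
    snr = 10 ** (snr_db / 10)
    return bandwidth_hz * math.log2(1 + snr)

# Illustrative numbers only: as line length grows, attenuation lowers
# the received SNR, and the achievable rate drops with it.
bw = 1.1e6  # roughly an ADSL-class downstream band, ~1.1 MHz
for length_km, snr_db in [(1.5, 35), (3.0, 20), (4.0, 12)]:
    rate = awgn_capacity_bps(bw, snr_db)
    print(f"{length_km} km @ {snr_db} dB SNR -> {rate / 1e6:.1f} Mb/s")
```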
The only meaningful question to ask is, once again: does this SERDES promise to get closer to the Shannon limit than previous techniques? Or does it promise to violate Shannon's equation? That's always the bottom line. Modern coding techniques can get within a couple of dB of the Shannon limit. That's the question that needs to be addressed.
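For reference, "the Shannon limit" here has a concrete number attached: at capacity the required Eb/N0 is bounded by (2^r - 1)/r for spectral efficiency r bits/s/Hz, approaching ln 2 (about -1.59 dB) as r goes to zero. A short worked computation:

```python
import math

# Minimum Eb/N0 at capacity for spectral efficiency r (bits/s/Hz):
# Eb/N0 >= (2^r - 1) / r.  As r -> 0 this tends to ln(2), the
# "ultimate" Shannon limit of about -1.59 dB; modern codes (LDPC,
# turbo) operate within a dB or two of these bounds.
def min_ebn0_db(r):
    return 10 * math.log10((2 ** r - 1) / r)

ultimate = 10 * math.log10(math.log(2))
print(f"ultimate limit (r -> 0): {ultimate:.2f} dB")  # about -1.59 dB
for r in [0.5, 1.0, 2.0]:
    print(f"r = {r}: Eb/N0 >= {min_ebn0_db(r):.2f} dB")
```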
We'd want a much better understanding of what is being done, whether there are any limitations on the "ensemble", and what (if any) assumptions they require about the data. In addition, I think we'd want to see correctly operating IP before deciding to use it.