I think we're still not on the same page. Here are the fundamental points, as I see them. And feel free to bounce these off anyone you trust:
1. There are any number of schemes to send more bits down a given channel. There is no shortage of ideas on that score.
2. Shannon's equation, however, tells it like it is. Which means, the tradeoff in sending more bits/sec down a given channel (aka "channel capacity") is that you have to increase the SNR and/or the channel bandwidth. Otherwise, all you get at the receiver is errors. This SNR is not dependent on any one type of noise. It is for any and all types of noise, including noise inside the box, when we're talking about a backplane bus. So for example, crosstalk inside a box, receiver thermal noise, as well as channel gaussian noise, are all included.
3. Shannon's capacity formula: C = B log2(1 + S/N), where the S/N ratio is not in dB, but rather a ratio of watts of signal to watts of noise. Again, noise is any type of noise. This equation still holds. No credible announcement has yet been made that it can be violated.
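To make the point concrete, here's a small Python sketch of Shannon's capacity formula, C = B log2(1 + S/N), with S/N as a plain power ratio. The 6 MHz / 20 dB numbers are just illustrative:

```python
from math import log2

def db_to_linear(db):
    """Convert a dB figure to a plain power ratio (watts/watts)."""
    return 10 ** (db / 10)

def shannon_capacity(bandwidth_hz, snr_db):
    """Shannon's limit: C = B * log2(1 + S/N), with S/N a linear power ratio."""
    return bandwidth_hz * log2(1 + db_to_linear(snr_db))

# A 6 MHz channel at 20 dB SNR tops out just under 40 Mbit/s,
# no matter what modulation scheme is used.
capacity = shannon_capacity(6e6, 20.0)
print(f"{capacity / 1e6:.1f} Mbit/s")
```

To send more than that, you must raise the SNR or widen the channel; no scheme gets around it.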
4. State-of-the-art receivers are already within a couple of dB of Shannon's limit. It's not as if there's huge room for improvement that modulation schemes can bring. For example, today's typical ATSC digital TV receiver falls about 4.5 dB short of the Shannon limit. BUT, a straightforward improvement of even the existing FEC scheme (decoding the Viterbi code and Reed-Solomon code jointly rather than separately), without changing the transmission standard, can bring this to within about 3 dB of the Shannon limit. In consumer-grade equipment. In nitrogen-cooled equipment, receiver thermal noise can be reduced, and you can get that much closer. Also, low-density parity-check (LDPC) or turbo codes are more efficient FEC than the Viterbi/Reed-Solomon combination, so they give you more net capacity for a given SNR. Any FEC takes up some channel capacity, but Shannon doesn't care. Shannon's capacity limit is for net data-carrying capacity, so you can trade off more FEC for a lower-mode modulation, for example. Or less FEC and a wider transmission channel.
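One way to see what "4.5 dB from Shannon" means: for a given net spectral efficiency, Shannon sets a hard floor on the required SNR. A Python sketch, using ATSC's actual payload figures (19.39 Mbit/s net in a 6 MHz channel) and the 4.5 dB implementation gap mentioned above:

```python
from math import log10

def shannon_min_snr_db(bits_per_sec, bandwidth_hz):
    """Smallest S/N (in dB) that permits the given net rate in the given
    bandwidth: invert C = B*log2(1 + S/N) to get S/N = 2^(C/B) - 1."""
    eta = bits_per_sec / bandwidth_hz        # spectral efficiency, b/s/Hz
    return 10 * log10(2 ** eta - 1)

# ATSC: 19.39 Mbit/s net in 6 MHz -> a Shannon floor of roughly 9.2 dB SNR.
floor_db = shannon_min_snr_db(19.39e6, 6e6)
# A receiver operating "4.5 dB from Shannon" therefore needs roughly
# 13.7 dB of SNR to deliver that rate. Better FEC shrinks the 4.5 dB
# implementation gap; nothing shrinks the floor.
```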
AminS, you're right that MLT-3 doesn't send more bits per clock, but it certainly does reduce the width of the channel needed for a given b/s stream, measured in Hz. It does this by reducing the number of transitions needed to send a given bit stream. A Fourier series will demonstrate why this reduces the channel bandwidth requirement. The net effect is exactly the same, however: more capacity for a narrower transmission channel. As the French say, ça revient au même (it comes to the same thing).
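A toy MLT-3 encoder (my own sketch, not any particular PHY's implementation) makes the transition-reduction visible: each 1 bit advances through the level cycle 0, +1, 0, -1, and a 0 bit holds the level, so a worst-case all-ones stream completes one full waveform period every four bits. The fundamental is thus bitrate/4, versus bitrate/2 for a worst-case alternating NRZ stream.

```python
def mlt3_encode(bits):
    """Toy MLT-3 encoder: a 1 steps through the level cycle 0, +1, 0, -1;
    a 0 holds the current level."""
    cycle = [0, +1, 0, -1]
    idx = 0
    out = []
    for b in bits:
        if b:
            idx = (idx + 1) % 4
        out.append(cycle[idx])
    return out

# All-ones input: one full period per 4 bits, so the fundamental
# sits at a quarter of the bit rate.
print(mlt3_encode([1] * 8))   # [1, 0, -1, 0, 1, 0, -1, 0]
```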
n-QAM can go as high as you want, as can n-PAM, n-ASK, n-PSK, and n-VSB. The European DVB-T2 goes up to 256-QAM, for example. The problem is always the same: the higher the mode, the more delicate the detection becomes. It's actually intuitively obvious. Higher modes require the receiver to perceive more subtle variations of phase or amplitude, or both.
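Those "more subtle variations" can be quantified: at fixed average transmit power, the minimum distance between constellation points shrinks as the mode rises. A quick sketch for square n-QAM:

```python
from itertools import product
from math import sqrt

def qam_min_distance(m):
    """Minimum distance between points of a square m-QAM constellation,
    normalized to unit average symbol energy."""
    k = int(sqrt(m))
    assert k * k == m, "square QAM only"
    pam = [2 * i - (k - 1) for i in range(k)]        # ..., -3, -1, +1, +3, ...
    pts = [complex(i, q) for i, q in product(pam, pam)]
    e_avg = sum(abs(p) ** 2 for p in pts) / m         # average symbol energy
    scale = 1 / sqrt(e_avg)
    return min(abs(a - b) * scale
               for i, a in enumerate(pts) for b in pts[i + 1:])

# d_min shrinks with mode: 4-QAM ~1.414, 16-QAM ~0.632, 64-QAM ~0.309.
# Each step up demands several dB more SNR for the same error rate.
```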
Rick, I'm well aware that this type of discussion recurs. That's why I wanted to try to set some perspective here. There are any number of schemes out there, and always the same limit to what they can do, to cram more bits down a wire. And most importantly, we're already quite close to that limit. Some schemes handle multipath better than others, but require higher peak power. Some are better with pure gaussian noise, but are less immune to multipath. Some actually exploit multipath, to effectively set up multiple channels, but they depend on high decorrelation between the propagation paths. (So at long range, their effectiveness drops off.)
To make a valid claim that some new modulation scheme is better than existing ones, the only credible argument is to show how close this new modulation scheme gets to the limit, compared with existing schemes. Just explaining that more bits/sec are crammed down the wire is NOT convincing. That's why I keep responding this way. If the new scheme gets substantially closer than, say, 3 dB or so to the Shannon limit, in consumer-grade equipment, then maybe there's something truly innovative going on.
Said another way: The reason why some schemes are deliberately kept "inefficient," when "efficient" is measured only as channel capacity, is that the designer wants them to be robust. It's NOT that the designer is unaware of high modulation modes.
Even without standards, it could go into backplane serdes chips, since those don't require standard compatibility. That could be a good demo before incorporating the scheme into a standard, and could buy them time until it is incorporated into one.
Thanks Bert. Good points! It is true that Kandou's scheme is more a modulation scheme than a coding scheme.
The main issues with modulation in chip-to-chip links are power and time (and area, of course). Speeds are much higher than encountered in typical communication settings (up to tens of Gbps), and the energy budget for the complete delivery of a reliable bit is in the low-picojoule regime. This makes the use of ADCs quite challenging, even if the effective number of bits is low, and hence modulation schemes like n-QAM or n-PAM are out as soon as n is larger than, say, 4 or so. MLT-3 is also out because of the reduced margin (it needs 3-PAM detectors but sends no more bits than differential signaling). Moreover, the noise types encountered in this communication setting are different from those in normal comm systems: thermal (Gaussian) noise is very low, but ISI, crosstalk, SSO noise, etc. are dominant. Hence the modulation scheme has to take these into account. Kandou's modulation scheme was developed to do exactly that. In that sense, it is quite different from the modulation schemes people normally use.
I guess my only comment was that there are multiple ways of cramming more bits per clock period. MLT-3 is one obvious choice, for example. Or in RF modulation, n-QAM. I'm not disputing that the technique used in this case might be clever, I'm just disputing that we're talking about a revolutionary idea.
You can also use MLT-3, as one example, either to send more bits per clock cycle, or to reduce the clock rate while keeping the bit rate. All with the appropriate advantage or liability to the marginal SNR requirement. Ditto with n-QAM. For a given symbol rate, 64-QAM sends 6 bits per symbol, while binary PSK sends only one bit per symbol. But look at the marginal SNR requirements of each.
Or for DSL lines. If your house is at the distance limit for a given bit rate, it's because you've reached the limit wrt noise. A technique that sends more bits per clock cycle isn't going to change matters, unless you simultaneously improve other aspects of the system, such as perhaps the FEC, to more closely approach the Shannon limit. And as far as I know, no one has yet violated Shannon's limit.
Possibly, this Kandou scheme is more "power efficient" than the competition, in practical encoders and decoders. I wouldn't know. If that's the case, though, it wasn't clear from the article; no comparisons were made.
Just looking at the transceiver (line) side: a diff pair consists of lines A and A* (= B) to transmit one equivalent common-mode signal. When adding another CM signal to the transmission, what if I only add a signal C, which is a differential-type complement of either A or B, depending on the logic value of C? The same approach applies if I want to add more, such as another signal D. The total number of lines in this case would be 4 differential signals that can reference the others, carrying an actual 3 CM signals, instead of the 6 generally required when using diff pairs. I'm not sure how it could be implemented in circuits.
I wonder if what I am thinking here is similar to what AminS describes ?
Here is one example of a signaling technique developed by Kandou, called "ENRZ". It uses 4 wires, and the signals that are simultaneously transmitted are either permutations of (1,-1/3,-1/3,-1/3) or permutations of the negative of this vector: 8 codewords in total, so 3 bits can be encoded into this codebook. A small digital circuit + a generalization of LVDS across 4 wires jointly drive these signals on the wires.
One receiver could consist of three comparators: the first one compares the average of wires 1,3 against the average of wires 2,4. The second comparator compares the average of wires 1,4 against the average of wires 2,3; and the last comparator compares the average of wires 1,2 against the average of wires 3,4. These comparators all reject simultaneous common mode on all 4 wires (but not common mode on adjacent two wires). Moreover, they uniquely determine the codeword sent, provided that the signals have been somewhat equalized before they go into these comparators.
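The codebook and the three comparators described above are easy to check in a few lines of Python (my own sketch, not Kandou's circuit; comparing sums of wire pairs is equivalent to comparing their averages):

```python
from itertools import permutations

# ENRZ codebook: permutations of (1, -1/3, -1/3, -1/3) and their negatives.
base = (1.0, -1/3, -1/3, -1/3)
codebook = set(permutations(base)) | {tuple(-x for x in p)
                                      for p in permutations(base)}

def decode(w):
    """The three averaging comparators: wires {1,3} vs {2,4},
    {1,4} vs {2,3}, and {1,2} vs {3,4} (0-indexed below)."""
    return ((w[0] + w[2]) > (w[1] + w[3]),
            (w[0] + w[3]) > (w[1] + w[2]),
            (w[0] + w[1]) > (w[2] + w[3]))

assert len(codebook) == 8                               # 3 bits per symbol
assert len({decode(w) for w in codebook}) == 8          # uniquely decodable
# Common mode applied equally to all four wires is rejected:
assert all(decode(tuple(x + 0.2 for x in w)) == decode(w) for w in codebook)
```

Each of the 8 codewords lands on a distinct 3-bit comparator output, and shifting all four wires by the same offset changes nothing, which is the common-mode rejection claimed above.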
The interesting thing about this coding is that its resistance to intersymbol interference (ISI) is the same as the resistance of differential signaling to ISI at equal clock rate. But at equal throughput, ENRZ uses a lower clock frequency, hence has the same resistance to ISI as differential signaling at 66% of the frequency. This typically yields much larger margins at same throughput, and can make communication possible in cases where differential signaling would completely run out of steam.