
Startup Promises Bandwidth Boost

2/18/2014 07:25 AM EST
rick merritt
User Rank
Author
Your take
rick merritt   2/18/2014 11:07:15 AM
What will it take for you to adopt the Kandou approach into your chip interface?

JmS0
User Rank
Rookie
Re: What will it take...?
JmS0   2/18/2014 12:00:56 PM
A lot better understanding of what is being done, whether there are any limitations on the "ensemble," and what (if any) assumptions it requires about the data. In addition, I think we would want to see correctly operating IP as a requirement before deciding to use it.

DrFPGA
User Rank
Blogger
Re: Your take
DrFPGA   2/18/2014 2:10:30 PM
Standards and royalties- seems like an unlikely mix to me....

Anyone have some current data on standards that require royalty payments?

Bert22306
User Rank
CEO
Re: Your take
Bert22306   2/18/2014 4:09:31 PM
Anyone have some current data on standards that require royalty payments?

MPEG-2 compression, H.264, and ATSC 8-VSB, for sure. There must be plenty of others, too.

rick merritt
User Rank
Author
Re: Your take
rick merritt   2/18/2014 9:52:07 PM
@DrFPGA: Standards-essential patents are a huge and controversial area these days.

Etmax
User Rank
Rookie
Re: Your take
Etmax   2/19/2014 7:46:29 AM
There are lots; the only requirement is that the proprietary IP used in the standard be available under fair, reasonable, and non-discriminatory (FRAND) license terms. That's vague, I know, but it is commonplace.

rick merritt
User Rank
Author
Re: Your take
rick merritt   2/19/2014 2:34:36 PM
@Etmax: Indeed, there's a lively debate about how to handle standards-essential patents in a way that encourages both patents and standards.

I'm working on a story that mentions this issue, so if you have inputs, fire away!

Etmax
User Rank
Rookie
Re: Your take
Etmax   2/20/2014 11:53:31 PM
@Rick, I think standards should be patent-free, or nearly free, to the extent that uptake isn't discouraged.

At the end of the day, patents are a monopoly granted by government to reward innovation, to help recover the investment in development, and for the benefit society receives. When a patent forms part of a standard, uptake is generally in the millions, so the patent fee should be very small to reflect the volume; otherwise the end user pays a ridiculously high cost because the patent holder is more interested in selling 100 units at a million dollars each than a million units at $100 each, and the end user receives little benefit.

If the patent is for a low-volume product, then fine: there must be some return on investment, so a higher patent fee. But for volume products, somewhere between a few cents and a few dollars.

Let's say someone invented a new SMT package that offered some benefit: if it becomes a standard, then less than 1% of the package cost would be fair (because of the volumes), since, let's face it, millions weren't invested.

I'd also say that, because the constitution says patents are granted to an inventor for a limited time to benefit society as much as the inventor, a patent should be terminated if it's been shelved (e.g., to exclude competition), as otherwise society receives negative benefit.

When patent restrictions result in loss of life, government should step in and decide the patent income (I'm thinking of medical patents mostly), as, again, people dying is not society benefiting.

This is just a quick, off-the-cuff (not deeply thought out) list of what needs looking into, and I do welcome comment. Sorry I diverted a bit from the "standards" part of your request. I'll sit down and think more along those lines and get back to you.

AminS
User Rank
Rookie
Re: Your take
AminS   2/21/2014 5:56:55 AM
@Etmax, the discussion about patented technology in standards is probably as old as standards themselves. On the one hand, people want the best possible solutions; on the other hand, they would like them for free. This can work well for device manufacturers, since they will make their margin on the devices they sell down the road. They have an incentive to offer their solutions to standards bodies for free.

IP companies have a different model, of course. They create technologies and want remuneration for the work they put into the development, plus profits down the road. For every successful solution, there are numerous others that the company invested in that did not work (be it technically or otherwise), so the cost associated with a successful solution is not just the development time of that solution alone. There are numerous examples of patented technologies in standards. MPEG is a prominent example that comes to mind, but many others exist in wireless standards. Patents are therefore not a showstopper IF the economics is clear to all parties beforehand and IF it is made completely clear what the patent landscape is.

To me, royalties and licensing fees for a patented technology are not unlike what we see in the entertainment industry: a good movie can generate revenue for many years after it was created. The bell rings every time it is rented, watched online, bought, or even when parts of it are used in other films. The market decides how much profit a film makes; no one can force a termination of the royalty stream. Though patents are at a slight disadvantage because of their finite protection horizon, by and large they are subject to similar economics.

Etmax
User Rank
Rookie
Re: Your take
Etmax   2/21/2014 10:09:09 AM
@AminS, I hear what you are saying, but I don't believe most IP companies create IP; they buy it and monetise it, often in industrial-blackmail-type situations. Most patents are expressions of the bleeding obvious, or a slight variation on an existing theme, and are worth nowhere near what people want for them. SW patents are a case in point, often being only a written (in source code) version of something that is already done.

SW in the US is covered by both patents and copyright, which is just ridiculous.

Re copyright and its length: to have copyright extend to 70 years past the end of the author's life is unconstitutional, when you consider that the purpose of copyright is to encourage works to be created. If you can live off one work for a ridiculous amount of time, how can that encourage you to do more? Most countries aren't as silly as the US on this.

There are also instances where a pharmaceutical/chemical company discovers a natural compound used by poor natives, patents it, and forbids the original users from making use of it. Some seed in India used as an insecticide was affected by this. Then there is Monsanto creating gene-spliced crops which eventually contaminate surrounding farms, after which Monsanto sues the natural farmers for using its seed.

Then there's the issue of public money funding a lot of research that then gets patented by someone working at the Uni, so the public has to pay for what it already paid to have developed.

The list really just goes on and on.

Bert22306
User Rank
CEO
Re: Your take
Bert22306   2/18/2014 4:01:33 PM
There isn't enough information here to make any predictions. What is described is a common serializer-deserializer (SERDES) approach. Whatever special sauce there might be isn't hinted at.

The typical case, though, is that the more bits per second you send down the link, within a given channel width, the less robust the link becomes. So, for example, existing xDSL links are very much distance-dependent. If you restrict the bit rate to 6.1 Mb/s, you can go up to 4 km over the copper twisted pair. If you want 12 Mb/s, you're limited to 1.5 km. And so on. Using more copper twisted pairs in parallel, to create slower individual pairs that aggregate to a higher total bit rate, also allows longer distances, at the expense of needing more twisted pairs (including having to worry about synchronization among them).

The only meaningful question to ask is, once again: does this SERDES promise to get closer to the Shannon limit than previous techniques? Or does it promise to violate Shannon's equation? That's always the bottom line. Modern coding techniques can get within a couple of dB of the Shannon limit. That's the question that needs to be addressed.

tb100
User Rank
CEO
Re: Your take
tb100   2/18/2014 4:35:01 PM
SerDes signals sent differentially across two nets will have a communications limit. But, if my understanding of this technology is correct, then one of the limiting factors is that you are only sending differential signals across these nets. You are limiting yourself to an encoding that isn't necessarily very efficient, yet it is the only one all of us have been using up till now.

Instead of considering it one channel, consider the two nets as two channels that influence each other.  One way of taking advantage of the crosstalk between the two channels is to only send differential signals. But is that the most efficient approach? 

Of course, if you send non-differential signals across these nets, you had better have a good understanding of the signal integrity effects of the alternately encoded signals, and how pre/post filtering and equalization affect the error rate. It looks like this is the meat of their invention.

Had to laugh at the request that they open source it all. Yeah, and I wish Dell would send me free PCs, and I'll take a couple of Teslas too, while we're asking.

Bert22306
User Rank
CEO
Re: Your take
Bert22306   2/18/2014 4:46:40 PM
I think the only "differential" involved here is a differential signal on two WIRES, not two nets. Here's the quote:

"I asked him what's a differential pair, and he said it's a way of using complementary signals across two wires. I thought this was so inefficient," says Shokrollahi, a professor at the Swiss Polytechnic in Lausanne.

Differential signals are used to reduce sensitivity to interference along the link. For example, a noise spike creates a short-term voltage glitch, and the same glitch will occur on each wire of the twisted pair. But if the twisted pair is used as a differential pair, where each wire carries the same signal as the other but 180 degrees out of phase, then that "common mode" noise spike is cancelled: the receiver won't detect any signal that is in phase on the twisted pair, so it becomes more immune to ambient EMI.
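A minimal numeric sketch of that cancellation (all values illustrative):

# Both wires pick up the same common-mode spike; the receiver subtracts.
signal = [1, -1, 1, 1, -1]             # data symbols
noise = [0.3, -0.2, 0.5, 0.1, 0.4]     # common-mode noise, equal on both wires

wire_a = [s + n for s, n in zip(signal, noise)]    # the signal
wire_b = [-s + n for s, n in zip(signal, noise)]   # its complement, 180 degrees out of phase

received = [a - b for a, b in zip(wire_a, wire_b)]
print(received)  # [2, -2, 2, 2, -2] up to float rounding: twice the signal, noise gone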

Any number of modulation schemes already exist that provide greater bandwidth at the expense of higher vulnerability to noise. It's always a balancing act. So there's nothing unique in principle described in the article. The only question should be: how much closer to the Shannon limit does this get us?

AminS
User Rank
Rookie
Re: Your take
AminS   2/18/2014 5:18:34 PM
This new coding technique uses the bandwidth more efficiently than differential signaling. The closest way of describing it is that Kandou is using spatial coding, whereas traditional FEC uses temporal coding. That is, Kandou introduces dependencies across the wires of an interface, whereas channel coding introduces dependencies across time (from one clock cycle to the next). While temporal coding would be possible as well, it comes at the price of higher latency. Spatial coding, however, when properly designed and implemented, has close to no latency.

Back to the coding part: the goal is to pack more information onto the wires than is possible with differential signaling, while retaining the properties of differential signaling. The things that make differential signaling robust are common-mode resistance (essentially, the receiver rejects noise of equal phase and amplitude on the wires, and the signals on the wires sum to zero), the absence of simultaneous switching output noise (meaning that current draw from the source does not depend on the particular bits sent), reference-less receivers, and low EMI. All of these can be captured as mathematical conditions on the codebook used by the communication system, meaning the set of all values that are simultaneously transmitted on the wires. This part requires mathematical analysis. But what is really important and unique about Kandou's coding techniques is that the concept of an efficient detector is embedded in the definition of the code, and is implemented by a (one-shot) network of generalized comparators -- components that are the bread and butter of any SerDes and can be robustly implemented in any CMOS technology. This puts the design and analysis of the coding system squarely in the mathematical domain, without any compromise in implementation or efficiency.

The comparators are such that they reject common mode noise, and are reference-less. The signals put on the wires sum up to zero, and draw the same current regardless of which particular code-word is transmitted. Moreover, the correct design of the codebook reduces EMI noise compared to differential signaling on the same number of wires and equal throughput.

Regarding the Shannon limit: I am not 100% sure what noise types to take into account to compute the mutual information, and from that the Shannon capacity, since Shannon's definitions don't account for the efficiency of the implementation or the power the detector uses. Channels in this environment are largely deterministic, so given enough processing power they can be inverted (i.e., fully equalized). This leads to a very large throughput, and the capacity can be calculated easily. In practice, however, that inversion is hardly feasible, so Shannon bounds obtained this way are much better than what is actually achievable.

AminS
User Rank
Rookie
Re: Your take
AminS   2/18/2014 5:29:48 PM
Here is one example of a signaling technique developed by Kandou called "ENRZ". It uses 4 wires, and the signals that are simultaneously transmitted are either permutations of (1, -1/3, -1/3, -1/3) or permutations of the negative of this vector: 8 code-words in total, so 3 bits can be encoded into this codebook. A small digital circuit plus a generalization of LVDS across 4 wires jointly drive these signals onto the wires.

One receiver could consist of three comparators: the first compares the average of wires 1,3 against the average of wires 2,4; the second compares the average of wires 1,4 against the average of wires 2,3; and the last compares the average of wires 1,2 against the average of wires 3,4. These comparators all reject common mode that is simultaneous on all 4 wires (but not common mode on two adjacent wires). Moreover, they uniquely determine the codeword sent, provided the signals have been somewhat equalized before they reach the comparators.
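A small sketch of this codebook and receiver, following the description above; the comparator wiring is as AminS describes, while the variable names and the common-mode test value are illustrative:

from itertools import permutations

# The ENRZ codebook: permutations of (1, -1/3, -1/3, -1/3) and their
# negations -- 8 codewords, so each one carries 3 bits.
base = (1.0, -1/3, -1/3, -1/3)
codebook = sorted(set(permutations(base)) |
                  set(permutations(tuple(-x for x in base))))
assert len(codebook) == 8

def comparators(w):
    # Three "average vs. average" comparators on wires w[0]..w[3], wired as
    # described above; comparing pairwise sums is equivalent to comparing averages.
    c1 = (w[0] + w[2]) > (w[1] + w[3])   # wires 1,3 vs 2,4
    c2 = (w[0] + w[3]) > (w[1] + w[2])   # wires 1,4 vs 2,3
    c3 = (w[0] + w[1]) > (w[2] + w[3])   # wires 1,2 vs 3,4
    return (c1, c2, c3)

# Every codeword sums to zero, and the three comparator outputs are
# distinct across the codebook, so they recover the 3 bits directly.
assert all(abs(sum(cw)) < 1e-9 for cw in codebook)
assert len({comparators(cw) for cw in codebook}) == 8

# A common-mode shift on all four wires leaves every comparator unchanged.
shifted = tuple(x + 0.7 for x in codebook[0])
assert comparators(shifted) == comparators(codebook[0])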

The interesting thing about this coding is that its resistance to intersymbol interference (ISI) is the same as that of differential signaling at equal clock rate. But at equal throughput, ENRZ needs only about 66% of the clock frequency (3 bits per clock on 4 wires, versus 2 bits for two differential pairs), so it has the ISI resistance of differential signaling running at 66% of the frequency. This typically yields much larger margins at the same throughput, and can make communication possible in cases where differential signaling would completely run out of steam.

Bert22306
User Rank
CEO
Re: Your take
Bert22306   2/18/2014 6:50:34 PM
I guess my only comment was that there are multiple ways of cramming more bits per clock period. MLT-3 is one obvious choice, for example; or, in RF modulation, n-QAM. I'm not disputing that the technique used in this case might be clever; I'm just disputing that we're talking about a revolutionary idea.

You can also use MLT-3, as one example, either to send more bits per clock cycle or to reduce the clock rate without reducing the bit rate, each with a corresponding advantage or penalty in the marginal SNR requirement. Ditto with n-QAM. For a given symbol rate, 64-QAM sends 6 bits per symbol, while PSK sends only one bit per symbol. But look at the marginal SNR requirements of each.

Or for DSL lines. If your house is at the distance limit for a given bit rate, it's because you've reached the limit wrt noise. A technique that sends more bits per clock cycle isn't going to change matters, unless you simultaneously improve other aspects of the system, such as perhaps the FEC, to more closely approach the Shannon limit. And as far as I know, no one has yet violated Shannon's limit.

Possibly, this Kandou scheme is more "power efficient" than the competition in practical encoders and decoders. I wouldn't know. If that's the case, however, it wasn't clear from reading the article; no comparisons were made.

rick merritt
User Rank
Author
Re: Your take
rick merritt   2/18/2014 9:54:31 PM
@Bert: Good point. Here's one comparison: another startup pushing bandwidth increases, this one over networks rather than interfaces, and using a new modulation scheme:

http://www.eetimes.com/document.asp?doc_id=1320414

AminS
User Rank
Rookie
Re: Your take
AminS   2/19/2014 3:41:19 AM
Thanks Bert, good points! It is true that Kandou's scheme is more a modulation scheme than a coding scheme.

The main issues with modulation in chip-to-chip links are power and time (and area, of course). Speeds are much higher than in typical communication settings (up to tens of Gbit/s), and the energy budget for the complete delivery of a reliable bit is in the low-picojoule regime. This makes the use of ADCs quite challenging, even if the effective number of bits is low, and hence modulation schemes like n-QAM or n-PAM are out as soon as n is larger than, say, 4 or so. MLT-3 is also out because of the reduced margin (it needs 3-PAM detectors but sends only as many bits as differential signaling). Moreover, the noise types encountered in this setting differ from those in normal comm systems: thermal (Gaussian) noise is very low, but ISI, crosstalk, SSO noise, etc. dominate. Hence the modulation scheme has to take these into account. Kandou's modulation scheme was developed to do exactly that; in that sense, it is quite different from the modulation schemes people normally use.

Bert22306
User Rank
CEO
Re: Your take
Bert22306   2/19/2014 4:38:06 PM
Amin and Rick,

I think we're still not on the same page. Here are the fundamental points, as I see them. And feel free to bounce these off anyone you trust:

1. There are any number of schemes to send more bits down a given channel. There is no shortage of ideas on that score.

2. Shannon's equation, however, tells it like it is. Which means, the tradeoff in sending more bits/sec down a given channel (aka "channel capacity") is that you have to increase the SNR and/or the channel bandwidth. Otherwise, all you get at the receiver is errors. This SNR is not dependent on any one type of noise. It is for any and all types of noise, including noise inside the box, when we're talking about a backplane bus. So for example, crosstalk inside a box, receiver thermal noise, as well as channel gaussian noise, are all included.

3. Shannon's equation is simple:

Channel capacity (b/s) = Bandwidth (Hz) × log2(1 + S/N)

where the S/N ratio is not in dB, but rather a ratio of watts of signal to watts of noise. Again, noise means any and all types of noise. This equation still holds; no credible announcement has yet been made that it can be violated. (A quick numeric check follows point 4 below.)

4. State-of-the-art receivers are already within a couple of dB of Shannon's limit. It's not as if there's huge room for improvement that modulation schemes can bring. For example, today's typical ATSC digital TV receiver implementations fall about 4.5 dB short of the Shannon limit. BUT a straightforward improvement of even the existing FEC scheme (using the Viterbi and Reed-Solomon codes together rather than separately), without changing the transmission standard, can bring this to about 3 dB from the Shannon limit, in consumer-grade equipment. In nitrogen-cooled equipment, receiver thermal noise can be reduced, and you can get that much closer. Also, low-density parity check or turbo code FEC is more efficient than the combined use of Viterbi and Reed-Solomon codes, so it gives you more net capacity for a given SNR. Any FEC takes up some channel capacity, but Shannon doesn't care: Shannon's capacity limit is for net data-carrying capacity, so you can trade off FEC for a lower-mode modulation, for example, or less FEC and a wider transmission channel.
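As a quick numeric check of the equation in point 3 (the 1 MHz bandwidth and the SNR values below are illustrative, not from the thread):

from math import log2

def shannon_capacity_bps(bandwidth_hz, snr_db):
    # Convert dB to the plain power ratio the formula expects.
    snr = 10 ** (snr_db / 10)
    return bandwidth_hz * log2(1 + snr)

# A 1 MHz channel at 10, 20, and 30 dB SNR:
for snr_db in (10, 20, 30):
    print(snr_db, "dB ->", round(shannon_capacity_bps(1e6, snr_db) / 1e6, 2), "Mb/s")
# 10 dB -> 3.46 Mb/s, 20 dB -> 6.66 Mb/s, 30 dB -> 9.97 Mb/s: at high SNR,
# each extra bit/s/Hz of capacity costs roughly 3 dB more SNR.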

AminS, you're right about MLT-3 not sending more bits per clock, but it certainly does reduce the width of the channel needed for a given b/s stream, measured in Hz. It does this by reducing the number of transitions needed to send a given bit stream; a Fourier series will demonstrate why this reduces the channel bandwidth requirement. The net effect is exactly the same, however: more capacity for a narrower transmission channel. As the French say, ça revient au même (it comes to the same thing).
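For concreteness, here is a minimal sketch of the MLT-3 rule referenced above, as used in 100BASE-TX: a 1-bit advances the line through the level cycle 0, +1, 0, -1, and a 0-bit holds the level. The snippet itself is illustrative only:

def mlt3_encode(bits):
    # A 1-bit steps to the next level in the cycle 0, +1, 0, -1; a 0-bit holds.
    # Fewer transitions per bit push the signal energy to lower frequencies.
    cycle = (0, 1, 0, -1)
    state, out = 0, []
    for b in bits:
        if b:
            state = (state + 1) % 4
        out.append(cycle[state])
    return out

# A run of 1s completes a full level cycle only every 4 bits, so the
# fundamental frequency is at most one quarter of the bit rate.
print(mlt3_encode([1, 1, 1, 1, 1, 1, 1, 1]))  # [1, 0, -1, 0, 1, 0, -1, 0]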

n-QAM can go as high as you want, as can n-PAM, n-ASK, n-PSK, and n-VSB. The European DVB-T2 goes up to 256-QAM, for example. The problem is always the same. The higher the mode, the more delicate the coding becomes. It's actually intuitively obvious. Higher modes require the receiver to perceive more subtle variations of phase or amplitude, or both.

Rick, I'm well aware that this type of discussion recurs. That's why I wanted to try to set some perspective here. There are any number of schemes out there, and always the same limit to what they can do, to cram more bits down a wire. And most importantly, we're already quite close to that limit. Some schemes handle multipath better than others, but require higher peak power. Some are better with pure gaussian noise, but are less immune to multipath. Some actually exploit multipath, to effectively set up multiple channels, but they depend on high decorrelation between the propagation paths. (So at long range, their effectiveness drops off.)

SO:

To make a valid claim that some new modulation scheme is better than existing ones, the only credible argument is to show how close the new scheme gets to the limit, compared with existing schemes. Just explaining that more bits/sec are crammed down the wire is NOT convincing. That's why I keep responding this way. If the new scheme comes substantially closer than, say, 3 dB or so to the Shannon limit in consumer-grade equipment, then maybe there's something truly innovative going on.

Said another way: The reason why some schemes are deliberately kept "inefficient," when "efficient" is measured only as channel capacity, is that the designer wants them to be robust. It's NOT that the designer is unaware of high modulation modes.

alex_m1
User Rank
CEO
Re: Your take
alex_m1   2/19/2014 5:27:35 PM
@Bert: there are two categories of innovation in communication systems: optimizing for a given channel (where Shannon's theory is very useful), and building better channels.

Fiber optics and standard differential signaling are examples of building a better channel. This current method is similarly mostly about building better channels.

Bert22306
User Rank
CEO
Re: Your take
Bert22306   2/19/2014 5:37:17 PM
Alex, I agree completely with your first point. In the two most recent cases, we're talking about improved modulation schemes in existing channels, which is what prompted my reactions. It's not a matter of a new type of fiber, a new type of copper cable, or something like quantum communications.

Possibly, implementing this modulation technique in practical circuits results in significant power savings at the transmitters and receivers, compared with the alternatives. That was only alluded to, but not demonstrated.

alex_m1
User Rank
CEO
Re: Your take
alex_m1   2/19/2014 7:09:58 PM
@Bert: You don't have to have a new type of cable to get a new channel. By using the physical properties of noise, differential signaling takes two wires and combines them into a single analog communications channel with superior noise properties; that's one way to look at differential signaling. Then, using a simple demodulator (a comparator), you get quite good performance.

In a similar fashion, this method takes X wires and turns them into a multi-wire channel with superior noise and power properties. There's a white paper on the company's website which I found describes this method quite well.

winstongator
User Rank
Rookie
Re: Your take
winstongator   4/5/2014 7:53:39 AM
Bert,

You are missing a huge part of the 'efficiency' equation: energy consumed per bit transmitted (pJ/bit). Data converter interfaces are one place where this efficiency matters hugely. You have faster converters whose internals are getting more efficient; however, the interfaces want fewer wires. This means faster serial interfaces whose power consumption can approach that of the core converter.

Power consumption is a huge issue in backplane situations too, as the large number of channels in a confined space makes routing the heat out of the system difficult.

Amin presented at ISSCC this year.  I would imagine there is a good explanation in the paper.

Bert22306
User Rank
CEO
Re: Your take
Bert22306   4/6/2014 6:03:33 PM
Winstongator, I did mention that potential aspect of "efficiency" in my previous post:

Possibly, implementing this modulation technique in practical circuits results in significant power savings at the transmitters and receivers, compared with the alternatives. That was only alluded to, but not demonstrated.

The article very clearly discussed the supposed inefficiency of differential pairs, implying that the main thrust here was to develop something that transmits more bits per symbol, or more bits per sec per Hz.

In the article, mention of energy savings was made in these two instances only, the rest being about b/s/Hz "efficiencies":

The 40nm demo chip sends 12 Gbit/s per wire at less than 4 picojoules/bit, dispersing eight bits across eight wires. Parts of the chip's technology could be adopted for use in memory interfaces or on 2.5D chip stacks.

And,

Using 12 taps of decision-feedback equalization, it will drive signals a meter through a Megtron-6 board and connectors, consuming 9 picojoules/bit or less

If you re-read the article, I think you'll agree that the main emphasis was on spectral efficiency, not energy per bit per second.
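For scale, those quoted figures translate directly into link power (power = bit rate × energy per bit). Note that pairing the 12 Gbit/s rate with the 9 pJ/bit backplane figure below is an assumption for illustration; the quote gives no data rate for that case:

# Power = bit rate * energy per bit.
def link_power_mw(gbit_per_s, pj_per_bit):
    watts = (gbit_per_s * 1e9) * (pj_per_bit * 1e-12)
    return watts * 1e3  # in milliwatts

print(link_power_mw(12, 4))  # ~48 mW per wire, per the quoted demo figures
print(link_power_mw(12, 9))  # ~108 mW (assumed rate, quoted energy per bit)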

alex_m1
User Rank
CEO
Re: Your take
alex_m1   2/20/2014 10:24:35 AM
@Rick: I've noticed GDDR is missing from Kandou's bus list. Considering that GDDR is the fastest memory bus currently in use and could greatly benefit from Kandou, why is that?

AminS
User Rank
Rookie
Re: Your take
AminS   2/21/2014 5:19:34 AM
@alex_m1: Kandou's tech covers all chip-to-chip links, and is applicable to all memory standards, be it any flavor of {G,LP}DDRx, HMC/HBM I/O, inside the HBM stack (with a different type of driver), or between a tower of stacked DRAM devices and a controller via an interposer (silicon or organic). The missing reference to GDDR is an omission.

zewde yeraswork
User Rank
Blogger
unique startup
zewde yeraswork   2/18/2014 12:02:59 PM
This sounds like a very different, unique sort of company. Its preferred means of offering bandwidth grows out of a particular idea, one I hadn't heard anyone express until now.

jeepman0
User Rank
Rookie
Licensing ...
jeepman0   2/18/2014 2:14:49 PM
How about a royalty free open source license for starters? :-)

NoviceMan
User Rank
Rookie
Just a Novice Thought.
NoviceMan   2/18/2014 6:36:05 PM
Just looking at the transceiver (line) side: a diff pair consists of lines A and A* (= B) to transmit one equivalent common-mode signal. When adding another CM signal to the transmission, what if I only add a signal C which is a differential-type complement of either A or B, depending on the logic value of C? Same approach if I want to add more, such as another signal D. The total number of lines in this case would be 4 differential signals that can reference the others, for an actual 3 CM signals, instead of the 6 generally required when using diff pairs. Not sure how it could be implemented in circuits.

I wonder if what I am thinking here is similar to what AminS describes?

alex_m1
User Rank
CEO
standards
alex_m1   2/19/2014 8:31:30 AM
Even without standards, the technique could go into backplane SerDes chips, since those don't require standards compatibility. That could be a good demo, and could buy them time, before it gets incorporated into a standard.
