It seems like a major challenge that companies face: having to standardize their communications technology before they can deploy it, and having to reveal, through disclosure, much of their advantage to competitors in the process. It may be worth aiming for smaller markets in that case.
This sounds very interesting, but this is a long road with the corpses of a lot of startups alongside it. Until they go public with details, it is going to be hard to see whose ox is going to be gored by this, whether it will actually make it out into the world, and if it does, in what form. There are a lot of players in spectrum with the capability to buy and bury inconvenient technologies, or the political wherewithal to block them.
Reminds me of the UWB (Ultra Wide Band) fiesta back in 2002-2005: more than 20 modulation schemes were proposed by a number of startup companies with pretty smart people, but sadly, practically none of them survived. The "most popular" UWB scheme was so-called MB-OFDM, similar to WiFi but using a faster keying speed (312.5 nsec symbols vs 4 usec for WiFi), and thus wider subcarrier bandwidth. Anyway, even MB-OFDM UWB did not achieve a breakthrough as the proposed Wireless USB standard.
It would be interesting indeed. However, we do have Shannon's equation to compare it to, and we do know how close to the Shannon limit other existing techniques come. And we also know that MIMO, under conditions in which multiple propagation paths are very uncorrelated, can give the appearance of violating the Shannon limit, but actually does not.
So, when we get more details, a fair comparison can be made. Anything that is 10 dB better than existing possibilities, aside from MIMO, sounds to me like it violates the Shannon limit. Aren't we already within a couple of dB of Shannon?
It would be a true breakthrough if it did legitimately violate Shannon's limit, but that wasn't mentioned.
Shannon's equation is C = B log2(1 + S/N), where C is the data rate (bit/sec), B is the bandwidth (Hz), and S/N is the plain signal-to-noise ratio (not in dB). (Friis' equation is a different thing: it describes path loss.)
While it is easy to define B, assuming S/N is the tricky part. A typical WiFi use case has a signal level around -70 dBm, while the background noise floor will be around -90 dBm, so the SNR is about 20 dB.
Based on the 802.11ac standard, the 256-QAM MCS9 data rate is 200 Mbps per stream with a 40 MHz channel, short GI, and coding rate 5/6. The raw (pre-FEC) rate is 240 Mbps.
At 20 dB SNR, Shannon gives C = 40 MHz x log2(1 + 100), or about 266 Mbps, so 802.11ac MCS9 is within the typical use case, even though the roughly 2 dB margin is pretty low. Of course the "typical" use case can vary: you'll get -50 to -60 dBm signal if your PC is close to the AP (within 10 ft), so MCS9 will be much more practical there.
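To sanity-check that margin, here is a quick sketch (the 240 Mbps pre-FEC rate and the 20 dB "typical" SNR are taken from the posts above):

```python
import math

B = 40e6           # channel bandwidth, Hz
raw_rate = 240e6   # 802.11ac MCS9 pre-FEC rate per stream, bit/s
snr_db = 20        # typical WiFi SNR from the post above, dB

# Spectral efficiency needed, and the Shannon-minimum SNR to support it
eff = raw_rate / B                        # 6 bit/s/Hz
snr_min_db = 10 * math.log10(2**eff - 1)  # minimum SNR, in dB
print(round(snr_min_db, 1))               # 18.0
print(round(snr_db - snr_min_db, 1))      # 2.0 dB margin
```

So 6 bit/s/Hz needs at least about 18 dB SNR, leaving roughly 2 dB of headroom at a typical 20 dB.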
Theoretically, 4096-QAM (a 64x64 constellation) is 12 bits/symbol, so it should give 1.5x the data rate of 256-QAM (16x16 constellation, 8 bits/symbol). Thus, we can assume an 802.11ac 4096-QAM mode would run at 1.5x the rate of 256-QAM MCS9, i.e. about 300 Mbps per stream (360 Mbps pre-FEC) in the same 40 MHz channel.
It shows that "4096-QAM WiFi" will be impractical (so I don't think WiFi will adopt more-than-256-QAM modulation per stream). Even if their claim of a +10 dB advantage is correct, the roughly 26 dB SNR it would still need will be tough to find in public wireless networks (WiFi or LTE).
However, it would make sense for backhaul, where a dedicated frequency band is used with much higher TX power and highly tuned directional antennas.