Rick, what the paper describes is otherwise known as MIMO. Each AP transmits a signal on the same frequency channel. As long as the receivers can decorrelate the propagation paths from the different APs, they can reconstruct the desired signal.
In traditional MIMO, each transmitter sends multiple beams in different directions, and each receiver would combine the bit streams from all of the propagation paths. In this DIDO, it looks like each receiver is only interested in one of the propagation paths, rather than aggregating the signals from all of the paths. The net effect is the same, though.
These are clever techniques that APPEAR to violate Shannon's limit, but in fact they don't. They depend on decorrelated propagation paths, much as you would have if you used multiple separate cables in parallel. If the signal paths become more correlated, you will lose that spectral efficiency. For example, bring the APs physically very close together compared with the distance to the receivers. That sort of thing makes it difficult to decorrelate the different propagation paths.
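The correlated-path penalty shows up directly in the standard MIMO log-det capacity formula. A minimal numerical sketch (the channel values here are made up for illustration; H[m, n] is the complex gain from AP n to receiver m):

```python
import numpy as np

def capacity(H, snr=100.0):
    """Shannon capacity of a narrowband MIMO channel with equal
    per-stream power: C = log2 det(I + (snr/N) H H^H), in bits/s/Hz."""
    M, N = H.shape
    return float(np.log2(np.linalg.det(np.eye(M) + (snr / N) * H @ H.conj().T).real))

# Well-separated APs: roughly independent paths (well-conditioned H).
H_far = np.array([[1.0 + 0.1j, 0.2 - 0.3j],
                  [0.3 + 0.2j, 0.9 - 0.1j]])

# APs moved close together: each receiver sees nearly the same gain
# from both APs, so the columns of H become almost identical (rank ~1).
H_near = np.column_stack([H_far[:, 0], H_far[:, 0] * 1.01])

print(capacity(H_far), capacity(H_near))
```

The well-conditioned channel carries roughly two parallel streams' worth of capacity; the correlated one collapses toward a single stream, which is exactly the "bring the APs close together" failure mode.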
Well...it appears from the white paper that the data center does channel estimation (with the test signal) on all the AP and receiver paths. Say there are N APs and M receivers. So the data center is looking at complexity of O(N*M) in channel estimation.
After doing the channel estimation, the data center looks at the necessary "clean" waveform that it wants to get to each receiver, inverts the N*M matrix of impulse responses, convolves that with the desired waveform vector, and the result is the transmission vector.
When the transmission vector passes through all those channels, it is distorted by that N*M set of channels, just as expected from that channel estimation. After distortion, the original desired signals all arrive at the receivers. Slick.
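The invert-and-predistort step described above can be sketched in a few lines. This is a hypothetical zero-forcing example with a flat (single-tap) channel, so the convolution with impulse responses collapses to a matrix-vector product; the real system would work with full impulse responses, likely per-subcarrier:

```python
import numpy as np

rng = np.random.default_rng(0)

N = 4  # APs
M = 4  # receivers

# Estimated channel matrix: H[m, n] is the complex gain from AP n to
# receiver m (random here, standing in for the test-signal estimation).
H = (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N))) / np.sqrt(2)

# The "clean" symbol the data center wants each receiver to get.
s = np.array([1, -1, 1j, -1j])

# Predistort: the N APs jointly transmit x = H^{-1} s.
x = np.linalg.solve(H, s)

# The channel itself applies H, undoing the predistortion:
y = H @ x
print(np.allclose(y, s))  # True: each receiver sees only its own symbol
```

In the noise-free case the channel exactly cancels the predistortion, which is the "slick" part; noise and estimation error are what make the real problem hard.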
A couple of things I'd be interested in learning more about. First, the O(N*M) complexity problem: is there a way around that? Second, you'd probably want to update your channel models quickly enough to stay within the coherence time. But that will eat away at your data rate, unless they're estimating on the fly (which is possible).
I'm a skeptic with "we broke Shannon's Law" schemes, having seen a few (hey, spiral modulation anyone?), but this looks pretty slick, on the face of it.
Mike, if that is indeed what they're doing, it doesn't sound very promising for tracking rapidly changing channel characteristics. Not that the processing power couldn't be brought to bear to do it in real time, but I would suspect the drain on the battery would be substantial.
A static demo is one thing, real-world deployment and satisfied users is quite another.
With regard to battery life issues: In the architecture described, the real heavy lifting is done at the data center. The handset could participate in the channel estimation by sending 'pings'. But it would be better to do it on the fly, with reference carriers or something. Not enough detail to understand at this level, but I'm still at the "it's pretty slick" point.
One thing seems clear: the handset uplinks are synchronized at the symbol level, as are the AP downlinks. If a handset transmits its symbol at a random timing offset to the others, no good comes of it.
And a comment from my 14-year-old son: "It all depends on the data center doing its job properly. If that doesn't work, the whole thing falls apart."
With that in mind--what if the channel model matrix cannot be inverted at the data center? I have no idea how frequently this can happen in real life, but I assume the fallback is to induce noise until it works...?
Very interesting stuff, and an entertaining thread. Good luck to the Artemis Networks team.
It's not just about the data center; plenty of the work behind the channel model matrix has to be performed outside of the data center. The requirements for this kind of thing are massive. It's a major undertaking.
Note that any MIMO system also has to do channel estimation, and especially if the MIMO is used by mobile devices, the channel estimation has to be done quickly. So even this aspect is not new.
I think the main differences between this DIDO and MIMO are that the different APs may be a lot further apart in DIDO than the elements of a typical MIMO antenna array, and the propagation paths are used individually by each client system.
Here is a good viewgraph tutorial which explains how the signals are decorrelated in MIMO. I think you will note the similarities between the two schemes.
Interesting. I would have said that the main difference is their reliance on aggregating the signals from multiple APs to make the desired signal. This could be considered MIMO, I suppose, but only very loosely. It would be feasible to have this and MIMO operating together, if I understand the system.
I'm puzzled by this technology - but very much appreciate the need for strong signals in our local areas and the frustrations so many of us experience with local "dead" areas. The photograph shows a single "head" broadcasting to 8 iPhones yet the text mentions that there is a centimeter sized zone at the cell phone (which would imply that positioning of the phone is critical). Late in the article, there is mention that 350 transmitters could cover San Francisco. Are there regional transmitters and then local repeaters? Do the transmitters synthesize a small region of interest for each phone? At what speed can the transmitter maintain the connection with a moving mobile phone?
Sounds pretty ambitious for a startup consisting of just eight full-time engineers. The fact that it is supposed to support different latencies and accomplish so much in so little time is a testament to Perlman's entrepreneurial spirit.
@Rick, this is true. Generally these days you have to scale up to about fifty people before you can sell out to Facebook for $16B or so...
I'm not sure exactly where this technology will fit. Cellular as we know it today is one place, but there is also the possibility that there could be something new growing from this. High speed in the local link is only one part of the equation. What kind of backhaul infrastructure would be necessary to support this? Is this really cellular technology (i.e. highly mobile) or potentially the basis for a (finally competitive!) new ISP technology?
@DrQuine: The way I understand it, the user device can move wherever the user wants and the system tracks it--but this capability has yet to be proven at mass scale.
The radio heads can be placed somewhat randomly and don't have to create traditional radio cell coverage areas, since they apparently work by using overlap and interference, but here my understanding gets a bit fuzzy.
I'll ask Steve to jump on and answer your question.
What is the processing load to handle handsets that are moving? Say at 25 MPH on a bus. Do you have to have an overlay network with traditional towers handling clients moving above a certain rate and this network handling more stationary clients?
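To put a rough number on the 25 MPH case, here is a back-of-envelope Clarke-model coherence-time estimate (the ~1.9 GHz carrier is my assumption, not something from the article):

```python
c = 3e8            # speed of light, m/s
f_c = 1.9e9        # assumed carrier frequency, Hz
v = 25 * 0.44704   # 25 mph in m/s

f_d = v * f_c / c  # maximum Doppler shift, Hz
T_c = 0.423 / f_d  # Clarke-model channel coherence time, s
print(f"Doppler ~{f_d:.0f} Hz, coherence time ~{T_c * 1e3:.1f} ms")
```

That works out to a coherence time on the order of a few milliseconds, so the data center would need to refresh its channel estimates that often for a bus-speed user, which bounds how much air time the estimation can consume.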
The technology will probably be better suited to static devices, such as live TV streaming through dedicated hardware or some kind of mobile device. Spectrum loading is already difficult for moving devices with poor network conditions. Also, in congested areas the technology will be difficult to implement, as there will be too much interference.
It will be interesting to see how this compares to dense deployments of small cells operating under SON, particularly with carrier aggregation and LTE over unlicensed bands. I think the bandwidth density on that kind of deployment will challenge this, with no change to existing mobiles and no change to existing protocols. The deployment will require more difficult backhaul engineering, but even that is not so hard using microwave aggregation. Small cells are small enough to build into building facades or into street lighting fixtures, so siting them is not an issue as it is with conventional towers.
I still think that what Artemis has achieved is very interesting, I just don't know if it makes commercial sense. Small cells have evolved rapidly under LTE, which was engineered to take small cells and Self Organizing Networks into account. I think you are going to see the results of that in 2015, particularly with regard to heterogeneous networks.
What we see offered here is a distributed MU-MIMO technology, similar to LTE-A's centralized CoMP. And thus it will inevitably face the same problems, e.g. cell edge performance, low mobility, unfavorable channel correlation, etc. Despite this, the claimed spectral efficiency gains and the promised scalability do not, in my opinion, hold under the LTE latency requirements, and this constraint will only tighten with 5G.
But one truly intriguing aspect of this product is the opportunity to provide a low-cost solution to one of the telcos' problems, namely how to cover temporary events attracting huge numbers of users. A comparison of the power consumption, size, and price of the offered AP against a state-of-the-art base station points to a nice niche market opportunity.
Anyway, the future of beamforming in cellular is more than bright, and the guys deserve kudos for the work they do. Regarding the statements made in the press... well, we all know that marketing doesn't mix well with engineering =)
I think it would be fair to call it "heavily distributed multi-user MIMO". In fact, they are forming the beam by treating the multiple APs as a decentralized antenna array, a concept that should be well known from radar applications and acoustic beamforming.
Btw., an interesting press release about NSN's centralized-RAN solution: