@docdivakar. Thanks for your links! I do hope you're right that 100G will come to datacenters soon. I saw these multi-vendor 100G (4x25G) demos at OFC over the past two years. Everything is supposed to come together with the upcoming IEEE 802.3bm standard.
At the end of the day, it's all about cost and power consumption. Do you know of any mature CMOS or BiCMOS 25G analog ICs (driver/TIA)? As far as I know, most people use III-V based ICs, which may be a significant part of the cost.
@a.sun: I do agree that the 100G advances required many improvements across the ecosystem. As I commented earlier in this thread, much of that has already been addressed. If you look at the third link in my comment (Plugfest), there are multiple choices of ASICs for 25G/28G, including one from IBM. Granted, these are not production-ready, power-optimized silicon, but they more than prove the viability of 100G.
I would argue that the migration to 100G in datacenters will come sooner than it did for 10G. It took more than five generations of silicon to get power consumption down for 10G, and it is still not low enough to make it into desktops and laptops.
I don't think there is any question that the whole ecosystem needs to be upgraded, yes, of course. What is most interesting to me is the power and cost comparison between 10 Gb/s and 25 Gb/s lanes. Can anyone provide some datapoints? Kris
My understanding is that going from 10G to 25G per lane is not just about the optical transceivers but about the whole ecosystem. Are all the pieces, like NICs, PCBs, connectors, etc., ready for 25G? Many telecom systems are still using a power-hungry 4:10 gearbox to translate 25G optics to 10G electronics (I don't know whether the recently hot-selling 100G DP-QPSK uses a gearbox, though it comes with another power monster, the DSP). Historically, datacom technology has always lagged behind telecom because of its demand for low cost (with lower specs as the tradeoff, of course). Will 4x25G emerge in datacom/datacenters anytime soon? I highly doubt it.
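To make the gearbox point concrete, here is a minimal sketch of the lane arithmetic behind a 4:10 gearbox: four optical lanes at 25G map onto ten electrical lanes at 10G, and the conversion only works because both sides carry the same 100G aggregate. The function name and numbers are illustrative, not from any datasheet.

```python
# Lane-rate arithmetic for a 4:10 gearbox (illustrative sketch only).
# Optical side: 4 lanes x 25 Gb/s; electrical side: 10 lanes x 10 Gb/s.

def aggregate_gbps(lanes: int, rate_gbps: float) -> float:
    """Total throughput of a parallel link in Gb/s."""
    return lanes * rate_gbps

optical = aggregate_gbps(4, 25.0)      # 4x25G optical lanes
electrical = aggregate_gbps(10, 10.0)  # 10x10G electrical lanes

# A gearbox can translate between the two lane formats only because
# the aggregate bandwidth is identical on both sides.
assert optical == electrical == 100.0
print(f"optical: {optical} Gb/s, electrical: {electrical} Gb/s")
```

The extra serialization/deserialization this translation requires is exactly where the gearbox's power penalty comes from, which is why eliminating it is one motivation for native 25G electronics.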
Putting 4x25G directly on the board, close to the ICs, may be a way to get around a whole-ecosystem upgrade (as Intel is pushing, though for a different purpose :-). However, IBM's latest Blue Waters supercomputer still uses Avago's 12x10G optical engines on its server boards. 25G has a long way to go.
What are the engineering and design challenges in creating successful IoT devices? These devices are usually small, resource-constrained electronics designed to sense, collect, send, and/or interpret data. Some of them need to be smart enough to act on data in real time, 24/7. Are the design challenges the same as for embedded systems, just with some developer and IT skills added in? What do engineers need to know? Rick Merritt talks with two experts about the tools and best options for designing IoT devices in 2016. Specifically, the guests will discuss sensors, security, and lessons from IoT deployments.