@rick.merritt @Neo1: About five years ago, the majority of datacenters started wiring with 10Gig interconnects as a future-proofing strategy, even though the active equipment was still years away in the making. New installations of Gigabit Ethernet have been in gradual decline ever since.
But the article makes only a passing reference to the elephant in the room: power consumption. It has to come in below 5.0W, not 10.0W! The 10Gig FCoE ports consume ~1.0W with much smaller cables, but they don't have the length reach that base-T enjoys.
Dr. MP Divakar
First, why is Gigabit Ethernet in decline? Next, the graph looks good, but it assumes so much that we can ignore those numbers. Of course the communications bottleneck improves as we move up in interface speeds, but saying this will make our internet experience blazing fast is like saying adding a bigger gate to the office will improve my commute time!
What is very disappointing is that the quality standards of EE Times allow someone to simply express his assumptions without any due diligence on accuracy!
If Rick had at least bothered to attend IDF just a few weeks back, he wouldn't have had to GUESS (wrongly) that Intel is using a Teranetics PHY in their 10GE LOM solution....
Ethernet speeds are increasing in geometric proportion. Why not also grow these speeds horizontally by having byte-wide communication (over 8 wires instead of one)? The Ethernet physical link layer protocols may be able to accommodate such byte-level transfers easily. In today's times, such byte-transfer cables would not be as bulky as the parallel interface cables (Centronics) of yesteryear's printers. Just trying to think laterally!
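For what it's worth, the "horizontal" scaling idea above is roughly how higher Ethernet rates were later built: striping traffic across multiple lanes (40G Ethernet, for instance, uses 4 x 10 Gb/s lanes). A back-of-the-envelope sketch of the ideal aggregate rate (lane counts and rates below are illustrative assumptions, not measurements):

```python
def aggregate_gbps(lane_rate_gbps: float, num_lanes: int) -> float:
    """Ideal aggregate bit rate of parallel lanes, ignoring
    line-coding overhead and inter-lane skew/deskew costs."""
    return lane_rate_gbps * num_lanes

# One 10 Gb/s lane widened to a "byte" of 8 lanes:
print(aggregate_gbps(10.0, 8))  # 80.0 Gb/s ideal

# For comparison, 40G Ethernet stripes across 4 x 10 Gb/s lanes:
print(aggregate_gbps(10.0, 4))  # 40.0 Gb/s
```

In practice the sum is an upper bound: real parallel links pay for lane alignment and skew compensation, which is part of why wide parallel cables like Centronics gave way to fast serial lanes in the first place.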