For the 40G/100G standard, the PAR was approved in late 2007, and the standard was ratified in mid-2010. So, about two and a half years. Even so, 100G switching units are only selling in small numbers today, three years later.
Some of that may have been the economy throttling capital investment. But to some extent it might also be because streaming media (a big consumer of bandwidth) wasn't so popular back in 2010. All that is changing and the pace is sure to pick up.
As to what counts as fast track, the bandwidth study indicated that 400G would be needed by 2015. I would say that two years to get a standard and first products out would be fast track.
Yes, it's a lot of data. Most of it goes over optical fiber for any real distance, but wired Ethernet is also used inside the central offices. The 40G copper PHYs are for the 1 to 10 m range, so rack to rack. I think 100G is strictly optical, but I'm not certain.
My opinion is that we'd better get the hell on it as quickly as possible. But how do you define fast track? What's the normal timeframe and process for establishing standards? And what are the obstacles to speeding up the process?
That's a staggering amount of data flowing around there. Is that all optical, or are there wired systems capable of some of those data rates? I haven't paid a lot of attention to the newer generations of Ethernet standards, so I'm not really sure where the cutover from copper to fiber occurs.