There is a fundamental limit on the useful network bandwidth entering/exiting a server: memory bandwidth. 400Gb/s is 50GB/s, which is on par with the achievable total memory bandwidth of common servers. When your network becomes faster than memory, you have to wonder about the practical utility of that extra, unusable capacity. I'd argue the practical limit is closer to single memory channel speed, not aggregate system memory bandwidth. Today, one DDR3-2133 channel is ~17GB/s, or about 136Gb/s.
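A quick back-of-envelope check of those figures (the DDR3-2133 transfer rate and 64-bit channel width are the standard assumed values):

```python
# Sanity-check the bandwidth figures above.
DDR3_2133_MTS = 2133e6   # DDR3-2133: 2133 mega-transfers/sec
BYTES_PER_XFER = 8       # 64-bit memory channel = 8 bytes per transfer

channel_GBps = DDR3_2133_MTS * BYTES_PER_XFER / 1e9  # one channel, GB/s
channel_Gbps = channel_GBps * 8                      # same, in Gb/s

link_Gbps = 400          # 400GbE line rate
link_GBps = link_Gbps / 8

print(f"One DDR3-2133 channel: {channel_GBps:.1f} GB/s ~ {channel_Gbps:.0f} Gb/s")
print(f"400GbE link:           {link_GBps:.1f} GB/s = {link_Gbps} Gb/s")
```

So a single 400Gb/s link already outruns roughly three memory channels' worth of bandwidth.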
1) Andy's talking about the server space. I think he's bang on. The adoption curves look to be very similar to past networking tech uptakes.
2) He's not talking about the desktop and/or wireless space. GigE is now standard even on el-cheapo DIY motherboards, but you don't really need more than 1GbE in your house for the next half decade. Hence, you can't necessarily make the same volume argument for bringing down the price of 10GbE at the same rate.
That means the absolute price of 10G won't drop as quickly as it did for 1G, but it will get there. Of that, I have no doubt. After all, as people rely on network (buzzword: cloud) services, the number of servers keeps rising. And ideally you keep BW/core constant as you deploy big systems: as core count rises, so do your network bandwidth needs.
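The constant-BW/core argument can be sketched in a few lines (the 2.5Gb/s-per-core provisioning target is an assumption for illustration, not a figure from the discussion):

```python
# Sketch of constant BW/core provisioning: hold per-core network
# bandwidth fixed and watch NIC requirements grow with core count.
GBPS_PER_CORE = 2.5  # assumed provisioning target, Gb/s per core

for cores in (8, 16, 64, 128):
    nic_gbps = cores * GBPS_PER_CORE
    print(f"{cores:4d} cores -> {nic_gbps:6.1f} Gb/s of NIC capacity needed")
```

Under that (arbitrary) target, a 128-core box already wants 320Gb/s of network, which is why server core counts drag link speeds upward.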
That "100,000 chips or it ain't worth it" has been the mantra for decades - nothing new.
100Gb/s FPGAs have been around for a couple of years now. Again, nothing new.
No mention of 160Gb/s or 400Gb/s, both of which are upcoming. Sounds to me like he's trying to solve procurement-channel problems in his investment portfolio rather than being a visionary driver of the industry.