I don't know if "shame" is the right term. I assume you're talking about the service providers here, who have been steadily updating their core networks and last-mile connections over the years. It's labor-intensive, and therefore expensive, work. But DOCSIS 3.1, for example, can theoretically deliver 10 Gb/s downstream and 1 Gb/s upstream, and it was approved as a standard earlier this year.
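For a rough sense of where that 10 Gb/s figure comes from: the downstream uses OFDM channels up to 192 MHz wide carrying up to 4096-QAM, and the headline number comes from aggregating several such channels. Here's a back-of-envelope sketch; the 20% overhead factor is my own ballpark, not a spec value:

```python
# Back-of-envelope for DOCSIS 3.1 downstream capacity.
# Spec facts: OFDM channels up to 192 MHz wide, up to 4096-QAM.
# The 20% overhead (FEC, pilots, guard bands) is an assumed
# ballpark, not a number from the spec.

CHANNEL_WIDTH_HZ = 192e6   # one downstream OFDM channel
BITS_PER_SYMBOL = 12       # 4096-QAM carries 12 bits per subcarrier symbol
OVERHEAD = 0.20            # assumed FEC/pilot/guard overhead

# OFDM yields roughly one symbol per Hz of occupied spectrum,
# so raw rate ~ channel width * bits per symbol.
raw_bps = CHANNEL_WIDTH_HZ * BITS_PER_SYMBOL
usable_bps = raw_bps * (1 - OVERHEAD)

print(f"per channel: ~{usable_bps / 1e9:.2f} Gb/s")      # ~1.8 Gb/s
print(f"5+ channels: ~{5 * usable_bps / 1e9:.1f} Gb/s")  # near the 10 Gb/s headline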
I am curious to see 802.11ac in the real world. I have 802.11n at home, and find that in the 5 GHz band the bit rate varies anywhere from the high 100s or low 200s of Mb/s up to the AP's maximum of 270 Mb/s. My assumption is that the 2×2 or 4×4 MIMO in use is not all that dependable, and is perhaps even affected by people walking around the house. A system that depends on 8×8 MIMO will most likely exhibit the same behavior.
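If anyone wants to put numbers on that variability, something like the sketch below would do it. It repeatedly samples TCP throughput with iperf3, so it measures goodput, which will sit below the 270 Mb/s advertised link rate even on a good day; the server name is a placeholder for any wired host on the LAN running `iperf3 -s`:

```python
# Sketch: sample Wi-Fi TCP throughput repeatedly to see how much
# the rate wanders (e.g., as people move around the house).
# Assumes iperf3 is installed and a server is already running on
# a wired host; "lanbox" is a placeholder hostname.
import json
import statistics
import subprocess

SERVER = "lanbox"   # placeholder: any wired host running iperf3 -s
SAMPLES = 10

rates = []
for _ in range(SAMPLES):
    out = subprocess.run(
        ["iperf3", "-c", SERVER, "-t", "5", "-J"],  # 5 s test, JSON output
        capture_output=True, text=True, check=True,
    )
    result = json.loads(out.stdout)
    bps = result["end"]["sum_received"]["bits_per_second"]
    rates.append(bps / 1e6)  # convert to Mb/s

print(f"min/median/max: {min(rates):.0f} / "
      f"{statistics.median(rates):.0f} / {max(rates):.0f} Mb/s")
print(f"stdev: {statistics.stdev(rates):.0f} Mb/s")
```

A large spread between min and max over a few minutes of samples would support the theory that MIMO link quality, not just distance, is driving the variation.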
I have been tracking 802.11ac for a while now, and have even upgraded my home router to the standard. I haven't run real-world throughput tests on it, but it seems to be getting close to the Gigabit Ethernet speed that serves as its backhaul in most cases. This, if nothing else, would seem to be a real limitation on further development (not to mention a little embarrassing). Even some lucky customers with 1 Gb/s fiber could potentially find that to be their bandwidth limitation.
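The standard's own arithmetic bears that out. At the maximum 802.11ac configuration (160 MHz channel, 256-QAM at rate 5/6 coding, short guard interval, 8 spatial streams), the PHY rate works out to roughly 6.9 Gb/s, far past a 1 Gb/s backhaul:

```python
# 802.11ac maximum PHY rate, from the standard's parameters:
# rate = data subcarriers * bits/symbol * coding rate / symbol time,
# multiplied by the number of spatial streams.

DATA_SUBCARRIERS = 468   # 160 MHz channel
BITS_PER_SYMBOL = 8      # 256-QAM
CODING_RATE = 5 / 6      # highest MCS coding rate
SYMBOL_TIME_S = 3.6e-6   # OFDM symbol with short guard interval
STREAMS = 8              # 802.11ac maximum spatial streams

per_stream = DATA_SUBCARRIERS * BITS_PER_SYMBOL * CODING_RATE / SYMBOL_TIME_S
total = per_stream * STREAMS

print(f"per stream: {per_stream / 1e6:.1f} Mb/s")  # ~866.7 Mb/s
print(f"8 streams:  {total / 1e9:.2f} Gb/s")       # ~6.93 Gb/s
print(f"vs GigE:    ~{total / 1e9:.1f}x a 1 Gb/s uplink")
```

Note that even a single stream at that configuration (~867 Mb/s) nearly fills a gigabit link once MAC overhead is added, so the backhaul bottleneck shows up well before the 8-stream case.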
Do you think that 802.11ac could shame the wired Ethernet guys into upping their game?
What are the engineering and design challenges in creating successful IoT devices? These devices are usually small, resource-constrained electronics designed to sense, collect, send, and/or interpret data. Some of the devices need to be smart enough to act upon data in real time, 24/7. Specifically, the guests will discuss sensors, security, and lessons from IoT deployments.