I can't help you much on the ASIC side, Rick. It seems to me that the increasing complexity of modern network architectures around SDN and increased security consciousness plays much more into software platforms, especially where routes cross domain boundaries. People will be (and should be) concerned about the security implications of this, but increased complexity and flexibility definitely play to software's strengths in those environments.
Cisco uses EZchip NP-3 and NP-4 NPUs for Layer 2 and 3 support in its ASR edge routers, not ASICs. The ASR1000 started with QuantumFlow but added the NP-4 when it became available; the ASR5K and 9K were launched with EZchip.
The Xeon is commonly used in the control plane, but with DPDK software it can also be used in the data plane. I've not studied Brocade's data, but I suspect that, evaluated in real-life situations, it will prove to be about an order of magnitude slower than the EZchip NP-4 - more or less in line with the data produced by Adlink and Lanner.
I am well-acquainted with some of the better-known ASICs that are used in switches, though I do not represent any vendor.
1. SW forwarding by CPU is not new. 10 Gbps/core (even if it is a real number and not, as I suspect, a special case under synthetic lab conditions) is not really exciting.
Current ASICs all support 500 Gbps easily per single chip, and most vendors are getting close to 1 Tbps per chip. So the "Software beats ASICs" title is questionable.
ASICs also give fixed performance even when you enable additional features: IPv4/IPv6, L2 switching, MPLS, GRE, security ACLs, metering/shaping, multicasting... I could go on and on (just look at the user manual of any run-of-the-mill switch/router, including Brocade's). Software-based packet handling gets slower every time you add a feature.
2. 200G for a 2-slot server means 10-core Xeons (and I have a feeling that to get the 10G you need the top-of-the-line versions - which is to say the most expensive). This is "cheap hardware"? Especially after you add all the supporting chipsets a Xeon needs?
Anyway, the cost of edge routers, etc. is really not about HW cost.
The HW cost of an ASIC-based switch/router is lower than that of a 2-slot Xeon-based server. When you buy a Cisco router, most of the price buys you IOS, not the ASIC. Brocade can sell the Vyatta cheaper because it charges less for "Brocade-OS", not because of the HW.
3. Note that even the article says Brocade will continue to build ASIC-based systems. If SW is so much cheaper, why?
Don't get me wrong - I do not seek to attack software (I *AM* a software engineer) or claim that software-based switching/routing is a bad idea.
I am sure SW-based switches/routers will do well. They will surely get better and cheaper, and they offer many advantages (flexibility, openness to third-party modifications, the ability to run applications on-board, etc.), but I think this article has an overdose of hype. For the next few years at least, ASIC/NPU vendors are still going to do well, and will not be replaced wholesale by Vyatta or any other software-only solution.
While I like DPDK and what it enables at a relatively low price, it does still have its limits.
While one can do IPv4 forwarding of small packets at 10 Gbps, as processing gets more complicated, more cores will be needed to handle it. I haven't seen IPv6 performance figures, if IPv6 ever gets real deployment. When Intel's 40G NIC comes out, it will be interesting to see whether this approach still works at that speed. The early drivers for that NIC have been accepted upstream for an upcoming Linux kernel, but no hardware is for sale yet.
For other comparisons, firewall rules or IPsec VPN processing would be added to the mix, to see how many links can be handled per core while doing that other work. From what I have read about DPDK, cache misses are a killer, so anything needing large tables won't work well. It only works at 10 Gbps because the packet can be DMA'd directly into the last-level cache (Intel DDIO, available starting with Sandy Bridge).