I think this is a big deal. They have designed the TM and QM functionality into what was traditionally the NPU (packet processor). Previously, interface ASICs would stamp packets with preanalysis headers and send them to the iTM (ingress traffic manager), and from there packets went to the NPU and/or the replicator engine for multicast. The board real estate all of that consumed was huge, and it seems like they have significantly reduced it. That would allow them to put multiple such packet processing "pipelines" on one die. Seems this is huge for integration.
Knowing Cisco ASICs, I would not doubt that the operations in the TM/QM/PP are about as complete as you can imagine. What would be interesting is understanding the number of queues they support in their TMs and some other TM details.
The folks who find this underwhelming probably don't understand what is going on here. This is not Z80s and LAN Controllers!
thank you @ajaycm...I understand this ASIC does deep packet processing, it is not a switch...still, there are other packet processing ASICs out there, so how is the Cisco ASIC stacking up against them? Kris
Just comparing bandwidth is not telling us that much.
What can be done to each packet at 400 Gbps/300 Mpps? That is much more interesting to compare. Number of SRAM/DRAM/TCAM lookups, what type of QoS, number of queues, type and number of schedulers etc.
And on the comment that 300 Mpps is too low: I see very little reason to design for line-rate 64-byte packets. That traffic profile does not exist in the real world... It can be done, but it will negatively affect what can be done to each packet...
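To put numbers on that, here is a quick back-of-the-envelope sketch (my own calculation, not from the article) of what 64-byte line rate at 400 Gbps actually requires, assuming standard Ethernet per-frame wire overhead (7B preamble + 1B SFD + 12B inter-frame gap = 20 bytes):

```python
def line_rate_mpps(link_gbps: float, frame_bytes: int) -> float:
    """Packet rate (Mpps) needed to fill a link with frames of a given size.

    Adds the 20 bytes of per-frame Ethernet wire overhead:
    7B preamble + 1B start-of-frame delimiter + 12B inter-frame gap.
    """
    wire_bits = (frame_bytes + 20) * 8
    return link_gbps * 1e9 / wire_bits / 1e6

def min_frame_for_mpps(link_gbps: float, mpps: float) -> float:
    """Smallest frame size (bytes) a given Mpps budget can sustain at line rate."""
    wire_bytes = link_gbps * 1e9 / (mpps * 1e6) / 8
    return wire_bytes - 20

# 64-byte frames at 400 Gbps need ~595 Mpps -- well above 300 Mpps.
print(round(line_rate_mpps(400, 64)))       # -> 595

# But 300 Mpps still holds line rate down to ~147-byte frames,
# which covers typical real-world traffic mixes.
print(round(min_frame_for_mpps(400, 300)))  # -> 147
```

So a 300 Mpps budget only "falls short" for frames under roughly 147 bytes, which supports the point that designing for 64-byte line rate trades away per-packet processing headroom for a corner case.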