SAN JOSE, Calif. -- Network processors pose one of the biggest verification challenges in the industry, according to panelists at the Network Processors Conference (NPC) Wednesday (Oct. 24). Users and vendors said it is difficult to verify silicon that provides both high performance and programmability.
The complexity of the verification challenge was summarized by George Apostol, vice president of engineering at Brecis Communications Corp. "This is a verification nightmare," Apostol said of a "multiservice" network processor his company is designing, featuring a three-processor subsystem. "Verification tools just don't have the speed or flexibility we need."
The Brecis chip has 20 clock domains, making it necessary to model a number of asynchronous interfaces, a difficult task for today's simulation tools, Apostol said.
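To see why many clock domains strain a simulator, consider a rough sketch (not the Brecis flow; the clock frequencies and the hazard window are invented for illustration): each asynchronous clock contributes its own edge stream, and the near-coincident edges across domains are exactly the points where clock-domain-crossing (CDC) behavior must be exercised.

```python
# Hypothetical illustration of why asynchronous clock domains are hard
# to simulate: edges from unrelated clocks drift past each other, and
# every near-coincident pair is a candidate metastability window that
# a CDC check must cover. Frequencies and window are made up.

def clock_edges(period_ns, phase_ns, until_ns):
    """Yield rising-edge times for one free-running clock."""
    t = phase_ns
    while t < until_ns:
        yield t
        t += period_ns

def near_coincident(edges_a, edges_b, window_ns=0.5):
    """Pairs of edges from two domains closer than window_ns,
    i.e. the crossings a CDC analysis has to worry about."""
    hazards = []
    for a in edges_a:
        for b in edges_b:
            if abs(a - b) < window_ns:
                hazards.append((a, b))
    return hazards

# Two unrelated clocks: 100 MHz (10 ns period) and ~133 MHz (7.5 ns),
# with a small phase offset, simulated for 1 microsecond.
a = list(clock_edges(10.0, 0.0, 1000.0))
b = list(clock_edges(7.5, 0.3, 1000.0))
print(len(near_coincident(a, b)), "edge pairs need CDC analysis")
```

With 20 domains the number of domain pairs, and thus of crossings to verify, grows quadratically, which is part of what makes exhaustive simulation impractical.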
Apostol also said that intellectual property (IP) blocks "never work," and he said designers need a way to integrate IP testbenches into the system environment. He also said there's a need for better coverage tools, so designers can get a better handle on when verification is done.
Because there's "nothing like the real thing," Apostol said, his company developed its own FPGA-based emulation system. It runs at 25 MHz, has independently controllable asynchronous clocks, and offers "real-world" connectivity.
Gregg Donley, vice president of engineering at Network Virtual Systems, discussed a security processor his team is designing. The chip uses "wire-speed" programmable processors, has high data rates and needs interface compatibility to work with other vendors' network devices, all of which make simulation difficult. "Verification is still the thing that's difficult, and it looks like it will continue to be for some time," he said.
Donley's team uses a testbench with XML-formatted vectors, a stream generator to check protocol streams, and an SPI 4.2 model that interfaces to the testbench. They don't have an accelerator, but they use the Verilog programming language interface (PLI) and sockets to take advantage of parallelism, running testbenches on separate machines when needed.
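As a rough illustration of two of those ideas, keeping vectors in XML and farming independent runs out in parallel, consider the following sketch. It is not the Network Virtual Systems testbench: the real flow used the Verilog PLI and sockets to reach simulators on separate machines, whereas here a local process pool stands in for that, and the vector format is invented.

```python
# Hypothetical sketch: XML-formatted test vectors dispatched to
# parallel workers. Vector fields and the run_one() stub are
# invented for the example; a real flow would hand each vector to
# a simulator instance, possibly on another machine over a socket.
import xml.etree.ElementTree as ET
from concurrent.futures import ProcessPoolExecutor

VECTORS_XML = """
<vectors>
  <vector id="t1" payload="deadbeef"/>
  <vector id="t2" payload="cafef00d"/>
</vectors>
"""

def load_vectors(xml_text):
    """Parse the XML vector file into a list of attribute dicts."""
    root = ET.fromstring(xml_text)
    return [dict(v.attrib) for v in root.findall("vector")]

def run_one(vector):
    """Stand-in for driving one vector into the design under test."""
    payload = bytes.fromhex(vector["payload"])
    return vector["id"], len(payload)

if __name__ == "__main__":
    with ProcessPoolExecutor() as pool:
        results = list(pool.map(run_one, load_vectors(VECTORS_XML)))
    print(results)  # one (id, bytes-driven) entry per vector
```

The appeal of the approach Donley describes is that each test run is independent, so throughput scales roughly with the number of machines available.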
Verification needs to take both a top-down and bottom-up approach, said Niteen Patkar, vice president of network products at Silicon Access Networks Inc. He spoke of the difficulty of verifying a chip with multiple, high-speed interfaces that still needs to provide flexibility and programmability for the end user.
"No single approach works," said Patkar. "You need to work at the block, chip and system level." His group runs system-level verification with C models, which are later mixed with RTL models. They also use exhaustive testbenches at the block level, with both directed and random tests, as well as protocol checkers.
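The mix of directed and random tests backed by a protocol checker can be sketched in miniature as follows. The packet format and the checker rule here are invented for illustration, not Silicon Access's protocol; the point is the structure: hand-written corner cases plus seeded constrained-random stimulus, all validated by one checker.

```python
# Illustrative sketch of directed plus constrained-random testing
# with a protocol checker. The toy protocol (first byte = payload
# length) is invented for this example.
import random

def check_packet(pkt):
    """Protocol checker for the toy format: a packet is a list of
    bytes whose first byte states the payload length."""
    return len(pkt) >= 1 and pkt[0] == len(pkt) - 1

def directed_tests():
    # Hand-written corner cases: empty payload, short payload.
    return [[0], [2, 0xAA, 0xBB]]

def random_tests(n, rng):
    # Constrained-random stimulus: legal lengths, arbitrary bytes.
    tests = []
    for _ in range(n):
        length = rng.randrange(0, 16)
        tests.append([length] + [rng.randrange(256) for _ in range(length)])
    return tests

rng = random.Random(1234)  # fixed seed so failures are reproducible
for pkt in directed_tests() + random_tests(100, rng):
    assert check_packet(pkt), f"checker flagged {pkt}"
```

Seeding the random generator matters in practice: a failing random test can only be debugged if the same stimulus can be regenerated.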
Hardware emulation allows millions of packets to be processed quickly, Patkar said, but he noted that "it's a lot of work to get all this into the emulator."
One of the vendor representatives on the panel was Amr Mohsen, Aptix chairman and CEO. He said that traditional verification methods all run into problems with network processors. What's needed, Mohsen said, is a "unified platform" that both hardware and software developers can use, with the ability to run a lot of traffic through it.
Mark Gogolewski, vice president of engineering at Denali Software Inc., emphasized the prevalence of memory within network processors. He called for "data-driven verification" that lets users view and manipulate data at the system level, as opposed to bits and bytes.
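A minimal sketch of that "data-driven" idea: rather than inspecting a memory model byte by byte, expose its contents as structured records the user can reason about. The descriptor layout below is invented for illustration and is not Denali's format.

```python
# Hypothetical example of viewing raw memory as system-level data.
# The descriptor layout (32-bit buffer address, 16-bit length,
# 16-bit flags, little-endian) is invented for this sketch.
import struct

DESC_FMT = "<IHH"
RAW = struct.pack(DESC_FMT, 0x20000000, 256, 0x0001) * 2  # two descriptors

def descriptors(mem):
    """Decode raw memory into (address, length, flags) records."""
    size = struct.calcsize(DESC_FMT)
    for off in range(0, len(mem), size):
        addr, length, flags = struct.unpack_from(DESC_FMT, mem, off)
        yield {"addr": hex(addr), "len": length, "flags": flags}

for d in descriptors(RAW):
    print(d)  # e.g. a buffer descriptor, not 8 anonymous bytes
```

The same 16 bytes that read as noise in a hex dump become two recognizable buffer descriptors, which is the kind of system-level view Gogolewski is arguing for.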
Denny Scharf, strategic marketing manager for LSI Logic's broadband networks division, talked about the trade-offs involved in choosing between a network processor and an ASIC implementation. "For high-performance and feature-rich applications, the hardware ASIC approach still has the advantage," he said.