One wonders where the breakpoint is between using general-purpose processors vs. packet processors, given Intel's talk about handling 40 Gbps at IDF last month. It's certainly a lot less investment to stick a pair of 40 Gbps NICs into an off-the-shelf machine than to design a network processor into a custom package. The network processor will be more energy efficient, and likely cheaper with volume, but at what volume? With network processors, one is forced to keep moving from vendor to vendor as old solutions are either discontinued or not updated.
I had a product based on the Motorola/Freescale C5. There were lots of plans for a successor, but the C5 didn't sell well, and after the Freescale spinoff the successor was killed. Even if it hadn't been, we weren't seeing enough volume to afford designing a new board and migrating. I have another, more recent design based on Cavium, but again, the volume isn't high enough to justify migrating to newer technology. General-purpose processors look better and better, as lots of people make them, with design cycles more closely matching new device availability.
The line between "general purpose" processors and "embedded" processors is blurring. The CPUs inside these new embedded processors are fairly high performance (at least in NetLogic's case - the Cavium CPUs have generally been lower performance but greater in number). The embedded folks have more offloads and accelerators built in for specific applications, which does improve performance. But this usually comes at some cost in customizing code, which then becomes sticky (great for the embedded guys).
As designs move from one ISA to another, embedded customers may start demanding higher-performance CPUs rather than getting tied to a specific accelerator (onloading vs. offloading). It is much easier to recompile code, e.g. from MIPS to ARM, than to port your security driver from one custom MIPS implementation to another.