The creation of a focus on SoCs (systems on chips) here on EE Times should come as no surprise to anyone, and yet, as an industry, we have been slow to react to the implications of the SoC revolution.
A Wilson Research study conducted in 2012 for EDA company Mentor Graphics, and referenced on Mentor's website, shows that 79 percent of all non-FPGA designs now contain one or more processors, and even 56 percent of FPGA designs have a processor in them. What's more, over 50 percent of those non-FPGA designs are multiprocessor chips, containing two or more software-programmable processors. The study also showed a significant rise in the number of multiprocessor designs.
For example, in 2007 almost 40 percent of designs contained just one processor and about 12 percent contained two. In the latest study, only 22 percent were uniprocessor designs, while 28 percent contained two processors.
As an interesting aside, it appears that three has always been an unpopular number of processors. It should also be taken into account that the survey is skewed toward the more advanced companies, with the average process geometry adopted being 45 nm. If we were to consider all designs, the average would probably be a node or two behind that, and processor adoption might be closer to the 2007 numbers.
Tools for processors
So, with most advanced designs containing multiple processors, you would think that we would find plenty of design tools that focus on the processor, or on the interaction of the processor with the rest of the system. But this is far from the truth. Consider for a moment the principal verification strategy in use today. It is based on RTL simulation, fed with stimulus from a constrained-random pattern generator. This is fueled by the SystemVerilog language and encapsulated in methodologies, the latest of which is UVM. But where is the support for the processor in there? There is none; in fact, all processors have to be removed from the design before this strategy will work. Mentor is trying to work around this to some extent with the company's inFact tool, but many of the problems remain.
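To make the point concrete, here is a minimal sketch of what constrained-random stimulus amounts to. It is written in Python rather than SystemVerilog, and the transaction fields, address ranges, and constraints are entirely hypothetical; the real flow would use randomized SystemVerilog class objects inside UVM sequences.

```python
import random
from dataclasses import dataclass

@dataclass
class BusWrite:
    """One write transaction driven onto the design's bus in place of a processor."""
    addr: int
    data: int

def constrained_random_write(base: int = 0x4000_0000, size: int = 0x1000) -> BusWrite:
    # Constraints: keep addresses word-aligned and inside the block's register window,
    # much as a SystemVerilog 'rand' class would constrain its members.
    addr = base + random.randrange(0, size, 4)
    data = random.getrandbits(32)
    return BusWrite(addr, data)

# A UVM-style sequence would stream transactions like these into the RTL simulation.
stimulus = [constrained_random_write() for _ in range(1000)]
```

Nothing in this stimulus path models, or even acknowledges, the processor that will actually be driving that bus in silicon; the testbench simply takes its place.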
Alternatively, could this be an acknowledgement that the processor is a block of IP that can be trusted and does not need to be re-verified in the context of the rest of the system? If that were true, it might be encouraging, in that it would mean we have started to take a layered approach to verification, in which things do not need to be verified multiple times. The fact that I have not seen this happen for other blocks leads me to believe it is not the case.
Mentor was probably the most visionary of the EDA companies in this respect, having started to buy its way into the embedded processor and software markets many years ago. But to this day, most of its tools remain highly segregated. If the EDA companies are not developing tools to address this, does it imply that such tools are not required? We have certainly seen system-level tools that attempt to measure the performance or power of a potential architecture, and the whole area of prototyping to support early software development is growing -- hence another area of focus in the EE Times makeover.
So, is the processor anything more than a fancy piece of control logic that happens to run software? This would seem to be the natural implication of the lack of processor-centric tools. Are we seeing the emergence of a layered approach to verification? Do we even need tools that take the processors into account?