I have commented a few times on the blog series by Harry Foster of Mentor Graphics. These blogs contain a lot of really valuable information about trends in functional verification, and the studies he discusses are very useful in helping me track how the industry is evolving. Answering questions such as how much time engineers spend on verification and which languages are being adopted most helps ensure that engineers get the right tools for the future.
However, Foster's blog this week immediately raised my eyebrows -- not because of what it said, but because of what it didn't say. This chart from the 2012 Wilson Research Group study shows adoption trends from 2007 and 2012. One would think that technologies such as code coverage, functional coverage, and assertions were being adopted rapidly. Oops. That's not quite the case.
In the other blogs in this series, Foster had been comparing results from the 2012 study with results from 2010. To me, the switch to a comparison with 2007 results seemed highly suspicious. Unluckily for Foster, the Internet is persistent. This graph shows results from a 2010 study.
Let me turn those two charts into one.
Code coverage dropped 2 percent in two years. The use of assertions dropped 6 percent, as did the use of functional coverage. Mentor claimed the overall confidence level of the 2010 study was 95 percent with a margin of error of 4.1 percent. For the 2012 study, the overall confidence level was 95 percent with a margin of error of 4.05 percent, so the differences between 2010 and 2012 are basically in the noise.
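As a quick sanity check on the noise argument, the reported changes can be set against the combined margin of error of the two surveys. This is a minimal sketch, not anything from the studies themselves: the root-sum-square combination assumes the two samples are independent, and the percentages are simply the ones quoted above.

```python
# Back-of-the-envelope comparison of the reported 2010 -> 2012 changes
# against the surveys' stated margins of error. All percentages come from
# the article; the root-sum-square combination is a standard approximation
# for the margin of error of a difference between two independent estimates.
import math

MARGIN_2010 = 4.1    # percentage points, at 95 percent confidence (as reported)
MARGIN_2012 = 4.05   # percentage points, at 95 percent confidence (as reported)

# Approximate margin of error for the difference of two independent estimates.
combined_margin = math.sqrt(MARGIN_2010**2 + MARGIN_2012**2)

# Year-over-year changes quoted in the article, in percentage points.
changes = {
    "code coverage": -2.0,
    "assertions": -6.0,
    "functional coverage": -6.0,
}

for technology, delta in changes.items():
    print(f"{technology}: {delta:+.0f} points vs a combined margin of "
          f"+/-{combined_margin:.1f} points")
```

The point of the sketch is only that changes of this size are small relative to the sampling uncertainty of the two studies, which is what makes reading them as adoption trends so shaky.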
Rather than treating the small but declining percentages as signs of maturation, and of potential saturation of the market for constrained-random test-pattern generation, the blogs attempted to paint a rosy picture of adoption.
This flattening of adoption is an important trend, and it matches what I hear from real engineers. They talk about the increased difficulty of creating functional coverage points, the inability to use constrained random for SoC-level verification, and frustration with assertions. These numbers do not indicate mass migration; they indicate that all is not well, and that EDA vendors need to be looking in other directions for their next generation of tools.
Is your use of any of these technologies changing? Perhaps you know of other reasons why these numbers have become flat.