Hi Brian, As Mark Twain said: There are three kinds of lies: lies, damned lies, and statistics. ;-) Concerning the Wilson Research Group 2010 results, one thing we learned after the study, while working on the new study, is that the make-up of the survey pool was heavily biased towards more advanced, or leading-edge, participants. This was not the case for the 2007 FarWest Research study. Effort went into ensuring that the make-up of the 2012 study was more balanced--and not biased towards a group of mature or less mature participants. This was done by increasing the number of lists the study pool was drawn from across multiple market segments, which was more in line with the approach taken for the 2007 FarWest Research study.
With that said, I am always cautious about referencing absolute numbers within a study. It is the "general" trends that are important.
I do agree with your observation that there has been a slowing down in adopting some advanced functional verification techniques within the past few years. I believe that the industry is coming to terms with what to do with all these techniques, that is, how to effectively integrate them into a flow and how to make sense of the results. I have seen a number of projects with engineers who claim that they are using advanced techniques--such as functional coverage. Yet when I press for details, I learn that they are not using these techniques effectively. Some projects have even abandoned techniques that they adopted on prior projects because they were deemed ineffective. Unfortunately, we do not have a good way of quantifying results in terms of effective adoption of an advanced technique versus ad hoc adoption.
With all due respect, Harry, you used the results from 2010 everywhere else with no mention of the results having been skewed. Information such as this should have been presented in the introduction, where you talk about the sampling and error rates.
Hi Brian, There is no intent to mislead. My confidence level between 2007 and 2012 is high. As I previously stated, the results for 2010 are optimistic based on sampling bias we uncovered after the fact. For example, the growth in adoption of certain languages was actually greater between 2010 and 2012 than what I revealed (due to this bias), which means that, as reported, the results are conservative. I have planned in my series an Epilogue to point out a number of potential problems where the data could be misinterpreted and potentially misused (not just as a result of the optimistic results from 2010). I welcome the discussions and questions the data raises.
Sorry Brian, Statistics does work. It is sad that this chart doesn't provide the information you were looking for, but that is not the fault of statistics; that is the fault of the person who created the chart.
"People first decide what they want to say", or maybe people don't want to come across as sounding like they are behind the times, so they are more inclined to say "yes, we're using assertions" or whatever the latest thing is in verification, even if perhaps they did that on a project a couple years ago and found that it didn't really buy them much and was a big learning curve, man-hour investment, etc. If, for example, they decided for the current project to just run old fashioned simulations of all the functional modes and not worry about how the thing responds to random stimuli, they might or might not be willing to admit that in a survey about how much they are using the latest approaches to verification.
What else could you ever expect to hear from the representative of a company that sells some variety of software? Really? If they ever said that the demand for their product was flattening out, or that the number of users was not getting bigger every day, they would be out the door, or at least bereft of the year's bonus. Smart marketing dictates that one always claim that everybody else loves the product and that the rest are in line to buy it. Any other assertion acknowledges that there may be some reason not to rush out and spend the money.
So it is not that anybody was deliberately misleading us with falsehoods, just that the truth was being stuffed into a nicer wrapper than it should have had, and some folks were just not presented with ALL of the truth. But that is how marketing works, so we all need to understand that in advance and adjust our interpretation of the claims appropriately.