Why do semiconductor organizations benchmark product development productivity? Two reasons. The first is obvious: to determine how their product development competitiveness compares against the rest of the industry. R&D prowess is a matter of long-term survival. Second, measuring productivity enables reliable forecasting of engineering headcount when planning new IC projects, and accurate forecasts translate into on-time delivery and high schedule predictability. It's a matter of competitive advantage.
Creating consistently reliable project plans requires a solid grasp of the R&D organization's development productivity. That's because productivity dictates how many engineers a project needs to finish on time. Too few engineers and the project slips its schedule, a common occurrence. Organizations that measure their productivity can calculate how many engineers each project actually needs.
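To make that concrete, here's a back-of-the-envelope example (the numbers are hypothetical, purely for illustration; the article doesn't publish benchmark figures):

required engineers = design complexity / (productivity x schedule)
                   = 2,000,000 normalized gates / (5,000 gates per engineer-week x 50 weeks)
                   = 8 engineers

An organization that has actually measured its productivity can plug in its own numbers; one that hasn't is guessing at the denominator.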
Quantifying productivity also enables organizations to create better product roadmaps. R&D managers calculate how many chips (across the target design complexity range) the organization can develop annually.
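Continuing the hypothetical numbers above: an organization with 80 engineers and a measured productivity of 5,000 gates per engineer-week can deliver roughly 80 x 5,000 x 50 = 20,000,000 gates of new design per year, i.e., about ten 2M-gate chips. That's the arithmetic behind a credible annual roadmap.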
Organizations whose productivity is higher than the industry average quickly find they can allocate fewer engineers per project than competitors. That yields a big competitive advantage: lower headcount translates to lower development cost. High productivity also frees up engineering resources that otherwise wouldn't be available.
They use those "additional" engineers in a number of ways. First, the organization typically develops more products than competitors. Second, it often develops products with higher functionality and performance, and therefore more value-add, which brings higher profit margins. Third, it frequently "overstaffs" critical projects to reduce development time and time-to-market. Larger project teams almost always exhibit higher development throughput, that is, output per unit of time.
Lastly, they measure productivity to get greater visibility into the bottlenecks and inefficiencies in their development processes. That enables them to isolate and fix problems more quickly than competitors. As the saying goes, "you can't fix what you don't measure."
Organizations failing to rigorously benchmark their R&D performance often fool themselves into thinking their productivity is improving. It might be improving, but the question is whether it's outpacing the competition.
Semiconductor organizations ignoring productivity measurement and benchmarking do so at their peril. Most will eventually cease to exist because they consistently miss schedules and have fewer new products coming to market. Know any?
Using transistors to measure design complexity is wholly inaccurate -- I couldn't agree more. I wrote an article on how my firm measures it, a production-proven approach applied to several thousand IC projects. Here's the URL
Productivity is indeed a function of skillset, cross-functional teams, and other factors (e.g., tools, methodology, and management).
Not sure what you mean by hard rules.
Benchmarking is NOT subjective -- my firm has been doing it for over 15 years on over 3,000 IC and embedded SW projects.
Wrong. Productivity will consistently decline as team size increases. I think what you meant to say is that increasing manpower by 10X will not result in a 10X increase in throughput. That's absolutely correct. The increase depends on the quality of the management of the team (and of the organization and company). The name of the game is to minimize the decline in productivity as team size increases.
The Mythical Man-Month is alive and well -- to a point. Adding people does not increase throughput linearly, but it does increase it. The issue is determining the point of diminishing returns. But most R&D organizations don't get bogged down in such questions, because a lack of resources doesn't give them the luxury of worrying about adding too many engineers to a project.
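To illustrate the shape of that curve, here's a toy model (my own illustration, not the author's benchmark data): let each of N engineers contribute productivity p, discounted by a pairwise communication tax c per teammate:

throughput(N) = N x p x (1 - c x (N - 1))

With c = 0.02, a 5-person team delivers 4.6p, a 10-person team 8.2p, and a 20-person team 12.4p. Throughput keeps climbing even as per-engineer productivity drops from 0.92p to 0.62p, and in this toy model the curve doesn't peak until around N = 25. That's exactly the pattern described above: adding people increases throughput, sublinearly, up to a point of diminishing returns.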
1. It is irrefutable that larger project teams have higher throughput than smaller teams -- but there is certainly a law of diminishing returns. I have observed this firsthand on hundreds of projects.
2. Adding people in the middle of a project, as opposed to at the outset, may or may not increase throughput. In most cases it will, but the impact could easily be minimal. It really depends on the particular project and situation.
3. Engineering resources are not fungible. Not sure why you think the article implies they are.
4. Across the industry, companies are increasing team sizes because the rise in design complexity is outstripping productivity improvement. Again, an irrefutable fact -- based on data.
Be careful what you measure and how the measurement incentivizes or disincentivizes your engineers.
I remember many years ago a project in which management decided lots of productivity metrics should be collected. For IC designers, the metric was transistors per hour, much as for software engineers it was lines of code per hour. Heaven help you if you were doing RFIC design, where your entire design is a relatively small number of transistors!
But for us digital guys, especially those of us with a lot of DSP/datapath content, the metric was hilarious.
Watch me code up tens of thousands of transistors in seconds:
wire [31:0] x, y;
reg  [63:0] z;      // must be a reg to be assigned inside an always block
always @ (x or y)
  z = x * y;        // infers a full 32x32 hardware multiplier
Voila! A 32x32 bit multiplier in about 15 seconds. Ok, I haven't synthesized it yet, but synthesis will complete in less time than it takes me to go get a cup of coffee.
Since DSP designs require many multiply and add operations, I can blow up the die size in no time by dropping down hardware multipliers every time an algorithm requires a multiplication.
Yeah, but we all know that's not the optimum way to architect an algorithm-on-silicon. But hey, look at my transistors per hour metric!
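For contrast, here's a minimal sketch of the resource-shared alternative (my illustration, not code from that project; the module and signal names are invented): one 32x32 multiplier time-multiplexed across a 4-term dot product, trading clock cycles for area:

module shared_mac (
  input  wire        clk,
  input  wire        rst_n,
  input  wire        start,        // pulse to begin a 4-term dot product
  input  wire [31:0] a0, a1, a2, a3,
  input  wire [31:0] b0, b1, b2, b3,
  output reg  [65:0] acc,          // 64-bit products plus headroom for 4 sums
  output reg         done
);
  reg [1:0]  idx;
  reg [31:0] a_mux, b_mux;
  reg        busy;

  // Operand select: a single multiplier serves all four terms.
  always @(*) begin
    case (idx)
      2'd0: begin a_mux = a0; b_mux = b0; end
      2'd1: begin a_mux = a1; b_mux = b1; end
      2'd2: begin a_mux = a2; b_mux = b2; end
      2'd3: begin a_mux = a3; b_mux = b3; end
    endcase
  end

  // Sequencer: one shared 32x32 multiply per clock cycle.
  always @(posedge clk or negedge rst_n) begin
    if (!rst_n) begin
      idx <= 2'd0; acc <= 66'd0; busy <= 1'b0; done <= 1'b0;
    end else if (start && !busy) begin
      idx <= 2'd0; acc <= 66'd0; busy <= 1'b1; done <= 1'b0;
    end else if (busy) begin
      acc <= acc + a_mux * b_mux;
      idx <= idx + 2'd1;
      if (idx == 2'd3) begin
        busy <= 1'b0;
        done <= 1'b1;
      end
    end
  end
endmodule

Four multiplies, one multiplier, four clock cycles. My transistors-per-hour metric craters, but the die gets a lot smaller.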
In the words of one of my favorite Dilbert cartoons from years ago, in which financial incentives were tied to productivity metrics: "I'm going to code me a new minivan!"
Sometimes productivity comes from more complex factors such as skillset, cohesion of cross-functional teams, etc. There are no hard rules for this, and of course benchmarking or measuring is very subjective, right?