Discussions about R&D return-on-investment (RoI) among semiconductor industry executives often turn to engineering productivity. They're often surprised when I assert that productivity isn't that important—at least as far as R&D performance metrics are concerned. A far more important metric is engineering throughput.
Throughput measures rate of output and therefore quantifies how fast you develop products. Productivity measures how efficiently you develop them. Throughput is about cycle time. Productivity is about cost. The more productive your teams, the fewer engineers needed to develop products—hence the lower the development cost. But what's more important, cost or time-to-market?
Throughput's dimensions are "output per week." For example, output quantified in "Design Units" yields Design Units per Week. Unlike productivity, throughput ignores the amount of manpower the team expends to create that output. It simply quantifies the output and divides it by the project's duration (concept to release-to-production), measuring output per unit of time.
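The distinction can be made concrete with a small sketch. The figures and the two hypothetical teams below are invented for illustration; "Design Units" is used as in the article, as a generic measure of output.

```python
# Minimal sketch of throughput vs. productivity (hypothetical numbers).

def throughput(design_units: float, duration_weeks: float) -> float:
    """Output per week, concept to release-to-production. Team size is ignored."""
    return design_units / duration_weeks

def productivity(design_units: float, person_weeks: float) -> float:
    """Output per unit of effort expended. Team size matters."""
    return design_units / person_weeks

# Two hypothetical teams delivering the same 120-Design-Unit project:
#   Team A: 10 engineers for 20 weeks -> 200 person-weeks of effort
#   Team B: 20 engineers for 12 weeks -> 240 person-weeks of effort
print(throughput(120, 20), productivity(120, 200))  # Team A: 6.0 DU/week, 0.6 DU/person-week
print(throughput(120, 12), productivity(120, 240))  # Team B: 10.0 DU/week, 0.5 DU/person-week
```

Team B is the *less* productive team (more effort per Design Unit) yet has the *higher* throughput, i.e. the faster time-to-market: exactly the trade-off the paragraph describes.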
The distinction between throughput and productivity is important because time-to-market usually trumps all else. I'm not suggesting efficient development isn't important, but throughput, and therefore time-to-market, is what usually generates the most revenue and profits.
R&D organizations can increase throughput on their projects in four ways: (1) by raising the average productivity among team members, which means executing tasks more efficiently and therefore expending less effort; (2) by increasing the number of hours in the standard work-week—not particularly popular among engineers, but one that often finds favor with management; (3) by eliminating low value-add activities that consume project resources, thereby increasing engineering resource utilization; and (4) by increasing the project's staffing level.
The first two—increasing productivity and encouraging longer hours—rarely yield competitive advantage. That's because nearly all R&D organizations pursue them (to survive), enabling them only to keep pace with the industry norm.
The latter two—increasing utilization and staffing—are the opportunities for differentiation and competitive advantage. Yet most companies are reluctant to attack these insidious root causes of low throughput. Eliminating non-value-added tasks can be contentious and politically unpopular—nobody likes to hear that their job adds little value. Likewise, increasing staffing levels means taking on fewer projects, which can be equally contentious and riddled with politics.
Ronald Collett is president and CEO of Numetrics Management Systems, Inc. www.numetrics.com.
You have hit on what I believe is one of the most insidious problems in the semiconductor industry -- the conflict of interest facing engineers and project managers. They accept projects that they know will not finish anywhere close to the target schedule, because the staffing level allocated to the project is inadequate given the design's complexity. What goes a long way toward solving the problem are fact-based estimates of resource requirements. When the organization has reliable estimates, far better decisions get made.
Thanks for the comment.
"a lot resources are wasted on projects that should never have been started in the first place"
Indeed, but shall a project manager refuse a project he thinks would be a waste of the company's money, or shall he accept such a challenge?
The choice is easy: the first attitude would be detrimental to his career, while the second would be beneficial, unless it brings the company to bankruptcy.
Generating consistently reliable estimates of resources and schedule requires an empirically calibrated model whose inputs include (1) a description of the technical attributes of the design, (2) the likely performance range of the development team (best case to worst case), and (3) any constraints on the project (e.g., schedule or staffing). The key to creating a good model is to have a wealth of project data. In the IC space, our models leverage an industry database of 1,600 production IC projects comprising 30,000 circuit blocks. Its predictive power is strong: the coefficient of determination (R-squared) is 0.93 when we plot Estimated Project Effort vs. Actual Project Effort. Top semiconductor companies across the industry can vouch for its accuracy and efficacy.
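For readers unfamiliar with the metric, R-squared compares a model's prediction error against the variance of the actual data. The sketch below shows the standard calculation; the effort figures are invented for illustration (the 0.93 figure above comes from Numetrics' proprietary database, which is not reproduced here).

```python
# Hedged sketch: coefficient of determination (R-squared) for
# estimated vs. actual project effort. Data points are hypothetical.

def r_squared(actual: list[float], estimated: list[float]) -> float:
    """1 - (residual sum of squares / total sum of squares)."""
    mean_actual = sum(actual) / len(actual)
    ss_res = sum((a - e) ** 2 for a, e in zip(actual, estimated))
    ss_tot = sum((a - mean_actual) ** 2 for a in actual)
    return 1 - ss_res / ss_tot

# Hypothetical effort figures, in person-weeks:
actual = [100.0, 250.0, 400.0, 620.0]
estimated = [110.0, 240.0, 430.0, 600.0]
print(round(r_squared(actual, estimated), 3))
```

An R-squared near 1.0 means the estimates track actual effort closely; 0.93 across 1,600 projects would indeed indicate strong predictive power.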
Thanks for the comment.
Ron, we have probably all experienced examples of Brooks' Law -- which applies just as well to hardware projects as to software projects.
The difficulty is how to accurately estimate the correct number of engineers and specific engineering skillsets to allocate to the project and when to add them.
Increasingly, project managers today either do not come up through engineering, or else their hands-on engineering development days were so long ago they are no longer relevant to today's NPI processes and methodologies.
It seems to be an extraordinarily difficult task to make the correct assessment. The consequence of underestimating the resourcing needs is a missed schedule, which might mean a complete waste of the investment, while the consequence of overestimating the resources is excessive R&D cost.
These days, almost any manager will take the cost-conservative approach and assume that engineering will find a way to meet the schedule and make up for any project management/staffing shortcomings. Translation: They will also fall back to your Option 2 -- expect engineering to work longer, harder and smarter.
I agree. What I think we're talking about is the primary opportunity for engineers and scientists based in the U.S.: to create products or offer services that are heavily differentiated from those that can be easily replicated in low-cost, offshore regions of the world. How can it be otherwise? These are the "new technologies" to which you're referring. They comprise higher value-add, and therefore they command a price premium, which justifies the higher labor cost. In short, the U.S. engineering community must be able to differentiate itself from the engineering community in low-cost regions.
Also, a small clarification to your point about a "high productivity house": I'm sure you'll agree that offshoring does not necessarily equate to high productivity. In fact, it's probably the opposite. Instead, the promise of offshoring is higher development throughput, which is the amount of output created in a given time period. The availability of low-cost (offshore) labor means more bodies can be thrown at the project, which usually (but not always) results in higher throughput.
Thanks for the comment.
As a member of a small design house that specializes in doing what others couldn't (just so my bias is clear): in 30+ years of engineering experience, sometimes as observer and sometimes as participant, I have come to believe that incremental improvements are merely necessary for survival and maintenance of the status quo, while pioneering new technologies is what keeps jobs and income where the best engineers are. If all I do as an engineer is fine-tune a product BOM for cost, you might as well find a lower-cost engineer in China to do the same job (as long as you don't mind the learning curve as he substitutes parts that are too "cheap"). On the other hand, if you need a solution that has evaded, in some cases, generations of engineers—one enabled by new technology and/or a lifetime of experience of a carefully observing, practicing engineer—sending the product to a "high productivity" house will fail every time.
Just as the new millennium started, I was party to an innovative R&D project. After discovering that the product under development could have a revolutionary effect on the market, my company spent all its effort obtaining worldwide patents for the technology, but failed to provide enough R&D resources to bring the product to the point of demonstrable commercial viability. The company got the patents but could not take advantage of them. Millions of dollars were wasted in the process. The industrial giants who had shown interest in our technology could not get sample prototypes in time and hence turned their backs on us. Eventually the company went bankrupt, with the patent power and the technology remaining only on paper!
With SoC development costs routinely ranging from $50M to $100M, it's no surprise that few companies want to take on additional risk. Moreover, the consequence of such skyrocketing costs is that SoC product revenue must be much higher than in the past to justify the investment. Sales must reach $250M to $500M, because the R&D RoI must be at least 5X. Achieving such revenue targets means the total market size must be at least $750M to $1.5B, because one must assume a maximum of only thirty to forty percent market share. There aren't that many markets of that size. Bottom line: participating in the SoC business leaves little if any margin for error. It requires picking the right market opportunities and hitting development schedule targets.
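The arithmetic above can be checked back-of-the-envelope. The dollar figures come from the paragraph itself; the function names and the roughly one-third share used for the market-size check are my own framing of its assumptions.

```python
# Back-of-the-envelope check of the SoC investment arithmetic.

def required_revenue(dev_cost: float, min_roi: float = 5.0) -> float:
    """Revenue needed to justify the R&D investment at a minimum RoI multiple."""
    return dev_cost * min_roi

def required_market_size(revenue: float, max_share: float) -> float:
    """Total addressable market needed, given a ceiling on achievable share."""
    return revenue / max_share

low_rev = required_revenue(50e6)    # $50M dev cost  -> $250M revenue at 5X RoI
high_rev = required_revenue(100e6)  # $100M dev cost -> $500M revenue at 5X RoI

# At roughly a one-third share ceiling (the article assumes 30-40%):
print(required_market_size(low_rev, 1 / 3))   # about $750M
print(required_market_size(high_rev, 1 / 3))  # about $1.5B
```

The result reproduces the paragraph's $750M-to-$1.5B market-size range, which is what makes the "little margin for error" conclusion follow.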
Not sure that TI, Qualcomm, National and Lucent would agree with you that they merely follow or just refine existing product lines -- I'll let them weigh in on that if they care to. Notwithstanding that debate, your overall point is well taken: pioneering new technologies, products and applications takes vision. It also takes money. The two go hand in hand.
On the money front, I believe that one of the core problems is that a lot of resources are wasted on projects that should never have been started in the first place. This is money down the drain. Resources get spread too thin when companies take on too many projects. One of the primary reasons they take on too many is that they believe they can finish them with fewer engineers than is actually possible. In sum, there is a misalignment between the design's complexity and the staffing level. I observe this first-hand every day of the week.
Thanks for your comment.