What's the optimal team size for a given IC design project? It's a question I hear often from engineering managers and senior executives. What they're actually asking is whether they're overstaffing projects and therefore wasting resources. Implicitly, they're also asking "what's the smallest number of engineers I can put on a given project and still finish on time?" These are important questions that directly impact R&D ROI.
Projects demand a threshold number of engineers to meet schedule targets. Yet there's a point at which adding resources yields little, if any, additional development throughput (the exception is when a project desperately needs a particular kind of expertise or specialist). Although most R&D organizations lack the infrastructure to reliably quantify the number of engineers a project needs (which is why many miss schedule), managers instinctively know there is a point of diminishing returns. Additional staffing increases overhead: communication, coordination and consensus-building. That overhead bleeds development time, lowering average productivity across the team. Each additional engineer reduces average productivity; total throughput still rises, but the question is by how much?
How should engineering managers determine the point of diminishing returns, and therefore the optimal team size, for a given chip design project? The answer lies in knowing their organization's relationship between team size and productivity. Each R&D group has its own 'productivity vs. team size' curve (visualize an x-y plot in which the x-axis is team size and the y-axis is development productivity). As team size grows, productivity declines. The steeper the decline, the smaller the boost in throughput from each resource added.
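To make that curve concrete, here is a minimal sketch of the idea, assuming a hypothetical linear-decay productivity model; the base rate, overhead constant and units are invented for illustration and are not calibrated to any real organization:

```python
# Hypothetical model: per-engineer productivity declines as team size grows,
# because communication and coordination overhead consume development time.
# The constants below are illustrative assumptions, not calibrated data.

def productivity_per_engineer(n, base=100.0, overhead=0.04):
    """Normalized output per engineer (say, units of work per day) in a team of n.

    `base` is a lone engineer's output; `overhead` is the fraction of
    capacity lost per additional teammate (the steepness of the curve).
    """
    return base * max(0.0, 1.0 - overhead * (n - 1))

def team_throughput(n):
    """Total team output per day: team size times average productivity."""
    return n * productivity_per_engineer(n)

for n in range(1, 16):
    marginal = team_throughput(n) - team_throughput(n - 1)
    print(f"team={n:2d}  per-engineer={productivity_per_engineer(n):6.1f}"
          f"  throughput={team_throughput(n):7.1f}  marginal={marginal:+7.1f}")
```

With these made-up constants, the marginal gain shrinks with each hire and turns negative past roughly a dozen engineers. A flatter curve, i.e., a smaller overhead term, pushes that point much further out.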
Best-in-class semiconductor R&D organizations suffer minimal degradation in productivity as team size grows—their "curves" are almost flat across a wide range of team sizes. That's one of the reasons they're best-in-class.
When an R&D organization calibrates itself in terms of productivity versus team size, it gains a huge competitive advantage. Knowing the optimal number of engineers to put on each project ensures projects are neither overstaffed nor (grossly) understaffed.
These top-tier groups often staff projects based on the assumption that their teams will achieve best-in-class productivity, which means projects are staffed with the absolute minimum number of resources. To hit schedule targets, the development teams' productivity must be best-in-class.
In sum, adding staff increases throughput while average productivity declines. The fundamental challenge is understanding that tradeoff: how much does productivity fall, and throughput rise, with each additional resource? The optimal team size for a project depends on the particular R&D organization, the target design's complexity and the project's schedule constraint.
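As a closing illustration of that tradeoff, the sketch below reuses the same hypothetical productivity model to find the smallest team that still meets a schedule target; the work estimate, deadline and model constants are all invented numbers, not a real calibration:

```python
# Hypothetical sizing exercise: given a design's estimated work content and a
# schedule target, find the smallest team whose throughput meets the date.
# The productivity model and every number here are illustrative assumptions.

def team_throughput(n, base=100.0, overhead=0.04):
    """Team output per day: team size times per-engineer productivity, where
    each added teammate costs `overhead` of every engineer's capacity."""
    return n * base * max(0.0, 1.0 - overhead * (n - 1))

def min_team_for_schedule(total_work, deadline_days, max_team=50):
    """Return (team size, days needed) for the smallest team that finishes
    within the deadline, or None if no size up to `max_team` can."""
    for n in range(1, max_team + 1):
        rate = team_throughput(n)
        if rate > 0 and total_work / rate <= deadline_days:
            return n, total_work / rate
    return None  # past the curve's peak, more staff can't buy the date back

result = min_team_for_schedule(total_work=120_000, deadline_days=180)
if result:
    n, days = result
    print(f"Minimum viable team: {n} engineers, finishing in ~{days:.0f} days")
else:
    print("No team size meets this schedule; scope or date must change.")
```

With these invented inputs, eleven engineers miss the date by a couple of days while twelve make it, so twelve is the floor. A real exercise would replace the toy model with the organization's own measured curve.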