You might have thought that processor design was dying out, but think again. The architecture wars rage on between the majors, and plenty of smaller companies still find it worthwhile to develop proprietary architectures (or enhance existing ones) for niche markets. Although fundamental concepts in processor design evolve slowly, supporting technologies are advancing with great enthusiasm.
The growth of these supporting technologies is due, at least in part, to the sheer complexity and dynamism of industrial projects. Powerful algorithms and procedures that deal with all of the common obstacles are tried, tested and generally available. Even so, verification crises such as delayed signoff, or even dead silicon, remain common.
• 61% of new processor designs require a re-spin [IC Insights, 2009]
• 48% of total processor development costs are verification-related [IC Economics, 2007]
• 55% of all processor designs are delivered late [IC Insights, 2009]
This situation means that processor verification is still a major activity in the semiconductor industry, and that reliable, predictable verification outcomes remain important, if elusive, goals. To reach these goals, the industry must make the inevitable shift toward embracing verification IP. Without prejudice to existing verification infrastructure, specialized processor verification IP can free engineers from historical development and maintenance commitments. The time and energy thus liberated allow a renewed focus on verification quality and turnaround times.
Simulation-based processor verification essentially consists of comparing a reference model (including an Instruction Set Simulator, or ISS) against the processor RTL using a common set of tests (Figure 1). The quality of these tests – their coverage of all interesting behavior in the minimum amount of time – is essential to project success. In addition, multiple types of coverage are measured, such as RTL code coverage and functional coverage of both the architecture and the implementation.
Figure 1: Processor verification testbench
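As a toy illustration of this comparison, here is a minimal lock-step co-simulation sketch in Python. The `TinyIss` and `BuggyDut` classes and their add-only instruction set are invented for illustration; they stand in for a real ISS and the RTL simulation, which would communicate through a simulator interface rather than plain Python objects.

```python
class TinyIss:
    """Toy reference model: eight 32-bit registers and an ADD instruction."""
    def __init__(self):
        self.regs = [0] * 8

    def step(self, insn):
        op, rd, rs1, rs2 = insn
        if op == "add":
            self.regs[rd] = (self.regs[rs1] + self.regs[rs2]) & 0xFFFFFFFF


class BuggyDut(TinyIss):
    """Stand-in for the RTL: deliberately corrupts writes to register 7."""
    def step(self, insn):
        super().step(insn)
        if insn[1] == 7:          # injected bug on destination register 7
            self.regs[7] ^= 1


def cosimulate(ref, dut, program):
    """Run both models in lock step; return the index of the first
    instruction after which architectural state diverges, or None."""
    for i, insn in enumerate(program):
        ref.step(insn)
        dut.step(insn)
        if ref.regs != dut.regs:
            return i
    return None
```

The checker compares architectural state after every instruction, so a mismatch is reported at the instruction that caused it rather than thousands of cycles later.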
In spite of the structural simplicity of the processor testbench, its complexity often exceeds that of the design under test. The two major contributors to this are the ISS and the random test generators. The complexity of the ISS is a direct reflection of the complexity of the processor architecture. A random test generator, on the other hand, must scale not only with the processor architecture, but also with its implementation.
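To make the random-generation side concrete, here is a minimal sketch of a seeded constrained-random test generator in Python. The instruction tuple format, the operation list and the register-count parameter are illustrative assumptions, not any real tool's interface; a production generator would also weight instruction classes and bias operands toward corner cases.

```python
import random

def gen_random_tests(seed, count, nregs=8, ops=("add", "sub", "xor")):
    """Generate `count` random register-to-register instruction tuples.

    Seeding makes every run reproducible: a failure found in a nightly
    regression can be replayed from its seed alone.
    """
    rng = random.Random(seed)
    return [
        (rng.choice(ops),
         rng.randrange(nregs),   # destination register
         rng.randrange(nregs),   # first source register
         rng.randrange(nregs))   # second source register
        for _ in range(count)
    ]
```

Keeping the generator seeded and self-contained is what lets it scale with the architecture: new instruction classes are added to `ops` (and their operand constraints), while implementation-specific biasing layers on top.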
Our approach with the commercial software we are developing is to deliver a reusable architectural verification environment just for SPARC-V9 designs. We don't concern ourselves with any implementation-specific features of the DUT, only the ISA. We treat the DUT as a black box that is simply required to obey the architectural manual. We stimulate it with SPARC instructions and observe architecturally visible state only. Of course, the processor project will require product-specific tools too for other areas of the verification space. But our software is intended to be used off-the-shelf to find architectural bugs in any DUT that aspires to be SPARC-V9 compliant.
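To give a flavor of ISA-level stimulus, the sketch below encodes a SPARC-V9 format-3 integer ADD per our reading of the architecture manual (op = 2 in bits 31:30, op3 = 0 for ADD, register fields as commented); the helper name and interface are our own, not part of any real tool.

```python
def encode_add(rd, rs1, rs2):
    """Encode `add %rs1, %rs2, %rd` as a SPARC-V9 format-3 word:
    op (bits 31:30) = 2, rd (29:25), op3 (24:19) = 0 for ADD,
    rs1 (18:14), i (13) = 0 to select the register-register form,
    rs2 (4:0)."""
    assert all(0 <= r < 32 for r in (rd, rs1, rs2))
    return (2 << 30) | (rd << 25) | (0 << 19) | (rs1 << 14) | (0 << 13) | rs2

# For example, `add %g1, %g2, %g3` encodes to 0x86004002.
```

A black-box environment of this kind only ever speaks in such architecturally defined encodings and observes architecturally visible state, which is what makes it reusable across any DUT claiming SPARC-V9 compliance.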
Alan M. Feldstein, Owner / Architectural Verification Engineer, Cosmic Horizon