CHICAGO - Beyond the flood of compilers, debuggers and controller cores that will roll out at the Embedded Systems Conference Spring next week, a fresh vision of the embedded future is beginning to take shape. A project still in the research stage at Hewlett-Packard Co. promises to catapult beyond the patchwork-quilt approach of systems-on-a-chip, and for the first time make it economically feasible to quickly roll custom processors, in low volumes, for specialized, deeply embedded applications.
Moreover, experts believe it could be the hardware foundation of the emerging "post-PC" world.
The pace-setting effort, which started out as a "think tank" project, has been three years in the making at HP's prestigious research labs in Cupertino, Calif., and Cambridge, Mass. It is led by the same engineers, Bob Rau on the West Coast and Josh Fisher on the East, who are the world's leading VLIW (very-long-instruction-word) architects. Both played a major role in helping Intel Corp. define the IA-64 Merced architecture.
This is a bold bid by Hewlett-Packard to take embedded computing to the next step: beyond off-the-shelf processors, beyond system-on-a-chip and into uncharted waters where custom processors are architected by an automated hardware/software codesign process. Such chips would be churned out in very low volumes for specific embedded applications.
A major impetus for the venture is a perceived need to supply the burgeoning demands of smart embedded devices such as Web processors, car navigation systems and other new-age consumer-electronics devices being suggested by ventures such as Sun Microsystems Inc.'s Jini distributed computing concept.
HP's custom chips would not be low-level microcontrollers with puny compute capabilities, but full-fledged explicitly parallel-instruction computing (EPIC) architectures. EPIC is the powerful VLIW-like platform that forms the foundation for Intel's upcoming Merced microprocessor. HP, which first tipped the idea of using EPIC in embedded late last year, is now beginning to detail a broader picture of its vision and technology.
"Embedded has a bad image: we think of it as microcontrollers and we don't get too excited," HP's Rau told an audience of engineers at a recent IEEE conference. "But embedded computing has very challenging requirements, much more than we're used to in the workstation and desktop spaces. So pretty much the entire complexity of computing is going to be in the embedded arena.
"Smart products will require complete embedded computer systems," Rau added.
To turn complex embedded requirements for low power consumption and high Mips into custom chips, HP has already developed a prototype way to design such circuits, which it is calling the PICO Architecture Synthesis System. (PICO stands for program in, chip out.)
The technical goal, according to an HP presentation obtained by EE Times, is the "automatic synthesis of application-specific parallel/VLIW ULSI microprocessors and their compilers for embedded computing."
In plainer English, HP wants to be able to design a custom system architecture in one week, and tape out an embedded processor in four weeks.
A similar approach is being taken by Tensilica Inc. (Santa Clara, Calif.), a startup with a 32-bit configurable processor core and architecture, with a tool kit for specialized applications.
Specifically, HP's process involves the automated hardware and software co-synthesis of the embedded chip. PICO will take as its starting point a basic EPIC processor and then figure out exactly how many registers the target custom embedded device needs, how those registers should be partitioned and which functions will be supported. In addition, the chip's instruction set can be optimized. For example, a processor aimed at multimedia applications would be tuned for different instruction-set needs than one supporting a cell phone.
L1 and L2 caches could also be configured differently for various applications.
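The co-synthesis process described above amounts to a search over candidate processor configurations. The following sketch illustrates that idea in miniature: enumerate combinations of register count, functional units and cache size, score each with a cost/performance model, and keep the cheapest design that meets a performance goal. Every name, number and cost weight here is an invented assumption for illustration; this is not HP's PICO tool.

```python
from itertools import product

def estimate(regs, alus, cache_kb):
    """Toy cost/performance model for one candidate EPIC configuration.

    The weights are arbitrary illustrative values: cost stands in for
    relative die area, perf for relative Mips on a target workload.
    """
    cost = regs * 0.5 + alus * 4.0 + cache_kb * 1.5
    perf = min(alus * 100, regs * 25) + cache_kb * 10
    return cost, perf

def pick_design(perf_goal):
    """Return (cost, regs, alus, cache_kb) for the lowest-cost
    configuration that meets perf_goal, or None if none does."""
    best = None
    for regs, alus, cache_kb in product((16, 32, 64), (1, 2, 4, 8), (4, 8, 16)):
        cost, perf = estimate(regs, alus, cache_kb)
        if perf >= perf_goal and (best is None or cost < best[0]):
            best = (cost, regs, alus, cache_kb)
    return best
```

A real synthesis system would explore far richer knobs (instruction encoding, register-file partitioning, op-code selection) and retarget the compiler in lockstep, but the shape of the search is the same.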
"By customizing, you can get tremendous benefits in terms of cost, power and performance," Rau said.
Cores and peripherals
If a custom approach is so clearly beneficial, why hasn't the embedded world already embraced it? The answer is that it has begun to, in a crude way, with system-on-chip.
Indeed, system-on-a-chip approaches have attempted to address the economic challenges of deeply embedded computing by enabling designers to build semi-custom chips with a selection of peripherals mated with an off-the-shelf core. However, there are still problems. Namely, system-on-chip requires a huge investment of intellectual property in a specific architecture. The semiconductor vendor must then sell enough core/peripheral configurations in enough quantities to recoup its engineering costs.
In market terms, system-on-chip is becoming the next big era in embedded. Companies like ARM and MIPS are embracing it on the hardware side, with cores that can be combined with on-chip peripherals to form customized systems.
Paradoxically, the key to taking such devices and making them work in embedded applications will be a software infrastructure that's still being formed. For example, most real-time operating system vendors are staking out strong positions as providers of tools to help developers grapple with core-based offerings and thus bring their deeply embedded apps to market in a timely fashion.
But system-on-a-chip core-peripheral combos can't always address the specific power and functional requirements of embedded designers, since by definition they are fitted with a laundry list of features that can't be removed.
"Such chips become in effect rather custom, because it's unlikely that the next application will require exactly the same set of electronics," Rau explained.
But the most significant impediment to profitable deployment of embedded silicon is the huge non-recurring engineering costs, which require immense sales volumes to support. Along with the cost of designing the chip, Rau pointed to massive fab and process-development costs.
"Moore's second law says that the cost of this goes up by a factor of two every four years," Rau said. "Which means that your volume had better go up at least that much." Otherwise a company must amortize higher costs across a smaller volume of chips, an untenable situation.
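The amortization arithmetic behind Rau's point is simple to spell out. The dollar figures below are illustrative assumptions, not from HP; only the doubling-every-four-years relationship comes from the article.

```python
def per_chip_nre(nre_cost, volume):
    """Non-recurring engineering cost amortized over each chip sold."""
    return nre_cost / volume

today = per_chip_nre(10_000_000, 1_000_000)        # $10 of NRE per chip
# Four years on, NRE has doubled ("Moore's second law") ...
same_volume = per_chip_nre(20_000_000, 1_000_000)  # ... $20 per chip at flat volume
# ... so volume must double just to hold the per-chip burden steady.
doubled_volume = per_chip_nre(20_000_000, 2_000_000)  # back to $10 per chip
```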
HP's custom computing approach attempts to mitigate such problems. Perhaps its key conceptual contribution, which pushes beyond system-on-a-chip, is to set forth a much more methodical approach toward stitching intellectual property into working chips.
Whereas today's system-on-chip technology lets designers choose from a Chinese restaurant's menu of fixed options, HP's approach is more of an all-you-can-eat buffet with far fewer restrictions on the feature sets engineers can pick.
Initially, the HP approach, which is still in the research phase, will probably be more successful in reducing the cost of chip design than in addressing fab-related cost issues. Yet even a cut on that side of the equation would be of great industry benefit.
Nevertheless, HP recognizes that it won't be an easy trick to crank out numerous varieties of custom, embedded CPUs. As a result, HP noted that "the economics of custom design requires the automation of architecture." In other words, the chips will be designed by software that has access to a library of on-chip intellectual property.
A second key component addressed by HP is time-to-market, as evidenced by the objective to go from requirements to tapeout in four weeks.
Because HP's PICO-designed chip will be more specific to the application in mind, Rau said, it will be easier to integrate with a product's software and other components. "If you use a standard processor, it's designed without the specific application in mind," he said, and includes wasted resources.
Rau added that, for a desired performance level on a given application, the PICO-designed processor will have a smaller die size than a standard processor. Thus, manufacturing costs will be lower. Enumerating the benefits of PICO-designed processors, Rau pointed to a correct mix of functional units, a minimal set of op codes, exactly sized caches and statistically optimized instruction formats as elements that enable die size to be kept in check.
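Each item on Rau's list can be read as a term in a rough additive area model: a minimal set of functional units, a smaller decoder for fewer op codes, and exactly sized caches all subtract die area directly. The unit figures below are invented for illustration and are not from HP or Rau.

```python
# Hypothetical per-component area contributions (arbitrary units).
AREA = {
    "int_alu": 2.0,
    "fp_unit": 6.0,
    "multiplier": 3.0,
    "decoder_per_opcode": 0.02,
    "cache_per_kb": 0.5,
}

def die_area(units, n_opcodes, cache_kb):
    """Rough additive area estimate for a candidate processor."""
    return (sum(AREA[u] for u in units)
            + n_opcodes * AREA["decoder_per_opcode"]
            + cache_kb * AREA["cache_per_kb"])

# A general-purpose part carries floating point, a fat instruction
# set and a large cache; a tailored part for an integer-only task
# drops what the application never uses.
general = die_area(["int_alu", "fp_unit", "multiplier"], 200, 64)
custom = die_area(["int_alu", "multiplier"], 40, 8)
```

Under these made-up numbers the custom part is a small fraction of the general one, which is the directional claim Rau is making about manufacturing cost.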