A TI Fellow, Reid Tatge leads the development of TI's compiler technology infrastructure and products in the software development groups, and here gives his take on the state of SoC development in the year 2020. Hint: Don't bet on 'magic' tools.
Editor's Note: Welcome to the third installment of our 2020 Vision series, courtesy of Texas Instruments. This time we look at tools in the context of SoC development.
Customers need chips, tools and software that match the specific needs of their application. In addition, they need everything to be simple to design into a product, easy to program efficiently, ultra-low power and ultra-low cost, available early in the end-product's preferred life-cycle and broadly supported by third parties. In 2020, I expect customer requirements to stay pretty much the same, but the underlying technology and how we develop it will be vastly different and far more complex than what we provide now.
From a hardware perspective, these future systems on chip (SoCs) will bring multiple digital signal processor (DSP) and general-purpose processor (GPP) cores together with custom hardware accelerators into a heterogeneous architecture loosely coupled with an asynchronous interconnect. In addition, these devices will have a non-uniform memory architecture and be designed for ultra-low power consumption.
Designing such complex architectures from scratch will no longer be feasible, from both a cost and a time-to-market standpoint. Instead, devices will be designed using an iterative approach that relies on reconfigurable system modeling tools, such as G3-type compilation tools that support a rapid design methodology. Specifically, the topology of the SoC will be tunable to the particular application domain at the level of individual processor nodes and the memory subsystem. The chief advantage of this approach is that it leads to completed SoC designs in months, not years.
The next challenge for SoC designers will be making these devices easy to program, so that developers can view the system as a loosely coupled network of processors and tap the variety of available processing capabilities without involving themselves in all the low-level details that arise in multiprocessor architectures. Developers will also need to program in a high-level language (HLL) while still achieving high performance. Development tools for these SoCs will support program partitioning, system visualization, multi-core compilation and pre-hardware simulation, and will provide a reliable OS designed to manage the unique characteristics of multi-core architectures.
While it is difficult to anticipate exactly how these SoC devices and development tools will manifest, I can eliminate a number of "promising" possibilities:
- Large, monolithic, mega-CPUs: These fantastically complex architectures take years to define and tune, and then, at least another year to design. Their development environments are closed not for any proprietary reasons but because only the architects can program them.
- Symmetric networks of commodity GPPs: Though an outside contender, these architectures attempt to solve difficult problems by throwing more of the same hardware at them. Eventually they collapse under the weight of messaging, data stitching and other forms of overhead.
- Any architecture dependent upon "Magic" tools: It would certainly solve many problems if development tools could generate great code for any arbitrary CPU as well as automatically partition inherently serial programs onto a network of processors. Such a panacea will not be available by 2020.
A TI Fellow, Reid Tatge leads the development of TI's compiler technology infrastructure and products in the software development groups. He also works closely with TI and customers to define new DSP architectures.