SAN JOSE, Calif. Researchers at the University of Illinois have detailed their agenda for defining the parallel programming capabilities needed for tomorrow's multicore processors. In a white paper posted online, they provided one of the most specific roadmaps to date for next-generation multicore CPUs, which they plan to build in tandem with new parallel programming tools.
The paper was issued by the so-called Parallel@Illinois lab, one of two research centers launched this year with a total of $20 million from Microsoft Corp. and Intel Corp. to tackle the thorny problem of how to program future multicore processors. The other center, at the University of California at Berkeley, outlined its research agenda in March. A similar center at Stanford, backed by a handful of computer companies, started work in April.
The 50-page Illinois white paper says it describes "an ambitious research agenda that aims to make client parallel programming synonymous with programming." It includes work "in programming languages, compilers, runtime systems, hardware architecture, tools and formal testing methods, along with research in programming patterns and application domains that are expected to trigger the killer applications of the future."
Specifically, the group will lay the groundwork for a new class of "disciplined explicitly parallel languages" and domain-specific environments. It also wants to define a next-generation compiler that can draw on information from multiple sources, including language annotations and the runtime environment, at different points in a program's life cycle.
But perhaps the most interesting section of the white paper is a detailed discussion of new microprocessor designs to be built from the ground up with the new tools and ease of programming in mind. Indeed, the researchers said ease of parallel programming is likely to supplant performance and power as the leading design focus for tomorrow's microprocessors in a world of many-core chips that need to scale in performance but not in complexity.
"We now have a unique opportunity to rethink the entire system stack and develop hardware that is better aligned with the needs of modern software," the paper said.
"Over time, we need a fundamental rethinking of concurrent hardware, including how to express and manage concurrent work units, communication, synchronization, and the memory consistency model, in tandem with our rethinking of the best practices for concurrent software."
The group will actually pursue two hardware programs. The first, called Bulk Multicore, is defined as "a flexible substrate with scalable cache coherence, high-performance sequential memory consistency, and an easy-to-use development and debugging environment."
The Bulk Multicore design executes groups of related instructions at a time using an emerging style of atomic transactions described by various researchers, including some at Microsoft. The Illinois effort will use so-called hardware address signatures, a set of kilobit-long registers that "contain an accumulation of hash-encoded addresses through a Bloom filter. The hardware automatically accumulates the addresses read and written by a chunk into read and write signatures, respectively."
The approach gets away from the complexity of global cache snooping mechanisms used today, the paper claims.
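The signature mechanism can be pictured in software. Below is a minimal, purely illustrative sketch of how Bloom-filter address signatures might accumulate a chunk's reads and writes and detect conflicts between chunks; the signature width, hash functions, and conflict rule are all assumptions for illustration, not details taken from the Illinois paper.

```python
# Illustrative sketch (not the paper's design): each chunk of instructions
# accumulates the addresses it reads and writes into fixed-size
# Bloom-filter bit vectors ("signatures"). Two chunks conflict if a
# write signature intersects the other chunk's read or write signature.

SIG_BITS = 1024  # K-bit signature register; the width here is an assumption


def _hashes(addr):
    # Two simple illustrative hash functions over the address.
    yield addr % SIG_BITS
    yield (addr * 2654435761) % SIG_BITS  # Knuth multiplicative hash


class ChunkSignatures:
    def __init__(self):
        self.read_sig = 0   # bit vector for addresses read by the chunk
        self.write_sig = 0  # bit vector for addresses written by the chunk

    def record_read(self, addr):
        for h in _hashes(addr):
            self.read_sig |= 1 << h

    def record_write(self, addr):
        for h in _hashes(addr):
            self.write_sig |= 1 << h


def conflicts(a, b):
    # A write in one chunk conflicts with a read or write of the same
    # address in the other. Bloom filters can report false positives
    # (harmless: the chunk is simply re-executed) but never miss a
    # real overlap.
    return bool(a.write_sig & (b.read_sig | b.write_sig) or
                b.write_sig & (a.read_sig | a.write_sig))
```

Because conflict checks reduce to bitwise ANDs on small registers, hardware can compare whole chunks without snooping individual cache lines, which is the simplification the paper points to.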
Project DeNovo and a vision for tomorrow's PC
The paper provides less detail on a second hardware effort, known as the DeNovo project. DeNovo "rethinks concurrent hardware as a co-designed component of our disciplined programming strategy, both for enforcing the discipline and for exploiting it for simpler and more efficient hardware."
The DeNovo design uses a virtual instruction set architecture, probably implementing typed instructions and an integrated runtime environment. "DeNovo seeks a fundamental rethinking of concurrent hardware, given the assumption that most future software will be disciplined for better dependability," the paper said.
The group also sketched out some of its planned software projects. They include work on "a general-purpose disciplined shared memory language that builds upon modern object-oriented languages with strong safety properties, providing the programmer with all the familiar ease-of-use facilities of modern sequential languages in conjunction with disciplined parallelism."
The group also hopes to create techniques and tools to let domain experts design their own domain-specific environments. Such frameworks could optimize use of parallelism in specific fields.
A planned compiler will "support explicitly parallel deterministic languages, including domain-specific languages." The group will also work on runtime engines that can virtualize programs across a wide range of heterogeneous processor environments.
Many of the projects are based on work by researchers at Illinois and elsewhere. The paper cites more than 100 published works in the field.
Twenty Illinois researchers authored the paper, including the lab's co-founder Marc Snir, a former IBM research manager who helped design some of Big Blue's supercomputers, and Wen-mei Hwu, a widely published researcher in multicore architectures.
The parallel computing lab at Illinois includes about 16 faculty and 25 students. The lab gets $2 million in funding a year for five years from Microsoft and Intel plus additional funds from the university.
That's enough to create proof-of-concept designs, said Snir. The group should be able to report progress on some of its milestones within two years, he added.
Despite years of work in parallel programming, researchers have not been able to find a model easy enough for use by mainstream programmers. Nevertheless, the Illinois paper sets an optimistic tone for its ambitious goals.
"Commonly used programming models are prone to subtle, hard to reproduce bugs, and parallel programs are notoriously hard to test due to data races, non-deterministic interleavings, and complex memory models," the paper notes in its introduction.
Even so, it concludes that "our work is driven by a foreseeable future where all client applications will be parallel, and the primary consumer feature that will drive the economics of future client software development will be the quality of the human interaction."
The researchers describe a vision where significant amounts of computing will continue to be handled on end user systems. Future massively multicore processors using new parallel methods will be able to run applications that enable virtual environments, remote immersive environments and natural language processing, according to the paper.