SAN FRANCISCO--Like the race to put a man on the moon in the 1960s, the race to achieve exascale computing is becoming a pressing global ambition.
With governments, corporations and academics the world over hankering for more system speed to address critical challenges from climate change to cancer cures, the high performance computing industry is being pushed like never before to break the exaflops barrier before the end of the decade.
Reaching exascale means building a computer that can perform a quintillion (10^18) calculations per second, a figure that seemed like science fiction until not long ago.
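To give that quintillion-per-second figure some scale, here is a back-of-the-envelope sketch (my own illustration, not from the article); the 10-petaflop comparison point and the 7-billion population figure are assumptions chosen as round numbers typical of the era:

```python
# Illustrative arithmetic only: what "a quintillion calculations
# per second" (one exaflop/s) means in concrete terms.
EXAFLOPS = 10**18   # 10^18 floating-point operations per second
PETAFLOPS = 10**15  # one petaflop/s

# Assumed comparison: a 10-petaflop/s machine, roughly the class of
# the fastest supercomputers of the day, would need a 100x jump in
# sustained performance to reach exascale.
speedup_needed = EXAFLOPS // (10 * PETAFLOPS)
print(speedup_needed)  # 100

# If every person on Earth (assumed ~7 billion) performed one
# calculation per second, matching a single second of exascale
# work would take them several years.
people = 7 * 10**9
seconds_per_year = 3600 * 24 * 365
years = EXAFLOPS / people / seconds_per_year
print(round(years, 1))  # ~4.5
```

The point of the sketch is simply that exascale is not an incremental step: it demands a couple of orders of magnitude over the leading systems of the time, which is why power and reliability dominate the discussion below.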
While the goal is clear and its purpose is underscored by the urgency of tackling some of Earth's most pressing problems, the challenges and pitfalls on the path to exascale are numerous. From constrained power budgets to floor-space limits to the reliability of monster systems, the road to exascale is long and difficult.
Recently at the supercomputing show in Seattle, EE Times asked several of the biggest players in high performance computing to outline what they see as the main challenges to exascale. The following video gives you their answers.
David Patterson, known for pioneering research that led to RAID, clusters and more, is part of a team at UC Berkeley that recently made its RISC-V processor architecture an open-source hardware offering. We talk with Patterson and one of his colleagues behind the effort about the opportunities they see, the new kinds of designs they hope to enable, and what it means for today's commercial processor giants such as Intel, ARM and Imagination Technologies.