PORTLAND, Ore.—Last year, market research firm International Data Corp. (IDC) reported a virtual tie between Hewlett-Packard Co. and IBM Corp. in supercomputers, with each company holding a 29 percent market share. The worldwide supercomputer market—also called high-performance computing (HPC)—is now spilling over into the high-performance server market, however.
High-end server models, called cluster computers, are melding with supercomputers. For instance, IBM's Watson system, which beat human champions on the quiz show Jeopardy last month, is actually a cluster of commercially available Power 750 servers with 2,880 Power 7 cores. Last month, IDC reported that IBM was leading overall in servers, but that HP was ahead in industry-standard x86 servers.
According to Marc Hamilton, HP lead for high performance computing in the Americas, HP plans to expand its stake.
"HP is not building proprietary supercomputers, but using industry-standard servers and graphics co-processors," said Hamilton. "Whether you call it a cluster computer, a supercomputer or a high-performance computing system, today everything is being simulated on them before it is built—whether it's a new airplane, a car design or the antenna for a new cell phone."
According to IDC, HP was No. 1 in the server blade market with a 53 percent share last year, while IBM held a 28 percent share. And in the x86 server market, HP has over 38 percent share, with Dell at 21 percent and IBM coming in third at 19 percent, according to IDC.
By combining advanced servers with high-speed interconnects and graphics processors tuned to specific applications, vendors can configure souped-up servers to deliver supercomputer performance on high-end applications.
"HP's high-performance computing uses industry-standard components like x86 processors and Nvidia Tesla graphics coprocessors—eight in our SL390 server—but with a super-sized communications interconnect for the GPUs that still uses standard chip sets—what we call converged infrastructure," said Hamilton.
To simplify configuration of server-based supercomputers, HP's Factory Express service assembles built-to-order HPC cluster computers made to meet customers' specifications, but using HP's experience in how to create the necessary high-speed interconnects using standard chip sets. The finished cluster supercomputer is then totally integrated and pretested at HP, "to assure it comes up running on day one and stays running for the life of the system," said Ed Turkel, HP's worldwide marketing lead for high performance computing.
"The biggest barrier to the growth of HPC is a combination of affordability, power consumption and the complexity of putting together such very, very large computer systems," said Turkel. "By breaking through those barriers, we hope to increase levels of performance with industry standards by adding the specific components needed to give our systems a competitive advantage."
EEs who are actively designing know that we can never have enough compute power. Time is money, as the saying goes. Whether it's EM field analysis, circuit simulation, circuit routing or graphics processing, we need much, much more power. We really need speedups of tens of thousands of times, coupled with better software. Microsoft's monopoly and the resulting stagnation of software architecture and software development has hurt the computer industry very badly and set it back at least a decade.
When I compare the first computer I bought (an IBM PC-XT) to what I have today, I have a supercomputer. Most of the machines we use today are way more powerful than we actually need. I just wish Microsoft wouldn't use up all of the bandwidth just to run their operating system. It makes me want to run Linux, but then so many websites are tailored only to IE.
In days of yore, when supercomputers were giant CPUs built from discrete parts to outperform microprocessors, servers were just telecom accessories. Today, however, the microprocessors servers use pack supercomputing punch. Custom-built supercomputers will always be better performers for specific applications, but with the converging trends of moving compute power to the cloud and the use of multiple standard GPUs for acceleration, the time is right to pack supercomputer performance into server farms.
Cost and time to market are the major factors in this trend. I suppose these things go in phases, and there will be another phase in the future where custom-built platforms will become yet again the norm in HPC.