EEs who are actively designing know that we can never have enough compute power. Time is money, as the saying goes. Whether it's EM field analysis, circuit simulation, circuit routing, or graphics processing, we need much, much more power: speedups of tens of thousands of times, coupled with better software. Microsoft's monopoly, and the resulting stagnation of software architecture and development, has hurt the computer industry badly and set it back at least a decade.
When I compare the first computer I bought (an IBM PC-XT) to what I have today, I have a supercomputer. Most of the machines we use today are far more powerful than we actually need. I just wish Microsoft wouldn't use up all of that headroom just to run its operating system. It makes me want to run Linux, but then so many websites are tailored only to IE.
In days of yore, when supercomputers were giant CPUs built from discrete parts to outperform microprocessors, servers were just telecom accessories. Today, however, the microprocessors servers use pack supercomputing punch. Custom-built supercomputers will always perform better on specific applications, but with the converging trends of moving compute power to the cloud and using multiple standard GPUs for acceleration, the time is right to pack supercomputer performance into server farms.
Cost and time to market are the major factors driving this trend. I suppose these things go in phases, and there will be another phase in the future when custom-built platforms become the norm in HPC once again.