EEs who are actively designing know that we can never have enough compute power. Time is money, as the saying goes. Whether it's EM field analysis, circuit simulation, circuit routing, or graphics processing, we need much, much more power. We really need a speedup of tens of thousands of times, coupled with better software. Microsoft's monopoly, and the resulting stagnation of software architecture and software development, has hurt the computer industry very badly and set it back at least a decade.
When I compare the first computer I bought (an IBM PC-XT) to what I have today, I have a supercomputer. Most of the machines we use today are far more powerful than we actually need. I just wish Microsoft wouldn't use up all of that power just to run its operating system. It makes me want to run Linux, but then so many websites are tailored only to IE.
In days of yore, when supercomputers were giant CPUs built from discrete parts to outperform microprocessors, servers were just telecom accessories. Today, however, the microprocessors that servers use pack supercomputing punch. Custom-built supercomputers will always be better performers for specific applications, but with the converging trends of moving computing power to the cloud and the use of multiple standard GPUs for acceleration, the time is right to pack supercomputer performance into server farms.
Cost and time to market are the major factors in this trend. I suppose these things go in phases, and there will be another phase in the future when custom-built platforms once again become the norm in HPC.
What are the engineering and design challenges in creating successful IoT devices? These devices are usually small, resource-constrained electronics designed to sense, collect, send, and/or interpret data. Some of them need to be smart enough to act on data in real time, 24/7. Specifically, the guests will discuss sensors, security, and lessons learned from IoT deployments.