Why GPUs? Why not something specially made for parallel processing? A GPU seems an odd choice for general computing, but interestingly some people have repurposed it as a faster processing tool. I'd like to know more about the technology behind this.
I am intrigued by the idea of GPU-based supercomputers, but I wonder whether the compilers are up to the task. Is this the start of something that is really a redo of older technology? I am thinking of the original number crunchers like the Cray and other array processors, which were hardware-specialized machines with fast execution times built on pipelined architectures. It seems to me that GPUs, being specialized graphics cores pipelined for performance, are not that different from the "old technology" of array processors. I wonder why it took so long; maybe it is related to my original question about compiler technology?
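To make the compiler question concrete, here is a minimal sketch of what today's GPU toolchains are asked to do, using CUDA vector addition (the usual "hello world" of GPGPU). The kernel is written once as a scalar operation and the compiler/runtime spreads it across thousands of threads, which is very much in the spirit of the old array processors. This is an illustrative example, not drawn from the discussion above.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Each thread adds one element; the toolchain maps thousands of these
// threads onto the GPU's cores, like an array processor applying one
// operation across a whole vector.
__global__ void add(const float* a, const float* b, float* c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;
    size_t bytes = n * sizeof(float);

    float *a, *b, *c;
    // Unified memory: one allocation visible to both CPU and GPU.
    cudaMallocManaged(&a, bytes);
    cudaMallocManaged(&b, bytes);
    cudaMallocManaged(&c, bytes);
    for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

    int threads = 256;
    int blocks = (n + threads - 1) / threads;
    add<<<blocks, threads>>>(a, b, c, n);  // launch one thread per element
    cudaDeviceSynchronize();

    printf("c[0] = %f\n", c[0]);           // expect 3.0
    cudaFree(a); cudaFree(b); cudaFree(c);
    return 0;
}
```

The interesting part is how little the programmer specifies: the grid/block launch configuration is the only concession to the hardware, and the compiler handles the rest, which is arguably what was missing in the array-processor era.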
What are the engineering and design challenges in creating successful IoT devices? These devices are usually small, resource-constrained electronics designed to sense, collect, send, and/or interpret data. Some of them need to be smart enough to act on data in real time, 24/7. Are the design challenges the same as with embedded systems, but with some developer and IT skills added in? What do engineers need to know? Rick Merritt talks with two experts about the tools and best options for designing IoT devices in 2016. Specifically, the guests discuss sensors, security, and lessons from IoT deployments.