As computing shifts more and more to the devices closer to the sensor, this will also translate into a need for more and more research work to happen at the embedded level. If I look at the world today, most processors close to the sensors interfacing with the external world implement relatively simple algorithms, maybe just a FIR filter or such, and leave the decisions to stronger application processors, or in this case the cloud.
What I see is that usually the main processors (or, let's talk about the cloud now) are based on architectures that support programming in very abstract ways, while the processors closer to the sensors are still usually programmed in a very "bare-metal" way to squeeze out the maximum optimization possible in terms of power usage, speed, etc.
So I think this shift in where the processing power is located leads to one of two outcomes: either today's researchers have to abandon part of the abstractions they are used to and get closer to the metal, or, alternatively, we may be at the start of a new strand of development towards improving tools for embedded in general, which would enable research labs and universities to program embedded devices at the same abstraction levels they are used to.
The interesting problem I see with the latter is that while main processors tend to stay within one or two families, which allows the development of their tools to be spread across scores of teams around the world, the processors closer to the sensor are a lot more diverse. Many of them have dedicated instruction set architectures and therefore require separate toolchains in principle. In practice, though, we are seeing the rise of tools which try to abstract the architecture away to some extent, as LLVM-based compilers do, for example. So this new trend of moving processing closer to the sensor will, I think, open new doors for such portability tools.
In any case, the future will be interesting to witness.
It makes sense. The smart devices in our pockets are geared up with more and more sensors. In parallel, the Internet of Things heralds a new tsunami of data. It won't all fit in the network bandwidth on the way up to the cloud, and it would not be intelligent to push it that way either. On the other hand, the natural push would be to make the smart devices really smarter.
What would make a device smarter, if not extra intelligence at a reasonable cost? A reasonable cost meaning also reasonable power consumption.
Is AI the way to go? What neural network architectures/solutions will make this happen? What would the HW/SW breakdown be to minimize latency while still providing as much intelligence as possible to the cloud? Is this a new era for data mining too?
In the end, I like this parallel to the human brain: "Latency is also an issue for the human brain, where distributed processing powers our reflexes without involving the frontal cortex." Yet another dot connected! How many dots are left to connect until we replicate the human brain? How about an uber-cloud overseeing other brain replicas "socializing" with their data?
What are the engineering and design challenges in creating successful IoT devices? These devices are usually small, resource-constrained electronics designed to sense, collect, send, and/or interpret data. Some of the devices need to be smart enough to act upon data in real time, 24/7. Specifically, the guests will discuss sensors, security, and lessons from IoT deployments.