I agree... for wearable devices the three most important factors are power, power, and power. Ideally the power should be low enough that energy harvesting provides all the needed power, so you never have to charge your device.
As the article stated, "also low energy". Power generally refers to peak or typical operating or idle power. The energy cost of waking and sleeping can be significant, but it is usually not captured by power measurements, which typically concern a steady state.
Since integration is a significant factor in energy efficiency, it would presumably have a significant effect on which memory technologies are appropriate. Not all memory technologies are friendly to the manufacturing processes commonly used for logic.
Excluding costs associated with technology licensing and specialized manufacturing, I suspect a mix of memory technologies would be best. Some content is rarely written and so could benefit from persistence and low non-destructive read energy, even if write energy costs were relatively high. (With energy harvesting, energy-expensive operations might be scheduled to exploit times of abundant energy. Flash page clearing is an obvious case for this, but even with something like phase-change memory, scheduling data/firmware updates might be beneficial beyond the networking-related energy savings.)
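To make the scheduling idea concrete, here is a minimal sketch of deferring energy-expensive maintenance work until the harvested-energy reserve can comfortably cover it. The function name, the headroom policy, and all numbers are my own illustrative assumptions, not anything from the article.

```python
# Hedged sketch: run deferrable energy-expensive operations (e.g. flash page
# clearing) only while enough harvested energy is banked, keeping a headroom
# reserve for normal operation. All names and numbers are illustrative.

def schedule(pending_ops, reserve_uj, headroom_uj=50.0):
    """Return (ops to run now, remaining reserve), keeping `headroom_uj` banked.

    pending_ops: list of (name, cost_uj) tuples.
    reserve_uj:  currently banked harvested energy, in microjoules.
    """
    run_now = []
    # Greedily run the cheapest deferrable operations first.
    for name, cost_uj in sorted(pending_ops, key=lambda op: op[1]):
        if reserve_uj - cost_uj >= headroom_uj:
            run_now.append(name)
            reserve_uj -= cost_uj
    return run_now, reserve_uj

# Example: plenty of banked energy -> clear a flash page and apply an update.
ops = [("flash_page_clear", 120.0), ("firmware_patch_write", 300.0)]
print(schedule(ops, reserve_uj=600.0))
```

A real scheduler would also weigh deadlines and predicted harvest rates; this only shows the "spend when abundant" decision.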
Some types of data do not live long and so might benefit from using DRAM or 4T-"SRAM" (the former having destructive reads, the latter having reads that refresh state).
For FIFO buffers, there might even be a way to save a tiny bit of energy by using an indexed collection of pairs of latches, one for reading, one for writing, where reading pulls the data from the write latch into the read latch. (This is just wild speculation. I am not an EE, but such seems like it might work.)
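Since that latch-pair idea is a bit abstract, here is a purely behavioral sketch (not RTL, and by a non-EE) of what such a FIFO's data movement might look like: each slot pairs a write latch with a read latch, and a read first pulls the write latch's contents into the read latch. Class and field names are my own invention.

```python
# Behavioral model of the speculative latch-pair FIFO: one write latch and
# one read latch per slot; a read copies write latch -> read latch, then
# returns the read latch's contents. Illustrative only.

class LatchPairFifo:
    def __init__(self, depth):
        self.write_latch = [None] * depth
        self.read_latch = [None] * depth
        self.depth = depth
        self.wr = 0  # write index
        self.rd = 0  # read index

    def push(self, value):
        self.write_latch[self.wr] = value
        self.wr = (self.wr + 1) % self.depth

    def pop(self):
        # Reading pulls this slot's write latch into its read latch.
        self.read_latch[self.rd] = self.write_latch[self.rd]
        value = self.read_latch[self.rd]
        self.rd = (self.rd + 1) % self.depth
        return value

fifo = LatchPairFifo(4)
for v in (10, 20, 30):
    fifo.push(v)
print([fifo.pop() for _ in range(3)])  # -> [10, 20, 30]
```

Whether this actually saves energy in silicon would of course depend on the latch circuits themselves, which the model says nothing about.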
Read and write granularity are also significant with respect to memory technologies. Exploiting block-oriented spatial locality could be a worthwhile energy-efficiency optimization that may not normally be considered for cacheless systems. If the memory is designed to perform block reads at 75% of the cost of a more flexible system's reading of the same memory chunk, even modest non-use of the data in a block would still reduce energy use.
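The 75% figure implies a simple break-even calculation, sketched below. The assumption (mine, for illustration) is that the flexible system reads only the words it needs at one unit of energy each, while a block read always fetches the whole block at 75% of the full word-by-word cost.

```python
# Break-even arithmetic for the 75%-cost block read. e_word is normalized
# to 1, so a flexible read of k words costs k, and a block read of an
# n-word block costs 0.75 * n regardless of how many words are used.

def block_read_wins(words_used, block_words, block_cost_frac=0.75):
    """True when the block read uses less energy than word-by-word reads."""
    e_flexible = words_used
    e_block = block_cost_frac * block_words
    return e_block < e_flexible

# With a 16-word block, the break-even point is 0.75 * 16 = 12 words:
print(block_read_wins(13, 16))  # -> True: 3 unused words still saves energy
print(block_read_wins(11, 16))  # -> False: too little of the block is used
```

So "modest non-use" here means using more than 75% of the block; below that, the flexible reads win.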
That reminds me of two somewhat related papers on using ECC for cache energy-efficiency.
"Reducing Cache Power with Low-Cost, Multi-bit Error-Correcting Codes" (Chris Wilkerson et al., 2010) concerned reducing refresh rate for a DRAM-based cache and compensating by extra ECC protection.
"Energy-Efficient Cache Design Using Variable-Strength Error-Correcting Codes" (Alaa R. Alameldeen et al., 2011) reduces voltage applied to SRAM, again with extra ECC handling the extra error rate.
It is interesting that there is such idea reuse, but a bit sad that communication between different domains and recognition of conceptual similarities are lacking.
The abstract's mention of analog decoding reminded me of the analog-oriented ECC handling designed by Lyric Semiconductor (now part of Analog Devices). (Based on the abstract, I suspect I would not really understand or appreciate the paper.)
Thinking out loud: Using excessive ECC might also be attractive for fire-and-forget, or at least send-and-sleep, transmission (where only an explicit NAK would cause retransmission). Being able to hand off reliability to a nearby (low NAK delay), less energy-constrained system might facilitate earlier entry into a deeper sleep state. The bandwidth issue with latency and TCP's acknowledgement window is commonly cited, but latency could also influence energy efficiency even for a UDP-like protocol.
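A back-of-envelope model makes the latency/energy link visible: a sender that idle-listens for an ACK pays for the whole round trip, while a send-and-sleep sender pays only the (rare) cost of waking for a NAK-triggered resend. All numbers below are invented for illustration; real radios differ widely.

```python
# Hedged energy comparison: wait-for-ACK vs. send-and-sleep (NAK-only).
# e_tx_uj = energy per transmission, p_idle_uw = idle-listen power,
# rtt_ms = round-trip time, p_loss = probability a NAK forces a resend,
# e_wake_uj = cost of waking from deep sleep. All values are assumptions.

def energy_wait_for_ack(e_tx_uj, p_idle_uw, rtt_ms):
    # Sender stays awake in idle-listen until the ACK arrives.
    return e_tx_uj + p_idle_uw * (rtt_ms / 1000.0)  # uW * s = uJ

def energy_send_and_sleep(e_tx_uj, p_loss, e_wake_uj):
    # Sender sleeps immediately; only an explicit NAK forces wake + resend.
    return e_tx_uj + p_loss * (e_wake_uj + e_tx_uj)

e_ack = energy_wait_for_ack(e_tx_uj=50.0, p_idle_uw=500.0, rtt_ms=200.0)
e_sas = energy_send_and_sleep(e_tx_uj=50.0, p_loss=0.02, e_wake_uj=30.0)
print(round(e_ack, 2), round(e_sas, 2))  # -> 150.0 51.6
```

With these made-up numbers, waiting out a 200 ms RTT costs roughly 3x the send-and-sleep scheme, and the gap grows with latency, which is the point about latency affecting energy even without TCP.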
(I also seem to recall reading that voice communication combined compression with ECC, such that loss could be from the compression or from transmission errors with a recognition that not all bits are equally important. Images might also benefit from such a merged compression/ECC mechanism.)
Great insights, Paul... would you be interested in presenting a talk on this topic at the emerging technologies symposium in Vancouver in 2015? Preliminary program at www.cmosetr.com. I serve as a technical chair, firstname.lastname@example.org
If I'm going to have so many mW devices in my home monitoring 24/7, it's going to impact energy conservation in the long run. The point is we are adding energy consumption in devices we don't turn off. I definitely think this is the wrong way to go in terms of going green. There are also SAR limits to consider if these have wireless uplinks.
@Resistion, the power (and cost) additions of the intelligence in these smart devices need to be outweighed by the savings generated. There have been some significant improvements in energy consumption brought into consumer devices, in some cases using techniques first used in phone systems. In some cases, you will see devices running with microwatts of power in standby mode until being woken up by a critical event. So I think all will be good on that front.
I fully support the smarter use of power/energy enabled by these devices. Almost certainly, we will use these to turn on/off lights, A/C, heater, etc. I guess it's just a matter of how far you want to go or how many of these processing devices you want to use. It might even be that no batteries would be needed, if these can take wall-plug power. Only portables/wearables really need batteries, but their interaction with other "things" would be quite limited to internet uploading anyway.
What are the engineering and design challenges in creating successful IoT devices? These devices are usually small, resource-constrained electronics designed to sense, collect, send, and/or interpret data. Some of the devices need to be smart enough to act upon data in real time, 24/7. Specifically, the guests will discuss sensors, security, and lessons from IoT deployments.