Indeed, there is little to no information on, or implementation of, super-low-power Zigbee beacon networks to be found. I have looked very hard. So the momentum is with Time Slotted Channel Hopping, as discussed.
Also agree with Jonas Berde that IoT/IoE will very possibly use 6LoWPAN only at the network edge, at one (or more, for redundancy) gateways, while the rest of the WPAN may run another protocol on truly inexpensive hardware, or other implementations/standards.
Nonetheless, I believe chip/module/firmware vendors (talkin' ta you, Kris/Linear) and standards authors would do best to make it as easy as possible for developers to get their heads and their keyboards around it, else it will go the way of the dodo bird and SOAP (Simple Object Access Protocol, which, as they say, is neither simple nor object-oriented). The physics of these ephemeral networks alone makes them difficult to develop with. BTW, kudos on the TSMP whitepaper, and thanks.
Case in point: I contacted Linear/Dust trying to get application help on when one might use (in today's world) SmartMesh IP vs. SmartMesh WirelessHART. It fell on deaf ears (never answered), so I am successfully using another vendor's micro-chip (ahem) hardware to get my prototypes going with an ad hoc protocol and ultra-low power.
Finally, my perspective is that the cost of node hardware will have to come down half an order of magnitude or so for the standard and technology to gain the momentum of ubiquity. I guess Moore's law might take care of that. But frankly, there will be stiff hardware competition out there.
I certainly wasn't trying to be FUDtastic. Certainly 15.4 supports beacons, and some (all?) versions of the various Zigbee standards support beacons. I've just never heard of anyone using Zigbee beacon mode in an interoperable product. Googling "zigbee beacon mode" gets you a handful of academic papers, a lot of user forum posts asking why beaconing doesn't seem to work, and several chip vendors stating that their stack does not support beacon mode.
Maybe it works great, but I've never seen it used.
As Rick Merritt asked: "Is this true Zigbee-ites?"
A few details are left to the reader to understand. If you look at the 6LoWPAN standard, you will see that it implements IPv6, but with a highly compressed header suitable for a limited-size local area network, or in this case, personal area network. It does this by circulating only the low-order 16 bits of the entire 128-bit IPv6 address. This makes room for up to 127 devices on a 6LoWPAN segment. The Network Manager supplies the remaining bits of the IPv6 address. Since all traffic must be filtered via the Network Manager, every device is IP-addressable.
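A quick sketch of the compression idea described above: only the low 16 bits travel over the air, and the Network Manager restores the rest. The function names and the example prefix here are illustrative, not taken from any RFC or vendor API.

```python
# Illustrative sketch: 6LoWPAN-style address compression as described above.
# Only the low-order 16 bits of the 128-bit IPv6 address circulate in the
# PAN; the Network Manager holds the remaining bits and restores them.

PREFIX = 0xFE80_0000_0000_0000_0000_0000_0000_0000  # example link-local prefix

def compress(ipv6_addr: int) -> int:
    """Keep only the low-order 16 bits for over-the-air use."""
    return ipv6_addr & 0xFFFF

def expand(short_addr: int, prefix: int = PREFIX) -> int:
    """Network Manager side: restore the full 128-bit address."""
    return (prefix & ~0xFFFF) | short_addr

node = PREFIX | 0x002A           # full 128-bit address of node 42
short = compress(node)           # 16 bits on the wire
assert expand(short) == node     # round-trips losslessly
```

The point is only that every node remains fully IP-addressable even though a full IPv6 header never crosses the radio link.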
While Dust developed the protocol for Time Slotted Channel Hopping (TSCH) that was used by ISA100 Wireless (ANSI/ISA-100.11a, IEC 62734) and WirelessHART (IEC 62591), it was also adopted into the IEEE 802.15.4 base standard by Task Group e in 2012. Dust has been selling IEEE 802.15.4e chips for several years.
Also not discussed is the expression "channel hopping." IEEE 802.15.4 divides the 2.4 GHz spectrum into 16 non-overlapping channels. This is the same spectrum that IEEE 802.11 (WiFi) divides into as many as 13 channels, of which only 3 are non-overlapping. In IEEE 802.15.4e, as developed by Dust, the transmission radio frequencies move sequentially through a list of channels in the order prescribed by a channel hopping table known to all nodes. The value of hopping channels in this pseudorandom order is to defeat multipath signal cancellation caused by reflections, a major source of signal loss for low-power radios in heavy industry, with its canyons of steel buildings and pressure vessels. Reflections at one channel are usually different at a different frequency. Since repeated signals will always be on a different channel, multipath loss becomes irrelevant in 6TiSCH.
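For the curious, the channel selection described above boils down to one line: the slot's Absolute Slot Number (ASN) plus a per-link channel offset indexes into the shared hopping sequence. The particular 16-entry sequence below is just an example.

```python
# TSCH channel selection as described above (IEEE 802.15.4e style).
# Every node knows the same pseudorandom hopping sequence; the global
# Absolute Slot Number (ASN) plus a per-link channel offset picks the
# channel for each timeslot. The sequence values here are examples.
HOP_SEQUENCE = [5, 6, 12, 7, 15, 4, 14, 11, 8, 0, 1, 2, 13, 3, 9, 10]

def tsch_channel(asn: int, channel_offset: int) -> int:
    """Return the channel index (0-15) for this slot and link."""
    return HOP_SEQUENCE[(asn + channel_offset) % len(HOP_SEQUENCE)]

# A retransmission one slot later always lands on a different channel,
# so a multipath null at one frequency does not hit the retry:
first = tsch_channel(asn=1000, channel_offset=3)
retry = tsch_channel(asn=1001, channel_offset=3)
assert first != retry
```

Because the sequence entries are distinct and the index advances every slot, back-to-back attempts never reuse a frequency, which is exactly the multipath argument made above.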
To your last question about reliability: both ISA100 Wireless and WirelessHART specify mesh networks with inherent redundancy. Both also allow a primary and a backup gateway device, removing any possibility of a single point of failure. Not only that, but both networks also support publish/subscribe protocols to eliminate constant polling when the host device has the ability to subscribe to data.
To date, Zigbee has not time-synchronized its networks. Battery-operated end nodes wake up whenever they like to send a message. A router node needs to leave its radio receiver on all the time because it never knows when that message will arrive. Typical receivers burn around 20 mA (Dust's is about 5 mA).
Then what are Zigbee beaconed networks all about? Unless you're defining "time synchronized network" in some unique Dustian way, I believe that Zigbee has had provisions like this for years -- both synchronization and guaranteed time slots.
Your statement strikes me as somewhat FUDtastic. Please feel free to elaborate.
It's attractive to make our Things pretty smart about networking - time-synchronized, mesh-based, encrypted, etc. The problem is that these features involve quite a bit of firmware. More capability means more need to update firmware. How can the consumer feel comfortable with smarter, more networked devices?
The gateway from the sensor network to the IP network and the Internet/intranet can be redundant in critical applications. It is already available for WirelessHART today. However, most applications related to IoT are not process-critical - it doesn't matter if the measurement is missing for a few hours or even a few days - because it is monitoring, not closed-loop control. Often it is monitoring related to reliability/maintenance, energy efficiency, and so on. Therefore redundancy is rarely used even though it is available.
As I recall, the initial idea of SmartDust was to create a battlefield network.
In the same manner that the Internet was designed to take damage and still let the information through, the idea for SmartDust was to allow battlefield communications.
I recall reading about it and "smart" artillery shells that would pick their own target at apogee (provided the electronics could survive the launch) back in the mid-'80s, when I had somehow gotten on a defense design magazine mailing list.
I agree. I believe when people talk about IoT, they assume all devices are IP-connected one way or another. The device could have its own IP address and accept API calls for various features, e.g. data polling. Or the IP address could reside in a gateway node responsible for interfacing with all the sensor devices it manages. There are pros and cons to either approach. IP-enabled devices create overhead in data transmission over the wireless interface. An IPv4 address is already 4 octets. With both source and destination addresses, the IP header will likely be longer than the payload the packet is carrying. A longer packet means a longer transmission time, and a longer transmission time equates to higher power consumption.
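The overhead claim is easy to put numbers on. The header sizes below are the standard minimums (20 bytes for an IPv4 header without options, 40 bytes for the fixed IPv6 header); the 8-byte sensor payload is an assumption for illustration.

```python
# Header-overhead arithmetic behind the point above. Minimum header
# sizes per the IP specs; the sensor payload size is assumed.
IPV4_HEADER = 20   # bytes, IPv4 header with no options
IPV6_HEADER = 40   # bytes, IPv6 fixed header

def overhead_ratio(header: int, payload: int) -> float:
    """Fraction of the packet consumed by the IP header."""
    return header / (header + payload)

payload = 8  # e.g. a timestamped sensor reading

# Even an IPv4 header dominates a small sensor frame:
print(round(overhead_ratio(IPV4_HEADER, payload), 2))   # -> 0.71
print(round(overhead_ratio(IPV6_HEADER, payload), 2))   # -> 0.83
```

With 70-80% of every frame spent on addressing, the airtime (and therefore energy) cost of uncompressed IP on small packets is hard to ignore - which is exactly the problem 6LoWPAN header compression targets.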
The latter approach introduces a single point of failure: once the gateway node is dead, all sensors are "lost".
This is where mesh networking is critical. The network continuously monitors all of the available RF paths. When long range links are available, they are used to minimize latency and power consumption. When the packet delivery ratio changes on an RF link, network management will reschedule traffic on other paths.
If the *only* link to a node is over a single RF link (long or short), this will be identified by network management as a potential point of future failure. Then it's up to the application/installer/owner to add additional routing infrastructure.
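A hypothetical sketch of the management logic described above: track packet delivery ratio (PDR) per RF link, route via the best usable link, and flag any node left with at most one path. The function name, threshold, and data shape are all invented for illustration; real network managers schedule per-timeslot links, not just parents.

```python
# Hypothetical sketch of PDR-based path selection and single-point-of-
# failure detection, as described above. Names and threshold are
# illustrative, not from any vendor's network manager.
def best_parent(link_pdr: dict, threshold: float = 0.5):
    """Pick the neighbor with the highest packet delivery ratio.

    Links below the PDR threshold are considered unusable; if one or
    zero usable links remain, the node is a potential future failure
    point and more routing infrastructure should be added.
    """
    usable = {nbr: pdr for nbr, pdr in link_pdr.items() if pdr >= threshold}
    if len(usable) <= 1:
        print("warning: <=1 usable path; add routing infrastructure")
    return max(usable, key=usable.get) if usable else None

links = {"A": 0.95, "B": 0.60, "C": 0.30}   # measured PDR per neighbor
assert best_parent(links) == "A"            # traffic goes via A; B is backup
```

This compresses two of the behaviors above into one function: rescheduling traffic when link quality changes (the `max` over current PDRs) and surfacing nodes that depend on a single RF link.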
What are the engineering and design challenges in creating successful IoT devices? These devices are usually small, resource-constrained electronics designed to sense, collect, send, and/or interpret data. Some of the devices need to be smart enough to act upon data in real time, 24/7. Specifically, the guests will discuss sensors, security, and lessons from IoT deployments.