Not so long ago, alarmists fretted about running out of Internet Protocol address space. Then IPv6 opened up plenty of addresses for machine-to-machine (M2M) communications and the much-hyped Internet of Things. The challenge now becomes making sense of all the sensor data that will stream from our myriad connected devices to the cloud. And the opportunity becomes crafting the applications that address society’s problems today and anticipate the unforeseen needs of tomorrow.
The Internet addressing system conceived in 1977 at the U.S. Department of Defense by Vint Cerf, today chief Internet evangelist at Google, used 32-bit Internet Protocol (IP) addresses to connect people to people, providing more than 4.3 billion unique hosts for trusted user accounts. As the Internet began to be dominated by M2M connections, a revised, 128-bit scheme (IPv6) was adopted, providing roughly 18 billion billion network prefixes and room for more than 300 trillion trillion trillion secure devices.
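For a sense of scale, those counts fall straight out of the address widths; the short Python sketch below simply computes 2^32, 2^64 and 2^128 (reserved and special-purpose ranges are ignored here, so real usable counts are somewhat lower).

```python
# Rough address-space arithmetic for IPv4 vs. IPv6 (reserved ranges ignored).
ipv4_addresses = 2 ** 32       # ~4.3 billion hosts
ipv6_prefixes = 2 ** 64        # ~18 billion billion /64 network prefixes
ipv6_addresses = 2 ** 128      # ~3.4e38 unique addresses

print(f"IPv4 addresses:    {ipv4_addresses:,}")     # 4,294,967,296
print(f"IPv6 /64 prefixes: {ipv6_prefixes:.3e}")    # ~1.845e+19
print(f"IPv6 addresses:    {ipv6_addresses:.3e}")   # ~3.403e+38
```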
Now there is more than enough address space, along with Internet Protocol Security (IPsec), to accommodate the universe of cloud-ready devices that IBM Corp. last year predicted would surpass 1 trillion nodes by 2015.
With its Smarter Planet Initiative, IBM anticipates the endgame for the Internet of Things (IoT). Its researchers envision a global electronic nervous system, with trillions of individual sensors monitoring the status of everything of interest to humans and streaming the resultant exabytes of data to cloud-based cluster supercomputers that extract the ultimate value from the data using analytics software modeled on the human mind.
Picture the Watson AI that last year beat human champions at “Jeopardy,” but on a planetary scale.
“The emergence of the Internet of Things has created such a flood of data that only state-of-the-art information technology can gather, filter, order and interrogate the resulting, massive data set, generically called Big Data,” said Bernie Meyerson, an IBM fellow and vice president of innovation at IBM Research. “The ability to then employ analytics on Big Data in a given field—be that health care, transportation, energy or other Smarter Planet endeavors—promises new insights and routes to optimization benefiting everyone.”
Past technological revolutions have been based on timely innovations; the invention of the steam engine, for example, fueled the Industrial Revolution. But the Internet of Things isn’t based on a breakthrough technology; rather, it leverages micro- and nanoscale versions of established devices.
The engineering hurdles to the IoT center on solving the tough problems in security, standardization, network integration, ultralow-power devices, energy harvesting and, perhaps most important of all, perceived network reliability, so that people will rest assured the planet’s emerging electronic nervous system has their best interests at heart.
ZigBee technology can enable the connected home by letting devices such as lights, thermostats, security sensors, smart meters and in-home displays communicate with one another to create safer, greener, more comfortable living environments.
An example for IPv6 addresses from an earlier EE Times article by Clive Maxfield (July 7, 2011) comes to mind:
"Well, as one simple example, according to calculations and estimations performed by the folks at the University of Hawaii (who obviously have far too much time on their hands), if we account for all of the beaches around the world, together they contain around 7.5 x 1018 grains of sand. Thus, the addressing space of IPv6 is sufficient to give each grain of sand its own unique IP address – and to do this for approximately 5 x 1019 Earthlike worlds – so I don’t think we’re going to run out of IPv6 addresses in the foreseeable future."
The practical data for an M2M system may be much less: I did a sewage signaling system last year, and the data of real practical value were the discharge volume at 12:00 am each day and accident reports, which seldom happen.
However, the system sends a status report to the sewage treatment plant every 20 to 30 seconds, and those reports carry a lot of information about the system. For example, pump-on time versus flow rate is an indication of the wear and tear on the pumps, while a change in pump-on time versus discharge volume is an indication of a change in pipe resistance.
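A minimal sketch of that idea follows; the field names (pump_on_seconds, flow_rate_lps, discharge_litres), the rated-flow constant and the example numbers are all hypothetical illustrations, since the commenter's actual telemetry format isn't described.

```python
# Sketch: turning 20-30 second pump status reports into health indicators.
# Field names and thresholds are hypothetical, not the commenter's real format.
from dataclasses import dataclass

@dataclass
class StatusReport:
    pump_on_seconds: float   # pump runtime during the reporting interval
    flow_rate_lps: float     # average flow while the pump was running, litres/sec
    discharge_litres: float  # volume discharged during the interval

RATED_FLOW_LPS = 10.0        # nominal flow for a healthy pump (assumed)

def pump_wear_ratio(report: StatusReport) -> float:
    """Measured flow vs. rated flow; drifting below 1.0 hints at pump wear."""
    return report.flow_rate_lps / RATED_FLOW_LPS

def runtime_per_litre(report: StatusReport) -> float:
    """Pump-on seconds per litre discharged; a rising trend hints at growing pipe resistance."""
    return report.pump_on_seconds / report.discharge_litres if report.discharge_litres else float("inf")

# Example: compare a fresh report against last month's baseline.
baseline = StatusReport(pump_on_seconds=90, flow_rate_lps=9.8, discharge_litres=880)
latest = StatusReport(pump_on_seconds=110, flow_rate_lps=8.1, discharge_litres=880)
print(f"wear ratio: {pump_wear_ratio(latest):.2f}")                                    # 0.81, down from 0.98
print(f"runtime/litre vs. baseline: {runtime_per_litre(latest) / runtime_per_litre(baseline):.2f}")  # 1.22
```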
Transmitting only exception data or changes has the potential to substantially reduce data transmissions as well as the need for storage at the receiving end. It also makes a lot of sense. At most, a brief signal could be sent indicating "monitoring successful" to rule out the possibility that the sensor is offline or has failed. We've all been deluged at one time or another with floods of unnecessary data that essentially told us "no problem."
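As a rough illustration of report-by-exception, a sensor loop might look like the sketch below; the send() transport, the change threshold and the heartbeat interval are assumptions, not any particular M2M product's API.

```python
# Sketch of report-by-exception: transmit only meaningful changes plus a
# periodic "still alive" heartbeat. send() and the thresholds are assumptions.
import time

CHANGE_THRESHOLD = 0.05      # report only when the reading moves by more than 5%
HEARTBEAT_SECONDS = 3600     # otherwise, one brief "monitoring OK" message per hour

def send(message: dict) -> None:
    print("uplink:", message)    # placeholder for the real M2M uplink

def run(read_sensor, now=time.time):
    last_reported = read_sensor()
    last_heartbeat = now()
    send({"type": "reading", "value": last_reported})
    while True:
        value = read_sensor()
        if abs(value - last_reported) > CHANGE_THRESHOLD * max(abs(last_reported), 1e-9):
            send({"type": "exception", "value": value})   # something actually changed
            last_reported = value
            last_heartbeat = now()
        elif now() - last_heartbeat >= HEARTBEAT_SECONDS:
            send({"type": "heartbeat"})                   # proves the sensor isn't offline
            last_heartbeat = now()
        time.sleep(1)
```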
Interesting data point @ssoelberg; 1 MB is shockingly low (I suspected it was low, as per my earlier post, but not that low)... BTW, would you be interested in presenting those findings at the emerging-technologies conference in Vancouver that I am chairing? (www.cmoset.com), firstname.lastname@example.org
Colin, I’m glad to see you dig into the link between the Internet of Things and Big Data. However, candidly, at KORE Telematics we have a slightly different take on it. It is easy to assume that trillions of sensors monitoring the status of everything will result in exabytes of data to process, but there’s actually a touch of fallacy in that assumption, in my opinion. We’ve done some fairly extensive analysis of the applications running on our dedicated M2M network and found that about 90 percent of cellular-connected M2M applications in the world today probably move less than one MB of data, collectively, in a month. Why? Because the majority of actionable information to be garnered from the Internet of Things is “exception based.” The last thing we need is to have our systems bombarded with information telling us everything is fine. When you discount for that data, the equation becomes drastically altered. We blog about it here: http://blog.koretelematics.com/2012/01/more-m2m-devices-obviously-means-more-data-to-process-right-not-so-fast.html
In existing networks, something like 12 nodes is a practical network size. Maybe they are referencing the newer 2.0 stuff that is not out yet.
The issue is routing: a device has to know about all the other devices on the network, which conflicts with ZigBee being a protocol for small, memory-constrained devices.
Theory is great, but the real world can be different.
@R_Colin_Johnson: The selective storage and use of data depends on the application. Elsewhere on EE Times I gave the example of Brazil Tags, which monitors cars moving through toll gates (electronic license plates are hard for thieves to get around when there is no license plate number on the car!). It is supposed to be collecting several gigabytes of data a day from one city alone. That can add up to terabytes in a few months and, with all cities reporting, may reach many hellabytes in a couple of years!
The maximum number of nodes depends on the number of layers, children and routers in your application; the 10,000 number was just a ballpark. To calculate the maximum for your application, see the following article, which gives examples with 861 and 55,000 nodes:
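The article's exact formula isn't reproduced in the comment, but the counting behind a ZigBee tree topology can be sketched as below, assuming the usual stack parameters Cm (maximum children per router), Rm (maximum router children among them) and Lm (maximum depth). With Cm = 20, Rm = 6 and Lm = 3 this recursion gives 861 nodes, matching one of the quoted examples, though the linked article's own parameter choices may differ.

```python
# Sketch: counting the maximum nodes in a ZigBee tree topology.
# cm = max children per router, rm = max router children, lm = max network depth.
def max_nodes(cm: int, rm: int, lm: int, depth: int = 0) -> int:
    if depth == lm:
        return 1                       # a device at the maximum depth has no children
    end_devices = cm - rm              # children that are plain end devices
    return 1 + end_devices + rm * max_nodes(cm, rm, lm, depth + 1)

print(max_nodes(cm=20, rm=6, lm=3))    # 861
```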