@RickMerritt: There are already solutions in the market today that integrate the plethora of standards into an IP-based portal, like EnNET by GridLogix (which was acquired by Johnson Controls). Though that last example is predominantly in the energy and building management/automation space, a similar approach could work for IoT.
The biggest problem with the Internet of Things is the redundant name, which makes people think this is something new.
Things can be internetworked to the extent that is necessary, from the point of view of the system design. A single factory can have its intranet, linking together the various machines perhaps. If a company has multiple plants, that company can tie them together, to whatever extent is required. The system designer makes this happen. If some of the machines use weird protocols, you install gateways to do the translation. IP is a great lingua franca, but that doesn't oblige everything that is internetworked to speak IP (although that's the general trend).
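The translation-gateway idea above can be sketched in a few lines. This is a minimal illustration, not any particular product's API: it assumes a hypothetical legacy bus whose devices emit fixed 8-byte binary frames, and shows the gateway's core job of turning such a frame into an IP-friendly message (here, JSON).

```python
import json
import struct

def translate_frame(raw: bytes) -> str:
    """Translate one legacy 8-byte sensor frame into a JSON message
    suitable for forwarding over IP. The frame layout is hypothetical:
    device id (uint16), sensor type (uint16), reading (float32),
    all big-endian."""
    device_id, sensor_type, reading = struct.unpack(">HHf", raw)
    return json.dumps({
        "device": device_id,
        "type": sensor_type,
        "value": round(reading, 2),
    })

# A real gateway would loop, reading frames off the legacy bus and
# pushing the JSON out over TCP or UDP; here we translate one
# hand-built frame to show the mapping.
frame = struct.pack(">HHf", 42, 1, 21.5)
print(translate_frame(frame))
```

The point is that the "weird protocol" stays confined to the gateway; everything upstream of it only ever sees IP traffic in a common encoding.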
The car hacking hype. Same thing, right? A car becomes an intranet of things, and when cars get tied together (even for simple telemetry), they become part of what some might call IoT.
I can't say it enough times. Having done this for decades now, even before IP had become as common as it is now, I see the major roadblock as one of fundamental understanding of what IoT means. People seem to be creating inflexible religious dogma out of something that has been evolving gradually over time.
Just like the car hacking articles, security applies to anything on the Internet. Call it IoT or just your own in-home network. Makes no difference, except as a matter of degree.
I think IoT/M2M will always live on different networks and protocols, and we have to live with that. What we need are models and interfaces, which I would like to call abstractions, that let things on different protocols talk to each other.
The situation we are in at the moment is much like how PC software development was carried out before Windows arrived. Back then we made a small program that fit nicely on one diskette, and to be able to sell it we had to supply maybe ten or twenty times that many diskettes holding hardware drivers for different printers, screens, keyboards, etc. The hardware abstraction layer in Windows changed all that, with the boom in software revenue that followed.
So what we need are the same abstractions for M2M/IoT. VSCP (http://vscp.org) presents one such model (no, it's not just a protocol, it's a framework), and it solves some common problems:
1.) A uniform way to discover devices.
2.) A uniform way to configure devices.
3.) A uniform way to present data from devices.
4.) A uniform way to update firmware in devices.
Never mind who made the hardware, who owns the presentation surface, or what protocol is used. IMHO, all discussion of which protocol is the killer protocol is of no interest; the points above must be solved, and they are the key to the successful growth of M2M/IoT.
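The four uniform operations above amount to a protocol-agnostic device interface. The sketch below is illustrative only (it is not VSCP's actual API): it assumes that each underlying transport gets its own adapter class, all implementing the same four operations, so the application never sees the wire protocol. The `FakeSerialThermostat` adapter is a made-up stand-in for some legacy serial device.

```python
from abc import ABC, abstractmethod

class DeviceAbstraction(ABC):
    """One adapter per underlying protocol implements the same four
    uniform operations; applications code against this interface,
    not against Zigbee, serial, or anything else."""

    @abstractmethod
    def discover(self) -> dict:
        """Return identity and capability info for the device."""

    @abstractmethod
    def configure(self, settings: dict) -> None:
        """Push configuration, however the wire protocol encodes it."""

    @abstractmethod
    def read_data(self) -> dict:
        """Return the latest measurements in a uniform shape."""

    @abstractmethod
    def update_firmware(self, image: bytes) -> bool:
        """Flash new firmware; return True on success."""

class FakeSerialThermostat(DeviceAbstraction):
    """Hypothetical adapter for a legacy serial thermostat; the real
    transport code is omitted and replaced with in-memory state."""
    def __init__(self):
        self.settings = {}
    def discover(self):
        return {"vendor": "acme", "model": "t-100"}
    def configure(self, settings):
        self.settings.update(settings)
    def read_data(self):
        return {"temperature_c": 21.0}
    def update_firmware(self, image):
        return len(image) > 0

dev = FakeSerialThermostat()
dev.configure({"setpoint_c": 22})
print(dev.discover(), dev.read_data())
```

Swapping the thermostat for a Zigbee or 6LoWPAN node would mean writing another adapter, and nothing above this interface would change; that is the hardware-abstraction-layer effect the Windows analogy describes.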
I don't share the enthusiasm of many on this thread that IoT networks will just need to be connected... I think the IoT space is a total mess right now, and nothing really can be networked effectively as is... lots of work on standardization is required. I am not planning to reboot my thermostats and my freezer every second day like my PC, nor am I planning to become an IoT home-network specialist to resolve all the networking conflicts that will arise once my garage door opener starts talking to my garden hose ;-)
@gdilla: I didn't mean to suggest that all IoTs need to talk to all other IoTs.
What I meant was that there is no interoperable set of hardware and software choices available for people who build IoTs today.
You could, for example, build a nice Zigbee net today. But if you later decide you like the features of the 6LoWPAN products, you are stuck either forgoing them, ripping out the old Zigbee net, or creating a homegrown bridge.
This is just one scenario where the lack of interoperability and standards raises its head. There are probably as many scenarios where this comes up as there are deployments.
It all depends on the scalability of the solution. If you trace the history of networking there were a number of protocols that were designed for local network segments because there would be no reason for the data to go any further than that. Novell, DECnet, Microsoft networking, and innumerable others were built to scale only to the size of at most a medium-sized corporate network. When TCP/IP came along it swept all of these aside because it was a truly scalable and uniform solution.
Many companies would argue that sensor networks are different, that the information should be limited to strictly local networks, but I sat in too many meetings in the '80s and '90s where the same class of people made the same arguments about their internal networks and came to the conclusion that they didn't really need this newfangled "Internet" thingy.
Better said than I! And companies can publish their APIs/SDK on managed services like Mashery and the like if they really care about letting people mash things up (or just for their own third party partners).