The issue described may or may not be a problem, but it has little to do with the big picture of IoT, the success of which will be dominated by the availability of effective bridging technologies.
Bridging will be used to link protocols that are much more space- and processor-efficient than IPv? with the rest of the world. This is happening already, and it causes problems not because of memory leaks but because there are few standards, and those that do exist are exceptionally poor at the job (Zigbee, for example).
Much in the same way that there never was an IPv4 crisis, because it is a trivial thing for gateways to bridge between IPv4 and IPv6, allowing IPv4 addresses to be mapped and reused, so effective bridging will empower IoT.
Jack, you are right about bad programming having been there all along.
My concern now is the huge number of discrete addresses that will be assigned to inanimate things with short lifetimes, resulting in a bloat of addresses that are taken but useless. And somebody is going to find a way to do something bad with them, although I have not figured out what just yet. But when we get to having a 256-character internet address, it will probably be inconvenient and take up lots of memory just to store addresses. That is the biggest danger that I see.
This is true, but the article makes it seem like memory leaks are a new issue, and they are not. It is something we have been dealing with for literally decades. Good programming practices deal with it; bad ones do not. There is no doubt lots of bad code in the public domain with memory leaks, and lots without. The article (blog) practically implies it's a done deal that it is going to happen all the time, and I do not think that is true.
Jack, the problem that I see is that when there are huge amounts of memory available, bad code that constantly consumes memory will be able to run for a long time before it crashes. The result will be that a lot of it will enter the real-world public domain and cause problems there. And unfortunately there is no mechanism in place to prevent this kind of problem.
Memory leaks have been and always will be an issue in programming. The fact that a memory leak can occur in an IPV6 implementation is just that: an implementation detail. We have to ask whether it is any worse than any other memory leak that causes a crash. I do not think so. There will always be mistakes made.
As pointed out, there are ways to encapsulate IPV4 within subnetworks, and one has to expect that, much of the time, this is exactly what will happen. I do expect a lot of upgrading of routers and the like.
Most of the time, if a device is implemented with an 8051, etc., it will just not be feasible to "upgrade" the firmware to support IPV6. That said, how many internet-enabled 8051-powered devices a) exist and b) can be remote/field upgraded? To that end, talking about the issue is almost a non-starter in most cases. The device will either be encapsulated by the router or upgraded to newer hardware. No other solution will exist.
The bad news is that a lot of legacy code will have to be rewritten. The good news is (also) that a lot of legacy code will have to be rewritten. If you look at modern versions of C/C++ and other languages mentioned in this article there have been tremendous advances in controlling memory leaks by default rather than depending on fastidious programmers to do so. The younger programmers in my group are much more comfortable with these new features, which is a good sign for the code quality of these rewrites in the future.
The real advantage is that we are now pretty much assuming that we are working with at least a 32-bit architecture. Trying to do this kind of code with a Z-80 or 6800 would be an exercise in futility. We are also generally working with megabytes of memory rather than kilobytes. One of the first systems I worked on back in the dark ages had 32 kbytes of RAM. The hardware engineer that I was working with made the comment that he couldn't imagine why anyone would need more than that. This was, of course, before Windows... :-)
The use of all three techniques is more than likely what we'll be encountering over the next several years. However, this will require that developers become aware of the new protocol and how its requirements and operation differ from those of IPv4. Fortunately, most of the differences are found in the set-up and tear-down of the sockets. Once they are configured, the actual sending and receiving of data across the sockets is identical between the protocols.
Where the use of IPv6 becomes more tricky is that the protocol itself is rather complex. Router and neighbor discovery, the lack of a broadcast capability (IPv6 uses multicast heavily), and even just the size of the addresses will put a strain on systems with limited memory. For example, you won't find an IPv6 stack that fits in 4K or even 64K of RAM the way you can with IPv4 implementations. This means that processors like the venerable 8051 need not apply if IPv6 is included.
Even with processors like the ARM Cortex M0/3/4, memory usage will be an important consideration. This means that when it comes to IPv6, we will need to go back to an earlier time when saving bytes in a program implementation may be the difference between an application that fits and one that doesn't.
There are several transition techniques that already exist for bridging from IPv4 to IPv6. First, there is a way to encapsulate an IPv4 address inside an IPv6 address. Additionally, there is NAT64-style translation, which allows the use of IPv4 on one side of a given router device and IPv6 on the other. This is in addition to several tunneling techniques for encapsulating packets of one type within the other.
Nonetheless, we will likely need to support simultaneous v4 and v6 addressing in our applications for the next decade. So, unlike Y2K, where there was a single "drop-dead" date, the transition to v6 will be more of a battle of attrition.
For networks that are completely closed, there's no need to convert other than to lessen long-term support costs. For networks that have Internet access, though, the need to support v6 will become more important as v6 adoption increases.
The threats and worries discussed in the article are worth discussing, but at the same time we have to allow ample room for new developments and advancements; this is not the first time modifications have happened in the internet era. The code is constantly being modified by a global community of programmers.
Also, the architecture of the network protocol suite is modular and layered, so if someone comes up with perfectly working code at the network layer, it can be accepted into all the variants of that particular OS.