I guess they are going after the billions of golf balls that don't yet have IP addresses. Just think if every golf ball could talk to all the others and to all the golf clubs knocking them around. Lots of ways to improve your game with access to all that data.
What is the best way to avoid that sand trap? Just ask your golf ball to talk to all its friends. Maybe your club has some advice on your follow-through. I see a big market for the Internet of Golf Things (IoGT).
The hoopla failed to mention one thing: does this tiny Arm processor have a MAC and a PHY? For that matter, does it have nonvolatile data memory and flash for code storage? If it lacks any of those, then it isn't even close to being ready for IoT. As for 32 bits: there are a lot of really useful IoT products out there running just fine on 16-bit processors (and some even on 8-bit processors). This world doesn't need to migrate everything to 32-bit processors (or worse yet, 64-bit or 128-bit), especially ones that aren't particularly memory efficient. The number of applications that can benefit from a 32-bit CPU that fits in the dimple of a golf ball with only 16 pins (read: <= 14 I/Os) is fairly small, especially IoT devices that don't need access to the Internet. Let's see, how does one put an IP address into this?
That depends on a number of factors in the application, primarily the nature of the data. A lot of IoT devices involve bit-level control (the more bits in a word, the less efficient it is to control them individually). They often read ADCs, which generally have less than 16 bits of resolution. While some programmers immediately convert the results to floating point, that is generally an unnecessary and inefficient thing to do, and the article didn't mention whether this tiny wonder has an FPU. If it doesn't, then floating point is going to slow things down considerably, whether it is 16- or 32-bit.
32 bits become useful if a lot of the variables are numeric and require more than 32K of resolution, or are large bit-mapped fields. It can also be advantageous if there is a large amount of code or data (not likely on such a tiny processor) that could benefit from pointers and address fields that are 32 bits wide.
Another thing that could sway the decision would be a large bit-mapped display (LCD) with a complex GUI or video or something. But I don't see that connected to a CPU with 14 I/O lines.
Finally, 32 bits become useful if the programmer is writing with a lot of high-level abstractions without knowing how inefficiently they are implemented, or worse yet, using one of the numerous popular interpreted languages that are very code-inefficient and memory hungry.
I have a control processor with two Ethernet ports on two LANs (for redundancy). It communicates with up to 100 other Ethernet devices as part of a dedicated system (high-reliability aerospace stuff) and handles complex event scheduling at the rate of 100 to 1000 events per second. It can process Ethernet packets at the rate of 1000 per second. The kicker: it is a 16-bit processor with 256 KB of total storage for code and data, including the protocol stack and Ethernet buffers! There are a few places where I could have doubled or tripled the speed if I had 32-bit registers. But in general, porting it to a 32-bit processor would have resulted in less than a 10% performance improvement, and porting it to an ARM would have doubled the code memory requirements. (BTW, the 16-bit CPU had two MACs and a PHY on the die, as well as four serial ports, four DMAs, and the usual assortment of timers, interrupt controller, etc. It was 184 pins -- clearly far more hardware than 14 I/Os could use, and yet still quite comfortably in the 16-bit realm.)
Where 32 bits does help: code over 64K in size (e.g., most 'standard' protocol packages) without awkward compiler kludges, and computationally intensive operations such as encryption, protocol processing, sensor processing, etc.
The 'sell' is that using wide instruction/data processors results in fewer instructions per function, reducing power. However, the power-compromised processes that most 32-bit processors are fabricated on (to reduce cost) result in higher consumption than the best-in-class 16-bitters at low-to-medium performance points. At higher performance levels, the 32-bitters are out in front.
As many IoT apps are battery powered, I'd say there is a long, successful life still ahead for 16 bit machines and some 8-bitters also.
While many applications of a tiny MCU like the KL03 could fit in an 8-bit or 16-bit processor, if your application is doing signal processing with multiply-accumulates, the number of bits in your intermediate results can easily exceed 16. Dealing with this is a pain on a 16-bit processor.
With a 32-bit architecture, you usually don't need to think about computations overflowing and producing strange results, so you can concentrate on algorithms. With an 8- or 16-bit architecture, you need to keep this in mind all the time.
The KL03 has only 32 KB of flash, so you don't need 32 bits for addressing -- today. However, you may want to start with this chip and some day grow into a chip with far more memory, in which case you have to bank-switch or go to a 32-bit architecture. It's expensive to switch architectures, so you might as well go with a 32-bit architecture now and save the hassle of switching later, unless there are serious downsides.
How about those downsides? A 32-bit CPU core may be twice the size of a 16-bit core, but the chip area is probably dominated by Flash and SRAM, so CPU core area doesn't have much impact on price.
How about code density? The ARM Cortex-M0+ uses the Thumb instruction set (ARMv6-M), in which almost all instructions are 16 bits long instead of 32. So code density is similar to an 8-bit or 16-bit processor.
How about ecosystem? Well, IMO ARM has pretty much everyone else beat in that category.
Dividing line? If you're sure your application isn't doing arithmetic that's going to overflow (e.g., you are mostly moving data around and not doing much arithmetic) and the application is always going to be small, 16 bits is a good candidate. Otherwise, you might as well go with a tiny ARM and spend your energy worrying about other things.
@Max: Going through the KL03 factsheet (link below), I saw it has three communication interfaces: (1) one 8-bit SPI module, (2) one low-power UART module, and (3) one 1-Mbps I2C module. I could not find anything that would let it connect to the Internet, so how is it meant for IoT? Am I missing anything? Is there a newer version that is not published on the Freescale website?
Okay! Thanks for the information and the reference to the ENC28J60. Well, 10BASE-T would be sufficient for many IoT applications, where data packets might be transmitted/received intermittently, and I was looking for such a tiny device for a small home-automation application.
I am a marketer at Freescale for Kinetis MCUs, and I couldn't help but want to jump in on the good discussion here on this article. There was a comment about how to put an IP address into the chip and I wanted to mention that there are software solutions that can reduce the complexity and overhead of key Internet and Web protocols. Also, wanted to point out that for those designers looking for an integrated solution for IoT connectivity, the Kinetis W series integrates class-leading sub-1 GHz and 2.4 GHz RF transceivers with ARM® Cortex® cores. This could be another option for those wanting a more integrated solution.