If you were designing a single product for today, then 8 or 16 bits might be fine. But if you are at the architectural decision stage of a project targeting some form of Internet connectivity, would you really want to risk being hobbled by an 8- or 16-bit architecture?
Comparatively, a 16-bit architecture offers only a limited code-size advantage over 32-bit ARM, and with expected die shrinks over time even that advantage pretty much goes away.
With 8 bits you will quickly run into performance issues, not to mention limited off-the-shelf support in terms of software IP (networking stacks, etc.).
Add in potential time-to-market advantages from common tool chains, and you have what is likely the best architecture for the targeted market.
In my opinion, "Why does it need ... ?" is the wrong question. I prefer the "Is it more economical to ... ?" pattern.
One big cost driver here in Europe, and I think also in the USA and Japan, is human resources. The big human-resource advantage of the ARM ecosystem is that the ARM M*, R*, and A* families all share the same ISA concepts and the same tool chain. As a result, teaching and learning effort is minimized.
I'm very interested in knowing the power consumption and, more importantly, the operating conditions. 32 kB of flash and 4 kB of SRAM do not sound like a lot. I have the same thought as Daleste: for the application areas mentioned in the article, 16 bits might very well be enough.
What are the engineering and design challenges in creating successful IoT devices? These devices are usually small, resource-constrained electronics designed to sense, collect, send, and/or interpret data. Some of them need to be smart enough to act on data in real time, 24/7. Specifically, the guests will discuss sensors, security, and lessons from IoT deployments.