If you were designing a single product for today, then 8 or 16 bit may be fine. But if you are at the architectural decision stage of a project targeting Internet connectivity of some sort, would you really want to risk being hobbled by an 8- or 16-bit architecture?
Comparatively, a 16-bit architecture offers only a limited code-size advantage over 32-bit ARM, and with expected die shrinks over time, even that pretty much goes away.
With 8 bits you will quickly run into performance issues, not to mention limited off-the-shelf support in terms of software IP (protocol stacks, etc.).
Add in potential time-to-market advantages from common tool chains, and you have what is likely the best architecture for the targeted market.
In my opinion, "Why does it need ...?" is the wrong question. I prefer the "Is it more economical to ...?" pattern.
One big cost driver here in Europe, and I think also in the USA and Japan, is human resources. The big human-resource advantage of the ARM ecosystem is that ARM M*, R*, and A* all share the same ISA concepts and the same tool chain. As a result, teaching/learning effort is minimized.
I'm very interested in knowing the power consumption and, more importantly, the operating conditions. 32kB flash and 4kB SRAM do not sound like a lot. I have the same thought as Daleste: for the application areas mentioned in the article, I feel like 16 bits might very well be enough.