The situation with microcontrollers today reminds me of electric motors 100 years ago. They started as an expensive novelty of which you would have one per factory floor, and ended up cheap and ubiquitous. Just as with microcontrollers, you can amuse yourself by counting the motors around you: wristwatch, cellphone (buzzer motor), one or two in each disk drive, door locks, windows, timers, etc. etc.
It makes sense: wiring is expensive to make and install, and fault-prone: for instance, a squirrel chewed through half of the wires in my main engine harness.
It's cheaper and more reliable to run a serial connection everywhere (power, ground, data), which implies communication and execution nodes all over the place, including in doors, windows, and mirrors.
I read that some companies have now begun using microcontrollers instead of timed fuses in individual firecrackers.
GPUs do not really have that many cores. The "core" count is inflated by counting each SIMD/vector lane as a separate core. NVIDIA's terminology uses "Streaming Multiprocessor" for what I would call a core, and the Fermi GPUs provided 32 "CUDA cores" per "Streaming Multiprocessor" (with up to 16 SMs on a chip).
GPUs also use multithreading, which might be viewed as virtual cores, further increasing the number of contexts available. (Intel's SMT/hyperthreading does present threads as virtual processors. MIPS' MT ASE distinguishes between Thread Contexts and Virtual Processor Elements.)
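To put numbers on this counting scheme, here is a quick sketch using the Fermi figures above. The 48-resident-warps-per-SM limit is an added detail from NVIDIA's Fermi documentation (compute capability 2.x), not something stated above:

```python
# How one Fermi-class chip yields very different "core" counts
# depending on what you choose to count.
sms = 16           # Streaming Multiprocessors: the independent cores
lanes_per_sm = 32  # SIMD lanes per SM, marketed as "CUDA cores"
warps_per_sm = 48  # resident warps per SM (hardware thread contexts)
threads_per_warp = 32

marketed_cores = sms * lanes_per_sm                      # 512 "CUDA cores"
resident_threads = sms * warps_per_sm * threads_per_warp  # 24576 threads in flight
print(marketed_cores, resident_threads)  # → 512 24576
```

So the same chip can honestly be described as 16 cores, 512 lanes, or tens of thousands of thread contexts, which is why the marketing numbers look so inflated next to CPU core counts.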
(You might guess that I like reading about computer architecture!)