The situation with microcontrollers today reminds me of electric motors a hundred years ago. They started as an expensive novelty, of which you would have one per factory floor, and ended up cheap and ubiquitous. Just as with microcontrollers, you can amuse yourself by counting the motors around you: wristwatch, cellphone (buzzer motor), one or two in each disk drive, door locks, windows, timers, and so on.
It makes sense: wiring is expensive to make and install, and it's fault-prone (a squirrel once chewed through half of the wires in my main engine harness, for instance).
It's cheaper and more reliable to run a serial connection everywhere (power, ground, data), which implies communication and execution nodes all over the place, including the doors, windows, and mirrors.
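In a car that serial connection is typically a bus like CAN or LIN. Just to make the idea concrete, here is a minimal sketch of the software side of one such node, written against Linux's SocketCAN API; the interface name "can0" and the frame ID 0x123 are made-up placeholders, and error checking is omitted:

    #include <cstdio>
    #include <cstring>
    #include <unistd.h>
    #include <net/if.h>
    #include <sys/ioctl.h>
    #include <sys/socket.h>
    #include <linux/can.h>
    #include <linux/can/raw.h>

    int main() {
        // Open a raw CAN socket and bind it to the bus interface.
        int s = socket(PF_CAN, SOCK_RAW, CAN_RAW);
        ifreq ifr {};
        std::strcpy(ifr.ifr_name, "can0");   // assumption: interface is named can0
        ioctl(s, SIOCGIFINDEX, &ifr);
        sockaddr_can addr {};
        addr.can_family = AF_CAN;
        addr.can_ifindex = ifr.ifr_ifindex;
        bind(s, reinterpret_cast<sockaddr *>(&addr), sizeof(addr));

        // Sit on the shared bus and react to frames addressed to this node.
        can_frame frame;
        while (read(s, &frame, sizeof(frame)) > 0) {
            if (frame.can_id == 0x123)       // 0x123: a made-up "lock/unlock" ID
                std::puts("actuate the door lock here");
        }
        return 0;
    }

Every node sees every frame and just filters for the IDs it cares about, which is exactly why three shared wires can replace a dedicated run to each door and mirror.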
I've read that some companies have even begun using microcontrollers in place of timed fuses in individual firecrackers.
GPUs do not really have that many cores. The "core" count is inflated by counting each SIMD/vector lane as a separate core. NVIDIA's term for what I would call a core is "Streaming Multiprocessor" (SM), and the Fermi GPUs provided 32 "CUDA cores" per SM, with up to 16 SMs on a chip.
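To make the arithmetic concrete, here's a small sketch using the CUDA runtime host API: the driver reports the SM count directly, and the marketed "core" count is just that number multiplied by the SIMD width. The 32-lanes-per-SM figure below is the Fermi (compute capability 2.0) value from above; other generations use different multipliers.

    #include <cstdio>
    #include <cuda_runtime.h>

    int main() {
        cudaDeviceProp prop;
        if (cudaGetDeviceProperties(&prop, 0) != cudaSuccess) {
            std::fprintf(stderr, "no CUDA device found\n");
            return 1;
        }
        // What the hardware actually has: independent SMs, each a wide SIMD unit.
        int sms = prop.multiProcessorCount;
        // What marketing counts: one "core" per SIMD lane. 32 lanes per SM is
        // the Fermi CC 2.0 figure; other compute capabilities differ.
        int lanes_per_sm = 32;
        std::printf("SMs (cores, in the usual CPU sense): %d\n", sms);
        std::printf("\"CUDA cores\" (SMs x SIMD lanes):     %d\n", sms * lanes_per_sm);
        return 0;
    }

On a full Fermi chip this prints 16 and 512: same silicon, very different-sounding numbers.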
GPUs also use multithreading, which might be viewed as providing virtual cores, further increasing the number of available contexts. (Intel's SMT/hyperthreading does present threads as virtual processors; MIPS's MT ASE distinguishes between Thread Contexts and Virtual Processing Elements.)
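You can see the "virtual core" view from software, too: the standard C++ query below reports logical processors, so a 4-core CPU with 2-way SMT typically reports 8.

    #include <cstdio>
    #include <thread>

    int main() {
        // hardware_concurrency() counts *logical* processors: on an SMT/
        // hyperthreaded CPU each hardware thread counts, not each physical
        // core. (It may return 0 if the value isn't computable.)
        unsigned n = std::thread::hardware_concurrency();
        std::printf("logical processors visible to the OS: %u\n", n);
        return 0;
    }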
(You might guess that I like reading about computer architecture!)
Drones are, in essence, flying autonomous vehicles. The pros and cons surrounding drones today may well foreshadow the debate over the development of self-driving cars. In the context of a strongly regulated aviation industry, "self-flying" drones pose a fresh challenge. How safe is it to fly drones in different environments? Should drones be required to stay within visual line of sight, as piloted airplanes are? Join EE Times' Junko Yoshida as she moderates a panel of drone experts.