I disagree. Look up Amdahl's law: it shows how the serial portion of an algorithm caps the overall speedup, no matter how many cores you add. The stuff we know how to make parallel, like graphics, is embarrassingly parallel and the HW does that well. But just sticking 4 CPUs on a die is not "mission accomplished". A real-world algorithm has to balance using multiple CPUs against competition for DRAM and cache (both of which are easily maxed out on mobile chips) and the additional complexity of synchronization. Often the small net gain is simply not worth the complexity and cost of development. On PCs the large wins, for consumer scenarios, have come from running completely different tasks on the different cores. Those chips have monstrous DRAM bandwidth and big caches, along with multi-watt power budgets, that make that practical. In the mobile space that does not work nearly as well.
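For anyone who wants to see the effect concretely, here is a quick sketch of Amdahl's law in Python (the 90% parallel fraction is just an illustrative assumption, not a figure from any particular chip):

```python
def amdahl_speedup(parallel_fraction, n_cores):
    """Amdahl's law: overall speedup when only parallel_fraction
    of the work can be spread across n_cores."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / n_cores)

# Even with 90% of the work parallelizable, 4 cores give well
# under a 4x speedup, and the serial 10% caps the best case
# at 10x no matter how many cores you add.
for cores in (2, 4, 8, 1000):
    print(cores, round(amdahl_speedup(0.9, cores), 2))
```

Running it prints roughly 1.82x for 2 cores, 3.08x for 4, 4.71x for 8, and only 9.91x even for 1000 cores, which is exactly the "large block" effect described above.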
So for the HW engineer who figures 4 cores on a chip is job done: no, the work has barely started on figuring out how to make mobile chips that really perform on battery power. You will see those 4 cores, and the GPU, coordinate in a few specific high-value scenarios where very expensive software engineering can squeeze useful gains out of the chip's limits, but we are nowhere near done with figuring out the HW.
Nice video. Your final question made me think about what is actually happening with multi-core processors today. Parallel programming is an ongoing research topic. Having 2 or 4 or more cores in a single package can in principle make it two or four times more powerful; however, the software requires a great deal of design effort in order to seize all that parallel execution capacity. So far, it looks like software development is behind hardware in this regard, right?
Excellent! I'm also a thin crust kind of guy, and fairly minimalist with the toppings. Tomato, cheese, and basil are always good, and some olive oil helps too. The crust needs to be crisp. The edges of the crust need to be a little browned, some might even call it burned. None of that soft chewy stuff.
One problem with the constant downsizing is the same problem that occurs with space-based hardware: you get to the point where turning a transistor on and off becomes a stochastic event. In space, this is caused by radiation impinging on the transistors. With miniaturization, voltages have to be kept so low to prevent arcing, and the traces are approaching the atomic scale, so it becomes more and more difficult to even know what it means to have current flow. Is it just leakage through the insulator?
About image processing. You mentioned separating out the moving objects in an image from the static objects or background, to save on transmission bandwidth. This is the basic technique used in all of the MPEG compression variants. On the one hand, reduced transmission bandwidth has to save power. But on the other hand, when you increase the efficiency of the compression algorithm, you also increase the processing power needed in both encoding and decoding. A two-edged sword, one would think.
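The idea can be sketched in a few lines of Python: split each frame into blocks and only "transmit" the blocks that changed since the previous frame. This is a toy stand-in for MPEG's motion-compensated prediction (real codecs also search for motion vectors and entropy-code the residuals, which is where the extra processing cost comes from):

```python
def changed_blocks(prev_frame, cur_frame, block_size=4):
    """Return (offset, block) pairs for blocks that differ from the
    previous frame -- the only data worth transmitting.
    Frames are flat lists of pixel values, toy stand-ins for images."""
    blocks = []
    for i in range(0, len(cur_frame), block_size):
        cur = cur_frame[i:i + block_size]
        if cur != prev_frame[i:i + block_size]:
            blocks.append((i, cur))
    return blocks

# Static background, one "moving object" in the second block:
prev = [0] * 16
cur = [0, 0, 0, 0, 9, 9, 0, 0] + [0] * 8
print(changed_blocks(prev, cur))  # only the block at offset 4 is sent
```

With a mostly static scene, only a fraction of the blocks get sent, which is where the bandwidth (and transmit-power) savings come from.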
Thanks for posting, Sylvie. But I want to make sure that Weili Dai, Marvell's co-founder, gets credit; she has been using the "pizza" analogy for a while.
More on her thinking is explained in EE Times' one-on-one interview with Dai earlier this year:
Drones are, in essence, flying autonomous vehicles. The pros and cons surrounding drones today might well foreshadow the debate over the development of self-driving cars. In the context of a strongly regulated aviation industry, "self-flying" drones pose a fresh challenge. How safe is it to fly drones in different environments? Should drones be required to stay within visual line of sight, as piloted airplanes are? Join EE Times' Junko Yoshida as she moderates a panel of drone experts.