well, i would hate to say it, but this is what happens when a company "pre-announces" something -- prematurely. i can see MTK couldn't resist getting into the octa-core fray -- because there is so much buzz in the media -- and yet not being able to talk about it in detail (until they have products later this year) does a disservice to the industry and to MTK itself. Speculation moves fast, and there goes the real opportunity for MTK to tell its own story on its own terms.
Yes, announcing before they have a showable product is like shooting themselves in the foot, since they can't defend themselves against criticism. They don't have a product to showcase and tout results from. The criticism from Qualcomm came at an optimal time (for Qualcomm) because all MTK can do is wait.
What issues will the octa-core bring to engineers designing with it? I know some embedded systems engineers were dubious about multicore. However, software developers should welcome the challenge, maybe.... In this Feb 2013 article, the Multicore Association claims: "The multicore era shifts more of the responsibility for performance gains onto the software developer who must direct how work is distributed amongst the cores. In the future, the number of cores integrated onto one processor is expected to increase, which will place even greater burden on the software developer."
There are major issues for programming heterogeneous multicore processors.
The clustered migration approach adopted by Samsung in the Exynos 5 Octa has the advantage of presenting a uniprocessor programming model to the software engineer.
When you get into global task scheduling across big, LITTLE, and graphics cores, you need a task scheduler that can -- moment by moment -- pay attention to workloads, available resources, and what runs best where.
Not only do those algorithms need to be smart and properly prioritized, they need to be debugged to make sure that tasks don't end up blocking each other or falling into wasteful behaviors.
This software will normally sit somewhere near the operating system and so starts to be a non-SoC provider issue.
Like all these good things, it requires teamwork.
My understanding is that ARM is upstreaming software for this into Linux through Linaro.
In MTK's defense, we do know the following. This is what MTK disclosed when it announced its quad-core AP for tablets using HMP.
While ARM enables HMP with its IP and software, MediaTek claims it also added things like "an advanced scheduler algorithm, combined with adaptive thermal and interactive power management" to maximize performance and energy efficiency of the ARM big.LITTLE architecture. "This technology enables application software to access all of the processors in the big.LITTLE cluster simultaneously for a true heterogeneous experience," the Taiwanese company said.
Programming is and will continue to be an issue as the number of cores increases. Some would argue that we have already reached a point of diminishing returns. However, the programming models will evolve, especially as these are used in mobile devices and other applications that perform a wide variety of functions driven not just by the user, but also by cloud services, sensors, and other background applications.
The real point of the new MediaTek processor is that this is an all-LITTLE solution, as opposed to a big.LITTLE architecture like the Samsung Exynos 5 Octa. These Cortex-A7 cores are about a fifth of the size and power of the Cortex-A15 cores. The result will be a much smaller chip with much lower power consumption. This opens up new possibilities for lower-power and lower-cost devices.
@Jim: I agree. We have seen mainstream chips pretty quickly jump from one to two to four to eight cores. But at this stage we seem to be getting into a more heterogeneous world where it's a bunch of mixed cores for specialized jobs, in part because the existing tools and jobs only parallelize so much.
@Peter: It will be interesting to see if a line emerges as to what part of this is stuff Linaro will enable for everybody and what part, if any, becomes stuff SoC vendors try to use to differentiate their offerings. Clearly MTK is seeking an edge here this time around.
I think engineers won't reinvent the wheel, so the major algorithmic approaches to core wake-up, going to sleep, task migration, cache handling, etc. will be developed once (by ARM/Linaro) and used off the shelf. The companies, however, may need to tune that baseline to their use of core resources.... if they are not on standard configs such as 4 x 4... or have reasons to insert special-case algorithms.
They will then need to test the overall effect as this will be key to power saving and getting the best performance for key applications out of the multicore SoC. And that may then invoke tweaking of algorithms or the inclusion of hardware accelerators etc.
I think the industry just moved up in abstraction.
why do companies increase the number of cores by 2x? why 8? would it be easier to skip some steps and design a 64-core or 128-core processor? from my layman's perspective it would not be much more work to do it... comments?
In terms of hardware design it would not be much more work.
And there are companies producing such many-core processors, such as Kalray, Adapteva, and Intel (in research mode). But these tend to address particular classes of problems for which they can be optimized.
In the more general case what would you use all the cores for, would they be homogeneous or heterogeneous, and how would you organize the memory?
There are some problems that can be easily parallelized and use all the cores efficiently. But many cannot.
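The "many cannot" point is just Amdahl's law: if only a fraction p of a workload parallelizes, the speedup from n cores is capped at 1/(1-p) no matter how large n gets. The quick sketch below uses illustrative numbers, not measurements of any real chip.

```python
def amdahl_speedup(p, n):
    """Ideal speedup on n cores when fraction p of the work is parallel."""
    return 1.0 / ((1.0 - p) + p / n)

for n in (2, 4, 8, 64):
    print(n, round(amdahl_speedup(0.95, n), 2))
# Even with 95% of the work parallel, 64 cores give only ~15.4x, not 64x.
```

This is why jumping straight to 64 or 128 general-purpose cores buys little for typical workloads, while the specialized many-core chips mentioned above target problems where p is very close to 1.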
A scheduling system that can remain aware of all the resources (cores) available, wake them up and retire them, know what runs best where, keep control of the memory, cope with interrupts and so on becomes more difficult as the core count increases.
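The wake-up/retire part of that scheduling problem can be sketched as a tiny hotplug-style governor: bring a core online when average load runs hot, take one offline when load runs cold, and leave a hysteresis band in between so cores don't flap on and off. The thresholds here are invented for illustration and stand in for the much richer policies a real OS uses.

```python
def govern(active_cores, avg_load, max_cores, wake_at=0.8, retire_at=0.3):
    """Return the new number of active cores for the observed load.

    avg_load: average utilization across active cores, 0.0..1.0.
    Hysteresis: no change while retire_at <= avg_load <= wake_at.
    """
    if avg_load > wake_at and active_cores < max_cores:
        return active_cores + 1  # busy: wake another core
    if avg_load < retire_at and active_cores > 1:
        return active_cores - 1  # idle: retire a core to save power
    return active_cores          # in the hysteresis band: hold steady

print(govern(2, 0.9, 8))  # 3: load high, wake a core
print(govern(2, 0.1, 8))  # 1: load low, retire a core
print(govern(2, 0.5, 8))  # 2: no change
```

As the comment thread notes, even this toy policy gets harder to reason about as core count grows -- interrupts, memory placement, and per-core thermal limits all feed back into the same decision.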
The whole software industry needs to be led gently away from the uniprocessor programming model and towards something that can use the resources that hardware will be able to provide.
I think the issue is that at 4- and 8-cores the scheduling can be almost done manually. It is a tractable problem.
As you go from multicore to many-core, it requires algorithmic solutions that can cope with general circumstances -- and can be tested and shown to cope with general circumstances -- which becomes non-trivial.
I think we are at that threshold.
As I said I think the industry is in the process of moving up in abstraction. The engineering frontier will be to worry more about these scheduling algorithms. Others and far fewer will worry about processor architectures and yet others and even fewer will worry about transistor structures.