It may be that the focus of attention will be, and should be, on software scheduling.
The scheduler would then decide at run time, based on available resources, where tasks, threads, etc. should reside.
Extensive workload simulation on virtual prototypes then tells you which resources it is best to put in the SoC. Although each time you strip a resource out in the interest of saving area/power, you would need to resimulate to check for unintended consequences on tasks with peak performance requirements.
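As a rough illustration of the kind of run-time placement decision described above, here is a minimal sketch of a greedy scheduler that assigns each task to whichever core currently has the most spare capacity. The `Core`/`Task` classes, the core names, and the cost numbers are all invented for illustration; a real multicore RTOS scheduler would use a far richer cost model (affinity, power state, cache locality, deadlines).

```python
# Hypothetical sketch only: greedy task placement on heterogeneous cores.
# All names and numbers below are assumptions for illustration.
from dataclasses import dataclass

@dataclass
class Core:
    name: str
    capacity: float          # abstract throughput units this core offers
    load: float = 0.0        # units already committed to placed tasks

    @property
    def headroom(self) -> float:
        return self.capacity - self.load

@dataclass
class Task:
    name: str
    cost: float              # estimated throughput units the task needs

def place(tasks, cores):
    """Assign each task (largest first) to the core with the most headroom."""
    placement = {}
    for task in sorted(tasks, key=lambda t: t.cost, reverse=True):
        best = max(cores, key=lambda c: c.headroom)
        best.load += task.cost
        placement[task.name] = best.name
    return placement

# Example: one "big" core and one "little" core, three tasks.
cores = [Core("big0", 10.0), Core("little0", 4.0)]
tasks = [Task("video", 6.0), Task("ui", 2.0), Task("net", 3.0)]
print(place(tasks, cores))
```

Running the same workload mix against different candidate core configurations is, in miniature, the resimulation loop described above: strip a core out, re-run the placement, and see which tasks no longer fit.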
Hardware always outpaces software - and multi-core has been around since the eighth day of Creation. So why haven't the toolmakers kept up?
Dunno - maybe it's too hard? Maybe multi-cores are nice but don't really work?
I don't think so - somebody's gonna make a lot of money if they crack this nut.
I sure would love to run a multi-core RTOS on my bipedal robot - one moment it's mild-mannered Clark Kent, the next moment it's Stooperman - locked up and helpless in the presence of a new and improved bug. But it's still multi-core and very, very cool.
You are right, Peter, in that there are tools for the software portion of a single processor and some work related to multi-processor, but almost nothing on the hardware side. We are beginning to see things such as specific cell libraries optimized for processors, but we haven't yet got to the point where synthesis, and place and route, can be optimized based on knowing it is a processor and thus the general structures likely to be seen. Also, there is nothing that would help with things such as knowing which processor to use.
One of the issues is that while there is some tool support around specific processor architectures (compilers, debuggers, etc.), there is not much unified support for heterogeneous multiprocessing chips.
Still, a couple of UK firms are trying to help out. I am thinking of Imperas and UltraSoC.