Shared-memory multiprocessing (circa 1970) was most likely the first "dual-core" and ran in a multi-tasking environment. Memories were 4-way interleaved so that four accesses could be overlapped.
The gain was about 40%, against a theoretical maximum of 100%.
The "tasks" shared hardware resources only, unlike threads, which can share data and may require synchronization.
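To illustrate the distinction (a minimal sketch, not from the original post): modern threads share data, so concurrent updates to that data must be serialized, for example with a lock.

```python
import threading

# Two threads sharing one data item (counter) -- unlike the 1970s "tasks"
# that shared only hardware resources. The lock serializes the
# read-modify-write so no increment is lost.
counter = 0
lock = threading.Lock()

def worker(n):
    global counter
    for _ in range(n):
        with lock:          # synchronization point
            counter += 1

threads = [threading.Thread(target=worker, args=(100_000,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 200000 with the lock; without it, updates can be lost
```

Without the `with lock:` line, the two read-modify-write sequences can interleave and the final count can come up short.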
The interleaving would allow two tasks to be completely overlapped if the instruction and data addresses all hit separate memory banks.
Then cache was invented, based on the notion that a single instruction stream repeatedly accesses a small region of memory, so a copy of a block of memory could be kept in high-speed local memory.
Then, 30 or so years later, the multicore/multiprocessor was resurrected, and now tasks/threads can share data as well as hardware resources.
Same old story: history repeats itself, and we learn nothing from it.
Although 10 years may have gone by, how long have the mechanisms actually existed to make this a reality? Yes, I understand that tools will be necessary for continued growth, but adoption is happening. When, in your estimation, did it begin?
Memory can be available as local, global, and distributed in any system. With such loose terminology, it is no wonder that 10 years have passed since multicore came on the scene and the magic tools to make it work are still missing.
The notion of perpetual motion predates multicore and it never worked either.
Our customers are moving to multicore to address 1) power, 2) performance, and 3) separation of functions for certification, as well as consolidation.
In a multicore system, memory is available as local, global, and distributed. The contention issues and the available solutions change. One of the new features in Poly-Platform helps designers and developers identify the best use of resources for their application.
As Atul noted, tools will go a long way toward improving multicore adoption, which is one of the reasons that TI and PolyCore Software have worked together to provide a seamless Eclipse-based development environment integrating Poly-Platform and Code Composer Studio.
Business Development, PolyCore Software
Being part of TI's multicore organization, my view is more from the perspective of a provider than a user. I agree with Sven's view: availability of quality software tools for both development and debugging will be key in driving multicore adoption. All our customers are very excited about designing in multicore for next-generation products. However, the biggest hindrances to rapid uptake that I hear are (a) the upfront investment needed to evaluate and benchmark the existing application before deciding to move to multicore, and (b) the software investment needed to re-architect portions of the current application in order to squeeze out the right scale and performance. Widespread availability of tools that make these tasks easier (including benchmarking tools) will go a long way toward improving the trajectory of multicore adoption.
Business Strategy & Development Manager, TI
It would be nice to have a clear definition of the problem that multicore addresses. Since it has been around for 10 years, isn't that a pretty good indication that it is a solution looking for a problem?
It seems to be based on adding cores so that when one core is waiting for something, another core magically has everything ready to go.
The reason the first core is waiting is probably a memory-access conflict or latency, and multicore does nothing to address that. Then there is the OS overhead of managing threads, apparently because any thread can run on any core, which seems quite rational from a software standpoint because all cores are expected to be 100% utilized. Brian Bailey wrote about the memory bottleneck. Multiple multilevel caches and block transfers are all based on clustering of memory accesses, and they still may not keep a single core busy, much less a dozen or more.
So the original problem was that a single core spent too much time waiting, so now we have many cores waiting.
CPUs should minimize memory accesses. Starting with a high-level language, compiling to an intermediate CIL, MSIL, RTL, or whatever, and then de-compiling to an ISA is self-defeating.