
AMD's CTO talks heterogeneous systems architecture

1/31/2012 02:08 AM EST
prabhakar_deosthali   1/31/2012 6:44:19 AM
Earlier generations of computers had the concept of bit-sliced processors and hardware time slicing. Through this, a single-CPU computer worked like a multi-core processor, and software developers could take advantage of the feature to write parallel programming applications with the required synchronization at hardware buffers. It looks like a similar thing is appearing in a new avatar in these latest multi-core CPUs.
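The synchronization the comment describes can be sketched with a toy Python example (thread count and loop sizes here are arbitrary): several threads stand in for cores, and a lock plays the role the hardware buffer arbitration once played.

```python
import threading

# Shared state updated by several "cores" (threads); the lock stands in
# for the hardware-level synchronization point the comment describes.
counter = 0
lock = threading.Lock()

def worker(iterations: int) -> None:
    global counter
    for _ in range(iterations):
        with lock:  # only one thread mutates the shared value at a time
            counter += 1

threads = [threading.Thread(target=worker, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 40000 with the lock; without it, updates can be lost
```

Without the `with lock:` line, the read-modify-write of `counter` can interleave across threads and the final count comes up short, which is exactly the hazard the hardware synchronization prevented.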

goafrit   1/31/2012 4:18:03 PM
ARM is a very innovative company that understands the business model of the next industrial era. Focusing on building the foundation and depending on others to plug and play will keep them lean, with the capacity to adjust to market needs.

xorbit   1/31/2012 7:21:11 PM
You mean AMD?

dirk.bruere   1/31/2012 6:53:21 PM
This has been a research topic for the past 30 years. No doubt they will be reinventing the same wheels.

NSK   1/31/2012 7:42:19 PM
I don't understand the new direct-hardware-access model. It seems like a shared hardware resource is still going to need layering somewhere to assure ownership by one process at a time. Is this task somehow being pushed out to the hardware so that it looks transparent to the caller?
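One software answer to the question is an arbitration layer that serializes access while looking transparent to callers. A minimal Python sketch (the `SharedDevice` class and its method names are invented for illustration):

```python
import threading

class SharedDevice:
    """A pretend hardware resource that must be owned by one caller at a time."""

    def __init__(self) -> None:
        self._lock = threading.Lock()
        self._log = []

    def command(self, caller: str, op: str) -> None:
        # The caller just invokes command(); the ownership hand-off happens
        # inside this layer, which is what "transparent to the caller" means.
        with self._lock:
            self._log.append((caller, op))

device = SharedDevice()
threads = [threading.Thread(target=device.command, args=(f"proc{i}", "read"))
           for i in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(len(device._log))  # 8: every request served, one owner at a time
```

Whether HSA pushes that arbitration into hardware or keeps it in a thin runtime, the caller's view is the same: it issues an operation and never sees the contention.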

wmgervasi   1/31/2012 7:42:33 PM
Remember that floating point math started out as a coprocessor to the x86 architecture before being integrated; in fact, I imagine that it still has an "escape sequence" in the binary to invoke the coprocessor function. If AMD is using a similar path for the future, it seems like a logical extension of an x86 feature that has been around since dinosaurs walked the earth.
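The "escape sequence" does still exist: x87 floating-point instructions occupy the ESC opcodes 0xD8 through 0xDF, the byte range that originally told the 8086 to hand an instruction off to the 8087 coprocessor. A small Python check (the two encodings below are the standard ones for those instructions):

```python
# x87 instructions begin with one of the ESC opcodes 0xD8..0xDF, the bytes
# that originally routed an instruction to the 8087 coprocessor.
ESC_RANGE = range(0xD8, 0xE0)

encodings = {
    "fadd st, st(1)": bytes([0xD8, 0xC1]),
    "fsqrt":          bytes([0xD9, 0xFA]),
}

for name, code in encodings.items():
    assert code[0] in ESC_RANGE, name
    print(f"{name}: first byte 0x{code[0]:02X} is an x87 ESC opcode")
```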

TarraTarra!   1/31/2012 9:35:35 PM
How is this different from the GPGPU concept? Nvidia has been at it for quite some time with CUDA and has had success in a very limited set of applications (oil and gas exploration, etc.). I don't see what the innovation here is.

melonakos   2/1/2012 3:14:43 AM
Sounds a lot like ArrayFire (which has both OpenCL and CUDA support), http://accelereyes.com/arrayfire

Hasmon   2/2/2012 2:41:40 PM
There is some historical inertia in our whole approach to programming models. In the 1970s memory speeds were faster than CPU clock speeds (RAM access was on the order of 100 ns, but CPU instructions on a 1 MHz clock took microseconds to execute). So programming languages took care to optimize arithmetic operations but could get away with ignoring memory completely, since memory accesses took place almost instantly from the processor's point of view. C therefore does not distinguish between fast and slow memory: all pointers are equivalent. If there is a delay in accessing memory, the language makes no provision for reducing that latency; it does not even explicitly acknowledge it as a possibility.

All programming languages today carry this bias towards ignoring memory I/O, as a legacy of the popular languages of the 1970s. Since then CPU speeds have gone up by orders of magnitude while memory speeds have improved only slightly, so hardware designers have used caches to manage memory invisibly to the programmer, and software continues to run in a bubble where the conditions of the 1970s are imperfectly replicated: memory accesses appear fast and instantaneous.

Since the bottleneck in CPUs and GPUs is now memory I/O, a new type of language is needed which, at the least, lets the programmer explicitly distinguish between the layers of the memory hierarchy, rather than handling it in the kludgy way it is handled right now. Something like http://sequoia.stanford.edu/
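The kind of explicit memory-hierarchy control the comment asks for can be mimicked even in plain Python: below, a blocked sum explicitly "stages" a tile of a large array into a small fast buffer before operating on it, roughly the way a Sequoia-style language lets the programmer name the levels (the block size and data here are arbitrary).

```python
# Sketch of explicitly staged computation: copy a tile from "slow" memory
# into a small "fast" buffer, do all the work there, then move on.
# Hierarchy-aware languages make this staging a first-class construct.
BLOCK = 256
data = list(range(10_000))  # stands in for a large array in slow memory

total = 0
for start in range(0, len(data), BLOCK):
    fast_buffer = data[start:start + BLOCK]  # explicit transfer: slow -> fast
    for x in fast_buffer:                    # all accesses hit the fast level
        total += x

print(total)  # equals sum(range(10_000)) = 49995000
```

The arithmetic is unchanged by the blocking; what changes is that the transfer between levels is visible in the source, so a compiler or programmer can reason about it instead of hoping the cache guesses right.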
