LONDON – Microprocessor vendor Advanced Micro Devices Inc. has declared that Fusion, its flagship processor project in which it has combined x86 and graphics processors, will be CPU- and GPU-agnostic. The announcement was made by Phil Rogers, AMD Corporate Fellow, in a keynote at the Fusion Developer Summit being held in Bellevue, Washington.
The early examples of Fusion have been based on x86 processor and GPU cores developed internally by AMD. However, AMD is clearly heading for a higher level of abstraction and believes it can do better by letting multiple hardware and software companies join with it as it tries to enable heterogeneous computing. It is effectively turning the Fusion marketing brand into the open Fusion System Architecture with a specification that enables chipmakers to combine multiple CPUs and GPUs and preserve an efficient programming model.
The development is likely to allow ARM cores to be used as part of the Fusion architecture although Rogers did not mention ARM explicitly as he laid out the open-platform plan for Fusion.
The main thrust of Rogers' keynote was that AMD wants to create an architecture in which different combinations of CPU and GPU cores operate as a unified processing engine that delivers both higher performance and lower power consumption than today's variants.
Having discussed the historical trends from single- to multicore and on to heterogeneous multicore computing, Rogers described the Fusion System Architecture about halfway through the talk as an "open platform," adding that this meant the virtual ISA specification, known as FSAIL, the memory model and the dispatch mechanism would all be published.
Rogers said: "The Fusion system architecture is ISA agnostic for both CPUs and GPUs. This is very important because we're inviting partners to join us in all areas; other hardware companies to implement FSA and join in the platform; operating systems companies to fully embrace all of the features and deliver its full performance and quality of service; tools and middleware companies to provide the tool infrastructure to develop, optimize and debug the programs that will run on this platform."
He added that an FSA review committee would be formed to guide the evolution of the architecture and to allow all participants a voice in its direction.
Not going to try to fix all the issues in my original post :)
Happily, we seem to be in a period where there is a fair bit of healthy competition going on, so Microsoft releases C++ AMP, probably aiming for Windows-based software to be first to market with the sizable performance gains we should be seeing (versus Mac or Linux).
Likewise ARM and AMD are likely very interested in seeing OpenCL win out over less 'open' options such as DirectCompute or CUDA.
Hopefully these desires will result in tools that are easy for the software industry to embrace and employ.
In AMD's APP SDK, they expose an intermediate language that they call CAL. I've briefly looked at it, and it looks like a "portable assembly language". What I mean by "portable assembly language" is that there's a collection of simple, assembly-like, but non-machine-specific instructions for specifying different operations, such as a 4-wide floating-point vector add.
Their compiler for OpenCL/CAL, which appears to be based on LLVM, could be rapidly re-targeted for other architectures. Likewise, the scheduling/load-balancing runtime should be readily portable to other, non-x86/ATi GPUs.
To go back in time, we have to understand why CPUs and DSPs were separate. Can't a CPU do what a DSP does (MAC: multiply-accumulate)? Obviously it can, but it is NOT fine-tuned for DSP operations; a DSP does it faster than a CPU. In the same way, a DSP can't do what a CPU does, because it was not fine-tuned for CPU operations. The split between CPU and GPU arose for similar reasons: a CPU can do what a GPU does, but performance will not be as good, especially with current-day graphics and gaming consoles. How the Fusion approach fares remains to be seen: will AMD/Intel let their CPU cores do GPU work, or their GPU cores do CPU work? Either way, performance goes downhill. The proof of the pudding is in the eating.
I think you are on the money. Software developers are used to writing once, compiling as necessary, and running on many platforms. The advent of multicore roadmaps risks breaking that ease of migration, and the software industry rarely reacts well to that.
So while Fusion has in the past been an AMD-x86-only thing, the Fusion System Architecture will, in the future, be a high-level specification for heterogeneous multicore hardware including CPUs and GPUs of any flavor. And AMD says it wants lots of hardware companies to play.
I think that what AMD is referring to here is that the ability to create software which can seamlessly leverage OpenCL / GPGPU requires a development ecosystem which itself can be largely hardware-agnostic (or at least offer reasonable abstractions).
This would allow ARM clients to gain more performance/efficiency, much as AMD hopes to. The AMD Fusion platform is intentionally FP-anemic because AMD expects FP-heavy workloads to be offloaded to the GPU component.
Much as with the lag (in consumer computing) from multicore availability to software which takes advantage of it, we are faced with the same issue with Fusion - and AMD is looking for partners to speed up that transition.
I'm struggling to interpret the specific meaning of this announcement. The statement about combining x86 and GPU being CPU and GPU agnostic does seem to be contradictory.
It sounds like perhaps this is a consumer computer architecture that would be presented as an alternative to the current PCI-based system. I don't see how that could get any traction without the participation of Intel. If it's a proposed architecture for mobile computing devices, then it would probably need the participation of ARM.
If it's a proposed standard architecture for embedded computing systems, then it starts to make sense.
This is a very confused pitch that contradicts itself.
If you are shipping silicon with CPU & GPU, then clearly you are not "CPU and GPU agnostic" - that's as silly as Ford selling a car, then trying to claim it is Engine and Transmission agnostic!
Then they try to spin the "unified memory address space" as something new, but that has always been a cost/performance trade-off.
GPU memory went separate for speed reasons.
Then they choose an acronym that rhymes with FAIL, what are they thinking ?