SAN MATEO, Calif. -- Nvidia Corp. rolled out its long-awaited GeForceFX graphics processor at Comdex on Monday (Nov. 18). Built in a 130-nanometer copper process by Taiwan Semiconductor Manufacturing Co. Ltd. (TSMC), the 125-million-transistor processor outputs eight pixels per clock cycle and is billed as the most sophisticated chip of its kind.
"We're still far behind the Holy Grail of cinematic shading in real-time, but with this part we are getting closer. This is the first graphics processor where you can really bring the advantages of mathematical programmability down to the pixel level," said Steve Sims, senior desktop product manager at Nvidia (Santa Clara, Calif.).
"Nvidia has talked about the concept of the graphics processing unit like a CPU for graphics for some time, but with this part, that is really true for the first time," said Dean McCarron, a market watcher with Mercury Research (Scottsdale, Ariz.).
The GeForceFX implements a superset of the graphics programming instructions defined in Microsoft Corp.'s DirectX 9.0 graphics programming interface. The part essentially allows developers to create complex models using Cg, a variant of the C language for graphics that Nvidia helped define. The chip can render and shade models automatically at the individual vertex and pixel levels.
Nvidia claims the part supports 65,536 vertex shading instructions, up from 128 in its DirectX 8 chip. It can also handle 1,024 pixel shading instructions and up to 16 texture maps; the earlier-generation part handled four instructions and four texture maps.
Nvidia's Cg compiler can output code for either the DirectX or OpenGL APIs. DirectX 9 is currently in a beta release, with final software expected to ship by early next year.
In terms of raw hardware, the GeForceFX supports both 64-bit and 128-bit floating-point color, the latter providing four 32-bit values for each pixel. "That's impressive and it will matter a lot for people trying to get accurate visual representations," said Peter Glaskowsky, editor in chief of The Microprocessor Report.
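As a back-of-envelope sketch (not from Nvidia), the 128-bit color mode works out to 16 bytes per pixel, which adds up quickly in frame-buffer memory. The 1600 x 1200 resolution below is an illustrative assumption, not a figure from the article:

```python
# 128-bit floating-point color: four 32-bit channels per pixel.
channels = 4
bits_per_channel = 32
bits_per_pixel = channels * bits_per_channel   # 128 bits
bytes_per_pixel = bits_per_pixel // 8          # 16 bytes

# Assumed resolution for illustration only (not stated in the article):
width, height = 1600, 1200
framebuffer_bytes = width * height * bytes_per_pixel
print(framebuffer_bytes / 2**20)  # roughly 29.3 Mbytes for one buffer
```

At 16 bytes per pixel, a single full-precision color buffer alone consumes nearly a quarter of the card's 128 Mbytes of memory, which suggests why such formats matter most for accuracy-critical rendering rather than everyday use.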
Glaskowsky said the GeForceFX could be as large as 205 mm2 and consume up to 35 watts, plus 10 to 15 watts for its DDR-II frame buffer. Nvidia would not provide size, power or price figures for the part.
"It's a huge die," Glaskowsky said. "This is definitely not something most of us will ever mess with. It is probably the most expensive graphics chip ever made. For some people, it will be worth it."
The GeForceFX competes head-on with the Radeon 9700 launched late last summer by ATI Technologies Inc. Both chips are aimed primarily at a high-margin, low-volume market for $400 adapter cards for PC game enthusiasts.
Nvidia has an edge over ATI in its support for 128-bit color, a 32-bit internal data path and DDR-II memory. The Radeon 9700 likewise implements eight graphics pipelines but tops out at 96-bit color, with a 24-bit internal path and DDR-I memory.
But ATI has been shipping its chip since August. The GeForceFX will not be in cards on retail shelves until February. Moreover, ATI is expected to have a die shrink of its 150-nm part available early next year.
Nvidia's delay to market stems in part from difficulties that foundry TSMC had with its 130-nm copper process, which Nvidia declined to detail.
The GeForceFX uses a flip-chip package to preserve signal integrity at its 500-MHz frequency and to aid in heat dissipation. The part requires a special copper thermal unit to cool the processor and associated memory, using an internal fan, heat spreader, copper fins and heat sinks.
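Combining the 500-MHz clock with the eight-pixels-per-clock figure cited at the launch gives a simple peak fill-rate estimate (a sketch from the published numbers, not an Nvidia benchmark):

```python
# Peak pixel fill rate implied by the launch figures.
pixels_per_clock = 8          # eight pixels output per clock cycle
core_clock_hz = 500_000_000   # 500-MHz core frequency
fill_rate = pixels_per_clock * core_clock_hz
print(fill_rate)  # 4,000,000,000 pixels per second
```

That 4-Gpixel/s figure is a theoretical ceiling; real throughput falls off as shader programs consume multiple clocks per pixel.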
The device is expected to ship with 128 Mbytes of 500-MHz DDR-II memory using eight 4-M x 32 chips linked to the processor over a 128-bit bus. The card links to the system via an AGP8x interface.
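The memory configuration above can be sanity-checked with some quick arithmetic. The capacity follows directly from the chip count and organization; the bandwidth figure assumes 500 MHz is the base clock with two transfers per cycle for DDR, which the article does not state explicitly:

```python
MI = 2**20  # one mebi (1,048,576)

# Capacity: eight 4-M x 32 DDR-II chips.
chips = 8
words_per_chip = 4 * MI
bits_per_word = 32
capacity_bytes = chips * words_per_chip * (bits_per_word // 8)
print(capacity_bytes // MI)  # 128 Mbytes, matching the spec

# Peak bandwidth over the 128-bit bus, assuming 500 MHz is the base
# clock and DDR moves data twice per cycle (an assumption):
bus_bytes = 128 // 8          # 16 bytes per transfer
clock_hz = 500_000_000
transfers_per_clock = 2
bandwidth = bus_bytes * clock_hz * transfers_per_clock
print(bandwidth / 1e9)  # 16.0 Gbytes per second
```

If 500 MHz instead refers to the effective data rate, the peak bandwidth would be half that figure, so the assumption is worth flagging.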
Fujitsu-Siemens is currently the only announced top-tier OEM design win for the GeForceFX, though about six other smaller OEMs and about seven adapter card makers have said they will use the part, including Taiwan-based Asustek.
"This is not high-volume product, but you will see us fill out this introduction with a full product family," an Nvidia spokesman said.