SAN JOSE – Nvidia rolled out its latest graphics board, geared for highly parallel technical and scientific workloads. It also described a graphics board that supports hardware virtualization and a dedicated video encoder, enabling applications such as cloud-based gaming services.
The news comes at the company’s GPU Technology Conference, where Nvidia aims to rally more than 2,500 attendees to use its graphics chips in a wide range of performance-hungry parallel applications. International Data Corp. estimates users of such applications will spend as much as $400 million on GPU boards this year, more than doubling that figure by 2016, with Nvidia taking the lion’s share.
Nvidia packs two GPUs on the Tesla K10, a single PCI Express Gen 3 board that delivers 4.58 teraflops of single-precision floating-point performance and 320 gigabytes per second of memory bandwidth. The board is available now from vendors including Appro, Dell, HP, IBM, SGI and Supermicro; a double-precision version, the Tesla K20, is due in the fall.
Nvidia builds the Tesla boards itself and sells them for $1,500 to $2,500. They support ECC memory and parallel programming models such as the message passing interface (MPI), and are cooled by server chassis subsystems. By contrast, Nvidia’s consumer graphics boards are built by third parties, sell for $99 to $1,000 and use their own fans, but lack ECC memory and MPI support.
The Tesla chips are based on the same 28 nm Kepler core Nvidia announced for consumer graphics chips in March. It sports 1,536 of Nvidia’s proprietary CUDA rendering cores, organized in groups of 192, running at roughly a GHz clock. That’s up from 512 cores in groups of 32 in the 40 nm Fermi parts, which ran at 772 MHz.
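The quoted peak figure squares with the core counts, as a back-of-the-envelope check shows. The sketch below assumes each CUDA core retires one fused multiply-add (two floating-point operations) per cycle and that the K10’s two GPUs are clocked at roughly 745 MHz — both assumptions, since the article gives only the consumer part’s clock.

```python
# Back-of-the-envelope peak single-precision throughput for the Tesla K10.
# Assumptions (not stated in the article): one fused multiply-add, i.e.
# 2 FLOPs, per CUDA core per cycle, and a ~745 MHz clock on the K10's
# GPUs (lower than the consumer Kepler's ~1 GHz).
def peak_sp_flops(num_gpus, cores_per_gpu, clock_hz, flops_per_cycle=2):
    """Peak single-precision FLOPS for a multi-GPU board."""
    return num_gpus * cores_per_gpu * clock_hz * flops_per_cycle

k10 = peak_sp_flops(num_gpus=2, cores_per_gpu=1536, clock_hz=745e6)
print(f"{k10 / 1e12:.2f} teraflops")  # ~4.58, matching the quoted figure
```

The same cores-times-clock arithmetic explains why the dual-GPU board roughly doubles the throughput of a single consumer Kepler despite a lower clock.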
Separately, Nvidia aims to give a boost to cloud-based gaming services with its new VGX product, a board that packs four GPUs and 16 Gbytes of frame buffer memory. It supports hardware virtualization and includes a dedicated H.264 video encoder to help overcome the network latency that could otherwise slow game data carried over long distances.
The Kepler chips support dedicated virtualization channels. Nvidia will create its own hypervisor for the VGX boards that will run with virtualization software from Citrix and VMware. In the future, Nvidia aims to support Microsoft and Xen virtualization software on VGX.
Dell, Cisco, HP, IBM and Supermicro will sell systems using the VGX boards. Such systems aim to fuel the rise of cloud-based gaming services that provide access to remotely hosted games at native-like speeds. Companies including OnLive, Gaikai and Playcast already provide such services and will work with Nvidia.
Engineers representing a wide variety of high-performance parallel applications will present at GTC. They include a group using Nvidia processors to calculate where to land a robot on the surface of the moon as part of the Lunar X Prize challenge. Others will talk about using GPUs in specialized algorithms that cut the time to search a vast fingerprint database for a match from a week to an hour.
Engineers from other fields supplied supporting quotes for Nvidia’s Tesla K10 launch.
“My seismic application is 1.8x faster on the K10, compared with the Tesla M2090 GPU within the same power envelope,” said Paulo Souza, a developer working on reverse time migration (RTM) seismic imaging at Petrobras, speaking in a press statement. “This technology will accelerate our ability to find and reach new oil and gas reserves, as 90 percent of our computational power comes from the GPUs,” he said.
“The massive amount of video data being generated from security cameras and UAVs presents a new big data problem for the defense industry,” said Yiannis Antoniades, a director at BAE Systems, speaking in the press statement.
“We now have broad access to robust, high-quality video, but often we cannot analyze it quickly enough to generate actionable intelligence,” said Antoniades. “GPUs are being used to accelerate nearly every aspect of video analytics, enabling us to provide real, valuable data to the field quicker than ever before,” he said.
The Tesla K10 packs two Kepler-based GPUs and 8 Gbytes RAM.