
Nvidia Rolls Volta GPU For ‘AI Revolution’

05.11.2017

SAN JOSE, Calif. — Machine learning is sparking a new era in computing, according to Nvidia’s chief executive, who hopes that his latest GPU, Volta, becomes its favorite fuel.

The Volta announcement was the centerpiece of a two-hour keynote at GTC on “Powering the AI Revolution.” The annual Nvidia event attracted a record of more than 7,000 attendees, thanks to rising interest in using an expanding array of neural networks across a broadening horizon of applications from agriculture to pharmaceuticals and public safety.

Nvidia’s graphics processors hold a strong position in training neural nets for machine learning. “Every single cloud company in the world has Nvidia GPUs provisioned for a cloud service,” said founder and CEO, Jensen Huang.

But it’s a hotly competitive field. More than a half-dozen startups are working on new architectures, two of which were acquired last year by Nvidia’s largest rival, Intel. The x86 giant also bought established FPGA maker Altera, whose chips are used as accelerators in the data centers of Baidu and Microsoft.

Rival AMD is also accelerating its rollout of new GPUs, with its Vega chip due soon. However, AMD has only recently added a strong focus on machine learning to its pursuit of the gaming market.

Nvidia is running as fast as possible to stay ahead. Its 815-mm2 Volta packs 5,120 CUDA cores and 16 Mbytes of cache to deliver 7.5 TFlops of 64-bit floating-point performance. It is made in a 12-nm FinFET process at TSMC and is packaged with 16 GBytes of Samsung HBM2 memory delivering 900 GBytes/second of bandwidth.

The Volta processors, called Tesla V100, can link to each other or to CPUs at 300 GBytes/s via Nvidia’s proprietary NVLink. The chip also adds new instructions for 4×4 matrix operations, which are at the heart of its 640 new Tensor cores.
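The 4×4 matrix operation behind the Tensor cores is a fused multiply-accumulate, D = A×B + C, with half-precision inputs accumulated at single precision. A rough NumPy sketch of that precision mix (this is an illustration of the arithmetic, not Nvidia’s actual CUDA API, which exposes the operation through WMMA intrinsics on larger tiles):

```python
import numpy as np

# Sketch of the Tensor-core primitive: D = A x B + C on 4x4 tiles.
# A and B are FP16; the multiply-accumulate happens at FP32 precision.
def tensor_core_mma(a_fp16, b_fp16, c_fp32):
    # Promote the FP16 operands to FP32 before multiplying, then accumulate.
    return a_fp16.astype(np.float32) @ b_fp16.astype(np.float32) + c_fp32

a = np.random.rand(4, 4).astype(np.float16)
b = np.random.rand(4, 4).astype(np.float16)
c = np.zeros((4, 4), dtype=np.float32)
d = tensor_core_mma(a, b, c)
```

Accumulating in FP32 is what lets the hardware keep FP16’s memory and throughput benefits without losing the precision that training runs need in the running sum.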

The net result is a 50 percent performance boost over the Pascal chip that the company launched a year ago and started shipping last fall. Volta delivers 120 Tensor TFlops, 12 times the performance of Pascal on training jobs.
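The 120-TFlops figure is consistent with the published core count: each Tensor core performs one 4×4×4 multiply-accumulate per clock, i.e. 64 MACs or 128 floating-point ops. As a back-of-the-envelope check, assuming a boost clock near 1.46 GHz (the clock is an assumption; it is not stated here):

```python
# Rough peak-throughput check for the Tensor cores.
tensor_cores = 640            # per the article
ops_per_clock = 4 * 4 * 4 * 2 # 64 MACs per core per clock, 2 flops each
clock_ghz = 1.46              # assumed boost clock, not given in the article

tflops = tensor_cores * ops_per_clock * clock_ghz / 1000
print(round(tflops))  # ~120
```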

With its Tensor cores, Volta is “no longer a general-purpose GPU architecture, so Nvidia cannot be accused of using its GPU hammer and seeing every problem as a nail,” said Kevin Krewell, principal analyst at Tirias Research. “Although Volta is more efficient [than Pascal] running deep-learning workloads, Nvidia didn’t compare it with Google’s TPU ASIC.”

Nvidia was vague on why it chose the TSMC 12-nm node. Mobile SoCs are racing into production with the 10-nm TSMC node, while the 12-nm node is based on a shrink of TSMC’s 16-nm process, Krewell noted. “It could be that the 12-nm node offered faster time-to-market, but Volta is also a huge die and is pressing the limits of die area.”

“I think [that] Nvidia will wait until late Q3 or early Q4 to bring out a graphics-only version of Volta,” said Jon Peddie, principal of Jon Peddie Research. “That would be an appropriate time to do it as AMD will be bringing out their highly anticipated Vega in Q3, and if it’s as good as many people think it will be, Nvidia can push back with its GTX-based Volta.”

Engineering managers from Amazon, Baidu, Facebook, Google, Microsoft, and Tencent released statements supporting Volta. They were joined by a technology leader from a U.S. national lab.

As with past generations, Nvidia will supply not only chips but its own systems, packing up to eight Voltas. They include two 2.2-GHz Xeon E5 processors and 128 Gbytes of memory, and draw 3,200 W to deliver up to 960 TFlops of 16-bit floating-point performance.

Next page: Toyota taps Nvidia for ADAS

realjjj   2017-05-11 15:34:56

"120 Tensor TFlops, 12 times the performance of Pascal on training jobs."

That's inference, not training. EDIT: Nvidia seems to claim up to 12x for training and 6x for inference.

 

"is still in PC game chips, but it's a market that is declining"

Nope, PC gaming is growing, and pretty nicely actually, as the trend toward higher FPS, higher res, and VR is pushing ASPs up. PC gaming will keep growing until discrete glasses are good enough. AMD was out of the high end (above $300) for an entire cycle; they get back into it with Vega, and that will impact Nvidia's share and ASPs, but the market itself is growing.

Hasee.Gatsby.330   2017-05-11 16:08:18

Wrong, it is 120 Tensor TFLOPS for training while it is 60 Tensor TFLOPS for inferencing:

"Tesla V100's Tensor Cores deliver up to 120 Tensor TFLOPS for training and inference applications. Tensor Cores provide up to 12x higher peak TFLOPS on Tesla V100 for deep learning training compared to P100 FP32 operations, and for deep learning inference, up to 6x higher peak TFLOPS  compared to P100 FP16 operations"

 

https://devblogs.nvidia.com/parallelforall/inside-volta/
