Nvidia Rolls Volta GPU For 'AI Revolution'

Volta taps 12nm TSMC and Samsung HBM2
5/11/2017 01:21 PM EDT
2 comments
Hasee.Gatsby.330
User Rank: Rookie
Re: ,,,,
5/11/2017 4:08:19 PM
Wrong, it is 120 Tensor TFLOPS for training, while it is 60 Tensor TFLOPS for inference:

"Tesla V100's Tensor Cores deliver up to 120 Tensor TFLOPS for training and inference applications. Tensor Cores provide up to 12x higher peak TFLOPS on Tesla V100 for deep learning training compared to P100 FP32 operations, and for deep learning inference, up to 6x higher peak TFLOPS  compared to P100 FP16 operations"

 

https://devblogs.nvidia.com/parallelforall/inside-volta/
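
(For what it's worth, the 120 figure itself follows from the published V100 specs. A rough back-of-the-envelope check in Python, assuming the launch specs of 640 Tensor Cores, 64 FMA operations per core per clock, and a ~1455 MHz boost clock:)

# Rough sanity check of the "120 Tensor TFLOPS" peak figure.
# Assumed V100 launch specs: 640 Tensor Cores, 64 FMAs per core per clock, ~1455 MHz boost.
tensor_cores = 640
fma_per_core_per_clock = 64
ops_per_fma = 2               # one multiply plus one add
boost_clock_hz = 1.455e9

peak_tensor_tflops = tensor_cores * fma_per_core_per_clock * ops_per_fma * boost_clock_hz / 1e12
print(round(peak_tensor_tflops, 1))   # ~119.2, which Nvidia rounds to 120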

realjjj
User Rank: CEO
,,,,
5/11/2017 3:34:56 PM
"120 Tensor TFlops, 12 times the performance of Pascal on training jobs."

That's inference, not training. EDIT: Nvidia seems to claim up to 12x for training and 6x for inference.
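
(A quick way to see that the 12x and 6x numbers are speedup ratios against P100, not two separate TFLOPS figures; a rough check in Python, assuming the commonly quoted P100 peaks of about 10.6 TFLOPS FP32 and 21.2 TFLOPS FP16:)

# The 12x (training) and 6x (inference) claims are ratios against P100 baselines.
# Assumed P100 peaks: ~10.6 TFLOPS FP32 (training baseline), ~21.2 TFLOPS FP16 (inference baseline).
v100_tensor_tflops = 120.0
p100_fp32_tflops = 10.6
p100_fp16_tflops = 21.2

print(round(v100_tensor_tflops / p100_fp32_tflops, 1))   # ~11.3, marketed as "up to 12x" for training
print(round(v100_tensor_tflops / p100_fp16_tflops, 1))   # ~5.7, marketed as "up to 6x" for inference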

 

"is still in PC game chips, but it's a market that is declining"

Nope, PC gaming is growing, and quite nicely, as the trend towards higher FPS, higher resolutions, and VR is pushing ASPs up. PC gaming will keep growing until discrete glasses are good enough. AMD was out of the high end (above $300) for an entire cycle; they get back into it with Vega, and that will impact Nvidia's share and ASPs, but the market itself is growing.
