Wrong, it's 120 Tensor TFLOPS for training and 60 Tensor TFLOPS for inference:
"Tesla V100's Tensor Cores deliver up to 120 Tensor TFLOPS for training and inference applications. Tensor Cores provide up to 12x higher peak TFLOPS on Tesla V100 for deep learning training compared to P100 FP32 operations, and for deep learning inference, up to 6x higher peak TFLOPS compared to P100 FP16 operations"
"120 Tensor TFlops, 12 times the performance of Pascal on training jobs."
That's inference, not training. EDIT: Nvidia seems to claim up to 12x for training and up to 6x for inference.
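The 12x and 6x figures are consistent with a single 120 Tensor TFLOPS peak measured against two different P100 baselines. A rough sketch, assuming the commonly cited P100 (SXM2) peak rates of ~10.6 TFLOPS FP32 and ~21.2 TFLOPS FP16:

```python
# Back-of-the-envelope check of Nvidia's "up to 12x / up to 6x" claims.
# Baseline figures are assumed P100 (SXM2) peak rates, not from this thread.
V100_TENSOR_TFLOPS = 120.0   # same peak quoted for training and inference
P100_FP32_TFLOPS = 10.6      # training baseline (P100 FP32)
P100_FP16_TFLOPS = 21.2      # inference baseline (P100 FP16)

training_speedup = V100_TENSOR_TFLOPS / P100_FP32_TFLOPS   # ~11.3x, marketed as "up to 12x"
inference_speedup = V100_TENSOR_TFLOPS / P100_FP16_TFLOPS  # ~5.7x, marketed as "up to 6x"

print(f"training: {training_speedup:.1f}x, inference: {inference_speedup:.1f}x")
```

So both multipliers can describe the same 120 TFLOPS chip; only the Pascal baseline changes.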
"is still in PC game chips, but it's a market that is declining"
Nope, PC gaming is growing, and pretty nicely actually, as the trend towards higher FPS, higher resolutions, and VR is pushing ASPs up. PC gaming will keep growing until discrete glasses are good enough. AMD was out of the high end (above $300) for an entire cycle; they're getting back into it with Vega, which will impact Nvidia's share and ASPs, but the market itself is growing.