While this will revolutionise the "Big Data" space, I don't think the idea is that revolutionary; it's more an idea whose time has come.
We used to think the 68000 CPU was such an amazing thing compared to the 6800, but really, when the 6800 was developed they couldn't put many more transistors onto a die than it took to make a 6800. IBM was already building large clunkers out of bucketloads of discrete logic chips, and I remember that around the time the 8080 came out, NEC shipped a 16-bit dual-processor machine with megabytes of memory, the whole of which simply wouldn't have fit onto a chip.
All I'm saying is that most of these things come down to "this is the amount of resources we have now; what can we do with it?"
I mean, we've known for decades where the bottlenecks are; we just couldn't do anything about them.
Don't get me wrong: kudos to NVidia for taking the first brave step, but it's not unknown technology. We always cross the bridge when we come to it.