Up until now (literally), you could use a single GPU for parallel processing, and it did a fabulous job, offering 10× to 100× speedups for certain classes of problems.
But we always want more, so the idea was developed (by Nvidia's group in Germany, which until 2011 was Mental Images, the ray-tracing company) that you should be able to gang together clusters of GPU-equipped servers and get more power.
That cluster-ganging scheme for visualizing huge volumetric datasets is called IndeX.
But we still want more. So what if you could gang up multiple GPUs within a cluster?
Well, now you can, with a GPU-to-GPU interconnect called NVLink. With NVLink you get scaling of GPUs within a cluster, and scaling across clusters. Just imagine the bitcoin farming you could do; it boggles the mind.
For example, real-time visualization of volumetric data is essential for experts in a variety of fields; they use it to gain visual insight. Dense, high-resolution 3D images are used in medical examinations, by meteorologists studying the weather, and by geophysicists searching for oil deposits. This is big, really big, data.
The Taranaki Basin dataset (Crown Minerals and the New Zealand Ministry of Economic Development)
However, the amount of data produced by a high-resolution simulation can be extremely large. It challenges traditional visualization methods, and the researchers want more.
A typical geological subterranean survey is 80 to 120 km on a side, and goes down another 8 to 10 km or more.
The geologists would like a resolution of at least 20 meters; at 60 bytes per data point, that works out to about 20 GB per shot. They take a lot of samples, because one of the analyses they like to run is making a movie.
If you look at the image above and notice the blue or orange slice, imagine either one of those slices sweeping back and forth to reveal the underground structure. These "movies" have to run at 30 fps.
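To make those numbers concrete, here is a back-of-the-envelope sizing sketch. The 20 m spacing and 60 bytes per point come from the figures above; the 100 km x 100 km footprint and 10 km depth are illustrative assumptions picked from the middle of the quoted survey ranges, not figures from the Taranaki dataset itself.

```cpp
// Back-of-the-envelope volume sizing. Assumptions (illustrative): a
// 100 km x 100 km survey footprint, 10 km deep, sampled on a uniform
// 20 m grid, 60 bytes per data point as quoted above.
#include <cstdio>

int main() {
    const double nx = 100000.0 / 20.0;  // 5,000 points east-west
    const double ny = 100000.0 / 20.0;  // 5,000 points north-south
    const double nz =  10000.0 / 20.0;  //   500 points in depth
    const double bytesPerPoint = 60.0;

    const double volumeBytes = nx * ny * nz * bytesPerPoint; // whole survey volume
    const double sliceBytes  = ny * nz * bytesPerPoint;      // one vertical slice
    const double streamRate  = sliceBytes * 30.0;            // slice "movie" at 30 fps

    printf("Full volume      : %.0f GB\n",   volumeBytes / 1e9); // ~750 GB
    printf("One slice        : %.2f GB\n",   sliceBytes  / 1e9); // ~0.15 GB
    printf("30 fps slice feed: %.1f GB/s\n", streamRate  / 1e9); // ~4.5 GB/s
    return 0;
}
```

At that resolution a whole survey runs to hundreds of gigabytes, far beyond a single GPU's memory, and streaming even one animated slice at 30 fps consumes gigabytes per second; hence the appetite for ganged-up GPUs.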
The usage model in medical diagnostics using CAT or MRI scans has exactly the same issues and data sizes.
And both medical and geophysical applications (to name just two) are critically important, with lives potentially at stake. Now think about weather systems, simulated nuclear explosions, and simulations of cars crashing into walls, and you get a feel for the enormous amounts of data that need to be processed, and processed fast.
To wrangle this data under control and get the benefit of parallel processing with GPUs, Nvidia developed a scheme that puts GPUs in a box it calls a cluster, and then gangs up the clusters via a LAN (the GPUs communicate with each other via PCIe or InfiniBand).
This design, which Nvidia calls IndeX, allows scaling from one to n clusters, and basically makes the solution a function of the researchers' checkbook.
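For a sense of what GPU-to-GPU communication inside one box looks like at the programming level, here is a minimal sketch using the standard CUDA runtime peer-to-peer API. This is generic CUDA, not IndeX's own interface, and the buffer size is an arbitrary assumption.

```cpp
// Minimal CUDA peer-to-peer sketch: let GPU 1 pull a buffer straight from
// GPU 0's memory without a round trip through host RAM. This rides PCIe on
// current hardware; the same calls use NVLink where it is available.
#include <cuda_runtime.h>
#include <cstdio>

int main() {
    int canAccess = 0;
    cudaDeviceCanAccessPeer(&canAccess, 1, 0); // can device 1 reach device 0?
    if (!canAccess) {
        printf("No peer-to-peer path between GPU 0 and GPU 1\n");
        return 1;
    }

    const size_t bytes = 64u << 20; // 64 MB test buffer (arbitrary size)
    float *src = nullptr, *dst = nullptr;

    cudaSetDevice(0);
    cudaMalloc(&src, bytes);        // source buffer on GPU 0

    cudaSetDevice(1);
    cudaMalloc(&dst, bytes);        // destination buffer on GPU 1
    cudaDeviceEnablePeerAccess(0, 0); // device 1 may now address device 0

    // Direct device-to-device copy between the two GPUs.
    cudaMemcpyPeer(dst, 1, src, 0, bytes);
    cudaDeviceSynchronize();

    printf("Copied %zu bytes GPU 0 -> GPU 1 peer-to-peer\n", bytes);
    return 0;
}
```

The same two-GPU pattern is what scales up inside a box; between boxes, the data instead travels over the LAN via PCIe-attached NICs or InfiniBand, which is exactly the hop a faster interconnect aims to shorten.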
The IndeX software infrastructure contains scalable computing algorithms that run on a separate workstation or, more likely, a dedicated GPU-compute cluster.
Essentially, IndeX brings together compute cycles and rendering cycles in a single interactive system.
This is a big deal, in every sense of the word. Being able to leverage the compute power of a dedicated GPU cluster by means of a GPU rendering cluster is game-changing in interactive visual computing.
That's great, and systems are using it. But we want more, and faster; we always want more and faster. One way to get more, and faster, is to stuff the clusters full of GPUs that can talk to each other more efficiently: bigger, denser clusters.
At Nvidia's GPU Technology Conference (GTC), the company announced a new GPU code-named Pascal.
Next page: Pascal