Oscilloscope memory depth is an often misunderstood concept. In fact, many designers don’t even know how much memory their scope has. This article discusses what oscilloscope memory is, why it matters, and the benefits and trade-offs of memory in different oscilloscope architectures. In the end, the truth is that not all memory is created equal.
How much memory does the digital oscilloscope on your bench have? Not sure? Don’t feel bad; most people don’t know. But when it comes to oscilloscope memory depth, bigger is always better, right? As with many things, the answer isn’t as straightforward as it may seem.
Let’s start with what oscilloscope acquisition memory is and why it is important. In its simplest form, an oscilloscope is made up of a front end that acquires the analog signal; that signal is then passed to an analog-to-digital converter (ADC), where it is digitized. Once digitized, that information has to be stored in memory, processed, and plotted on the display. Acquisition memory is directly tied to sample rate: the more memory you have, the higher you can keep the oscilloscope’s sample rate while capturing a longer period of time. And the higher the sample rate, the higher the effective bandwidth of the oscilloscope.
So, as we said before, the deeper the memory, the better the oscilloscope, right? In a perfect world, the answer would be yes. Let’s compare two oscilloscopes with similar specifications apart from memory depth. One is a 1 GHz scope with a 5 GS/s sample rate and 4,000,000 points of acquisition memory (we’ll call this a “MegaZoom Architecture”). The other is a 1 GHz scope with a 5 GS/s sample rate and 20,000,000 points of acquisition memory (we’ll call this a “CPU-based Architecture”). Table 1 shows common time base settings along with the resulting sample rate. There is a simple calculation to determine the sample rate for a given time base setting and amount of memory (assuming 10 divisions across the screen and no off-screen memory captured):
sample rate = memory depth / (time per division × 10 divisions), up to the maximum sample rate of the ADC.
For example, let’s assume a time base setting of 160 µs/div and a maximum memory depth of 4,000,000 samples. That gives 4,000,000 / (160 µs/div × 10 divisions) = 2.5 GS/s.
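A minimal Python sketch of this calculation may help (the function name and constants are illustrative, not from any scope’s firmware):

```python
DIVISIONS = 10        # assumed horizontal divisions across the screen
MAX_ADC_RATE = 5e9    # maximum ADC sample rate, 5 GS/s for the scopes compared here

def sample_rate(memory_depth, time_per_div):
    """Achievable sample rate (Sa/s) for a given memory depth (points) and
    time base setting (s/div), capped at the ADC's maximum rate."""
    return min(memory_depth / (time_per_div * DIVISIONS), MAX_ADC_RATE)

# The worked example above: 4 Mpts at 160 µs/div
print(sample_rate(4_000_000, 160e-6))  # 2500000000.0, i.e. 2.5 GS/s
```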
Table 1 shows the sample rates for two otherwise identical scopes with different memory depths at common time-per-division settings.
As Table 1 shows, the deeper the memory, the higher the sample rate will be as you move to slower time/div settings. Maintaining a high sample rate is important because it allows the scope to function at its maximum capabilities. There is a wide range of memory depths available today in scopes with 5 GS/s sample rates, from 10,000 points (10 kpts) all the way up to 1,000,000,000 points (1 Gpts).
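Sweeping that same calculation across slower time base settings reproduces the trend Table 1 shows; a short sketch reusing the sample_rate helper above (the settings listed are typical values, not necessarily Table 1’s exact rows):

```python
# Compare the 4 Mpt and 20 Mpt scopes as the time base slows down
# (uses sample_rate() from the previous sketch)
for t in (20e-9, 160e-6, 400e-6, 1e-3, 4e-3):
    r4, r20 = (sample_rate(m, t) / 1e9 for m in (4_000_000, 20_000_000))
    print(f"{t*1e6:>8g} µs/div:  4 Mpts -> {r4:.2f} GS/s,  20 Mpts -> {r20:.2f} GS/s")
```

At fast settings both scopes run at the full 5 GS/s; only at slower settings does the deeper memory keep its rate advantage.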
Deep memory is clearly beneficial when it comes to sample rate, but when would it not be advantageous? When it makes your oscilloscope so slow that it is no longer helpful in debugging a problem. Deep memory puts a larger strain on the system. Some scopes are set up to handle that well and remain responsive with a fast update rate; others tout deep memory as a banner specification when it isn’t really usable and slows the update rate by orders of magnitude (see “What is Update Rate?” on page 3 for a discussion of update rate).
Let’s look at those same two scopes again. At 20 ns/div (a fast time base setting), both scopes are near their maximum update rates, and neither is using the full memory specified in its data sheet. But what happens at another time base setting like 400 ns/div? The MegaZoom architecture oscilloscope automatically maximizes its memory depth to keep its sample rate maxed out; the scope behaves exactly as you would expect a deep memory scope to behave (it keeps its sample rate at 5 GS/s and still has a fast update rate). The CPU-based architecture scope is still using its default memory depth to keep itself responsive, so its sample rate isn’t as high as it should be (and its update rate is still slower). What happens if we manually increase the memory depth to keep the sample rate high? You begin to see the trade-offs of a deep memory scope that isn’t designed to handle deep memory: the sample rate is now at its maximum (5 GS/s), but the update rate is one-third that of the MegaZoom scope, and it only gets worse at slower time base settings (e.g., at 4 µs/div the MegaZoom scope’s update rate is 20 times faster than the CPU-based scope’s).
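One way to see why running at a default, shallower memory depth caps the sample rate is to invert the earlier formula: the memory needed to sustain a given rate is sample rate × time/div × 10 divisions. A brief sketch (the 10 kpt default is an assumed figure for illustration, not a quoted specification of either scope):

```python
DIVISIONS = 10
MAX_ADC_RATE = 5e9  # 5 GS/s

def memory_needed(rate, time_per_div):
    """Points required to sustain `rate` (Sa/s) across the full screen."""
    return rate * time_per_div * DIVISIONS

# At 400 ns/div, sustaining the full 5 GS/s requires only:
print(memory_needed(MAX_ADC_RATE, 400e-9))  # 20000.0 points
# But a scope defaulting to an assumed 10 kpts is limited to:
print(10_000 / (400e-9 * DIVISIONS))        # 2500000000.0, i.e. 2.5 GS/s
```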
Table 2 compares update rates, sample rates and memory depths.