There was a time, not so many years ago, when an FPGA was used only for glue logic, connecting together the main elements of a system. Gate counts were low. Maybe the FPGA contained a small state machine to control parts of the system, but it was still fairly simple, and the thought of having to simulate the FPGA design was, well, crazy talk.
In those days, designing an FPGA was like writing software: if it didn't work, you edited it, reloaded it, and repeated until it did. Well, those days are gone, because the designs being placed in an FPGA today can be huge – much bigger than entire chip designs of just a few years back.
So most FPGA designers already run a lot of simulation before the design ever gets downloaded into a physical device for the first time – but there is still the temptation to do that much earlier than would be the case for an ASIC, and justifiably so. There are no large NRE costs associated with taping out early, and the round-trip time is minutes or hours compared to weeks or months.
Now that the design is in the FPGA, what happens when something goes wrong? You could try to reproduce the conditions in a simulator, but that may be difficult or impossible. So there is a need to debug the problem right there in the FPGA – and, just to add some difficulty, the problem may be timing-related, which means you have to do everything at full speed.
This may also expose problems that cannot be reproduced in a simulator without extreme difficulty. Simulators are wonderfully consistent – they produce the same results every time, and the stimulus is 100% repeatable. Not so in the real world, where behavior can drift with temperature or other physical factors, or unpredictable delays can creep into some part of the process. This kind of verification is unique to the rapid prototyping world, and it is what we will concentrate on in this blog.
The first option, of course, is to view the activity on the board using an oscilloscope or logic analyzer – if you can get to the right places. But this can be way too limiting. You need to get inside the FPGA to really see what is happening.
One way to do this would be to route a few interesting signals to the external pins of the device, but there are two problems here. The first is that many designs are pin-constrained, and the number of interesting signals could be quite large. The second is that unless the board has been designed carefully, deciding which pins can double as debug pins some of the time, and still be accessible with a probe, is not a trivial task. Of course, if certain pins are dedicated to debug 100% of the time, that is simpler to manage. So instead we could think about on-chip instrumentation.
On-chip instrumentation is special-purpose logic built into the device that can capture internal activity for replay at a later time. So, just like a logic analyzer, we need triggers to decide when to start capturing data, memory to hold that data, and a mechanism to access it. Let's start at the end and work backwards:
Almost all systems these days contain a JTAG (IEEE 1149.1) port, and it is fairly simple to connect this debug logic into the JTAG system. Through this port, the debug instrumentation can be configured, controlled, and used to stream the data out. To gain access to the internal data, the system is stopped and the scan chains are fed out of the device along with the captured data from the debug system. So this is very static in nature: set up the test, capture the data, stop the test, then look at and analyze the data. There is no way to see live data.
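As a rough illustration (in Python rather than RTL, since the point here is the behavior, not an implementation), the capture side of such an instrument acts like a circular buffer armed by a trigger: it records continuously, and once the trigger fires it keeps capturing until some amount of post-trigger context is in the buffer. The class below is a toy model for this blog, not any vendor's actual debug core:

```python
from collections import deque

class CaptureBuffer:
    """Toy model of an on-chip trace buffer: records every sample into a
    circular buffer, and after the trigger fires keeps going until half
    the window holds post-trigger context (a hypothetical policy)."""

    def __init__(self, depth, trigger):
        self.depth = depth                      # samples the buffer can hold
        self.trigger = trigger                  # predicate on a single sample
        self.samples = deque(maxlen=depth)      # circular storage
        self.triggered = False
        self.post_count = 0                     # samples seen after trigger

    def clock(self, sample):
        """Called once per capture clock; returns True when capture is done."""
        self.samples.append(sample)
        if self.triggered:
            self.post_count += 1
        elif self.trigger(sample):
            self.triggered = True
        # stop once half the window is post-trigger context
        return self.triggered and self.post_count >= self.depth // 2

# Arm the buffer to trigger when a bus value hits 0xFF
buf = CaptureBuffer(depth=8, trigger=lambda s: s == 0xFF)
for s in [1, 2, 3, 0xFF, 5, 6, 7, 8, 9, 10]:
    if buf.clock(s):
        break
print(list(buf.samples))
```

The final readout – in real hardware, shifted out over JTAG after the system stops – contains both the pre-trigger history and the post-trigger context, which is exactly what makes this more useful than a plain snapshot.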
Some companies created extensions, such as one from Philips Research, that allow data to be streamed out at high speed. Another approach combines the use of JTAG for configuring the system with a set of muxes that select which signals are fed out, in real time, to a group of external pins dedicated to this function. An example of this, called FPGAView, was created by First Silicon Solutions (now part of MIPS).
The next issue to look at is memory. While FPGAs have quite a lot of on-chip memory, many designs already use a large percentage of it. With limited memory, the amount of debug data that can be captured is also limited, and there is a tradeoff between the number of signals captured and the depth of the trace. An external logic analyzer may be able to capture millions of vectors, but this will not be possible on chip. Using external memory slows the process down to the point where it may not be possible to capture data in real time.
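The width-versus-depth tradeoff is simple arithmetic: for a fixed pool of block RAM, doubling the number of captured signals halves the trace depth. The budget below is purely illustrative – the 512 Kbit figure is an assumption, not a number from any particular device family:

```python
# Trade-off between signal count (width) and trace depth for a fixed
# amount of on-chip block RAM set aside for debug. The 512 Kbit budget
# is an illustrative assumption, not a real device figure.
BLOCK_RAM_BITS = 512 * 1024

for width in (8, 32, 128):
    depth = BLOCK_RAM_BITS // width
    print(f"{width:4d} signals -> {depth:6d} samples of trace depth")
```

So capturing 128 signals instead of 8 costs a factor of 16 in how far back the trace can reach – which is why choosing which signals to tap is such an important part of setting up an on-chip capture.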
The internal logic analyzer is just some additional logic added to the design that allows certain aspects of the running system to be measured, recorded, or checked for possible error conditions. Users are free to design any and all of this themselves, but it adds nothing to the design itself and has to be developed and debugged in its own right. So most of the FPGA manufacturers provide preconfigured cores that perform these functions; examples are SignalTap from Altera and ChipScope from Xilinx. These let you select the signals of interest and configure the triggering and sampling logic, and they may also contain compression logic to help maximize the use of the available memory. While this takes extra logic, it does not need to be there in every system that ships, so many companies put a larger-than-necessary FPGA on a few debug systems and build the production versions with smaller FPGAs.
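To see why compression stretches a small trace buffer so effectively, consider that internal signals often sit idle for long stretches. A run-length scheme – storing each value once with a repeat count, a deliberately simplified stand-in for what such cores actually do – collapses those idle runs:

```python
def rle_compress(samples):
    """Run-length encode a sample stream into (value, run_length) pairs.
    A toy stand-in for the storage-reduction tricks embedded logic
    analyzers use to stretch a small trace buffer."""
    out = []
    for s in samples:
        if out and out[-1][0] == s:
            # extend the current run instead of storing a new entry
            out[-1] = (s, out[-1][1] + 1)
        else:
            out.append((s, 1))
    return out

# An idle-heavy trace: 9 raw samples collapse to 3 stored entries
print(rle_compress([0, 0, 0, 0, 1, 1, 0, 0, 0]))
```

The catch, of course, is that the compression ratio depends entirely on the activity of the signals being captured – a busy bus compresses far worse than an idle interrupt line.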
So there we have it – a whirlwind tour of some of the debug systems available for FPGAs. This has only covered a tiny fraction of what is available and the ways it can be used, and perhaps I will come back to this topic in a future blog. Until then, I hope this was useful, and if there are specific areas you would like me to blog about, please let me know at firstname.lastname@example.org
Brian Bailey – keeping you covered