Some technologies see quick adoption, while others languish in the shadows. Sometimes a technology that sees quick adoption leaves me dumbfounded as to what people see in it, or why it became so popular.
I am not going to list my idea of the dumbest technologies, because in doing so I would surely offend all those who think they are really neat – or at the very least reveal my ignorance of the real utility they have. On the flip side, there are technologies that have always seemed to me to make a lot of sense but that never take off, and sometimes I cannot fathom why. I would like to discuss one of those briefly here.
The technology in question is dynamic reprogramming. Those of you who have read one of my books that deal with taxonomies for ESL (ESL Design and Verification or ESL Models and their Application) will know that I have an axis specifically allocated to this very concept – Configurability. I have also spoken about this subject in general in a previous blog.
Today, I want to talk specifically about one extreme of that axis, which I defined as dynamic configuration. By this I mean the ability to reprogram an FPGA, or at least part of it, while the system is running. Blocks of code would be pre-compiled and ready to be mapped into an FPGA with a fixed configuration. Standard interfaces would be used to communicate between the software and hardware, such that a routine could continue to operate in software if acceleration resources were not available, and would have an efficient connection to the hardware if they were. Streaming interfaces and several memory access mechanisms would be available, depending on the application's needs.
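To make the idea concrete, here is a minimal sketch in C++ of what such a common interface might look like. Every name in it – FpgaRegion, acquire, the scale kernel, the "scale.bit" bitstream – is hypothetical, and the hardware is stubbed out in software; the point is only the shape of the dispatch: try to claim a hardware region, and fall back to software transparently if none is free.

```cpp
#include <cstddef>
#include <vector>

// Hypothetical runtime handle for one reconfigurable region of an FPGA.
// The "hardware" here is faked in software -- this is a sketch, not an API.
struct FpgaRegion {
    bool busy = false;
    // In a real system this would load a pre-compiled partial bitstream;
    // here it just claims the region, or fails if it is already in use.
    bool acquire(const char* /*bitstream*/) {
        if (busy) return false;
        busy = true;
        return true;
    }
    void release() { busy = false; }
};

// One kernel, implemented twice behind a single entry point.
// (out must be pre-sized to in.size() by the caller.)
static void scale_hw(FpgaRegion&, const std::vector<int>& in, std::vector<int>& out) {
    for (std::size_t i = 0; i < in.size(); ++i) out[i] = in[i] * 2;  // stand-in for the accelerator
}
static void scale_sw(const std::vector<int>& in, std::vector<int>& out) {
    for (std::size_t i = 0; i < in.size(); ++i) out[i] = in[i] * 2;  // plain software path
}

// Callers never know which path ran: hardware if a region is free,
// otherwise the routine simply continues to operate in software.
void scale(FpgaRegion& region, const std::vector<int>& in, std::vector<int>& out) {
    if (region.acquire("scale.bit")) {
        scale_hw(region, in, out);
        region.release();
    } else {
        scale_sw(in, out);
    }
}
```

Because both paths sit behind the same function signature, the decision to accelerate becomes a runtime resource question rather than a design-time commitment – which is exactly what makes it an operating-system problem.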
It was probably 15 years ago that I tried to get a research program started on this very subject, and even though I was offering money to fund the program I couldn't find a single professor interested enough to take it up. It seemed to me to be a logical extension to an operating system: start thinking about FPGA fabric as just another limited, shared resource that has to be allocated to the software that wants to use it.
This would make scheduling a little more difficult because of the time needed to reprogram the FPGA, and more complex still if something was using the FPGA when a higher-priority task came along and wanted the resources for itself. But by treating the FPGA as a sort of cache, such that functionality can be paged in and out, it would be a fairly controlled environment. I think part of the problem back then was that it didn't appeal to the hardware guys because it involved an operating system, and it didn't appeal to the software guys because too much hardware knowledge was required. Those days are gone in many universities today, with the EE and CS departments either fully merged or at the very least cooperating. I do find a few papers on the subject, but few that seem to make real progress.
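The cache analogy can be made concrete in a few lines. Below is a toy model – again with invented names, not any real reconfiguration API – that treats a fixed number of reconfigurable regions as an LRU cache of kernels: a hit means the kernel is already configured, a miss means the coldest kernel is paged out and the new bitstream paged in.

```cpp
#include <cstddef>
#include <list>
#include <string>
#include <unordered_map>

// Toy model of the FPGA-as-cache idea: a fixed number of reconfigurable
// regions, with kernels paged in and out LRU-style. All names are invented.
class FpgaCache {
    std::size_t capacity_;                   // number of reconfigurable regions
    std::list<std::string> lru_;             // most recently used kernel at the front
    std::unordered_map<std::string, std::list<std::string>::iterator> index_;

public:
    explicit FpgaCache(std::size_t regions) : capacity_(regions) {}

    // Ensure `kernel` is resident in hardware. Returns true on a hit (no
    // reprogramming needed), false on a miss (one region was reconfigured).
    bool schedule(const std::string& kernel) {
        auto it = index_.find(kernel);
        if (it != index_.end()) {            // hit: already configured
            lru_.splice(lru_.begin(), lru_, it->second);
            return true;
        }
        if (lru_.size() == capacity_) {      // miss with all regions occupied:
            index_.erase(lru_.back());       // evict ("page out") the coldest kernel
            lru_.pop_back();                 // -- real code must wait for it to idle
        }
        lru_.push_front(kernel);             // "page in": reprogram one region
        index_.emplace(kernel, lru_.begin());
        return false;
    }
};
```

Note what the sketch leaves out: it never checks whether an evicted kernel is mid-computation, and it has no notion of priority – and those two gaps are precisely the scheduling problems described above.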
Historically, there were also some practical limiters, such as those associated with global reprogramming and load time. By global reprogramming I mean that you could only reprogram the entire FPGA, and thus a system would need several independent devices if it wanted to keep using one while another was being prepared. The other limiter was the time it took to reprogram them, which is simply the time it takes to write all of the necessary configuration data into the device. But it seems to me that both of these limitations have been overcome – modern devices support partial reconfiguration of a region while the rest keeps running – or at the very least improved to the point where they are no longer show stoppers. The configuration time is still significant, but for many of the cases where this would be used, the need to schedule an "algorithm" for execution in an FPGA can be anticipated, such that the load time should not be an issue.
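To illustrate the anticipation point: if the runtime knows which kernel is coming next, the bitstream load can simply be overlapped with ongoing work. A sketch, with the configuration time faked by a sleep and the bitstream name invented:

```cpp
#include <chrono>
#include <cstdio>
#include <future>
#include <thread>

// Stand-in for writing a partial bitstream into one region of the device;
// the sleep fakes the milliseconds of configuration time.
static void load_partial_bitstream(const char* name) {
    std::this_thread::sleep_for(std::chrono::milliseconds(5));
    std::printf("region configured with %s\n", name);
}

int main() {
    // The next kernel is known in advance, so start reconfiguring now...
    auto ready = std::async(std::launch::async, load_partial_bitstream, "fft.bit");

    // ...while the current work continues in software or in another region.
    std::printf("doing useful work during reconfiguration\n");

    ready.wait();  // by the time the kernel is dispatched, the region is ready
    return 0;
}
```

As long as the anticipation window is longer than the configuration time, the load cost is hidden entirely – the same trick caches and demand-paged memory have relied on for decades.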
So, I would love to hear your opinions as to why this is not a practice seeing more widespread use. Is there still a cost issue, such that the devices that could really make use of it cannot afford the monetary cost, the power budget, or some other factor? I do remember hearing concerns about verifying such dynamically configurable systems, but I also consider this to be a well-controlled problem. Each block does need to be verified twice – once in software and once in hardware – and the mechanism that loads the modules needs to be verified, so there is some incremental cost. But is that enough to kill the idea?
Brian Bailey (www.brianbailey.us) – keeping you covered