I see that this article is almost 4 years old, but it has been brought up recently by the author in a LinkedIn discussion (http://linkd.in/GQCb5v).
Although using an FPGA with a soft processor makes sense in many scenarios, hardware refactoring for cycle-accurate binary compatibility with 15-year-old firmware is not one of them.
The article makes misleading arguments, such as the claim that seeking binary compatibility with old firmware is an advantage over a firmware redesign. The main argument seems to be that doing this (an exact clone of an old processor) protects the original investment and can accommodate future changes.
Another is the implication that "code that worked perfectly for over 15 years" is equivalent to bug-free code. As much as managers may like to think that way, it is most certainly not true.
Another is the implication that a customer can do this cost-effectively.
The reality is that a totally different engineering team is required to integrate IP with custom-written RTL in an FPGA, using different tools, along with very skilled verification engineers to debug instruction-set compatibility, bus timing, and peripheral behavior against the existing binary.
In the process of verifying the RTL, subtle bugs show up, and it is *very* difficult to sort out whether a given bug is in the firmware or in the RTL.
After a huge effort, you end up with a system that is exactly like the one designed 15 years ago. Any "enhancement" in functionality will amount to firmware refactoring in the same old codebase, which will probably demand full regression testing of the RTL during debug, defeating the original purpose of avoiding touching the code. That is hardly future-proofing.
This approach seems to be aimed at project managers who have shallow knowledge of the real demands of the work and a dim view of the tradeoffs involved.
Has anyone implemented the FPGA approach for an application that requires very low power consumption? How much extra power does the FPGA draw?
Also, has anyone tried this approach for a medical application and passed it through the FDA?
There may be reasons to replace a µP with an FPGA, but long-term availability is not one of them.
The 68HC11 or the 8051 have been around for 20+ years.
If you target that core into an FPGA, chances are that the specific FPGA will be around for five years or so, until silicon moves to the next, smaller process node, or until the FPGA manufacturer decides that the part in the particular package you chose has not been successful and discontinues it.
It is by far safer to jump onto a popular ARM, AVR, H8x, or PIC today and invest in that re-engineering effort.
I agree with another comment: this is also a good opportunity to do some spring cleaning in the code.
And then, you get integrated ADCs, lower power, better code efficiency, smarter software development tools, smaller footprints, etc.
Some people, obviously, have only ever worked on well-structured, well-written, "commented to death," documented code. How lucky can you be!
I had to work on spaghetti code for an 8051 (I dare not call it SW) that was not maintainable, and the amount of time needed to untangle the functionality was immense.
I very much like the article, but I also have to agree that the premise is a bit unreasonable. That's not to say that it hasn't happened, mind you, just that it would not be a good idea in the situation given.
An FPGA implementation of an obsolete part requires just as much effort as, if not more than, reworking the code for an updated processor. I believe it would be a poor business decision to put engineering resources into such a project.
If, however, your objective is to supply substitute parts for obsolete processors, then the project makes perfect sense. It would certainly behoove any company to purchase a drop-in replacement rather than reengineer a fully functional product.
I suppose there could always be the case where you fired your engineer and he took all the documentation, the dog ate it, or the code is pirated and nobody knows how it works.
I kinda have to agree with the first comment, attached and all as I am to my old 8-bit designs.
A modern single-chip processor will probably have more peripherals, including ADC/DAC converters, than the application requires. Replacing the complete circuit (even if on a board the same size, perhaps to fit the existing case) may be a lot simpler and leave you a whole pile of empty space for new enhancements. In engineering terms it may actually be simpler and cheaper than building the drop-in pin-compatible FPGA board (clever and all as that is).
From a software perspective the 68HC11 application may be relatively easy to replace with an application written in C/C++. Older 8-bit applications often look complicated because they were written in assembler and engineers had to be clever to fit everything in. Even when they were written in C, it was often a very convoluted form of C with lots of quirks to suit the 8-bit architecture. You might find the app simple enough to rewrite when there are no memory restrictions and better tools available.
Finally, another solution altogether is a hybrid of the above approach (new everything) and the FPGA concept (old everything simulated in hardware): use a new processor with all the required peripherals (plenty of ARM variants spring to mind) and implement a software emulator (core emulators for most 8-bitters are available in open source). That way you could run all your existing code as-is but have a new hardware platform for future enhancements.
Author Responds to EDW:
Your analysis of the situation is very one-dimensional. It is important to understand that:
- The FPGA solution is based on actual customer experience with viable product.
- Project schedule, manpower/resource allocation, and risk analysis were considered before choosing an FPGA solution.
- Many design managers will not want to pull engineers off newer-generation products to reengineer a legacy board. A mezzanine solution is fine for them.
- Not all designs are as power sensitive as yours.
- Production volume also impacts the customer decision.
- For a product that has been validated by thousands of cumulative years of operation without failure, the binary-compatible FPGA option is a serious contender.
In summary, customers who are choosing to use soft processors in FPGAs to replace obsolete microprocessors are not nuts at all. They are making smart business decisions after considering all their options.
I'm sorry but this is nuts. Who, with a real viable product, is running around instantiating a creaky old 68HC11 in a dedicated FPGA? On an expensive separate PWB? With external ADC no less? Too impractical, expensive, and power hungry.
Buy a new processor for $0.50 and reap the benefits of low-power cheapness. Direct all the time and effort you would otherwise spend getting the FPGA up and running into retargeting your code (and fix some things while you are in there).
If you already have an FPGA on the board and need some tiny processing then by all means instantiate a processor. But I certainly wouldn't go out of my way to do this, that's just looking for trouble.
This article is an unsuccessful attempt to address a problem that doesn't exist in the solution space.