It seems that the von Neumann architecture is taken as a given (except for ARM parts with a Harvard architecture). Then of course there's the compiler, which takes the high-level code and breaks it down to low-level machine language that takes many cycles to perform a single statement or assignment. There lies the problem: time and power are wasted because the CPU architecture does not take advantage of dual-port embedded memory blocks. The internal bandwidth of FPGA memory blocks is phenomenal. So put C statements in one block, expressions in another, and variables in another so they can all cycle at the same time, and you get performance much faster than conventional CPUs (maybe close to an order of magnitude), and you spend the power cycling memory instead of driving huge buses and registers at maximum rate to achieve a high fmax in a pipeline. The procedural code can then be written in C and the SoC builder used to connect the peripheral IPs. There are hundreds of memory blocks, and each CPU only uses a few. The data width can be parametrized to the desired data-flow width, while the control width follows the size of the code, since the two do not share memory.
Many simple embedded applications may not need devices like FPGAs. A very simple uC with I/Os and a few other interfaces suffices for the job. Development with an embedded-processor FPGA takes ten times longer to design, develop, and debug the system.
What is more desirable is a solution like the Cypress PSoC, where you have a processor, some programmable logic, and additional programmable analog parts. This gives a unique single-chip solution and is relatively simple to implement.
We wish to see more integration of power devices like MOSFETs and other HV devices.
Some wonderful comments, and some of them are issues that I considered myself. For example, Frank says that FPGAs compete with ASICs, not processors. I see them as a continuum. We need custom logic because processors are not fast enough, or frugal enough with power, but many companies cannot afford the expense of ASICs, so FPGAs are an alternative for small-volume products. But why can't FPGAs become more like processors? Part of it is, I think, as Dr DSP and KB3001 point out, a matter of standardization and encapsulation. Independence is highly important too: processors enabled an independence of tasks that we sorely need. I find it funny that I am accused of taking the software side, as I am a hardware engineer at heart and was a developer of EDA solutions for many years, and am now just someone who ponders what is wrong with the industry we work in. Thanks for the comments, and keep them coming.
Brian, you're asking the wrong question. FPGAs compete with semi-custom designs, not with microprocessors. The processor these days is "just" an IP core -- perhaps one of many -- that goes into that semi-custom IC or FPGA design. It's a building block, much like the old 7400 series logic ICs you mentioned.
The end result is a hardware & software system, with emphasis on "system" rather than on either hardware or software. The microprocessor does have those positive attributes of Simplicity, Independence and Abstraction, but 25 years ago you could've said the same thing about those 7400 series logic chips.
I don't know how one goes about simplifying the usage model of that system. You seem to be taking the perspective of the software engineer who would like the hardware design to be more neatly packaged and abstracted so that software development could be smoother. We hardware engineers wish that software design could be more neatly packaged and abstracted so that hardware design could be smoother!
At least the software engineers have the advantage that hardware is always finished first (ok, almost always), so you have a stable system to work with. We on the hardware side have to consider the potential risks of the unknown and unwritten software and whether it could damage the system or cause other serious problems if it did something it shouldn't have.
There are tools for building SoCs that have standard peripherals and DSP blocks, so I have a new CPU architecture/design that runs C statements directly. The key factors are the number of clock cycles used and the size, which scales with the number of statements, expressions, and variables.
Not a von Neumann architecture, no compiling to a CPU instruction set, and YES, it is practical to use dedicated engines to avoid RTOS/multicore complexity.
Still a long way to get to utopia, though.
It would help, but the key is really in "standardisation". We need standard abstractions of FPGA hardware (in the same way ISAs did for microprocessors), and we also need industry guarantees of backward compatibility of FPGA chips to leverage prior design/programming investment, in the same way Intel did for microprocessors. Technically, I have no doubt this is feasible, but the real question is: who will drive this effort? Where are the Intels and Microsofts of FPGAs going to come from? You need a strong killer application for this to happen, and that has yet to be found. Academia could play a role in starting the ball rolling, but ultimately you need a killer application.
It seems like the solution to this problem is to create IP cores at a high enough level that 'verification' is done by construction. What if you could just drop blocks onto the design, connectivity were correct by construction, and the functions of the design were pre-verified? Test and debug would be done at the block level, not at the gate level. Any tools on the horizon for doing design this way? Object-oriented FPGA-based designs, maybe?