While the origin of the expression “May you live in interesting times” is not known for certain, it is thought to have been meant as a curse. Right now I see this as one of the greatest opportunities that I can remember seeing in my long years within the EDA industry. I am talking about the convergence of several small pieces of news over the past few weeks and the ways in which these can and should be brought together. When change happens, it usually hurts somebody, and that is why the expression is seen as a curse. I alluded to part of this in a blog a few weeks back when I said that the days of the RTL simulator are numbered. For them, the events that are unfolding certainly will be a curse, and it would appear that I ruffled a few feathers with those comments.
What I said was “RTL simulation is a technology of the past,” and I mean it. First off, I am glad that my remark has caused a few people to think about this in an objective way, because if they don’t, then I am not doing my job. I need to make people aware of changes that are happening in our industry long before they become critical issues, because that gives them time to think about it, plan, and look for alternative solutions. Of course there are some people who strongly dislike my comments because they may affect their immediate business opportunities. To them I just have to say sorry, but if I don’t do it, someone else will.
So why am I so certain about this? It really is quite simple. Logic simulation has barely managed to keep up over the past few decades. It has relied on the ever-increasing performance of the processor, cheap and plentiful memory, plus a set of highly innovative optimizations. On the flip side, design sizes have been increasing such that the typical time spent performing verification has grown, and most of that time involves a simulator. Companies have compensated by buying more simulator licenses per engineer and installing huge simulation farms. But this is coming to an end. For almost as long as the RTL simulator has existed, there have been efforts to speed it up using multiple processors. All of those efforts have had limited success and have normally topped out at just a few processors. The reasons have to do with the randomness with which signals propagate through a design and the high cost of inter-processor communication. This is unlikely to change in the near future. Emulators get around this because their communication costs are small. The performance of a single processor has stalled, and no processor designer is now trying to make a single core run faster, so without being able to make efficient use of multiple processors, simulator performance has stalled and will effectively degrade with each new generation of processor and computer.
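To make the communication problem concrete, here is a minimal sketch in C++ (not any vendor's kernel, and with a tiny invented netlist) of the event-driven loop at the heart of a logic simulator. Everything hangs off one time-ordered event queue, and the moment you split the design across processors, any event whose fan-out lands in the other partition forces the two sides to stop and exchange data.

// Minimal event-driven simulation kernel sketch (illustrative only).
// The point: every event may schedule new events at arbitrary future
// times on arbitrary signals, so two partitions running on different
// processors must stop and exchange events whenever a signal crosses
// the partition boundary -- the communication cost described above.
#include <cstdio>
#include <functional>
#include <queue>
#include <vector>

struct Event {
    long long time;   // simulated time of the event
    int signal;       // which signal changes
    int value;        // new value
    bool operator>(const Event& o) const { return time > o.time; }
};

int main() {
    // One global time-ordered event queue: inherently serial, because
    // the earliest event must be processed before we know what it
    // schedules next.
    std::priority_queue<Event, std::vector<Event>, std::greater<Event>> q;
    std::vector<int> signals(4, 0);

    // Hypothetical netlist: signal 0 drives signal 1 after 2 time
    // units; signal 1 drives signal 3, which would live in the "other
    // partition" if the design were split across two processors.
    q.push({0, 0, 1});
    while (!q.empty()) {
        Event e = q.top(); q.pop();
        if (signals[e.signal] == e.value) continue;   // no change, no fan-out
        signals[e.signal] = e.value;
        std::printf("t=%lld signal %d -> %d\n", e.time, e.signal, e.value);
        if (e.signal == 0) q.push({e.time + 2, 1, e.value});
        if (e.signal == 1) q.push({e.time + 3, 3, e.value}); // crosses partitions
        // A parallel simulator must either synchronize both partitions
        // at every such crossing or speculate and roll back; either
        // way the speed-up tops out at a handful of processors.
    }
    return 0;
}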
Then we hear about a project that Intel is working on in Europe to deal with hardware-software co-design. Now it is clear why Intel would want software-software co-design, because this involves the migration of a single application onto multiple processors, but hardware-software implies that there is some kind of accelerator available, and it wasn’t that long ago that Intel opened its fab to the production of FPGA technology for Achronix Semiconductor. Peter Clarke writes: “ILE started in 2009 with an annual budget of about 100 million euro (about $140 million) and 800 people and has now grown to nearly 1,200 people and an annual budget of 130 million euro (about $180 million).” Now that is a serious project. According to TechSpot, “Intel is going to introduce a ‘fully configurable Intel Atom Processor’ codenamed Stellarton next year. Essentially, Stellarton is a dual die package consisting of a 45nm Atom E600 processor and a FPGA module.”
Then there are the new chips recently announced by Xilinx that combine a dual-core processor tightly linked with the FPGA fabric. This is no longer just a processor floating on the same substrate; this is fully integrated and intended for mixed hardware-software systems. It won’t be long before Xilinx has to buy or develop some hardware-software co-design tools, and this is the root of my DAC 2010 prediction that it will be the specialty chip providers and the FPGA providers who are likely to have the first fully fleshed-out ESL flows, exactly because they have constrained the problem to the point where it becomes solvable.
So why am I so excited? Remember a couple of weeks back when I talked in my blog about the need for a new FPGA structure to assist with prototyping? Well, I wouldn’t exactly say that this is what I had in mind, but it is a great step that will open up some new possibilities for prototyping, expand its market, and start to squeeze both emulation and RTL simulation. I will go into more depth in future blogs, but in short, by placing processing power on a chip with a high-speed connection to the FPGA fabric, we can start to think about much more extensive on-chip debug capabilities. Another problem today is the turnaround time after a design change has been made. What about running a mini gate-level simulator on the processor that can handle the changes, rather than waiting for those changes to be implemented in the FPGA fabric? Sure, it will run slower, but this can be used in the short term while the main compile is still running for several hours. So to me, the addition of a processor tightly integrated into an FPGA fabric is not just about making programmable devices more ubiquitous, it is about making them available for new tasks, or for performing tasks in different ways.
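To illustrate the mini gate-level simulator idea, here is a rough sketch, with a purely hypothetical three-gate patch, of the kind of evaluator that could run on the embedded processor: it models only the edited cone of logic against values sampled from the fabric while the full recompile grinds away.

// Illustrative sketch only: a tiny gate-level evaluator of the kind
// that could run on the embedded processor to model a just-edited
// cone of logic while the lengthy FPGA recompile is still in flight.
// The netlist, gate types and signal indices here are all invented.
#include <cstdio>
#include <vector>

enum class Op { AND, OR, XOR, NOT };

struct Gate {
    Op op;
    int a, b;   // input signal indices (b ignored for NOT)
    int out;    // output signal index
};

// Evaluate the patched gates in topological order against values
// sampled from the unchanged part of the design running in fabric.
void evaluate(std::vector<int>& sig, const std::vector<Gate>& patch) {
    for (const Gate& g : patch) {
        switch (g.op) {
            case Op::AND: sig[g.out] = sig[g.a] & sig[g.b]; break;
            case Op::OR:  sig[g.out] = sig[g.a] | sig[g.b]; break;
            case Op::XOR: sig[g.out] = sig[g.a] ^ sig[g.b]; break;
            case Op::NOT: sig[g.out] = !sig[g.a];           break;
        }
    }
}

int main() {
    std::vector<int> sig = {1, 0, 1, 0, 0, 0};   // sampled + patched signals
    std::vector<Gate> patch = {                  // the edited logic cone
        {Op::XOR, 0, 1, 3},
        {Op::AND, 3, 2, 4},
        {Op::NOT, 4, 0, 5},
    };
    evaluate(sig, patch);
    std::printf("patched outputs: %d %d %d\n", sig[3], sig[4], sig[5]);
    return 0;
}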
The evolution of hardware design is analogous to the evolution of software, but it is years behind. Well, “so what?”: it is not that RTL itself is bad; the gcc compiler even translates to “RTL” as an intermediate step, and the back end then converts it to the specific instruction set of each different CPU. We need to look at what really has to happen when designing a system and draw on the analogies.
Things like reuse, encapsulation, abstraction, etc., exist in OOP. There are also data-flow and control/decision functions. HDL modules can be modeled by OOP classes (control and data flow), then compiled and instantiated to run with the application being developed, rather than being run on an “RTL” simulator.
The point is to capture the function and integrate it with the software without using antiquated technology. The compilation will also be much faster because the routing and placement associated with an HDL flow are unnecessary.
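As a rough sketch of what the commenter is describing, assuming nothing more than standard C++ (the FIFO module, its ports and behaviour are invented for illustration), an HDL-style block can be written as a class that captures both the data path and the control decisions, then compiled and linked straight into the application:

// Minimal sketch of the comment's idea, not an established flow:
// model an HDL-style module as an ordinary C++ class so it compiles
// and links directly into the application under development instead
// of running on an RTL simulator.
#include <cstddef>
#include <cstdint>
#include <cstdio>
#include <vector>

// A FIFO "module": the class captures the data path (storage) and
// the control/decision logic (full/empty handshaking) that would
// otherwise be written in an HDL and simulated.
class Fifo {
public:
    explicit Fifo(std::size_t depth) : depth_(depth) {}

    bool write(uint32_t data) {              // control: reject when full
        if (buf_.size() >= depth_) return false;
        buf_.push_back(data);                // data flow: store the word
        return true;
    }

    bool read(uint32_t& data) {              // control: reject when empty
        if (buf_.empty()) return false;
        data = buf_.front();
        buf_.erase(buf_.begin());
        return true;
    }

private:
    std::size_t depth_;
    std::vector<uint32_t> buf_;
};

int main() {
    // "Instantiate" the module and drive it from application code,
    // the way the real software stack would.
    Fifo fifo(4);
    for (uint32_t i = 0; i < 6; ++i)
        std::printf("write %u -> %s\n", i, fifo.write(i) ? "ok" : "full");
    uint32_t v;
    while (fifo.read(v)) std::printf("read %u\n", v);
    return 0;
}

The application simply instantiates the class and calls it, which is the point of the argument: the functional model lives in the same build as the software, with no simulator in the loop.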