It was probably 17 years ago that I first started attending co-design conferences. At that time they were very academic in nature, concentrating mostly on the optimal way to automatically partition a design written in C, Java, or some other language into hardware and software.
The interfaces were always the sticky area, and most of the algorithms simply assumed that communication was free, both in terms of performance and resources. Some of the later ones did take this into account, but not in very flexible ways. We even started to see commercial tools in the area, not just from startups but from Cadence, with Felix, VCC, or whatever name you preferred to call it by.
But all of this research, and the tools, fell by the wayside. I think the primary reason was that they all assumed a single thread, a single function, a single processor, a single bus – and the list could go on. Designing and partitioning between hardware and software for such a system is not really that difficult, so the tools didn't provide enough value to warrant their adoption.
I remember one person describing VCC as a huge user interface on top of a spreadsheet. We were already beginning to see multi-processor systems emerge, and it was clear that a single-application solution was not going to cut it. Who wants a phone that can only make voice calls?
So things went quiet for a long time, and it is only recently that I have started to see interest rising again. However, it is not the same as before, and I believe it is driven by different needs, although some of the earlier ones may resurface later. What we do know is that the systems of today are a lot more complex and defy complete static analysis, so the notion of an optimal partition, or indeed anything fully automatic, is not on the table. This creates both a need and a constraint, as I have written about in the past.
So what is changing and what is needed? I think the thing we all accept – it has almost become a cliché – is that software now defines much of the functionality. That is a little scary, given that software usually lags the hardware development and sits on the critical path. It is also scary because, as hardware people, we don't like to think that the success of a product is not under our control.
In addition, performance depends on how the hardware resources are used by the software, which means we have to analyze how the software interacts with the hardware – and that requires dynamic analysis. To enable this, we have to ensure that the software folks have the necessary tools to give them a fighting chance, something we really had not done in the past.
Of course, by this I mean a model of the hardware that is available to them early enough in the development process that they can perform real work on it: test and debug code, over time perhaps even optimize based on the hardware, and become intimately involved in things such as power conservation. I think we have learned that functionality is not the only aspect of a product that has to work well; performance, battery life, ergonomics, and many other factors play into the definition of a winning product, and these are combined hardware/software functions.
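To make that concrete, here is a minimal sketch, in Python, of the kind of model I mean. Everything in it – the UART register names, the offsets, the access counter – is invented for illustration and does not correspond to any vendor's API. The point is that driver code can be written and debugged against a purely functional model, which can also record activity, a crude proxy for power.

# A minimal sketch of an early hardware model: a memory-mapped register
# block that behaves functionally and counts accesses. All names and
# offsets are hypothetical.
class UartModel:
    TX_DATA = 0x00   # write a byte here to transmit it
    STATUS  = 0x04   # bit 0: transmitter ready

    def __init__(self):
        self.transmitted = []      # bytes "sent" by the model
        self.access_count = 0      # activity counter (crude power proxy)

    def write(self, offset, value):
        self.access_count += 1
        if offset == self.TX_DATA:
            self.transmitted.append(value & 0xFF)

    def read(self, offset):
        self.access_count += 1
        if offset == self.STATUS:
            return 0x1             # always ready in this simple model
        return 0

# Driver code written against the model, exactly as it would be against silicon
def uart_send(dev, message):
    for byte in message:
        while dev.read(UartModel.STATUS) & 0x1 == 0:
            pass                   # poll until the transmitter is ready
        dev.write(UartModel.TX_DATA, byte)

uart = UartModel()
uart_send(uart, b"hello")
print(bytes(uart.transmitted), "bus accesses:", uart.access_count)

A real virtual prototype does the same job at far greater fidelity, but the workflow it enables for the software team is the same.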
Immanuel Kant observed that to put a question one has to have some information or knowledge, and this is exactly where we are today. I am not sure that we yet know the right questions to ask, but we are beginning to learn how to make information available to the software folks. Once you have information, you can use intelligence to turn that information into knowledge, and then you can start to form the right questions. When we find the same question being asked by many people, we can look for ways to make the answer more easily obtainable, and if it affects design decisions, there is the possibility of automating that aspect of the process in the future.
One area that may lead to co-design is code profiling. We have seen several companies that extract performance information from code running on a virtual prototype, use it to make decisions about what to partition into hardware, and then provide various degrees of help in completing that task. Automation is still only possible in a tiny fraction of cases, but tools that reduce the chance of mistakes, or do some of the grunt work, will likely succeed, so long as the costs and benefits are in balance.
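As a sketch of how that triage works, the fragment below profiles a toy workload and flags any function consuming a disproportionate share of the runtime as a candidate for hardware offload. The workload, the kernel, and the 10% threshold are all hypothetical, and a real flow would profile target code running on a virtual prototype rather than host Python – but the decision logic is the same idea.

# Profiling-driven partitioning triage, using only the standard-library
# cProfile/pstats modules. Workload and threshold are hypothetical.
import cProfile
import pstats

def hot_kernel(data):
    # Hypothetical compute kernel standing in for a real hot loop
    return [sum(x * x for x in data) for _ in range(100)]

def workload():
    data = list(range(256))
    for _ in range(50):
        hot_kernel(data)

profiler = cProfile.Profile()
profiler.enable()
workload()
profiler.disable()

stats = pstats.Stats(profiler)
# Total time measured across all functions
total = sum(tt for (cc, nc, tt, ct, callers) in stats.stats.values())

# Rank functions by time spent in their own body; anything over 10% of
# the run is flagged as a candidate for moving into hardware.
for (filename, line, name), (cc, nc, tt, ct, callers) in stats.stats.items():
    if total > 0 and tt / total > 0.10:
        print(f"offload candidate: {name} ({tt / total:.0%} of measured runtime)")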
I would also like to give a shout-out to Neil Johnson, who has been writing about a similar subject recently on his blog and comes at it from a different perspective.
We are now on the right track to co-design, rather than the track that the early research took.
Brian Bailey – keeping you covered.