Interface – verb: to interact with (another system, person, organization, etc.); noun: a point where two systems, subjects, organizations, etc., meet and interact.
Let’s face it: interfaces are usually not very thrilling. They don’t contain any whiz-bang capabilities, they don’t make a system exciting (especially when they are deeply embedded in it), and they are usually considered commodities – a necessary evil. But interfaces can break a system, and they can end up being very expensive if ignored.
What systems am I talking about? It doesn’t really matter, because I don’t care whether they are hardware interfaces, software interfaces, or interfaces between tools, abstractions, or platforms – they are never the things that people want to work on. I can remember that when I was a junior engineer (yup, I once designed flight control systems), I was often asked to cut my teeth working on interfaces, basically because nobody else wanted to be involved. And yet, time and time again, systems fail because of the lack of attention interfaces receive.
I remember, several years ago when I was consulting, talking to one company that was having problems getting a new chip verified. We talked to each of the teams and asked them about their specific problems. One team told us that the bus unit they had designed had been so heavily verified that they would be surprised if a bug were ever found in their piece. They had used formal verification; they had done everything they could think of to ensure that no bugs would exist in this vital piece of system connectivity. We talked to each of the groups working on aspects of functionality and found out what they were doing about verification. While not as sophisticated as the bus group, they all had quite reasonable plans.
Then we brought everyone together, and I asked who was responsible for integration verification. There was silence in the room for several long seconds. One manager broke the ice and said that they had done one early test to see if they were compatible with the bus. Another manager said that they had done that same test. Then one very brave manager admitted that the biggest problem he had was keeping up with all of the spec changes on the bus. Some of the other managers suddenly had very worried looks on their faces, and it became clear that they weren’t even aware that the specs had changed. The chances of that system working when it finally came together were close to zero. This company had completely ignored its interfaces, and it not only cost them their schedule – they also lost the contract to supply chips to a client that would have ordered them by the millions had they been successful.
For the last decade, I have been working in an Accellera standards group that concentrates on interfaces. The interface in question is the Standard Co-Emulation Modeling Interface (SCE-MI). We have been attempting to create a standard that bridges a software execution environment, such as a simulator or a testbench running on a host computer, and hardware platforms such as emulators or FPGA prototypes. When we started this work, there were C, C++, and later SystemC on the testbench side, and both Verilog and VHDL on the RTL side that could be run on an emulator. As time has moved on, the number of languages on each side has increased. Synthesis technology has advanced to the point that it is no longer just RTL code that can be accelerated, and each of these developments creates additional challenges in making these interfaces seamless. Within OSCI, the TLM 2.0 standard is being adopted as a way to bridge transaction-level models, and the SystemVerilog group is attempting to ensure that it can also connect to this interface. The SystemVerilog DPI attempts to bridge the divide between C code and SystemVerilog, although it comes with a lot of restrictions. So we have interfaces that need to:
- Bridge levels of abstraction
- Bridge languages
- Bridge execution platforms
The user just wants to plug models together: to create hybrid platforms consisting of several execution engines, each selected because it provides the highest performance or the necessary debug capabilities, or to plug an implementation model into a virtual prototype to test whether that piece operates correctly within a system context. In addition, we are seeing the successful adoption of high-level synthesis in a number of companies, and if this is going to extend from the block level to anything higher, those same interfaces used for modeling and verification must be synthesizable. Given the importance of all this, you would expect a large, coordinated effort within the industry to make it happen – right?
That is not the case, and yet without this effort, none of the things I talk about will be possible, because the effort required of the user will be too high. Transactor models will not be developed, because too many variants would be required. Users will have to keep changing interface models to enable different aspects of a design and verification process.
We continue to develop standards that are conceived in a vacuum, and while they may be good for one purpose, they create problems for so many others. I am very pleased to hear that OSCI and Accellera are merging, but as an industry we also need to come together to define a consistent set of interfaces that bridges all three of these domains and starts to enable flows connecting the ESL world to the RTL world. If we don’t, we will, as an industry, suffer the same fate as the company that ignored the interfaces in its chip.
Brian Bailey (http://brianbailey.us) – keeping you covered.