Due to new findings regarding the decimation filters discussed in the Experiments chapter, I would like to qualify the experimental results: it became apparent that the decimation filter designs produced by the Word Length Optimization were correct with respect to the simulated testbench scenarios, but do not satisfy the requirement stated by E. Hogenauer in "An Economical Class of Digital Filters for Decimation and Interpolation" (IEEE Transactions on Acoustics, Speech, and Signal Processing, vol. 29, 1981) concerning the most significant bits of the filter stages in cascaded integrator–comb (CIC) filters. This requirement ensures that overflows in the integrators of the filter are compensated in the differentiators. Because it is not fulfilled, the generated decimation filters were, in contrast to the handcoded filters, potentially erroneous.
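To illustrate the requirement, the following sketch models an N-stage CIC decimator with two's-complement (modular) arithmetic. As long as every stage keeps the full Hogenauer word length, integrator overflows wrap around and are exactly compensated in the differentiators; truncating the most significant bits below that bound breaks the compensation. This is a hypothetical Python illustration, not the generated VHDL/Verilog.

```python
# Sketch: CIC decimation with wraparound arithmetic. With the full
# Hogenauer word length B = N*ceil(log2(R*M)) + B_in, integrator
# overflows are harmless; truncating MSBs (extra_msb_trunc > 0)
# violates the requirement and corrupts the output.
import math

def cic_decimate(x, N=3, R=8, M=1, b_in=8, extra_msb_trunc=0):
    """N-stage CIC decimator by rate R with differential delay M."""
    b = N * math.ceil(math.log2(R * M)) + b_in - extra_msb_trunc
    mod = 1 << b
    # integrator cascade at the input rate (modular accumulation)
    acc = [0] * N
    y_int = []
    for s in x:
        v = s % mod
        for i in range(N):
            acc[i] = (acc[i] + v) % mod
            v = acc[i]
        y_int.append(v)
    # decimation, then comb (differentiator) cascade at the output rate
    prev = [0] * N
    out = []
    for v in y_int[R - 1::R]:
        for i in range(N):
            v, prev[i] = (v - prev[i]) % mod, v
        # reinterpret the modular result as a signed value
        out.append(v - mod if v >= mod // 2 else v)
    return out
```

For a constant input of 1, the steady-state output equals the DC gain (R*M)^N = 512; with the critical MSBs truncated, the wrapped intermediate values can no longer be compensated and the result is wrong.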
For that reason, a further constraint was added to the Word Length Optimization so that the critical integer bits are never reduced and reliable designs are generated. This leads to the following results: the area consumption of the resulting decimation filters is 27 percent higher than that of the handcoded design, and the power consumption is 12 percent higher. Although area and power consumption are now higher, the generated design is still considerably more efficient than the design obtained without the developed optimizations. Moreover, the additional area and power consumption of the generated designs can still be compensated by the reduced design effort achieved with the presented design flow.
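The added constraint can be sketched as a clamp applied to every candidate word length during the optimization. The function names and interface below are illustrative stand-ins, not taken from the actual tool:

```python
# Sketch of the added constraint: candidate word lengths for a CIC stage
# may never fall below Hogenauer's most-significant-bit bound, so the
# critical integer bits are never reduced.
import math

def hogenauer_msb_bits(n_stages, rate_change, diff_delay, b_in):
    """Word length required at every CIC stage so that integrator
    overflows are compensated in the differentiators:
    B = ceil(N * log2(R * M)) + B_in."""
    return math.ceil(n_stages * math.log2(rate_change * diff_delay)) + b_in

def constrain_candidate(candidate_bits, n_stages, rate_change, diff_delay, b_in):
    """Clamp an optimization move that would prune critical MSBs."""
    bound = hogenauer_msb_bits(n_stages, rate_change, diff_delay, b_in)
    return max(candidate_bits, bound)
```

For a 3-stage filter with rate change 8, differential delay 1, and 8 input bits, the bound is 17 bits; any candidate below that is clamped, while larger candidates pass through unchanged.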
Thank you very much for your questions.
1. HDL Coder does not perform any high-level synthesis in terms of scheduling, binding and allocation. Instead, HDL Coder performs a direct mapping between Simulink blocks and VHDL/Verilog constructs both for data-flow and control-flow dominated designs. The presented optimizations only address data-flow dominated designs.
2. For the same specification, the word-length optimization will produce the same implementation as long as the same seed is used for the random number generator. Still, you raise an interesting point, as small changes in the specification can result in completely different word lengths in the implementation. The most important verification step here is the comparison between the floating-point and the fixed-point model. We achieve this by simulating both designs in Simulink with testbench stimuli that represent the specified behavior of the design. Afterwards, by considering the signal deviations, we examine whether the fixed-point model still fulfills the specification. The verification between the Simulink model and the generated RTL code is also done simulation based.
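The comparison step can be sketched as follows: both the floating-point and the fixed-point model of a filter are run on the same testbench stimuli, and the worst-case signal deviation is checked against a spec tolerance. The FIR filter, stimuli, and word lengths below are hypothetical stand-ins for the actual Simulink models and testbenches:

```python
# Sketch: simulation-based comparison of a floating-point reference model
# against its fixed-point counterpart on shared testbench stimuli.
import random

def to_fixed(v, frac_bits):
    """Round v onto a fixed-point grid with frac_bits fractional bits."""
    scale = 1 << frac_bits
    return round(v * scale) / scale

def fir(x, coeffs):
    """Direct-form FIR filter (the floating-point reference model)."""
    return [sum(c * x[n - k] for k, c in enumerate(coeffs) if n - k >= 0)
            for n in range(len(x))]

def max_deviation(coeffs, stimuli, frac_bits):
    """Worst-case deviation between the floating- and fixed-point model."""
    ref = fir(stimuli, coeffs)
    q_in = [to_fixed(s, frac_bits) for s in stimuli]
    q_coeffs = [to_fixed(c, frac_bits) for c in coeffs]
    fxp = [to_fixed(y, frac_bits) for y in fir(q_in, q_coeffs)]
    return max(abs(a - b) for a, b in zip(ref, fxp))

random.seed(0)
stimuli = [random.uniform(-1.0, 1.0) for _ in range(256)]
coeffs = [0.25, 0.5, 0.25]
```

A candidate word length is accepted only if the resulting deviation stays within the tolerance derived from the specification; more fractional bits bring the fixed-point model closer to the floating-point reference.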
3. For data-flow dominated designs, which are addressed by our optimizations, the resulting Simulink models and RTL implementations are verified as described above. Control-flow dominated implementations resulting from HDL Coder can be formally verified by property checking against properties that represent the specified behavior.
I hope I answered your questions. If any more questions occur, please feel free to post them.
A couple of questions arising from issues that were not entirely clear in the article:
1. How well does this HLS methodology perform on designs with complex control flow? I only see a couple of statements about data-path dominated (I suppose stream-based) designs...
2. It seems that because you are using simulated annealing to convert from floating point to fixed point, you will get different implementations from the same spec. every time you run the tool!
Isn't this a burden for verification groups?
3. I don't see any formal methods used in the translation flow, from Simulink models down to HDL implementations. How do you guarantee the functional correctness of the results? Even more, how do you prove that the implementation's functionality matches that of the specification?