Hi Juergen. I am the author of this article; thanks for the comments. I agree that filter design is complex and many books have been written on the subject (and I imagine many more will be). This article was written as a simple introduction to digital filters (it was initially printed in the Xilinx Xcell FPGA 101 section), covering the ideas behind filtering, how to generate the coefficients, the importance of windowing, and so on. You are correct that the more advanced filter options an engineer can undertake to reach the final system implementation would make a pretty good follow-up article.
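To make the coefficient-generation and windowing point concrete, here is a minimal NumPy sketch of the windowed-sinc method (the function name and parameters are my own, not from the article): truncate the ideal low-pass impulse response, apply a Hamming window to tame the ripple the truncation causes, and normalise for unity DC gain.

```python
import numpy as np

def windowed_sinc_lowpass(num_taps, cutoff):
    """Low-pass FIR coefficients via the windowed-sinc method.

    num_taps : odd filter length
    cutoff   : normalised cutoff frequency (0..0.5, fraction of sample rate)
    """
    n = np.arange(num_taps) - (num_taps - 1) / 2
    # Ideal (infinite) low-pass impulse response, truncated to num_taps
    h = 2 * cutoff * np.sinc(2 * cutoff * n)
    # Hamming window reduces the ripple caused by the truncation
    h *= np.hamming(num_taps)
    # Normalise for unity gain at DC
    return h / h.sum()

coeffs = windowed_sinc_lowpass(31, 0.125)  # cutoff at fs/8
```

The symmetric taps give a linear-phase response, which is usually what you want before quantising the coefficients for hardware.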
Hm, not all digital filters work sequentially with the same (MUL, ADD) structure as mentioned above - especially not inside FPGAs.
The strongest advantage of FPGAs is that the engineer has the possibility (and the duty) to choose the degree of parallelisation in order to strike a balance between cost, speed and throughput. Optimised filter design is therefore more than instantiating a core, which (in this case) is not even capable of finding and optimising the parameters and coefficients (unlike some competitors' products) and is limited in data width too.
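As a rough behavioural illustration of that parallelisation choice (a Python model, not HDL; coefficients and names are made up): the two extremes of a 4-tap FIR compute identical results, but the fully parallel form needs one multiplier per tap and produces one output per cycle, while the resource-shared form reuses a single multiplier over N cycles per output.

```python
COEFFS = [1, 3, 3, 1]  # toy 4-tap FIR

def fir_parallel(samples):
    """Fully parallel: one output per sample clock, len(COEFFS) multipliers."""
    taps = [0] * len(COEFFS)
    out = []
    for s in samples:
        taps = [s] + taps[:-1]  # shift register
        # All multiply-accumulates happen in the same cycle
        out.append(sum(c * t for c, t in zip(COEFFS, taps)))
    return out

def fir_serial(samples):
    """Resource-shared: one multiplier reused over len(COEFFS) cycles per output."""
    taps = [0] * len(COEFFS)
    out = []
    for s in samples:
        taps = [s] + taps[:-1]
        acc = 0
        for c, t in zip(COEFFS, taps):  # sequential MAC cycles
            acc += c * t
        out.append(acc)
    return out

x = [1, 0, 0, 0, 2, 0]
assert fir_parallel(x) == fir_serial(x)  # same numbers, different hardware cost
```

Everything in between (e.g. two multipliers, two cycles per output) is the design space the engineer gets to explore on an FPGA.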
Nowadays, digital filter design for e.g. high-speed camera applications (flight control, airborne systems, object recognition, 3D vector extraction) is mostly done in parallel, with careful coefficient optimisation to balance quality and precision, so that large images can be processed in real time, performing filtering, binning, non-uniformity correction and similar tasks rapidly enough with low area and power requirements.
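One simple instance of that quality/precision trade-off is coefficient quantisation: fixed-point hardware rounds each coefficient to a limited number of fractional bits, and the frequency response degrades accordingly. A small NumPy sketch (the toy filter and bit widths are assumptions for illustration):

```python
import numpy as np

def quantise(coeffs, frac_bits):
    """Round coefficients to signed fixed point with `frac_bits` fractional bits."""
    scale = 1 << frac_bits
    return np.round(np.asarray(coeffs) * scale) / scale

# A toy 15-tap low-pass (windowed sinc, cutoff fs/8) as the reference
n = np.arange(15) - 7
ideal = 0.25 * np.sinc(0.25 * n) * np.hamming(15)
ideal /= ideal.sum()

for bits in (8, 12, 16):
    q = quantise(ideal, bits)
    # Worst-case frequency-response deviation from the ideal design
    err = np.max(np.abs(np.fft.fft(q - ideal, 512)))
    print(f"{bits} fractional bits: max response error {err:.2e}")
```

Each extra fractional bit roughly halves the worst-case error, which is exactly the kind of knob you turn against DSP-slice width and achievable stopband attenuation.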
There are also many decisions to make when optimising filters in complex multi-stage constellations, such as cascaded half-band filtering, compensation filtering and combinations of pixel-processing filters that perform "false pixel reconstruction", "prebinning filtering" and "missing pixel interpolation" simultaneously. Many practical solutions differ strongly from the theoretically derived coefficients.
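For readers wondering why half-band stages are so popular in such cascades: every second coefficient of a half-band filter is (numerically) zero, so each decimate-by-2 stage needs only about half the multipliers, and stages can be chained for higher rate changes. A minimal NumPy sketch under those assumptions (function names are my own):

```python
import numpy as np

def halfband(num_taps):
    """Half-band low-pass via the windowed-sinc method: every second
    coefficient (except the centre tap) is numerically zero, so a
    hardware implementation needs roughly half the multipliers."""
    n = np.arange(num_taps) - (num_taps - 1) // 2
    h = 0.5 * np.sinc(0.5 * n) * np.hamming(num_taps)
    return h / h.sum()  # unity gain at DC

def decimate2(x, h):
    """One half-band stage: filter, then keep every second sample."""
    return np.convolve(x, h, mode="same")[::2]

h = halfband(15)
x = np.ones(64)                    # DC test signal
y = decimate2(decimate2(x, h), h)  # two cascaded stages: overall rate /4
```

In practice, as you say, the coefficients actually shipped often deviate from these textbook values once quantisation, compensation stages and the pixel-processing steps are folded in.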
More information should follow.