It is quite right that co-design will reduce the risk of delays in system design. Traditionally, the hardware was designed first, and many requests to modify it only surfaced later, during software development; co-design minimizes those late changes.
Dear KB3001, EDA history contains many examples where new technologies introduced to further automate the design process met with reticence and resistance before their adoption: RTL languages and logic synthesis, for example, faced this when they emerged in the 1990s, yet are now in routine use. For ESL and co-design, the challenge is even greater because it involves not only hardware designers but also software developers. We are not saying that software developers will replace hardware designers (or vice versa). Anyone who has worked with HLS tools knows that it takes hardware knowledge and experience to obtain efficient results. Once you know how to use it, HLS saves a lot of time and lets you focus on algorithm optimization (in collaboration with software developers) rather than on the details of state machines. That is why we say a common language and platform must sit at a higher level of abstraction than RTL, allowing us to bring together "groups of developers with complementary skills"...
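To make the state-machine point concrete, here is a minimal sketch of the kind of plain C/C++ loop an HLS tool can synthesize directly, sparing you from hand-coding the control FSM. The pragma shown is Vitis HLS syntax, an assumption on my part; other tools use their own directives, and the function itself is made up for illustration.

// Hypothetical example: a dot product written as plain C++.
// An HLS tool infers the datapath and the state machine from this
// loop; the designer never writes the FSM by hand.
int dot_product(const int a[64], const int b[64])
{
    int acc = 0;
    for (int i = 0; i < 64; ++i) {
#pragma HLS PIPELINE II=1   // Vitis HLS directive (assumption); tool-specific
        acc += a[i] * b[i];
    }
    return acc;
}

The tool then reports timing and resource estimates, and the optimization effort goes into the algorithm (loop bounds, bit widths, memory layout) rather than into hand-written control logic.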
Thanks for your input, j_b_. This isn't just musing on paper; we are doing this today with electronic system level (ESL) models in C/C++ using SystemC and TLM-2.0. Working in a common language lets system architects collaborate with software developers and hardware designers, so the specialist insight you mention, and the new perspectives they gain from each other, can drive design exploration toward an optimal design. The technology does not optimize the design (yet); it merely eases the process by retargeting the same function (within an application) to either a hardware or a software implementation.
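To give a flavour of what such a common-language model looks like, here is a minimal sketch (the module and function names are made up for illustration, and I have left out the TLM-2.0 bus-transaction layer for brevity): the same C++ function can be called from the software model or wrapped in a SystemC module for the hardware mapping.

#include <systemc.h>

// The behaviour under exploration, written once in plain C++.
// In a software mapping this is simply called from the embedded code;
// in a hardware mapping it becomes the body of the module below.
int saturate_add(int a, int b) {
    long long r = (long long)a + (long long)b;
    if (r > 32767)  return 32767;
    if (r < -32768) return -32768;
    return (int)r;
}

// Hypothetical SystemC wrapper representing the hardware mapping.
SC_MODULE(SatAdder) {
    sc_in<int>  a, b;
    sc_out<int> sum;

    void compute() { sum.write(saturate_add(a.read(), b.read())); }

    SC_CTOR(SatAdder) {
        SC_METHOD(compute);
        sensitive << a << b;
    }
};

The point is that the function body is shared, so architects, software developers, and hardware designers are all exercising the same behaviour while exploring where to map it.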
My experience has been the same. Instead of expecting one person to be an expert at everything, we need groups of developers with complementary skills who can understand each other's requirements and work well together in an agile way.
PS. High-level tools are good after the bottom-up work has been done properly.
I second the encouragement of software engineers who understand hardware and hardware engineers who understand software! All too often I have seen software talent assigned to design hardware (usually FPGAs, because of the "programming nature" of VHDL/Verilog) without understanding the hardware or the implications of their "code". Over time they learned from their mistakes, migrated toward a genuine understanding of the hardware, and with enough experience became quite adept at hardware development. The biggest bang for the buck would be to integrate the software and hardware developers more tightly, so that, as in an Agile software team, everyone is responsible for jumping in and making the next release (hardware prototype) successful.
In my opinion, that's something that sounds great on paper but might very well fail in practice. I would prefer hardware developers who can program and programmers who understand hardware. Together they'll do the proper dovetailing much better than such an abstraction monster, while using the correct tools for each side of the development. At least that's my experience over the last 25 years in this business. It's like automatic routing of PCBs with all its constraints: the theory sounds great, but human professionals still produce better results than the machine.