The comments about "hardware guys" and "software guys" are certainly on point. One additional impediment is that they usually think entirely differently. Aside from that, trying "abstraction" with hardware is a bit like walking on a cloud: not very solid. Once again, a lot of the problems could be solved by a much better description of what the ultimate product should be. Of course, after the initial fuzzy-ball description, an absolutely complete, detailed description of each and every function is part of the package. At that point the hardware and software guys should be able to use the same words; unfortunately, they don't even speak similar languages.
These days most of the functionality normally comes from software, not hardware. Increasingly, the hardware is just life support for software.
I don't know how many times I've been impacted because a HW guy decided to change something he felt was aesthetically more pleasing, without understanding that the change was going to cost weeks or more of software effort.
NB, however, that the opposite seldom happens. How often does a SW guy change something that forces a HW guy to make a change? The problem is that management see HW as stuff that can't change, while software is seen as trivial to change.
Bottom line is that the HW and SW need to be designed together. Nobody changes anything without first checking the implications.
There is a very large human factor involved. The HW team is focused on getting the documentation out to the vendor and making sure what comes back matches. SW team input is likely to complicate their design, adding delay and risk, so HW guys see SW as only a problem and not part of the solution. Development tools that abstract the implementation technology might help, but then the engineers need to be able to handle the abstraction, and there are not many of either.
Any organisation where you have a "hardware team" and a "software team" is screwed. You need hardware and software skills working together in the same team. Without that you will always end up with hardware people tweaking designs without understanding the software implications, or vice versa.
"hat is a little scary when software is usually behind the hardware development and is on the critical path". That happens because people don't do multi-target development. Build PC-based simulators where you can build and test most of the code. That way you can get a huge head start. PC-based development is often a lot faster with better tools and near infinite resources when compared to targets.
Design tools often fail because they're sold as silver bullets and underdeliver or force some pre-defined architecture and fail to fit in with reality. In truth a word processor can often do the job as well. Ultimately co-design is a people problem - not a technical one - and tools can just make the interactions more constrained and complex.
Steady on, Brian. Early Felix/VCC at Cadence Alta did not assume "a single thread, a single function, a single processor, a single bus". At least, we were looking at heterogeneous 2-processor systems such as late 90's cell phones with a control processor (eg. ARM) running user interface and the primitive "Apps" of that day (eg. an address book), and a DSP and hardware blocks doing voice encoding/decoding and handling baseband. Granted, not so complex and with much easier partitioning decisions than today, but still, not quite as simple as you make out in your note.
I think the reasons Felix/VCC failed are more complex than your summary. I agree with your point that systems of the late 90's were not complex enough to require a lot of tools to "muddle through", and I think most designers "muddled through". It is also a fact that Felix/VCC preceded the standardisation of SystemC and its de facto adoption by the industry - by a long time. This meant that users had to build models in a proprietary format, and the history of HDLs tells us that is viewed as undesirable and is a real barrier to adoption. Finally, the lack of IP models in any format, standard or proprietary, meant that any use of a codesign tool required an a priori somewhat exhausting modelling effort - leading to a real chicken/egg adoption dilemma. But it was not for lack of attention to SW that Felix/VCC failed.
Now, of course, the situation is quite different. There are standard modelling approaches, a standard integration language (SystemC, which is quite capable of integrating models written in C/C++), more IP models that can be integrated, and the systems are way too complex, as you cite, to just "muddle through". Especially because they are software dominated with many processors and hardware blocks. In 2011, "Muddling through will not do".
dyson_, thanks for flipping through my blog!
I'll admit to leaning more toward "eye-opening" than "saddening" wrt the org barriers in co-design. Recognizing that the organizational barriers actually exist is important. Pair that with the will to address them, and we give teams an opportunity to bring practical solutions to co-design that involve the technology, of course, but also team-based development frameworks. I reckon that's an exciting step forward, even if it has taken a while to figure out :)
Actually, I have not yet been involved with a project whose livelihood depended on software being developed for not-yet-existent silicon processing platforms. The use of FPGA/CPLD/uC/DSP silicon, merged onto a single die for SoC functionality, would seem to call for a breadboard using the known-good functional blocks that were tested in silicon prior to merging. I personally would have some issues with a vendor providing functional blocks for which no demonstration and evaluation hardware was available.
I agree that in many organizations, the hardware and firmware groups have worked in fairly close cooperation. What you seem to be describing (sorry if I get it wrong) is a hardware/hardware partitioning process. You also seem to be describing a board-level system where things such as breadboards are possible. What we are seeing in SoCs is a growing amount of hardware/software partitioning, where until recently there was nothing on which to run software until the first chips came back from the fab. With virtual prototypes now operating fast enough to actually run software on a model of the hardware, it becomes possible for the two groups to operate more concurrently than they have in the past. Also, the cost of manufacturing chips means that one chip probably has to be usable in a whole family of products, again pushing more of the actual functionality into software.
It is always possible that you are one of the lucky few where the application software team and the hardware and firmware teams actually all talk and cooperatively agree on structures and partitions and each provides the necessary tools to the other such that they can get their jobs done at the same time. Most people are still waiting for that to become reality.
So now I am lost. All of the effective co-design projects I have seen and been involved in were of a different nature. These were distributed processing, distributed functionality projects where the co-design was by separation of actual functional blocks of hardware and firmware that talked to other such functional blocks. In this approach, the high level functionality could be allocated and several teams could attack their respective functional block to achieve schedule and risk goals.
In my experience, trying to develop a large and complex project by sending software developers off to do their work in one direction and hardware developers in another never works, because the hardware developers find they need to change the structure, and the software developers find insurmountable or hobbling hardware limitations that are best addressed through cooperation.
A real co-design is where a truly clear definition of a block of functionality allows a small team of capable engineers to work together as a group to get a good and fast and inexpensive result. No special tools are needed for this, but evaluation or development boards for processing resources are very helpful, and breadboards allow the early detection of issues that change design approaches in hardware and firmware.