Despite all the benefits that model-based software engineering technology can bring, there are many risks as well: How well will your culture adapt to the new approach? How can you integrate legacy and third-party components along with new development? Will it even work? Should all engineers be "converted" at once, or is there a way to stagger adoption? How can the overlap of learning curves be minimized?
But there are a number of practical steps that can mitigate these risks in the context of code development and generation using the Unified Modeling Language (UML).
One of the first tasks is to determine the right modeling approach. Within UML, there are four basic design approaches to consider, depending on the ultimate goal of your development: implementation coding using ad hoc architectural diagramming; implementation coding using elaborative design modeling; model translation to implementation using analysis modeling; and implementation diagramming to perform "round-trip" coding.
It is possible to use the UML as an ad hoc architectural diagramming notation, creating a small number of graphic aids to explain high-level aspects of your software architecture. This approach involves very little effort relative to the creation of code, and the resulting "back-of-the-napkin" diagrams are generally insensitive to most code changes.
In the event you do change your architecture, having just a few high-level diagrams makes keeping them up to date a small task. However, most organizations looking to the UML for substantial and pervasive benefits cannot find meaningful gains with such a parsimonious investment. Ad hoc architectural diagramming generally touches only on high-level system decomposition, leaving behind many possible benefits of UML.
A step beyond ad hoc architectural diagramming is to apply the UML in a high-level "design" phase, and then continue using these design diagrams as input to the coding phase, where developers elaborate on the design, interpreting the design diagrams as they hand-code the implementation. Diagrams are created that express architectural and partitioning strategies, and outline problem-space (feature) logic at a high level.
Very often there is a hazy boundary where design concepts end and coding concepts begin. In early phases of a project, the models can feel quite comforting, outlining solution strategies for a large fraction of the overall problem.
Relative to ad hoc architectural diagramming, there is much greater detail in high-level "design" models, so greater effort is required to create and maintain them. Due to the gray zone at the code boundary, it is difficult to define precisely what should be modeled, what should be coded, and how these worlds interface or overlap.
Due to the extremely high cost of keeping the design diagrams in sync with the ever-changing code, the development team often leaves them behind. With no upkeep, or with partial, unreliable manual diagram maintenance efforts, the models become a liability: increasingly inaccurate and misleading.
Another type of modeling is the use of analysis models. An analysis model is a solution to a problem expressed in terms of the problem itself: the level of abstraction of the model matches the level of abstraction of the domain (component) it addresses. Analysis models are independent of implementation; a separate, disjoint design maps these models to implementation via a translation process.
In this approach, the generated implementation code should not be changed; only the models or the translation mappings are changed. This process is also known as forward engineering, referring to the forward flow from analysis models to implementation code.
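The forward flow can be sketched in miniature: an analysis model captured as plain data is turned into implementation code by translation rules that live entirely outside the model. The model contents, the `translate_class` mapping, and all names here are illustrative assumptions, not the output of any real UML tool.

```python
# A hypothetical analysis model: problem-space classes captured as data,
# with no implementation detail. (Illustrative only.)
ANALYSIS_MODEL = {
    "classes": [
        {"name": "Valve", "attributes": ["state", "flow_rate"]},
        {"name": "Sensor", "attributes": ["reading"]},
    ]
}

def translate_class(cls: dict) -> str:
    """Translation mapping: one analysis-model class -> one Python class.
    To change the generated code, change this mapping, not its output."""
    lines = [f"class {cls['name']}:"]
    params = ", ".join(cls["attributes"])
    lines.append(f"    def __init__(self, {params}):")
    for attr in cls["attributes"]:
        lines.append(f"        self.{attr} = {attr}")
    return "\n".join(lines)

# The one-way, repeatable model-to-code step.
generated = "\n\n".join(translate_class(c) for c in ANALYSIS_MODEL["classes"])
print(generated)
```

The key property is that `generated` is disposable: rerunning the translation after a model change reproduces it in full, so there is never a reason to edit it by hand.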
Analysis modeling offers some key benefits: it maintains an appropriate level of abstraction, the models are verifiable through execution, and they are free from specific implementation dependencies. Components developed this way are also simpler, easier to reuse, and more flexible in the face of changing requirements.
Using implementation diagramming for "round-trip" coding, by contrast, requires capturing all the implementation details that intrude on the simplicity of the problem domain, compounding its complexity with that of the implementation domains.
For implementation diagramming to be practical, there needs to be a combination of process and tooling that eliminates the need for dual edits to UML diagrams and actual code.
On systems with high complexity, very often there are domains (components) that are appropriate for analysis modeling, others that are better suited for implementation diagramming, and even some that are best simply coded. The appropriate application of modeling, and the flexibility to choose approaches can be key to success.
The creation of executable elements from UML models is not a difficult hurdle, whether you are using analysis modeling or implementation diagramming. Getting the "feature logic" to run properly is important, but generally straightforward. The difficult hurdle for high-performance and embedded systems is achieving the required run-time space and time performance.
Specific techniques for engineering high-performance systems are advanced topics of considerable scope unto themselves. However, there are some basic themes that carry back to this level and offer the new UML adopter some simple guidance.
If the previous version of your system faced similar performance challenges, then the architecture and strategies applied there can provide a foundation to work from, in addition to the general strategies.
Very often the basic architecture of your system will dictate how it performs. Your modeling approach and tooling must support your control over the fundamental makeup of your system. A modeling strategy that is topology-independent allows flexible repartitioning, and support for both synchronous function/method calls and event-driven behavior allows you to achieve the appropriate mix. Look for a modeling approach and supporting tools that afford the project the architectural control required.
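The synchronous/event-driven mix described above can be sketched as a single component that exposes both paths: a direct call for callers that need an immediate answer, and a queued handler for decoupled, event-driven delivery. The class names and the single-queue dispatcher are illustrative assumptions, not a prescription from any particular UML tool.

```python
from collections import deque

class EventQueue:
    """Asynchronous path: events are queued now and dispatched later."""
    def __init__(self):
        self._queue = deque()

    def post(self, handler, payload):
        self._queue.append((handler, payload))

    def dispatch_all(self):
        while self._queue:
            handler, payload = self._queue.popleft()
            handler(payload)

class TemperatureMonitor:
    def __init__(self, events: EventQueue):
        self._events = events
        self.alarms = []

    def is_over_limit(self, sensor_value: float) -> bool:
        """Synchronous path: the caller needs the answer immediately."""
        return sensor_value > 100.0

    def on_sample(self, sensor_value: float):
        """Event handler: runs whenever the queue is dispatched."""
        if self.is_over_limit(sensor_value):
            self.alarms.append(sensor_value)

events = EventQueue()
monitor = TemperatureMonitor(events)

assert monitor.is_over_limit(120.0) is True   # synchronous call
events.post(monitor.on_sample, 150.0)          # event-driven path
events.dispatch_all()
```

Because the two paths share the same underlying logic, the architectural decision of which callers go through the queue can be revisited without rewriting the component.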
Very often, a proprietary optimization can be critical for performance. Capturing this as a translation pattern can support its use as an alternative as needed during code generation. Sometimes a step towards required performance is as simple as the replacement of a general purpose mechanism with a simpler and more direct construct. In any case, having control over the mechanical details of generated implementation code can be critical for achieving proper system performance.
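One way to picture "capturing an optimization as a translation pattern" is a generator that can emit the same model-level notification either through a general-purpose observer mechanism or as a direct call to a single known receiver. Everything here (the pattern strings, `generate_notify`, the `overheat` event) is a hypothetical sketch, not the mechanism of any real code generator.

```python
# General-purpose pattern: iterate over registered observers.
GENERAL_PATTERN = """\
def notify_{event}(observers, payload):
    for callback in observers["{event}"]:
        callback(payload)
"""

# Optimized pattern: the topology is known, so call the receiver directly.
DIRECT_PATTERN = """\
def notify_{event}(payload):
    handle_{event}(payload)  # single known receiver: skip the dispatch loop
"""

def generate_notify(event: str, optimized: bool) -> str:
    """Select the translation pattern at code-generation time.
    The model is unchanged; only the mapping to code differs."""
    pattern = DIRECT_PATTERN if optimized else GENERAL_PATTERN
    return pattern.format(event=event)

print(generate_notify("overheat", optimized=True))
print(generate_notify("overheat", optimized=False))
```

The point is that the optimization lives in the translation, so it can be switched on per component, or retired later, without touching the models.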
There are times when the proper form of expression of a component is simply the implementation code. General purpose UML models may complicate and obfuscate something that can be simply expressed in code. The project needs the ability to decide which components are modeled and which are coded by hand.
The path from models to code is often frozen in the form of a vendor-provided code generator program. If project-specific architecture or implementation strategies are necessary, a model translation approach is needed that affords complete control over generated code. A template-based approach can provide complete control to the project, supporting any architecture and even allowing implementation language changes.
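A minimal sketch of that template-based control, assuming the project owns the templates: the same hypothetical model element is emitted in two target languages simply by swapping templates. The element and both templates are invented for illustration.

```python
from string import Template

# Project-owned templates: changing the generated architecture, or even
# the target language, means editing a template, not the generator.
PY_TEMPLATE = Template("class $name:\n    MAX = $max\n")
C_TEMPLATE = Template("#define ${name}_MAX $max\nstruct $name { int count; };\n")

# One model element, captured as plain data. (Illustrative only.)
element = {"name": "Buffer", "max": 64}

python_code = PY_TEMPLATE.substitute(element)
c_code = C_TEMPLATE.substitute(element)
print(python_code)
print(c_code)
```

A real generator would walk the whole model and manage files and includes, but the leverage is the same: the project, not the vendor, decides what the output looks like.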
Another important consideration is the integration of modeled and non-modeled components, which is very often the key to the method's effectiveness. As a modeling project goes to a second release, do you continue maintaining legacy code elements?
Sometimes these legacy elements offer a stable foundation to work forward from, but sometimes you end up carrying forward diseased components, propagating legacy problems.
You chose your model-based software engineering process because it is the best way to understand and deliver most new components. Likewise, it is very often less expensive to model replacement components for an existing mass of code requiring substantial repairs and extensions than it is to continue code-hacking at the mass itself.
However, there are times when it is appropriate to avoid modeling. First, when the available code with the required capabilities is complete, maintainable, properly packaged, and validated, modeling is not necessary. Second, modeling is not necessary when the off-the-shelf components have all of the required capabilities in the appropriate form.
Also, modeling is not recommended for systems that have dedicated and tailored development environments, such as GUI building or parsing, or when the domains defined cannot meet the performance requirements through an automated mapping of models.
Many organizations deny the true cost of maintaining legacy components, and tend to sacrifice too much on the altar of "existing code". The best balance here is difficult to achieve technically and culturally. Step carefully and stick to objective criteria.