By Shan Bhattacharya
The requirements-driven development mantra and all that it
encompasses have been documented and discussed thoroughly for
almost two decades in the pursuit of building better and more
reliable applications. This mantra has been the undertone of
software processes, certifying authorities, and industry standards
focused on realizing requirements-driven development in its
entirety.
Despite these efforts, the majority of defects in the embedded
software space are still requirements related. One of the prime
contributors to this problem is the continuous flux in
requirements introduced by the changes in product scope. To shield
themselves from the added risks contributed by this flux,
organizations need to learn to manage change in the requirements
and subsequent implementation phases of the product life cycle.
Good requirements management practices usually involve a
well-thought-out breakdown or decomposition of requirements from system
level on down. If this decomposition is managed well, its
artifacts are configuration controlled, and the various
engineering disciplines work well together, flux can then be
handled throughout the development phases of the lifecycle.
While some create the requirements traceability matrix (RTM) on
release of the product, many errors can be avoided if the RTM is
used as the driving force at the heart of the development process.
This paper will examine how to create an RTM and maintain it
throughout the course of a project, despite the inevitable
development challenges along the way.
The need for traceability tools
Large-scale projects include millions of lines of code strewn
across multiple subsystems and interfaces. Such programs are often
developed by multiple organizations and tiers of customers with
large specification trees.
These projects usually start with an operational concept and
system-level documents describing the expected behavior and other
requirements of a system. Once these system-level requirements are
decomposed into subsystems and engineering disciplines, such as
software, they are validated and reconciled by subject matter
experts (SMEs). These logistics are usually managed with the help
of requirements management tools (e.g., DOORS), configuration
control board meetings, and change management tools (e.g., SYNERGY).
As requirements are validated and a project enters the detailed
design phase, the number of artifacts increases rapidly. At this
point, the flux through the system results in a ballooning number
of unresolved issues. A rich set of traceability tools is now
needed to drive the RTM at this level of complexity.
Requirements traceability bi-directionally tracks a requirement,
its implementation, and verification through the project’s life
cycle. As a requirement is realized in design, implemented in
code, and tested, it must be linked through each stage’s artifacts
to ensure its correct implementation.
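The bi-directional linking described above can be sketched as a small data structure. This is a minimal illustration, not any real traceability tool's API; the names (`Artifact`, `RTM`, `link`) are invented for the example.

```python
from dataclasses import dataclass

# A minimal sketch of bi-directional requirements traceability.
# All names and identifiers here are illustrative, not a real tool's API.

@dataclass
class Artifact:
    ident: str   # e.g. "SRS_0001", "modes.c", "TC_017"
    kind: str    # "requirement", "design", "code", or "test"

class RTM:
    def __init__(self):
        self.down = {}   # artifact id -> set of downstream ids
        self.up = {}     # artifact id -> set of upstream ids

    def link(self, upstream: Artifact, downstream: Artifact):
        """Record a trace link in both directions at once."""
        self.down.setdefault(upstream.ident, set()).add(downstream.ident)
        self.up.setdefault(downstream.ident, set()).add(upstream.ident)

# Trace one requirement through design, code, and test.
rtm = RTM()
req = Artifact("SRS_0001", "requirement")
design = Artifact("SDD_3.2", "design")
code = Artifact("modes.c", "code")
test = Artifact("TC_017", "test")
rtm.link(req, design)
rtm.link(design, code)
rtm.link(code, test)
```

Because every link is stored in both directions, a question asked from either end of the chain (requirement down to test, or test back up to requirement) is a simple lookup.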
Traceability tools link together the various artifacts and
activities associated with realizing a requirement. Artifact
data may reside in several forms such as text, graphical models,
code, and various forms of test data. Traceability tools should be
able to bridge these different mediums to link artifacts from the
full range of development activities.
Without traceability tools, components from different phases must
be manually identified and their data collated in a single
environment. Effective traceability tools also offer reports such
as upstream and downstream impact analysis and matrix generation
that enable users to leverage traceability information to better
understand the impacts of change.
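The upstream and downstream impact analysis mentioned above amounts to a transitive walk over the trace links. The sketch below assumes a hypothetical set of links stored as plain dictionaries; the artifact identifiers are invented for illustration.

```python
from collections import deque

# Hypothetical trace links (downstream direction): each artifact maps
# to the artifacts decomposed or implemented from it.
down = {
    "PIDS_0100": ["SRS_0001", "SRS_0002"],
    "SRS_0001":  ["modes.c"],
    "SRS_0002":  ["power.c"],
    "modes.c":   ["TC_017"],
}

# Invert the links so upstream queries work the same way.
up = {}
for src, targets in down.items():
    for t in targets:
        up.setdefault(t, []).append(src)

def impact(links, start):
    """Transitively collect every artifact reachable from `start`."""
    seen, queue = set(), deque([start])
    while queue:
        for nxt in links.get(queue.popleft(), []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

# Downstream impact of a system requirement; upstream trace of a test.
print(sorted(impact(down, "PIDS_0100")))
print(sorted(impact(up, "TC_017")))
```

The same traversal, run in the `up` direction, is what lets an engineer working at the code level find the system requirements a change would touch.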
Managing an RTM
A traceability matrix is only as strong as the artifacts it
links. The artifacts and activities tied to an RTM need to be
configuration controlled. For instance, as modes and states
requirements are being worked out, they are coordinated with power
management, security systems, and the software that implements
them. These efforts may include artifacts from PowerPoint slides
and white papers to state diagrams and code repositories. All of
these artifacts are usually in their own flux, but their snapshots
must be version controlled to present a complete picture of the
system as it exists at any given time. Version control plays a
vital part of a comprehensive change management workflow that
manages the content driving an RTM.
When delivering a software-based product, the software, its
requirements, design, and verification artifacts are usually
delivered with an RTM. Often, just prior to delivery, projects
find themselves scrambling to assemble these artifacts and the RTM
at the last second. Inconsistencies, which may be latent defects,
are likely to be found during final traceability exercises.
However, at this point, they are expensive to correct and result
in unexpected delays and cost overruns.
If the RTM is maintained throughout the entire development cycle,
however, the delivery process is far less painful. Traceability
issues are worked out as they arise throughout the lifecycle, and
artifacts are typically thoroughly vetted long before delivery.
Benefits of Traceability
To fully appreciate the benefits of good traceability practices,
it is necessary to understand the complexities introduced by the
latter phases of development. As requirements are validated and
the project enters the detailed design phase, the number of
artifacts grows rapidly. Implementation and verification phases
add code and test data to the RTM. With the number of engineers
reaching full staffing complement, and the full complexity of the
product coming into view, risks and inefficiencies in the project
rapidly translate to growing costs and schedule slips each time a
defect is identified. Budgetary changes, scope changes, and even
healthy innovation further compound the flux in the system. To
maintain the integrity of the RTM, traceability must still be
maintained from design artifacts to code, test cases of various
forms, and the resulting test data.
As insurmountable as this number of variables may seem,
maintaining discipline in traceability throughout the system with
traceability tools greatly reduces the defects these forms of flux
would otherwise introduce into the development process. In the three scenarios below,
traceability-related analysis is used to reconcile issues with
upstream requirements, assess costs for newly added features, and
better monitor project progress.
In the first scenario, an engineer implementing behavior
described within a software requirements specification (SRS)
may find inconsistencies or errors in the requirements definition.
If traceability has been maintained, the engineer can run upstream
analysis to find how the error impacts system-level
requirements (see Figure 1). They are then able to communicate
with the various owners of these requirements and design artifacts
to better describe the issue for a successful resolution.
Without this upstream traceability available, an engineer may
find a technical solution that requires some requirement or design
modifications, but be unable to communicate these changes upstream
successfully. Such limitations can easily result in defects
embedded in the system that will likely not be noticed until
verification.
Figure 1: Here you can see the
upstream impact of requirement SRS_0001. The requirements
highlighted in the PIDS (Prime Item Development Specification) and
the ICD (Interface Control Document) are system-level requirements
upstream in the specification tree from the SRS (Software
Requirements Specification).
When scope creep enters a project and its RTM, it is necessary to
assess the impact of that creep very quickly with respect to cost,
staffing, and other logistics. Figure 2 illustrates how
requirements changes affect scope in the form of newly
added, modified, and deleted requirements. A well-maintained RTM
and some downstream impact analysis quickly measure the impact to
the project.
By using the manpower-estimation features common in good requirements
traceability tools, management teams can quickly compute the cost
of implementation by measuring the number of lines of code
affected. These tools can be configured to factor in the impact of
source lines of code changes as well as associated design and
verification costs. Project leaders can accurately assess cost and
negotiate with customers using quantitative evidence. Project
planning concerns such as staffing projections, additional features, and
other logistics can then be addressed much more accurately and
justified to management.
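The estimation approach described above can be sketched in a few lines. The productivity figure and the design and verification multipliers below are illustrative placeholders, not values taken from any real estimation tool; a project would calibrate them from its own history.

```python
# A hedged sketch of SLOC-based effort estimation. All constants
# here are assumed, illustrative values - not calibrated data.

AFFECTED_SLOC = {"modes.c": 120, "power.c": 45}   # from downstream analysis
HOURS_PER_SLOC = 0.5        # assumed implementation productivity
DESIGN_FACTOR = 0.4         # design effort as a fraction of coding effort
VERIFICATION_FACTOR = 1.2   # test effort often exceeds coding effort

def estimate_hours(sloc_by_file):
    """Scale coding hours by the assumed design and verification factors."""
    coding = sum(sloc_by_file.values()) * HOURS_PER_SLOC
    return coding * (1 + DESIGN_FACTOR + VERIFICATION_FACTOR)

total = estimate_hours(AFFECTED_SLOC)
print(f"Estimated effort for this change: {total:.1f} hours")
```

The value of wiring this to the RTM is that `AFFECTED_SLOC` comes directly from downstream impact analysis rather than from a hallway guess.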
Figure 2: Notice how the
“Requirement” column on the left shows system-level requirements
(PROJ_SYS_0012). The “Downstream” column shows that software
requirements PROJ_SRS_0022 and PROJ_SRS_0032 are decomposed from
PROJ_SYS_0012. The “Mapped Files” column on the right offers
associated source files and prototypes as they are mapped to the
software requirements. Hence a modification of PROJ_SYS_0012
impacts the files and prototypes listed.
Without this level of analysis, project leadership often resorts
to quickly gathered estimates from SMEs that are prone to great
error. Even if the estimates are reasonably accurate, management
has no quantitative evidence backing them up and is not
as well prepared during negotiations. Managing scope-creep-induced
flux is a crucial piece of ensuring timely completion and
delivery. Inexperienced managers and poor estimation are primary
causes of project overruns. With up-to-date traceability in place and
downstream impact analysis, the guesswork can be taken out of
this nebulous and risky activity.
As verification activities start within a project, it is
essential to monitor progress in order to take appropriate
corrective actions. Grasping which traceability activities are left
to complete and which verification activities have passed or
failed is a key step toward successful project completion
and timely delivery. In Figure 3, it is evident which requirements
have yet to be mapped and which have been assigned
verification activities.
Verification can also be monitored as it progresses, so
that project status can be measured against the schedule. If timely
completion is in jeopardy, it can be detected and measured early,
so staffing increases can be made with precision. This level of
transparency takes the sloppiness out of monitoring project
progress and motivates engineering team members to update the RTM
as each task is completed.
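A status roll-up of this kind reduces to counting link and verification states across the RTM. The sketch below assumes a hypothetical per-requirement status record; the field names and identifiers are invented for the example.

```python
# A small sketch of RTM-driven progress monitoring. The status
# schema below is assumed for illustration, not any tool's format.

requirements = {
    "SRS_0001": {"mapped": True,  "verified": "pass"},
    "SRS_0002": {"mapped": True,  "verified": "fail"},
    "SRS_0003": {"mapped": True,  "verified": None},   # test not yet run
    "SRS_0004": {"mapped": False, "verified": None},   # not yet traced
}

def progress_report(reqs):
    """Roll individual requirement states up into project-level metrics."""
    total = len(reqs)
    unmapped = [r for r, s in reqs.items() if not s["mapped"]]
    passed = sum(1 for s in reqs.values() if s["verified"] == "pass")
    failed = sum(1 for s in reqs.values() if s["verified"] == "fail")
    return {
        "total": total,
        "unmapped": unmapped,       # traceability work remaining
        "pass_rate": passed / total,
        "failed": failed,           # candidates for corrective action
    }

print(progress_report(requirements))
```

Run periodically, the same roll-up yields the trend line that lets a team detect a schedule slip early rather than at delivery.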
Figure 3: The TBreq diagram above
shows different stages of traceability and verification activities,
from mapping requirements to code and assigning verification
activities to performing verification tasks.
Without such data, management team members are prone to wandering
from office to office asking for progress information on a weekly
basis. The data they get from each discussion will be subjective,
tempered by judgment calls. This ad hoc form of data collection
results in inherently flawed progress reports, which accumulate to
give poor readings of overall project progress.
With a rich RTM in place and traceability information available
at every scope, all levels of management, the customer, and
engineers are empowered with a clear picture of the project at all
times. When transparency and traceability are made available to
all, the path to the finish line becomes more visible to every
team member. Confusion is kept to a minimum and technical
solutions can be reached more efficiently. As this mode of
operation becomes the norm, buy-in and belief in a
project driven by an RTM start to grow and hopefully become part
of the culture of both the project and the organization.
Tools and the future
The benefits of system development based on requirements
management and traceability are becoming more and more evident in
software-based industries. However, solutions to meet these needs
have often addressed portions of the traceability problem, leaving
design, implementation, and verification out of the picture.
Artifacts from these stages are usually assembled in the late
stages of the project and are associated with requirements as
projects near their conclusions. As a result, the RTM often lags
behind the development process instead of driving it.
As projects grow in complexity and industry processes and
certifications demand greater levels of traceability and
transparency, organizations must establish the tool chains and
processes necessary as early in the process as possible. This is
particularly crucial in the safety-, security-, and
mission-critical spaces where verification activities often make
up a large part of the overall effort.
With this problem space well understood, technology vendors are
now starting to provide solutions that offer traceability into the
later stages of the development process. Some of these
technologies, such as application lifecycle management tools, also
implement the development process, while integrating the details
with the RTM. This next generation of tools may provide the
complete solutions so desperately needed by system integrators
developing mission- and safety-critical software.
In a world with ever-growing software complexity, embracing such
solutions and letting the requirements drive the project will
contribute to more innovation, while delivering more reliable
products within cost and on time.
Shan Bhattacharya is a field application engineer for LDRA Ltd. He graduated from Cameron
University and began his career in factory automation and
robotics. He continued his career with various defense contractors,
including Lockheed Martin, where he served as a lead engineer and
finished his time as a deputy IPT lead. Shan has been with LDRA
since 2007 and provides consultation for clients in various
industries focusing on requirements management, software
certifications, and development best practices.