With recent quality concerns, the automotive industry has started looking seriously at ways to improve software development. The increased use of electronic systems that affect vehicle safety has driven automotive companies to standards such as ISO/DIS 26262 to help them address the specific needs of electrical, electronic, and programmable electronic (E/E/PE) vehicle systems.
The ISO/DIS 26262 standard provides a risk-based approach for determining Automotive Safety Integrity Levels (ASILs). There are four ASILs (A through D) that specify the safety measures necessary to avoid unreasonable residual risk, with D representing the most stringent level.
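As a rough illustration of how the classification works, an ASIL is assigned from the severity (S1-S3), exposure (E1-E4), and controllability (C1-C3) classes of a hazardous event. The Python sketch below uses a sum-of-class-indices shortcut that reproduces the standard's lookup table; it is a simplification for illustration only, and the standard's own hazard-analysis tables remain authoritative.

    def determine_asil(severity, exposure, controllability):
        # severity in 1..3 (S1-S3), exposure in 1..4 (E1-E4),
        # controllability in 1..3 (C1-C3)
        if not (1 <= severity <= 3 and 1 <= exposure <= 4
                and 1 <= controllability <= 3):
            raise ValueError("class index out of range")
        total = severity + exposure + controllability
        # Sums of 7..10 map to ASIL A..D; anything lower is QM
        # (quality management, meaning no ASIL-specific measures apply).
        return {7: "ASIL A", 8: "ASIL B", 9: "ASIL C", 10: "ASIL D"}.get(total, "QM")

    # The most severe, most exposed, least controllable hazard:
    print(determine_asil(3, 4, 3))  # -> ASIL D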
What ISO/DIS 26262 (Part 4) brings to the table is an outline of the practice of allocating technical safety requirements in the system design. The standard mandates how that design is to be developed and specifies how to derive an item integration and testing plan, and subsequently the tests themselves.
In general, automotive companies are seeking to evolve and improve, whether to reduce overhead or achieve a demonstrable level of quality. As companies begin to incorporate ISO/DIS 26262, the easiest way to prepare a plan for business evolution is by gap analysis. The methodology starts by gathering data, then analyzing them to gauge the difference between where the business is currently and where it wants to be. Gap analysis examines operating processes and generated artifacts, typically employing a third party for the assessment. The outcome will be notes and findings upon which the company or individual project principals may act.
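As a minimal sketch of the analysis step, assuming process maturity is scored per lifecycle phase (the phase names and scores below are hypothetical, not taken from the standard), the gaps between current and target state can be ranked to show where resources should be focused:

    current = {"requirements": 3, "design": 4, "coding": 4, "test": 2, "traceability": 1}
    target  = {"requirements": 4, "design": 4, "coding": 4, "test": 4, "traceability": 4}

    # Rank phases by the distance still to cover; the biggest gaps are
    # where improvement effort should go first.
    gaps = {phase: target[phase] - current[phase] for phase in target}
    for phase, gap in sorted(gaps.items(), key=lambda kv: kv[1], reverse=True):
        if gap > 0:
            print(f"{phase}: gap of {gap}")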
Companies involved in systems and software development for the automotive industry are now joining counterparts in industries such as aerospace and railroads in facing compliance with a demanding standard. The need for such compliance has mandated business evolution in which processes and project plans are documented, requirements captured, implementation and verification carried out with respect to the requirements, and all artifacts fully controlled in a configuration management system.
Using gap analysis, companies have an established method that isolates the areas in which they need to improve with respect to a standard such as ISO/DIS 26262. The results from the gap analysis allow the company to focus resources correctly and efficiently to achieve that improvement.
System shortfalls
When companies undergo gap analysis, the results reveal the level of maturity within each phase of the software development lifecycle and the amount of tool investment at each phase. Even where each individual phase is well controlled and adequately tooled, the greatest shortfall typically stems from a failure to establish traceability between the phases, so that requirements link directly to the design, code, test, and verification stages of development. Gap analysis exposes exactly this shortcoming.
To compound the issue, as noted in a recent Forrester report, "Analyzing the interdependencies that software designs have with other cross-functional views—like linking a camera’s zoom software with the electrical motor, the electronics processor, and the mechanical lens and bearings—is often a late-stage process conducted only after the software is uploaded to the first physical prototype." So, not only is traceability lacking between the various stages of development, but it is lacking across engineering disciplines—a factor which can only be exacerbated by the distributed nature of automotive development teams.
For most commercially developed software, the construction and maintenance of requirements traceability matrices is treated as a low-priority task and carried out via manual methods such as Excel spreadsheets. Such approaches require continuous human interaction, interpretation of how traceability should be applied, and manual effort to make any updates.
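A minimal sketch of such a spreadsheet-style matrix, assuming a hypothetical CSV export with a requirement_id column, shows why it demands constant manual upkeep: requirements that never acquire a row simply vanish from view until a human notices.

    import csv

    def untraced_requirements(rtm_csv, all_requirements):
        # Collect every requirement that has at least one trace row
        # in the exported matrix.
        traced = set()
        with open(rtm_csv, newline="") as f:
            for row in csv.DictReader(f):
                traced.add(row["requirement_id"])
        # Whatever is left has silently dropped out of the matrix.
        return set(all_requirements) - traced

    # e.g. untraced_requirements("rtm.csv", {"REQ-1", "REQ-2", "REQ-3"})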
With a strong correlation between requirements degradation and software defects, companies are becoming more focused on mitigating this risk through rigorous requirements and traceability management. And the cost of managing requirements manually is creating mounting pressure to find better ways to comply with rigorous standards without escalating costs: traditional manual approaches to traceability, verification, and certification account for 50-70% of the overall development budget for a project developed to the highest safety levels. Modern requirements engineering and traceability tools represent a way to maintain high-quality software while reducing costs.
Requirements traceability
Requirements traceability is widely accepted as a development best practice to ensure that all requirements are implemented and that all development artifacts can be traced back to one or more requirements. A Requirements Traceability Matrix (RTM) is also a key deliverable within many development standards.
Sidebar: Maintain bidirectional traceability of requirements
The intent of this specific practice is to maintain the bidirectional traceability of requirements for each level of product decomposition. When the requirements are managed well, traceability can be established from the source requirement to its lower level requirements, and from the lower level requirements back to their source. Such bidirectional traceability helps determine that all source requirements have been completely addressed and that all lower level requirements can be traced to a valid source. Requirements traceability can also cover the relationships to other entities such as intermediate and final work products, changes in design documentation, and test plans.
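A minimal sketch of bidirectional traceability, assuming trace links are recorded as simple (source, derived artifact) pairs with hypothetical identifiers, might look like this:

    from collections import defaultdict

    links = [
        ("SYS-REQ-1", "SW-REQ-1"),  # system requirement -> software requirement
        ("SW-REQ-1", "DESIGN-1"),   # software requirement -> design element
        ("DESIGN-1", "CODE-1"),     # design element -> implementation
        ("CODE-1", "TEST-1"),       # implementation -> test
    ]

    forward = defaultdict(set)   # source -> lower-level artifacts
    backward = defaultdict(set)  # artifact -> its source(s)
    for src, dst in links:
        forward[src].add(dst)
        backward[dst].add(src)

    # Forward check: every source requirement should trace downward.
    print(bool(forward["SYS-REQ-1"]))  # True: the requirement is addressed

    # Backward check: every artifact should trace to a valid source.
    print(bool(backward["TEST-1"]))    # True: the test has a pedigree

The same two checks, run over the full link set, answer the sidebar's two questions directly: whether every source requirement is completely addressed, and whether every lower-level artifact traces to a valid source.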
Despite good intentions, many projects fall into a pattern of disjointed software development in which requirements, design, implementation, and testing artifacts are produced by isolated development phases. Such isolation results in tenuous links between each phase, the team performing it, and the overall RTM.
Unfortunately, these situations can just as easily occur on projects using state-of-the-art requirements management tools, modeling tools, integrated development environments, and testing tools. Typically, this occurs because many requirements management tools use a centralized, database-style architecture and application model. Such implementations offer plenty of functionality to encourage good quality and good management in the requirements domain, yet little to aid the downstream effort where projects are designed, implemented, and tested.
The traditional view of software development shows each phase flowing into the next, perhaps with feedback to earlier phases, and a surrounding framework of configuration management and processes (e.g., Agile, RUP). Traceability is assumed to be part of the relationships between phases; however, the mechanism by which trace links are recorded is seldom stated.
The reality is that, while each individual phase may be conducted efficiently thanks to investment in up-to-date tool technology, these tools are unlikely to contribute directly to the RTM. As a result, the RTM becomes progressively less well maintained over the duration of a project and is typically completed as a "rush job." The net result is absent or superficial cross-checking between requirements and implementation, and consequent inadequacies in the resulting system.
In truth, the RTM sits at the heart of any project (see Figure 1 below). Whether or not the links are physically recorded and managed, they still exist. For example, a developer creates a link simply by reading a design specification and using that to drive the implementation.
Figure 1: The RTM sits at the heart of the project, defining and describing the interaction between the design, code, test, and verification stages of development