In a Common Criteria-based scenario, a product’s security requirements, both functional and assurance, are specified in an implementation-independent manner in a document known as a protection profile. A vendor developing a specific product can have it evaluated and certified against the relevant protection profile.
A higher EAL by itself does not necessarily mean a more secure product, since the interpretation of assurance requirements is based on both the associated security functional requirements and the intended deployment environment as defined in the protection profile. A product at a higher EAL that assumes installation on a standalone platform may be less secure, if used on a network, than a lower-EAL product designed for use on such a network.
EAL6 and EAL7 require the application of formal (i.e., mathematically based) methods; among other things, the developer must present a formal model of select security policies. The need for formal methods, and the increased scope, depth, and rigor required at these high EALs, are best satisfied by using languages and tools that have been designed for this purpose.
Security through language subsets
Programming languages are like cars: Power and performance can attract attention and create industry buzz, but they can also be dangerous when misused. Expressiveness through object orientation, concurrency support, exceptions, generic templates, and other modern features may make code easier to develop and maintain, but their generality comes at a price. Each raises issues that complicate the verification of high-integrity systems (i.e., systems that are safety critical, require high levels of security, or both). Problems stem from the complexity of the analysis required to demonstrate various program properties in the presence of the feature, and from possible “traps and pitfalls” in the feature’s usage that can cause vulnerabilities. And semantics optimized for run-time efficiency (such as the absence of array-bounds checking in C and C++) can lead to undefined behavior.
Language-related vulnerabilities have been exploited to defeat a system’s security. One of the best-known examples is buffer overflow (writing past the bounds of an array), which can overwrite a memory address and thereby hijack an application into executing malevolent code. The Common Weakness Enumeration2 describes a large number of such vulnerabilities, as well as others (such as SQL injection and hard-coded passwords) that are at a higher semantic level and are not related to the choice of programming language. An ISO technical report3 analyzes some of the most frequently occurring language-related vulnerabilities and explains how to mitigate them, with language-specific annexes providing additional guidance.
High-integrity software must, of course, be reliable, but more generally it must also be analyzable. Demonstrating that a system complies with its operative security or safety certification standard entails, among other things, examining the source code to show that the program’s time and space requirements are met and that its behavior is well defined: for example, that there are no references to uninitialized variables. The choice of programming language affects the ease or difficulty of performing this analysis.