Quick. Which of these events really happened:
a) Computer worm crashes safety system in Ohio nuclear plant.
b) Virus halts train service in 23 states.
c) Young recluse cracks computers that control California dams.
d) Hacker uses laptop to release 260,000 gallons of raw sewage.
The answer, sad to say, is all of the above. These attacks, and
thousands like them, demonstrate that building a secure perimeter
around our computer systems is no longer enough. Firewalls, intrusion
detection software, and anti-virus programs are all important, but no
matter how robust a perimeter they may create, malicious hackers can
and will break through.
What we really need is a new approach to designing the systems we
want to protect, an approach that can make those systems inherently
tamper resistant and capable of surviving assaults. Otherwise, we are
simply erecting concrete barriers around a house of cards.
The need for such an approach has been made all the more urgent by a
major shift in cyber crime. Yesterday, hackers cracked systems for
thrills and notoriety; today, they do it for profit. It's become a
full-time job, staffed by dedicated professionals. If a hacker stands
to make money by accessing your data — or by threatening to launch a
denial-of-service attack on your system if you don't pay an extortion
fee — then you're a target.
Worse, these professionals are targeting not only corporate IT
servers, but also control and supervisory systems — systems that keep
factories running, power flowing, and trains from derailing. An attack
on a corporate server might be costly, but an attack on a life-critical
embedded control system can be catastrophic. Consequently, such systems
are considered a prime target for cyber extortionists.
Truth be told, the principles of creating a design that is
inherently survivable and tamper resistant aren't all that new. In
fact, many of them were established as far back as the 1970s, when
researchers such as Saltzer & Schroeder published seminal papers on secure design principles.
The surprise is how much — and how long — the software industry has
ignored them. This omission goes a long way toward explaining why our
servers and desktops are so vulnerable to malicious exploits. It also
explains why many embedded systems are equally at risk.
Consider the key principle of least privilege, which states that a
software component should have only the privileges it needs to perform
a given task, and nothing more. If a component needs to, say, read
data, but has no need to modify that data, then it shouldn't be granted
write privileges, either explicitly or implicitly. Otherwise, that
component could serve as a leverage point for a malicious exploit or a
programming error.
As it turns out, the majority of operating systems today are in
serious violation of this principle. For instance, in a monolithic
kernel such as Windows or Linux, device drivers, file systems, and
protocol stacks all run in the kernel's memory address space, at the
highest privilege level. Each of these services can, in effect, do
anything it wants.
Consequently, a single programming error or piece of malicious code
in any of these components can compromise the reliability and security
of the entire system. Imagine a building where a crack in a single
brick can bring down the entire structure, and you've got the idea.
In response, many embedded system designers are adopting a more
modular OS architecture, where drivers, protocol stacks, and other
system services run outside of the kernel as user-space processes.
This "microkernel" approach not only allows developers to enforce
the principle of least privilege on system services, but can also
result in a tamper-resistant kernel that hackers cannot subvert or modify.
This approach can also satisfy other requirements of a secure,
survivable system, such as fault tolerance (the system will operate
correctly even if a driver faults) and rollback (the system will undo
the effects of an unwanted operation while preserving its integrity).
By extending the microkernel with secure partitioning, designers can
guarantee applications access to computing resources in virtually any
scenario. The need for such guarantees is especially urgent in the
embedded market. Keeping pace with evolving technologies requires the
ability to download and run new software throughout an embedded
product's lifecycle — in-car telematics and infotainment systems being prime examples.
In some cases, this new software may be untrusted, an added risk. To
address such concerns, a system must guarantee that existing software
tasks always have the resources (e.g. CPU cycles) they need, even if an
untrusted application or DoS attack attempts to monopolize the CPU.
Properly implemented, resource partitioning can enforce those
guarantees, without any need for software recoding or extra hardware.
None of the scenarios I mentioned earlier caused serious harm — with
the possible (and pungent) exception of the sewage incident. They do
demonstrate, however, the phenomenal trust we place in complex,
software-controlled systems, and how vulnerable we become if those
systems are compromised. As software designers, developers, and
managers, our task, then, is to create systems that are inherently trustworthy.
But trustworthiness isn't simply an add-on layer. It has to be built
from the ground up. Start with a software architecture that embraces
fundamental principles of security — such as separation of privilege,
fail-safe defaults, complete mediation, and economy of mechanism — and
you've got a major head start. Fail to do so, and you fight a costly,
uphill battle. For proof, consider the endless parade of patches needed
to secure our desktops.
When it comes to building secure, survivable systems, what you start
with determines what you end up with. Fortunately, the underlying
principles we need to embrace aren't unproven or obscure, but simply
good, well-accepted programming practices. The groundwork has already
been laid; let the next generation of innovative — and secure — systems begin.
Dan Dodge is CEO, QNX Software Systems.