Embedded systems control much of the world's critical infrastructure, which makes them a prime target for attack by everyone from hackers to terrorists. Embedded systems, however, have at their disposal an impressive set of mechanisms and procedures that are in common use for purposes other than security, but that can yield defenses stronger, in some cases, than those of traditional enterprise systems such as Windows or Linux.
In the early days of my career as an embedded-systems developer, I worked on critical communications systems. Every aspect of the software and hardware had to be perfect; any failure could prove disastrous. Couple this with the fact that updating software sometimes involved climbing hurricane-proof towers in the Everglades, brushing aside various lizards and insects, then manually plugging in PROMs, and you had a team of people highly motivated to get it right the first time.
When I left embedded development briefly to develop Unix-based Internet systems for enterprise environments, I discovered an entirely different world. In the enterprise environment, things go wrong, and that's OK with users; it's even expected. In this world, developers (vendors) aren't responsible for errors, and they can even charge money to upgrade to a version that fixes a bug.
Sensing the gap between the stability of an embedded system and the functionality of Unix Web servers, a few colleagues and I decided to close it. We developed an Internet server based on hard, real-time methodologies, with the hope of showing the enterprise world what we embedded developers knew existed: simple, small, reliable systems with all the power of a modern Unix server.
A little more than a year ago, the U.S. government helped us realize that all the features used to keep the device running were actually security features. For example, memory scans meant to detect voles chewing on traces also prevented malicious Web content modification.
The product with which I am currently involved, called Hydra, offers dozens of examples of features that we developers took for granted as necessary for reliability but that turned out to be security features as well. A number of these are simple enough to integrate into embedded systems immediately.
Many systems feature their own memory manager, whereby processes allocate and deallocate fixed-size blocks of memory. One simple mechanism for making these memory managers more secure is to use protection bits. These bits surround each allocated block and are filled with a distinct pattern.
The appropriate number of protection bits to use will differ depending on your choice of processor. A 32-bit processor, for example, should probably use a 32-bit word to hold the protection bits and fill it with a pattern that is unlikely to exist in a typical buffer.
CPU-constrained systems could use a fixed pattern (I like 0xDecafBad). However, a simple hash of the block's address is more difficult for a malicious task to guess. (This strategy is sometimes called a canary, because the hash sacrifices its value in an overrun buffer in much the same way canaries sacrificed their lives detecting poisonous fumes in coal mines.) Even if you use a malloc provided by your real-time operating system (RTOS), you can simply wrap the call with code that allocates a few extra bytes, and keep a linked list of the memory addresses to monitor.
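Here is a minimal sketch in C of such a wrapper. The names (guarded_alloc, guard_tripped) and the address hash are my own illustration, not any particular RTOS's API:

    #include <stdint.h>
    #include <stdlib.h>
    #include <string.h>

    #define GUARD_SEED 0xDECAFBADu  /* fixed fallback pattern */

    /* Hash the raw block address so each block's guard is unique. */
    static uint32_t guard_value(const void *raw)
    {
        return (uint32_t)(uintptr_t)raw ^ GUARD_SEED;
    }

    /* Allocate len payload bytes with a guard word on each side. */
    void *guarded_alloc(size_t len)
    {
        uint8_t *raw = malloc(len + 2 * sizeof(uint32_t));
        uint32_t g;

        if (raw == NULL)
            return NULL;
        g = guard_value(raw);
        memcpy(raw, &g, sizeof g);                  /* leading guard  */
        memcpy(raw + sizeof g + len, &g, sizeof g); /* trailing guard */
        return raw + sizeof g;
    }

    /* Returns nonzero if either guard word has been disturbed. */
    int guard_tripped(const void *payload, size_t len)
    {
        const uint8_t *raw = (const uint8_t *)payload - sizeof(uint32_t);
        uint32_t g = guard_value(raw), lead, trail;

        memcpy(&lead, raw, sizeof lead);
        memcpy(&trail, raw + sizeof g + len, sizeof trail);
        return lead != g || trail != g;
    }

A memory manager or diagnostic task can then call guard_tripped on every block it tracks.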
Systems such as Linux and Windows effectively isolate each process's memory, and a typical RTOS's flat memory model would seem to increase the likelihood of a buffer overrun. Further examination, however, reveals a few advantages to the flat memory model. In a flat memory model, a single diagnostic task (or other process) can examine the protection bits at any time to see whether a task has run off the end of a buffer.
In a segmented memory model, each process must watch out for itself in the event that it clobbers one of its own buffers. Similar protection-word mechanisms are sometimes used by debuggers and memory managers in any kind of memory model, but it's possible to use the bits for much more. For example, rather than storing just a pattern in those protection bits, why not employ a simple checksum? Some RTOSes will detect when you try to write outside memory you allocated, but I know of none that can detect whether the contents of a buffer have been modified by anything other than the code that should be modifying it.
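As a sketch of that idea (the structure and names here are mine, not a real RTOS interface), the memory manager links each protected buffer into a list, the owning task refreshes the checksum after each legitimate write, and a diagnostic task recomputes and compares:

    #include <stddef.h>
    #include <stdint.h>

    /* One record per protected buffer; the memory manager links
     * these together at allocation time. */
    struct guarded_buf {
        uint8_t  *data;
        size_t    len;
        uint32_t *guard;     /* protection word now holds a checksum */
        struct guarded_buf *next;
    };

    static struct guarded_buf *guard_list;

    /* Lightweight rotating-XOR checksum; swap in a CRC if cycles allow. */
    static uint32_t quick_sum(const uint8_t *p, size_t len)
    {
        uint32_t sum = 0;
        while (len--)
            sum = (sum << 5 | sum >> 27) ^ *p++;
        return sum;
    }

    /* The owning task calls this after each legitimate write. */
    void guard_update(struct guarded_buf *b)
    {
        *b->guard = quick_sum(b->data, b->len);
    }

    /* Diagnostic task: nonzero means some buffer was changed behind
     * the back of the code that owns it. */
    int guard_scan(void)
    {
        struct guarded_buf *b;

        for (b = guard_list; b != NULL; b = b->next)
            if (quick_sum(b->data, b->len) != *b->guard)
                return 1;
        return 0;
    }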
One use for this technique could be as a file-system cache, where reads from a disk are checksummed at read time (using a quick cyclic redundancy check or other lightweight mechanism), then again each time a process uses the cached data. In a networked system, this technique can protect ARP tables (compute at insert, check at look-up), HTTP caches (compute at insert, check before content delivery) or nearly any data at all.
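Applied to an ARP table, the pattern might look like the following; the entry layout is simplified, and crc32 stands in for whatever lightweight checksum routine your system already has:

    #include <stddef.h>
    #include <stdint.h>
    #include <string.h>

    /* Simplified ARP entry; real stacks carry more fields. */
    struct arp_entry {
        uint32_t ip;
        uint8_t  mac[6];
        uint32_t crc;        /* computed at insert, checked at look-up */
    };

    /* Assumed to exist elsewhere in the system. */
    extern uint32_t crc32(const void *buf, size_t len);

    /* Checksum everything except the crc field itself. */
    static uint32_t entry_crc(const struct arp_entry *e)
    {
        return crc32(e, offsetof(struct arp_entry, crc));
    }

    void arp_insert(struct arp_entry *e, uint32_t ip, const uint8_t mac[6])
    {
        memset(e, 0, sizeof *e);   /* zero padding so the CRC is stable */
        e->ip = ip;
        memcpy(e->mac, mac, sizeof e->mac);
        e->crc = entry_crc(e);
    }

    /* Returns nonzero if the entry is intact; a failed check should
     * be treated as tampering and the entry discarded. */
    int arp_entry_ok(const struct arp_entry *e)
    {
        return entry_crc(e) == e->crc;
    }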
Checksums are also useful for verifying that no one has modified your code image. Say, for example, your image is stored in PROM or flash memory, and you move it into RAM to run. During the copy to RAM, why not compute a checksum? Then a low-priority task can periodically recompute the checksum to verify that no one has found a way to modify your image in RAM. While this mechanism is not foolproof, it provides a level of protection far beyond that of traditional operating systems.
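Here's one shape such a task could take. The linker symbols bounding the image (__image_start, __image_end) and the rtos_sleep_ms call are assumptions that vary by toolchain and RTOS:

    #include <stdint.h>

    /* Assumed linker-provided symbols bounding the code image in RAM. */
    extern const uint8_t __image_start[], __image_end[];

    /* Assumed RTOS delay primitive. */
    extern void rtos_sleep_ms(unsigned ms);

    static uint32_t reference_sum;

    static uint32_t image_sum(void)
    {
        const uint8_t *p = __image_start;
        uint32_t sum = 0;

        while (p < __image_end)
            sum = (sum << 5 | sum >> 27) ^ *p++;
        return sum;
    }

    /* Call once, right after copying the image from PROM/flash to RAM. */
    void image_guard_init(void)
    {
        reference_sum = image_sum();
    }

    /* Low-priority task body: recompute periodically and compare. */
    void image_guard_task(void)
    {
        for (;;) {
            if (image_sum() != reference_sum) {
                /* Image modified in RAM: log it, fail safe, or reset. */
            }
            rtos_sleep_ms(5000);
        }
    }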
These days, the preferred mechanism to attack traditional OSes seems to be denial-of-service (DoS) attacks. DoS attacks can affect embedded systems as well, sometimes by causing a condition where the system's software is locked in a particular state.
For example, say you have an embedded device connected to a network, and a single ISR controls all inbound and outbound traffic. It's conceivable that if someone sent you an Internet Control Message Protocol (ICMP) packet whose source address was forged to be your system's own IP address, your system might attempt to respond to itself. This could cause your ISR to enter a tight loop, processing the same packet over and over. Unless you've designed a check for this (which I highly recommend), you'll find that you've just been victimized by a simple, and very popular, DoS.
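The check itself costs almost nothing. In this sketch, the receive path rejects any packet claiming to come from the device's own address before a reply is ever built (my_ip_addr is assumed to be set at configuration time):

    #include <stdint.h>

    /* The device's own IPv4 address, assumed set during configuration. */
    extern uint32_t my_ip_addr;

    /* Called from the receive path before building any reply.
     * Returns nonzero if the packet must be dropped. A packet with
     * our own address as its source can only be forged; answering it
     * would put the ISR in a tight loop. Other bogus sources, such
     * as loopback and broadcast addresses, deserve the same test. */
    int self_addressed(uint32_t src_addr)
    {
        return src_addr == my_ip_addr;
    }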
What can you do to protect your system, short of thinking of every conceivable way any task can enter a tight loop? The answer is to make friends with that hardware engineer and get him or her to implement a hardware watchdog timer circuit. Watchdog timers, common in embedded systems, can cause a soft or hard reset unless tickled periodically by software. This ensures that certain key elements of your software are running, and not stuck in a tight loop or out of control in some other way.
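The software side of a watchdog often looks something like this; the kick register address and the set of monitored tasks are hypothetical stand-ins for whatever your board and design dictate:

    #include <stdint.h>

    /* Hypothetical memory-mapped watchdog kick register. */
    #define WDT_KICK   (*(volatile uint32_t *)0x40001000u)
    #define WDT_MAGIC  0xA5A5A5A5u

    /* Each key task ORs in its bit once per loop iteration. */
    #define TASK_NET   (1u << 0)
    #define TASK_PROTO (1u << 1)
    #define TASK_DIAG  (1u << 2)
    #define ALL_TASKS  (TASK_NET | TASK_PROTO | TASK_DIAG)

    static volatile uint32_t alive;

    void report_alive(uint32_t task_bit)
    {
        alive |= task_bit;
    }

    /* Run from a periodic timer. The hardware resets the system
     * unless WDT_MAGIC is written in time, and we write it only
     * when every key task has checked in, so a task stuck in a
     * tight loop forces a reset instead of hanging the box. */
    void watchdog_service(void)
    {
        if ((alive & ALL_TASKS) == ALL_TASKS) {
            WDT_KICK = WDT_MAGIC;
            alive = 0;
        }
    }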
These defenses focus on preventing or detecting writes to memory that the system did not intend, but what about unintended reads? Consider the impact of a hacker reading off the end of a buffer on an embedded system housing sensitive data. The hacker may gain access to encryption keys, secret information, passwords or anything else the RAM contains.
To minimize this risk, embedded systems housing sensitive information should cipher that data in RAM immediately after use or, if the data is no longer needed, zeroize it by filling it with a pattern. Although certain enterprise systems can do this to some extent, few do, and no traditional OS can cipher or zeroize portions of its own kernel the way most embedded systems can.
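Zeroizing sounds trivial, but an optimizing compiler will happily delete a plain memset of a buffer it believes is never read again, so the stores should go through a volatile pointer. A minimal version:

    #include <stddef.h>

    /* Overwrite sensitive data in place. The volatile pointer keeps
     * the compiler from optimizing away stores to "dead" memory. */
    void zeroize(void *buf, size_t len)
    {
        volatile unsigned char *p = buf;

        while (len--)
            *p++ = 0;
    }

Call it on keys, passwords and intermediate plaintext the moment you finish with them.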
For example, if your system runs from RAM, you could use a mechanism that, upon a call to a particular function, deciphers a block of RAM, runs it, then ciphers it again. While this may seem computationally expensive, many ciphers are available that are either quite lightweight or have hardware implementations. How much internal ciphering you can do depends on how much CPU you have to spare, or on whether your product's price target and schedule can accommodate an extra chip to perform the ciphering.
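In outline, the pattern looks like this; the XOR stream is only a stand-in for whatever lightweight or hardware cipher your product can afford:

    #include <stddef.h>
    #include <stdint.h>

    /* Stand-in stream cipher: XOR with a keyed byte stream.
     * Symmetric, so the same call both ciphers and deciphers. */
    static void xor_stream(uint8_t *buf, size_t len, uint32_t key)
    {
        size_t i;

        for (i = 0; i < len; i++)
            buf[i] ^= (uint8_t)(key >> ((i & 3) * 8));
    }

    /* Decipher a sensitive block, hand it to exactly one consumer,
     * then re-cipher it. The plaintext exists in RAM only for the
     * duration of the use() call. */
    void with_plaintext(uint8_t *blk, size_t len, uint32_t key,
                        void (*use)(const uint8_t *, size_t))
    {
        xor_stream(blk, len, key);   /* decipher in place */
        use(blk, len);
        xor_stream(blk, len, key);   /* re-cipher */
    }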
Stack smashing, a form of buffer overflow performed on the stack, is the best way for a malicious user to take over your system, since a successful attack means a hacker can execute any binary code he or she wishes on your system.
I have one piece of advice for embedded and traditional systems developers: Get those buffers off the stack. By allocating buffers via a memory manager and implementing the protection-bit mechanism described earlier, a system can eliminate stack smashing.
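The before-and-after is straightforward. In the safer version below, the buffer comes from the guarded allocator sketched earlier (guarded_alloc is my illustration, not a standard call), and the copy itself is length-limited:

    #include <string.h>

    extern void *guarded_alloc(size_t len);   /* from the earlier sketch */

    /* Vulnerable: the buffer and the return address share the stack. */
    void parse_request_stack(const char *input)
    {
        char buf[64];

        strcpy(buf, input);        /* an overflow smashes the stack */
    }

    /* Safer: the buffer lives in managed memory, bounded by guard
     * words, so an overrun lands in the guards, not a return address. */
    void parse_request_heap(const char *input)
    {
        char *buf = guarded_alloc(64);

        if (buf == NULL)
            return;
        strncpy(buf, input, 63);
        buf[63] = '\0';
        /* ... use buf, then return it to the memory manager ... */
    }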
An embedded system designer has access to many, many mechanisms that were intended for one type of use but can be redirected to enhance security. The trick is knowing whether they'll work in your system.
One thing you can do that will put a good dent in security testing is to run some network attack tools. A free tool called Nessus is available at http://www.nessus.org/. This tool, like similar commercial tools, runs more than 1,000 different attacks against your system and reports any vulnerabilities it finds.
Taking your embedded system into a secure environment calls for new ways of thinking. Hackers can penetrate deeper into networks than ever before; indeed, hackers today are better educated, and thus more dangerous, than they were even a year ago. As embedded systems that control critical infrastructure connect to the Internet, embedded developers need to concern themselves with security as well as reliability.