In many respects, the packet-switched, connectionless, Internet Protocol-based information superhighway on which the average consumer increasingly depends is very similar to the U.S. mail, and it is subject to the same threats to security.
Information is carried in addressed packets. The packets are routed by various means through a number of sorting centers and passed along to local delivery points. There they are combined with others bearing similar addresses and delivered to their final destination.
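The mail analogy can be made concrete with a toy sketch. This is illustrative only, not any real IP implementation: the `Packet` class, field names and the sample address are all invented for the example.

```python
from dataclasses import dataclass

@dataclass
class Packet:
    dest: str      # destination address, like the address on an envelope
    seq: int       # sequence number so fragments can be reordered
    payload: str   # one fragment of the original message

def packetize(dest, message, size=4):
    """Split a message into small, individually addressed packets."""
    return [Packet(dest, i, message[i * size:(i + 1) * size])
            for i in range((len(message) + size - 1) // size)]

def deliver(packets):
    """Packets may arrive out of order; reassemble by sequence number."""
    return "".join(p.payload for p in sorted(packets, key=lambda p: p.seq))

pkts = packetize("198.51.100.7", "hello, superhighway")
pkts.reverse()            # simulate out-of-order arrival en route
print(deliver(pkts))      # -> hello, superhighway
```

As with mail, nothing in this scheme verifies who sent a packet or whether its contents were altered in transit, which is exactly the opening the article's sources describe.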
So far, the Internet and World Wide Web have not suffered an attention-getting and deadly attack comparable to the anthrax attacks on the U.S. mail in the months after Sept. 11. But the cumulative effect of daily virus and worm outbreaks has not lessened, even as the information superhighway has become more heavily used by consumers.
At the same time, consumer trust in the reliability of the information superhighway has not increased, despite the major investments that have been made in beefing up its security.
This conclusion is borne out by rafts of statistics that are constantly being collected on net-centric computing security. According to the Carnegie Mellon University CERT Coordination Center, which serves as a reporting center for Internet security problems, the center received 1,090 vulnerability reports last year, which was more than double the number received the previous year. The number of specific incidents reported to it grew from about 1,300 in 1993 to more than 21,000 last year.
That there are serious security holes in the Internet Protocol-based communications medium has been clear for a long time. The problem, however, may be enormously under-reported. CERT estimates that the reports it receives represent only 20 percent of the actual total, a suspicion borne out by a just-released Federal Bureau of Investigation study of computer and network security at 500 of the country's major government and corporate organizations. That report indicated that while fully 90 percent of the respondents detected security breaches in the past year, only 34 percent actually reported those attacks. The cost of the crime and damage caused by just the reported security breaches last year was at least $455 million, up almost 20 percent over the previous year, by FBI estimates.
What has many in the industry concerned, and intent on finding commercial solutions to the range of security problems, is that the very nature of the Internet is changing: higher-bandwidth connections to the home, 24/7 connectivity, and the addition of literally hundreds of millions of small-footprint smart iAppliances and embedded devices.
Complicating the issues even further is the emergence of new paradigms such as Web services, which entangle servers and clients, servers and other servers, and confederations of clients in closely linked fabrics not previously conceived of. Contributors to this week's In Focus section address a number of security concerns that this environment imposes.
Typical of these issues is the effort by Web-service providers to simplify user access and access management through a shift to single sign-on (SSO), in which one login serves every site a user visits and generates an entire profile containing all of the relevant information for that user. But as easy as the SSO model makes access to Web services, it is rife with security problems, said Madeleine Campbell, security technology manager in the electronics, engineering and information technologies division of BTG (West Conshohocken, Penn.). Such problems have generated some of the most costly mistakes in the computer industry in the last few years. "Code Red, Nimda and the Melissa virus have all wreaked havoc on businesses, costing more than a billion dollars of reported losses," she said. "Some of the biggest attacks on the nation's infrastructure within the last few years have been delivered via the Internet."
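The risk Campbell describes can be sketched in a few lines. This is a deliberately simplified toy, not a real SSO protocol such as SAML or Kerberos, and all names (`SSOProvider`, the user "alice") are invented for illustration: one credential unlocks one stored profile that every participating site reads, so a single stolen token exposes everything at once.

```python
import hashlib
import secrets

class SSOProvider:
    """Toy single sign-on provider: one login unlocks a full profile.
    Illustrative only; real SSO uses protocols like SAML or Kerberos."""

    def __init__(self):
        self._users = {}     # username -> (salt, password hash, profile)
        self._sessions = {}  # session token -> username

    def register(self, user, password, profile):
        salt = secrets.token_hex(8)
        digest = hashlib.sha256((salt + password).encode()).hexdigest()
        self._users[user] = (salt, digest, profile)

    def login(self, user, password):
        salt, digest, _ = self._users[user]
        if hashlib.sha256((salt + password).encode()).hexdigest() != digest:
            raise PermissionError("bad credentials")
        token = secrets.token_hex(16)
        self._sessions[token] = user
        return token  # this one token now works at every site

    def profile_for(self, token):
        # Every participating site pulls the same stored profile,
        # so a single stolen token exposes all of it at once.
        return self._users[self._sessions[token]][2]

sso = SSOProvider()
sso.register("alice", "s3cret", {"email": "alice@example.com"})
token = sso.login("alice", "s3cret")
print(sso.profile_for(token))
```

The convenience and the vulnerability are the same line of code: `profile_for` hands the entire profile to whoever holds the token.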
Campbell believes the most glaring security issue is the question of where all of the personal information will be stored. What is not clear to many in the embedded and small-footprint iAppliance market segments is whether current security mechanisms and protections can scale up from the number of clients in the current PC-dictated paradigm to user populations 100 to 1,000 times larger in the new iAppliance environment.
"What we have been dealing with until now is a network and security problem that is defined by the assumption that most of the users and points of contact are human users on desktops and laptops," said Alexandar Helmke, senior product manager for networking security at Wind River Systems, Inc. (Alameda, Calif.). "That has all changed. Not only has the size and complexity of the security problem increased, but the dozens of security solutions we have developed in the computer industry may not be appropriate to the smaller footprint devices and the sheer numbers involved."
Campbell believes that the underlying infrastructure of the Internet is basically flawed. "It was not designed to be secure," she said. "It was designed originally to operate in a secure but controlled military setting, not in an almost totally uncontrolled and open commercial context with massive expansion and growth in users. The original designers could never have dreamed it would become so large or so widely adopted."
Rather than bolting point solutions onto the Internet to stretch it beyond its original design parameters, she said, maybe it is time to look at alternative structures now under investigation: models based on the biological immune response, for example, or approaches that distribute the "responsibility" for enforcing security to every object in the system, so that the compromise of one object does not expose the entire system. "There are much more robust mechanisms on which to build a secure network-based computing environment than what we have currently," she said.
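The distributed-responsibility idea can be sketched as objects that each enforce their own access policy. This is a minimal, invented illustration of the principle Campbell describes, not any particular research system: because each object holds an independent key, compromising one key unlocks only that one object.

```python
class GuardedObject:
    """Each object enforces its own access policy, so compromising
    one object's key does not unlock any other object."""

    def __init__(self, name, secret, key):
        self._name = name
        self._secret = secret
        self._key = key

    def read(self, key):
        # The check lives in the object itself, not at a perimeter.
        if key != self._key:
            raise PermissionError(f"{self._name}: access denied")
        return self._secret

# Two independently guarded objects with independent keys.
records = GuardedObject("records", "payroll data", key="key-A")
mail = GuardedObject("mail", "inbox contents", key="key-B")

stolen = "key-A"                  # an attacker compromises one key...
print(records.read(stolen))       # ...and reads only that one object
try:
    mail.read(stolen)             # the other object still refuses
except PermissionError as err:
    print(err)
```

The design choice is the point: there is no single gate whose failure exposes the whole system, which is exactly the property the perimeter model lacks.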
William Wulf, professor of engineering and applied science in the Department of Computer Science at the University of Virginia, is even more emphatic. In his opinion, the current system is flawed because the strategic assumptions on which it is based are outdated, as are the responses to security breaches. Most cyber security, he points out, is based on what he calls the "Maginot Line" model: the assumption that the "thing" we need to protect is inside the system, so we build firewalls, cryptographic mechanisms, intrusion detection and virus detection to keep outside attackers from penetrating our defenses and gaining access to it or taking control of it.
This model of computer security has been used since the first mainframe operating systems, he pointed out, and, like the Maginot Line, it is a fragile and inadequate defense. "No matter how formidable the defenses, the attacker can make an end run around them, and once inside, the entire system is compromised," said Wulf. "The Maginot Line model is especially inappropriate in a networked environment, which does not have an 'inside' or 'outside' defined by the hardware." Moreover, he said, this model has never worked. "Every system built to protect a Maginot Line-type system has been compromised." While the immediate problems of protecting cyber systems can be patched by implementing "best practices," the fundamental problems with the model cannot. "After 40 years of trying to develop a foolproof system, it's time we realized that we are not likely to succeed. It's time to change the flawed inside-outside model of security," he said.
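Wulf's "end run" critique can be shown with an equally small sketch, a hypothetical counterpart to the per-object example above. Again the class and password are invented for illustration: one check at the boundary, and everything inside is reachable with no further checks.

```python
class PerimeterSystem:
    """Toy 'Maginot Line' model: a single check at the boundary,
    after which everything inside is reachable unchallenged."""

    def __init__(self, password):
        self._password = password
        self._inside = {"records": "payroll data", "mail": "inbox contents"}

    def enter(self, password):
        if password != self._password:
            raise PermissionError("stopped at the firewall")
        # Past the perimeter there are no further checks:
        # one breach exposes the entire system.
        return self._inside

network = PerimeterSystem("hunter2")
print(network.enter("hunter2"))   # a single end run yields everything
```

Contrast this with per-object enforcement: here a single leaked password surrenders every asset at once, which is Wulf's point about why the model has never worked.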