Traditional cryptographic solutions have relied upon computationally intensive algorithms to encrypt information. But faster processors, new algorithms and specialized hardware have made these techniques susceptible to compromise, forcing classical encryption to evolve in order to ensure security.
There is a growing effort to protect information transmission, usually with some type of cryptographic protocol where the information is modified into a form that is unreadable to anyone but the intended audience.
The fundamental part of any cryptographic protocol is the key: a string of random bits that are used to encode the data to be communicated between parties. To be useful and to provide security, the key must be absolutely random, must be kept completely secret from anyone but the communicating parties and must be refreshed frequently enough to keep the channel safe from eavesdropping.
As Claude Shannon proved for Gilbert Vernam's cipher (the one-time pad), an encrypted message is absolutely safe from eavesdropping only if the key is truly random, is as long as the message and is used only one time. This is impractical because all the communicating parties must share a secret sequence of random numbers and can use it only once. Typically, the keys are exchanged by physical means (for example, on a CD-ROM), creating a security loophole as well as other difficulties. The problem of secrecy is merely deferred from the message to the key in what has become known as the key distribution problem, an extreme challenge for secure information exchange.
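The Vernam cipher itself is trivial to implement; a minimal sketch using XOR as the combining operation (the function name and message are illustrative, not from the article):

```python
import secrets

def otp_encrypt(message: bytes, key: bytes) -> bytes:
    # The key must be truly random, exactly as long as the message,
    # and never reused -- the three one-time pad conditions.
    assert len(key) == len(message)
    return bytes(m ^ k for m, k in zip(message, key))

message = b"attack at dawn"
key = secrets.token_bytes(len(message))   # fresh random key per message
ciphertext = otp_encrypt(message, key)

# XOR is its own inverse, so decryption is the same operation.
assert otp_encrypt(ciphertext, key) == message
```

The difficulty the article describes is visible here: the random `key` must somehow reach the receiver secretly, and a new one is needed for every message.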
In reality, the one-time pad criteria are becoming harder to fulfill because of the large flux of information in current optical communication, networking and data storage systems. That's why encryption protocols that use shorter keys and mathematical approaches to encryption, such as AES, are typically used to protect information. However, these protocols introduce potential security holes related to the key-refresh rate and the key-expansion ratio, two of the most crucial parameters in the security of any cryptographic protocol.
Because there is often no practical way to distribute large, secret keys, most of today's cryptographic protocols rely instead on public-key distribution and the assumed computational difficulty of breaking the protocol.
Both public- and private-key exchange are based upon mathematical principles through which encryption and decryption of the protected information are relatively simple (the communicating parties simply share the keys), but deciphering an intercepted message requires staggering amounts of computing power and time.
Remote parties create a secure channel using key-exchange protocols (for example, Diffie-Hellman) based upon classical data exchange and manipulation, in which each party combines its own secret value with publicly exchanged numbers to derive a shared key; the security rests on the difficulty of problems such as computing discrete logarithms or factoring large numbers.
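The Diffie-Hellman exchange is easy to sketch. The toy parameters below (p = 23, g = 5) are the classic textbook values and are illustrative only; real deployments use standardized groups with primes of 2048 bits or more:

```python
import secrets

# Toy Diffie-Hellman parameters: a small prime modulus p and generator g.
p, g = 23, 5

a = secrets.randbelow(p - 2) + 1   # Alice's secret exponent, never transmitted
b = secrets.randbelow(p - 2) + 1   # Bob's secret exponent, never transmitted

A = pow(g, a, p)   # Alice sends A over the public channel
B = pow(g, b, p)   # Bob sends B over the public channel

# Each side combines its own secret with the other's public value;
# both arrive at g^(a*b) mod p without ever sending the secrets.
shared_alice = pow(B, a, p)
shared_bob = pow(A, b, p)
assert shared_alice == shared_bob
```

An eavesdropper who records A, B, p and g must solve a discrete-logarithm problem to recover the shared value, which is exactly the "assumed computational difficulty" the article refers to.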
These protocols are widely used but are not proved to be completely secure, representing one of the main threats to modern communication systems and data channels.
The assumption behind these algorithms is that, using conventional technology, the computing time required to decipher a message makes an attack practically infeasible. However, as computing power continues its upsurge and new code-breaking algorithms are developed, today's secrets will eventually become vulnerable. Messages that are secret today are very likely to be compromised tomorrow.
Compounding the problem, most systems rarely refresh their cryptographic keys. Many are refreshed less than once per year for systems that require a physical key exchange such as that provided by cryptographic boxes. And for public-key encryption techniques, it's done even less frequently because of the unwieldy task of updating keys and managing multiple keys.
That results in very large key-expansion rates (raw-data length/key length). This weakens the overall security of the system and may enable successful brute-force attacks on the encrypted data.
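For a sense of scale, consider a hypothetical 1-Gbit/s link whose 256-bit key is refreshed only once a year (the figures are illustrative, not from the article):

```python
# Key-expansion ratio: raw-data length divided by key length.
bits_per_second = 1_000_000_000             # hypothetical 1-Gbit/s link
seconds_per_year = 365 * 24 * 3600
raw_data_bits = bits_per_second * seconds_per_year
key_bits = 256                              # a single AES-256 key all year

expansion_ratio = raw_data_bits // key_bits
print(f"{expansion_ratio:.2e}")             # on the order of 1e14
```

Roughly 10^14 bits of data protected by every bit of key gives an attacker an enormous amount of ciphertext to work with before the key changes.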
Keys can be compromised in many ways, including by brute-force deciphering or so-called "lunch-hour attacks," a difficult-to-detect form of espionage where key information is obtained by physical means from within. Once a key is compromised, all of the information transmitted over the communication link is vulnerable until the key is refreshed. For systems with very low (or zero) key-refresh rates, a compromised key provides an eavesdropper full access to information encrypted with that key.
A quantum-key-distribution (QKD) system solves these key distribution and management problems by allowing continuous regeneration of keys and by providing a means to disseminate an encryption key with absolute security between remote locations via a dedicated fiber link.
The keys generated and disseminated using quantum cryptography are proved to be absolutely random and secure based on the laws of quantum mechanics, not upon the assumed security of complex mathematical algorithms. By sending the key encoded at the single-photon level on a photon-by-photon basis, quantum mechanics guarantees that the act of an eavesdropper intercepting a photon, even just to observe or read it, will irretrievably change the information encoded on that photon. Therefore, the eavesdropper can neither copy nor clone a photon, nor read the information encoded on it without modifying it, a process that is provably detectable. This arises from Heisenberg's Uncertainty Principle: an eavesdropper listening in on the channel over which the key is distributed will necessarily leave traces (and the more information the eavesdropper obtains, the greater the detectable disturbances).
Consequently, the communicating parties can use part of their key to determine the presence of an eavesdropper and only use that key when they know it has not been compromised.
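The article doesn't name a specific protocol, but BB84 is the best-known QKD scheme, and its eavesdropper detection can be illustrated with a purely classical toy simulation (a sketch of the principle, not any vendor's implementation):

```python
import random

def bb84_error_rate(n_photons: int, eavesdrop: bool, rng: random.Random) -> float:
    """Toy BB84 simulation: return the error rate on the sifted key."""
    errors = sifted = 0
    for _ in range(n_photons):
        bit = rng.randrange(2)        # Alice's random bit
        basis_a = rng.randrange(2)    # Alice's random encoding basis
        basis_b = rng.randrange(2)    # Bob's random measurement basis
        if basis_a != basis_b:
            continue                  # mismatched bases are discarded (sifting)
        sifted += 1
        received = bit
        if eavesdrop and rng.randrange(2) != basis_a:
            # Eve measured in the wrong basis, so the photon she re-sends
            # gives Bob a random result even though his basis matches Alice's.
            received = rng.randrange(2)
        errors += received != bit
    return errors / sifted

rng = random.Random(1)
print(bb84_error_rate(20_000, eavesdrop=False, rng=rng))  # 0.0 on a clean channel
print(bb84_error_rate(20_000, eavesdrop=True, rng=rng))   # about 0.25
```

Comparing a sample of the sifted key reveals the elevated error rate, which is how the parties "use part of their key to determine the presence of an eavesdropper."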
Given secure key distribution, one can then use the (provably secure) Vernam cipher for ultimate security, or generate a series of long, random keys with extremely small key-expansion ratios. This removes the risk inherent in schemes that rely on the conjectured difficulty of computational problems, schemes that have been shown to suffer retroactive security loss from unanticipated advances in hardware and algorithms, thus ensuring the future security of the encoded data.
QKD provides continuous key regeneration that enhances the security of the communications channel, protecting against both cryptographic deciphering and internal espionage.
A compromised key in a QKD system could be used only to decrypt a small fraction of the information exchanged, since the cryptographic key in the system is refreshed at least once a second.
Thus, not only does quantum-key distribution protect against cryptographic attacks, it also enhances the physical security of the system in the face of internal threats.
The growth of computing power and the development of faster algorithms have made many forms of classical cryptography obsolete, making cryptographic protocol changes and longer key lengths a prerequisite to keeping information secure.
Quick, costly replacement
New protocols and longer key lengths force an organization to acquire new hardware and software to replace existing cryptographic solutions, often at great expense. Once a cryptographic system is known to be insecure or vulnerable to attack, it must be replaced on short notice and with little regard to cost.
Progress in computational power, advances in hardware design or the discovery of new mathematical algorithms will not compromise the security provided by systems using QKD.
Quantum cryptography is also safe from future advances in code breaking and computing, which will severely reduce or possibly obliterate the security of classical cryptographic solutions.
For more information on the subject, see "Primer on Quantum Information Processing" at www.magiqtech.com/products/primer.php.
Michael LaGasse is vice president of engineering at MagiQ Technologies Inc. (New York).
See related chart