Larry Smarr has a simple back-of-the-envelope computation to support his proposition that the supercomputer's distinguished 30-year reign as a scientific-research driver may be nearing an end.
"From 1985 to today, supercomputers went from a gigaflops [the Cray] to 100 teraflops [IBM's Blue Gene] an increase in performance of a factor of about 100,000 times. Now consider just one supernetwork, New York State's NYSERnet. It just lit up with 32 ten-gigabit lambdas that's 320 Gbits, vs.
a megabit in 1985," Smarr said. "So network performance has gone up by a factor of 320,000 times over the period."
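The arithmetic is easy to verify with a few lines of Python, using only the figures quoted above (the 1985 baselines are the ones Smarr cites, not independent measurements):

```python
# Back-of-the-envelope check of Smarr's comparison, using the quoted figures.

cray_1985_flops = 1e9        # ~1 gigaflops (the Cray)
blue_gene_flops = 100e12     # ~100 teraflops (IBM's Blue Gene)
compute_gain = blue_gene_flops / cray_1985_flops

link_1985_bps = 1e6          # ~1 megabit/s network link in 1985
nyserenet_bps = 32 * 10e9    # 32 ten-gigabit lambdas = 320 Gbits/s
network_gain = nyserenet_bps / link_1985_bps

print(f"Supercomputer gain: {compute_gain:,.0f}x")  # 100,000x
print(f"Network gain:       {network_gain:,.0f}x")  # 320,000x
```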
And Smarr sees no reason that optical networking based on dense wavelength-division multiplexing (DWDM) cannot keep up that pace indefinitely, or even accelerate it.
Smarr is director of the California Institute for Telecommunications and Information Technology (Calit2), associated with the University of California, and founding director of the National Center for Supercomputing Applications, headquartered on the University of Illinois' Urbana campus. He is part of a grass-roots movement that has taken on the project of building a true Information Superhighway: one that would let supercomputers at any two points on the globe connect directly and operate as though they were in the same cabinet.
It's one of five projects whose overarching goals distinguish them as seminal works in progress.
Certainly, both the information technology revolution and the ability to observe and control the material world at the atomic level promise to amplify the astounding scientific strides that were made in the last century. The countless research projects under way across the globe make for a compelling and complex mix, and there is no reason to believe that contemporary observers will prove any more adept than their predecessors at spotting the most important developments of their day.
Still, when forced to count them on one hand, EE Times came up with this list: the globe-spanning optical network that has captured Smarr's imagination, the ultimate computer, the decoding of the workings of the cell, controlled fusion and room-temperature superconductors.
The real Info Superhighway
The hardest part of the optical-network project has already been completed. In the 1990s, network service providers installed optical fiber that supported wavelength multiplexing, in which individual wavelengths could carry gigabit data channels in the same fiber. (The ultimate capacity of such fiber is not known, since any advance in the optical and electronic transceivers at each end of the fiber can produce more channels.) One immediate consequence was overcapacity in metropolitan networks, which helped precipitate the economic crash of optical networking in 2000.
But Smarr and his scientific-computing colleagues recognized the opportunity that this unused capacity presented. In 1999, the NCSA, based at the University of Illinois, teamed with Argonne National Lab and Northwestern University to convince the governor of Illinois that the state should invest in a fiber network. That effort yielded I-Wire, a dark-fiber communications infrastructure linking Argonne with academic campuses and other facilities in the state, and, shortly thereafter, I-Light, an optical-fiber network that links campuses to each other as well as to the Internet infrastructure, including Internet2.
"Now there are about 24 states and regions that have put similar networks in place, going to the carriers and leasing gigabits of fiber capacity for the next 20 years, rather than simply renting a certain amount of bandwidth," Smarr said. "But those networks are isolated from one another, so last year, the National Lambda Rail initiative was launched. It now connects 20 of the major cities in the U.S."
Smarr is installing a local optical network at the University of California at San Diego with the objective of achieving true end-to-end optical networking that can take advantage of the elegance of DWDM technology. In such a system, two endpoints can be connected on a single wavelength channel that's routed through all-optical devices. The linked machines effectively appear to be connected by a dedicated optical fiber, but with the added advantage that the connection can be established or released at any time.
"With today's Internet, the end user and the data archives are all isolated islands, with a tiny little soda straw's worth of bandwidth between them. But the scientific user wants to open up a fire hydrant of data across the world to the data archive or scientific instrument or remote colleague they are interacting with at the moment. Then they can just turn it off, and it goes back into the pool," Smarr said.
As part of the Calit2 effort, Smarr is installing additional fiber-optic cable at both UCSD and UC-Irvine, along with a fiber link between the campuses. His group is also working with the National Lambda Rail to install a backbone called CaveWave from UCSD to the University of Washington. Beyond that, a wide optical pipe is planned to the University of Chicago and then to an international gateway, called Starlight, based at the University of Illinois. Starlight has already become the largest 10-Gbit exchange on the planet, according to Smarr.
Canada, Australia, the European Union, Japan and China are assembling similar systems, in an effort that is loosely organized under an organization called the Global Lambda Integrated Facility.
The ultimate computer
As the network speeds ahead of the supercomputer in scientific research, the PlanetLab initiative plans to turn the race inside out by using the global network as the ultimate computer.
A virtual computer that runs on existing nodes worldwide, PlanetLab currently has about 580 nodes at 250 sites where computer researchers are developing the architecture in a collaborative effort. Each participant is assigned a "slice" of the virtual computer in which software development takes place. When mature, the system will allow anyone to have a piece of a single global computer comprising all the resources connected to the Internet.
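One way to picture a slice is as a named allocation of small virtual machines, one per participating node. The sketch below models only that bookkeeping; the class names, fields and hostnames are hypothetical, not PlanetLab's actual interface.

```python
# Toy model of PlanetLab-style slices: a named set of virtual-machine
# allocations ("slivers") spread across physical nodes. All names hypothetical.

from dataclasses import dataclass, field

@dataclass
class Node:
    hostname: str
    slivers: dict = field(default_factory=dict)   # slice name -> CPU share

@dataclass
class Slice:
    name: str
    hosts: list = field(default_factory=list)

def create_slice(name, nodes, cpu_share=0.1):
    """Instantiate a sliver (one small VM's worth of resources) on each node."""
    sl = Slice(name)
    for node in nodes:
        node.slivers[name] = cpu_share
        sl.hosts.append(node.hostname)
    return sl

nodes = [Node(f"node{i}.example.org") for i in range(3)]  # hypothetical hosts
experiment = create_slice("my_experiment", nodes)
print(experiment.hosts)   # the researcher's view: a VM on every node
```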
"PlanetLab got started because people didn't have any way to test distributed applications. You want to develop some global Internet service, but how do you go about it?" said Rick McGeer, PlanetLab program director at HP Labs.
Collaborative projects can simply have too much overhead: "You end up spending a lot of time on the phone, answering questions such as 'How do I get an account at your end?' and then you have to go to your IT people to set it up. Then you find out that your machine is configured a little differently than theirs, so you have to rewrite your code," said McGeer. "You spend a year to a year and a half setting up your experiment and only one month running it. It's logistical nonsense."
Discussions of the problem resulted in an underground meeting in April 2002 at which the community hammered out some specific services for collaborative computing. "A few things were agreed on: One, we all program in Linux or a BSD variant; and, two, we don't need much else, just a bare virtual machine," said McGeer. "Everything else can be built out from there."
The virtual-computer architecture is starkly simple, but the paradigm gets interesting when expanded to a global scale. For example, McGeer said, "a substrate service for other services, OpenDHT, which is run out of Intel's Berkeley lab, is a hash table distributed across PlanetLab. It's conceptually a very simple idea, but it has turned out to be useful for all sorts of things."
PlanetLab participants began using DHT as a router. That led to a Defense Advanced Research Projects Agency effort to use it for eliminating Internet routing bottlenecks. DHT has since become an essential facility for creating temporary user groups or for making a resource available to anyone. If someone wants to make some data set available, it can be indexed with DHT. Anyone can then access it without having to go through a specific node on the network.
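The core trick of a distributed hash table is that a key's hash alone determines which node stores it, so any participant can locate data without a central index. Here is a minimal consistent-hashing sketch of the generic technique; it is not OpenDHT's actual implementation.

```python
# Minimal consistent-hashing DHT: the hash of a key picks the owning node,
# so lookups need no central directory. Generic sketch, not OpenDHT's code.

import hashlib
from bisect import bisect

def h(s: str) -> int:
    return int(hashlib.sha1(s.encode()).hexdigest(), 16)

class ToyDHT:
    def __init__(self, nodes):
        self.ring = sorted((h(n), n) for n in nodes)  # nodes on a hash ring
        self.store = {n: {} for n in nodes}

    def _owner(self, key):
        # the first node clockwise from the key's hash owns the key
        i = bisect(self.ring, (h(key), "")) % len(self.ring)
        return self.ring[i][1]

    def put(self, key, value):
        self.store[self._owner(key)][key] = value

    def get(self, key):
        return self.store[self._owner(key)].get(key)

dht = ToyDHT(["node-a", "node-b", "node-c"])
dht.put("dataset/climate-2005", "http://example.org/data")  # hypothetical key
print(dht.get("dataset/climate-2005"))  # any node can resolve this lookup
```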
The bottom line is that the PlanetLab virtual computer makes the Internet a far more collaborative and open community.
While currently focused on the researchers and computer scientists who created it, the project will most likely follow the same path as the Internet itself, which originated as a government project to link computers for national-security purposes. Eventually, the general public found uses for what had been an obscure research service; today, of course, the Net is a fundamental worldwide resource.
There is no reason not to expect that PlanetLab will follow the same path, so that one day everyone could be using the same computer.
Genetically engineered machines
If you had a catalog of more than 400 standard biomolecular parts, what could you build with it? The Massachusetts Institute of Technology's computer science department, which manages the BioBricks catalog, created the annual intercollegiate Genetically Engineered Machines (iGEM) summer design contest to seek answers to that question.
Genetically engineered machines are catching on in a big way. The BioBricks catalog, which originated at MIT a few years ago, is the result of a plan by some biologists and EEs to abstract functional subsystems from within the living cell, standardize the I/O functions and compile a base of DNA strands that code for them. The strands could then be inserted into a bacterium, and the cell's RNA-based assembly machinery would begin manufacturing the parts for which they code.
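In software terms, the standard-part idea means each part is a DNA string with a common interface, so parts compose mechanically. The sketch below is a cartoon of that composition step; the sequences are placeholders, and real BioBrick assembly relies on standardized flanking restriction sites and lab protocols not modeled here.

```python
# Cartoon of BioBrick-style composition: parts are DNA strings with a standard
# interface, so a device is built by joining them in order. Placeholder
# sequences only; real assembly uses standardized restriction sites.

PARTS = {                        # hypothetical mini-catalog
    "promoter":   "TTGACA...TATAAT",
    "rbs":        "AGGAGG",
    "gfp":        "ATGAGTAAAGGA...TAA",
    "terminator": "CCAGGCATCAAATAAA",
}

SCAR = "TACTAG"                  # junction sequence left between joined parts

def compose(*part_names):
    """Join catalog parts, in order, into one candidate device sequence."""
    return SCAR.join(PARTS[p] for p in part_names)

# a minimal "make the cell glow" device: promoter -> RBS -> gene -> terminator
device = compose("promoter", "rbs", "gfp", "terminator")
print(device)
```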
Compiling the catalog is tedious work, but the potential of such a technology is striking. It could lead to a true nanotechnology system for fabricating nanobots, for example, and it would likely find a host of applications in biological research and medicine.
The catalog is modeled on the old-style TTL component libraries that standardized electronics design. And just as in the early days of digital-system design, no one knows what can be achieved with the parts or how complex they can become. But the model is the VLSI revolution, where experience and economies of scale eventually take over. It may be time for some EEs to contemplate a career change to biotech.
Cambridge University is a newcomer to the iGEM competition but then, so is almost every other participant. The Cambridge team is exploring three possible designs: a pulse generator, a chemotactic patterning system and a Flipase-based molecular machine that would flip DNA segments inside E. coli bacteria. Natural operations usually excise segments, but the Flipase-based machine will swap segments around.
The idea is to insert a programmable unit into the bacteria that could turn on different behaviors at will.
The results of this year's iGEM contest will be presented at MIT in November.
Tabletop fusion

The latest news in the field of fusion research is the decision by the International Thermonuclear Experimental Reactor (Iter) Consortium to conduct its latest experiment in Cadarache, France. And what an experiment it will be: With up-front construction costs of $5 billion, the facility is expected to be in service for 30 years, and the initial experiment may require an additional $5 billion. But even if successful, it is only an experiment. The Iter tokamak (a toroidal magnetic-confinement device) will never become a power plant.
Purdue University physicist Rusi Taleyarkhan can only dream about such largesse. He has been involved in a number of tabletop experiments with a special chemical solution of deuterated acetone, stimulated with ultrasound, that he says have revealed unmistakable signs of hydrogen fusion. The results of the sonofusion experiments have passed peer review in major journals and have been independently confirmed by other groups. And Taleyarkhan is convinced that an experiment to scale up the system will result in the elusive goal of energy break-even. But so far he hasn't been able to get the funding to do it.
"We are not looking for billions of dollars up front, only for a chance," he said, explaining the engineering problems his experiment will have to overcome. "We have been doing some small preliminary scaling experiments, but you end up in a Catch-22 where you really need to directly tackle some difficult engineering problems."
In one respect, Taleyarkhan is more afraid of success than failure: A scaled-up version of his current apparatus, if it works as predicted by computer simulation, could deliver a lethal dose of neutrons. Consequently, before he can run the experiment, a fail-safe facility with adequate protection against radiation must be built.
"You can change parameters such as the temperature of the working fluid or the drive amplitude with which you are forcing the deuterated bubbles down to the nanometer scale from a size of thousands of microns," he explained. "You find that with a small increase in the drive amplitude, from 150 pounds per square inch to around 350 psi, the rate of reaction jumps by four or five orders of magnitude. "
But apart from the practical matter of energy generation, the experimental approach reveals an engineering principle that could have many applications. "The idea of using simple mechanical energy to initiate and control nuclear-level forces, the very workings of the universe: that is exciting in itself," Taleyarkhan said. "Immediate applications of the technology could produce neutrons or gamma rays for radiography or explosives detection. And then there is the ultimate that we are all striving for and hoping for, and that is to resolve the energy crisis."
Room-temperature superconductors

Just as considerable sweat equity has been invested in fusion research, an immense amount of work has gone into cracking the riddle of high-critical-temperature superconductivity since the effect was discovered two decades ago. Whoever comes up with the definitive theory, experiment or combination of the two in this field is a shoo-in for a Nobel. And the hope is that once physicists fully understand the mechanism, room-temperature superconductors will be the fallout.
But all that determined research has created a Hydra effect: The more physicists discover about high-Tc superconductors, the more oddities they find, and those unexpected turns require investigation. The field of transition metal oxides (TMOs), for example, has ballooned to encompass far more than just superconductivity.
TMOs, physicists now agree, consist of strongly correlated electron systems that can adopt a wide variety of phases, leading to a seemingly unending number of physical effects. That's good news if you are a physics graduate looking for a job, but bad news for the prospect of creating room-temperature superconductors.
Perhaps what is needed is a completely fresh view of the problem. That is what Richard Saam, a consulting engineer at Proteus Systems Inc.'s Corpus Christi, Texas, facility, brings to the table. Proteus is a consulting engineering firm that specializes in 3-D CAD modeling, and Saam, with a background in chemistry and environmental engineering, is well-versed in that area.
When high-Tc cuprate superconductors were discovered in the mid-1980s, Saam was struck by the similarity of their crystal lattice to a 3-D structure he had designed to solve an environmental-cleanup problem. The invention was a filter that could separate oil and water mixtures simply by forcing the fluid through the lattice. Saam had an intuition that something similar was going on in the high-Tc lattices, with the correlated electrons playing the parts of the "water" and the "oil."
When work began on the oil and water separator, "with my background in chemistry, I thought that the benzene ring might be a good place to start," Saam said. "Although it is an angstrom-scale structure, I thought it might scale up to something robust that we could use. . . . Around about 1986, when Dr. [Paul] Chu at the University of Houston discovered the YBCO [yttrium barium copper oxide] superconductor, I began to think about what would happen if this filter structure was shrunk back to the angstrom scale. Maybe it could reproduce the high-temperature superconductor data."
Starting from that idea and the equations of electromagnetism, Saam developed a theoretical structure for modeling high-critical-temperature superconductors. He has now designed an experiment to test the theory.
Rather than work with actual materials, Saam plans to use lasers to create a standing optical wave that resembles his filter structure. If the wave is produced in a vacuum, a beam of electrons can be passed through it to verify the electron dynamics his theory predicts.
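The standing wave would present the beam with a periodic potential whose spacing is set by the laser rather than by atoms. In textbook form (a generic optical standing wave, not Saam's specific design):

```latex
% Generic optical standing wave (textbook form), not Saam's specific design:
% two counterpropagating beams of wavelength \lambda give an intensity pattern
\[
  I(x) = I_0 \cos^2(kx), \qquad k = \frac{2\pi}{\lambda},
\]
% so the lattice period is \lambda/2. A particle traversing the field sees an
% effective periodic potential proportional to that intensity,
\[
  V(x) = V_0 \cos^2(kx),
\]
% mimicking a crystal lattice whose spacing the experimenter can tune.
```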
Several physics groups have been using optical-standing-wave lattices to create models of TMO electron systems. Saam's proposal is along the same lines, but he believes that atomic systems introduce some unrealistic elements. Essentially, he said, the complexity of current TMO studies is a deterrent to finding the right model. Saam's experiment would strip away much of the complexity of real atomic lattices.
In addition, the inherent scalability of his model could lead to the discovery of room-temperature superconductivity. It would require some work, but it's not a grand-challenge experiment, and Saam may have hit upon the key to success.
Must-do research projects
1. The challenge: Tabletop fusion
Project: Scale up previous sonofusion experiments to higher pressures; demonstrate energy break-even
Principal investigator: Rusi Taleyarkhan, Purdue University
Payoff: An unlimited source of clean energy
2. The challenge: Room-temperature superconductors
Project: Create artificial optical crystal lattices using high-intensity lasers to study electron behavior
Principal investigator: Richard Saam, Proteus Systems Inc.
Payoff: Across-the-board performance and power boost for electronic and electromechanical systems
3. The challenge: Design system for nanotechnology
Project: BioBricks, a DNA parts catalog and design system for biosynthesis
Principal investigator: Open development community coordinated by MIT's Randy Rettberg, BioBricks catalog director
Payoff: Self-reproducing molecular machines; engineered materials with any predetermined physical properties
4. The challenge: The ultimate supercomputer
Project: PlanetLab, an overlay network that transforms the Internet into one large computer
Principal investigator: Open design community administered by the PlanetLab Consortium (www.planet-lab.org)
Payoff: The network is the computer
5. The challenge: Implement a globe-spanning digital network with infinite bandwidth and zero latency
Project: The Global Lambda Integrated Facility, a test bed for developing dense wavelength-division multiplexing optical networks and middleware that would unite high-performance computers and scientific instruments worldwide
Principal investigator: Open development community established by Kees Neggers of Surfnet and Cees de Laat of the University of Amsterdam (www.glif.is)
Payoff: Construction work on the Information Superhighway finally wraps up; could spark a new era of scientific research and technical innovation