It was the defining moment of the Deep Space 1 mission. Ten months into its voyage, the spacecraft, spiraling out toward the orbit of Mars, sped toward the tiny, eccentric asteroid Braille as the chunk of rock plunged up through the plane of the ecliptic. DS 1 locked on to the asteroid, enabled its cameras, its infrared spectrometer and its plasma instrument, switched to high-speed navigation to keep the asteroid in its sights, and flashed by.
Long minutes passed as the craft's computer processed and prepared to transmit what promised to be dramatic fly-by footage tailored for the evening news. The now painfully slow 20-kbit/second link became active, and back to Earth came . . . darkness.
The lack of photos for the folks back home would tarnish a virtually flawless mission that had validated 12 new technologies for deep-space use. Those range from ion-beam propulsion to low-power commercial microelectronics to what may, in the long run, be the most significant of all: autonomous navigation. Autonav is expected to spare NASA massive new investments in deploying future missions. It also played a central, if innocent, role in the missing pictures. An autonav-equipped spacecraft must monitor its own position, compute its own course, translate that data into commands to its thrusters and execute its own trajectory corrections. Autonav is a central technology at this stage in the exploration of space, for two fundamental reasons.
The first is bandwidth. NASA has ditched its doctrine of one high-profile, fail-safe mission at a time in favor of large numbers of longer, less expensive missions flying simultaneously. NASA's deep-space tracking and communications facilities (three large antennas near Goldstone, Calif.; Madrid; and Canberra, Australia) must handle all command and telemetry operations for all the missions. The low data rates involved and the diverse locations of the missions mandate a drastic reduction in the volume of traffic.
The second issue is sheer distance. The Braille fly-by, for example, took place so far from Earth that transmission delays would have made it physically impossible to control the spacecraft from here. So spacecraft have to become more autonomous.
Spacecraft have long been able to locate the sun and orient themselves so that their vital solar panels aim toward it, while their high-gain antennas are aimed toward Earth. But the autonomous-navigation program takes the concept much further. Deep Space 1 was to locate itself in space by photographing asteroids against the background of stars, processing the images to get parallax data from the asteroids and turning that data into a location estimate for the spacecraft. It was then to determine its location relative to its intended orbit, plan any necessary course corrections and execute them.
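The scheme amounts to triangulation: each photographed asteroid, whose heliocentric position is known from its ephemeris, constrains the spacecraft to a single line of sight, and two or more non-parallel lines pin down the craft's location. Below is a minimal two-dimensional sketch; all positions and names are invented for illustration, and the actual flight code solved the full three-dimensional problem from noisy images.

```python
import math

def estimate_position(asteroids, los_dirs):
    # Each observation constrains the spacecraft to the line through the
    # asteroid's known position along the observed line of sight.
    # Least-squares intersection: minimize
    #   sum_i || (I - u_i u_i^T) (x - p_i) ||^2
    A = [[0.0, 0.0], [0.0, 0.0]]
    b = [0.0, 0.0]
    for (px, py), (ux, uy) in zip(asteroids, los_dirs):
        # Projector onto the direction perpendicular to u
        P = [[1.0 - ux * ux, -ux * uy],
             [-ux * uy, 1.0 - uy * uy]]
        for r in range(2):
            A[r][0] += P[r][0]
            A[r][1] += P[r][1]
            b[r] += P[r][0] * px + P[r][1] * py
    # Solve the 2x2 normal equations by Cramer's rule
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return ((b[0] * A[1][1] - b[1] * A[0][1]) / det,
            (A[0][0] * b[1] - A[1][0] * b[0]) / det)

# Toy scenario: a craft at (1.2, 0.5) AU sights two asteroids whose
# heliocentric positions are known from their ephemerides.
truth = (1.2, 0.5)
asteroids = [(2.0, 1.0), (1.0, 3.0)]
los = []
for px, py in asteroids:
    dx, dy = px - truth[0], py - truth[1]
    d = math.hypot(dx, dy)
    los.append((dx / d, dy / d))

print(estimate_position(asteroids, los))  # recovers (1.2, 0.5) to float precision
```

With real images the line-of-sight directions are noisy, so a single intersection cannot be trusted; as described below, DS 1 accumulated many such fixes and filtered them.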
All the software for autonav was to have run on a new main computer. "We had a very aggressive schedule to put a new processor in DS 1," recalled Jet Propulsion Lab avionics project manager Dankai Liu. But JPL "soon realized that the program was just too aggressive and that we didn't really need it. So we used the same computer" used on Pathfinder.
That system was a radiation-hardened version of an IBM Power-architecture RS/6000, running at 20 MHz and armed with only 4 Mbytes of DRAM. The system would host a VxWorks real-time kernel. Code would be written in C and stored in two mostly redundant banks of E2PROM. The hardware platform would run all spacecraft systems, including autonomous navigation.
The software for autonav was organized into object-like modules. The key to those, coded by Bob Werner, member of technical staff for navigation engineering, was the navigation executive. "During the mission, we expressed our intent-where to go, what to photograph and so on-in files," Werner said. "The executive takes those plans out of the files and sends commands to perform actions.
"For instance, the executive would consult the maneuvers file to get the plan for a maneuver," Werner said. "It would then ask the Attitude Control System module to orient the spacecraft in the correct direction, and ask the thruster control modules to start and stop either the ion engine or the hydrazine thrusters, depending on the plan.
"For most of the mission, we used the ion engine. It only delivered enough thrust to accelerate DS 1 at about 20 miles an hour per day, but we could burn it for months at a time if we needed to. When we were getting close to the target and needed greater acceleration, we could burn the hydrazine thrusters."
By very careful coding, the entire navigation package was crammed into about 50,000 lines of code, residing in the 4-Mbyte memory along with a complete picture from the camera.
In operation, autonav needed only two things from Earth, said navigation team chief Ed Reidel: a guide suggesting which asteroids to target and a schedule of when the spacecraft could perform particular functions. The latter data relieved the executive of having to decide when to take pictures, perform calculations and make course corrections, so that it wouldn't overcommit its slender computing resources.
"From the asteroid list, autonav would decide which asteroids were the most advisable to photograph at a particular time," Reidel said. "It would then maneuver the spacecraft to line up for the picture, see the data transferred from the camera into memory, and perform the image processing and location calculations. That data would be sent to the Orbit Determination Link. When invited to run by ground control-sometimes as seldom as once in 45 days-ODL would run the accumulated location data through a dynamic filter to determine the state of the spacecraft."
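As a toy stand-in for that deferred filtering step, one can picture position fixes accumulating in a buffer and being reduced to a state estimate (position and velocity) only when the filter is invited to run. The one-dimensional, noise-free least-squares fit below is purely illustrative, with all names invented; the real ODL filter worked on full orbital dynamics.

```python
fixes = []  # (time_s, position_km) pairs accumulated between ODL runs

def record_fix(t, x):
    fixes.append((t, x))

def run_odl():
    """Least-squares fit of x(t) = x0 + v*t over the accumulated fixes,
    run only when ground control permits (sometimes once in 45 days)."""
    n = len(fixes)
    st = sum(t for t, _ in fixes)
    sx = sum(x for _, x in fixes)
    stt = sum(t * t for t, _ in fixes)
    stx = sum(t * x for t, x in fixes)
    v = (n * stx - st * sx) / (n * stt - st * st)
    x0 = (sx - v * st) / n
    return x0, v

# Noise-free fixes from a craft drifting at 7.5 km/s, starting 1000 km out:
for t in (0, 600, 1200, 1800):
    record_fix(t, 1000.0 + 7.5 * t)

print(run_odl())  # (1000.0, 7.5)
```

The design point survives the simplification: between ODL runs the spacecraft only records; the expensive estimation work happens in a single scheduled batch, which is what kept the load on the 20-MHz processor predictable.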
DS 1 did two types of maneuvers: a discrete trajectory correction, in which the craft would simply use its thrusters to make a point change in course, and a mission burn, in which the ion engine would establish a new orbit. Since ground control only had to send permission commands for course corrections instead of detailed sequences of hardware operations, traffic was reduced enormously.
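A dispatch step like the one Werner describes, orienting the craft and then burning whichever engine the maneuvers file names, might be sketched as follows. Every module, method and field name here is invented for illustration; the actual flight software was written in C and is not reproduced here. The closing arithmetic uses the roughly 20-mph-per-day ion-engine figure quoted above.

```python
class Log:
    """Collects issued commands so the sketch is observable."""
    def __init__(self):
        self.events = []

log = Log()

class AttitudeControl:
    def orient(self, attitude):
        log.events.append(f"orient {attitude}")

class Thruster:
    def __init__(self, name):
        self.name = name
    def burn(self, seconds):
        log.events.append(f"{self.name} burn {seconds}s")

def execute_maneuver(plan, acs, engines):
    """Dispatch one maneuvers-file entry: orient first, then burn the
    engine the plan names (ion for long mission burns, hydrazine for
    discrete trajectory corrections)."""
    acs.orient(plan["attitude"])
    engines[plan["engine"]].burn(plan["burn_seconds"])

# A 30-day mission burn on the ion engine:
plan = {"attitude": (0.0, 1.0, 0.0), "engine": "ion",
        "burn_seconds": 30 * 86400}
execute_maneuver(plan, AttitudeControl(),
                 {"ion": Thruster("ion"), "hydrazine": Thruster("hydrazine")})

# At roughly 20 mph of added speed per day (20 mph = 8.94 m/s),
# a 30-day ion burn accumulates a delta-v of:
delta_v = 20 * 0.44704 * 30  # m/s
print(round(delta_v, 1))     # 268.2
```

Note how little the ground has to send for such a maneuver: one plan entry, rather than the full timed sequence of orient, start and stop commands.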
The process functioned flawlessly, bringing DS 1 right on target to its encounter with Braille. But it was too slow-paced for use during the fly-by, requiring 5 to 10 minutes of background processing to update a location estimate.
"For the final 48 hours before the encounter, we only took pictures of Braille," Reidel said. "Otherwise autonav went through its normal sequence of steps. It made its last course correction about 12 hours out. At a range of 30 km from closest approach, we switched to an accelerated form of orbit estimation. The ground sequence at this point commanded the rate of picture taking, since we now had to satisfy both the navigation requirements and the science requirements of the mission. So autonav was taking what pictures it could get of Braille, and attempting to extract new position data and relay that data to Attitude Control System to keep the spacecraft pointed at the asteroid."
It was then that trouble struck. "Up to this time we had been using the main camera, which is shutterless and has a very sensitive CCD imaging chip," Reidel related. "But we had calculated-and had actually demonstrated on the ground-that if the asteroid was as bright as we thought it would be, it would saturate the CCD cells, and we would get nothing but a smear." At that point, "we had to switch from the CCD camera to the experimental active-pixel CMOS imager."
The active-pixel imager and its optics "turned out to be far less sensitive than anticipated" and the body of the asteroid "quite a bit darker." As a result, he said, "The CMOS camera never locked on."
Good data from Deep Space 1's IR spectrometer and plasma instrument indicate that the craft did remain pretty much on target through the fly-by, again validating the autonav system. But the chance for dramatic exposure on the evening news was lost.
NASA plans an extension of the mission that will carry DS 1 deeper into the solar system for encounters with two comets. That buys JPL more time to tinker with the CMOS camera and exercise autonav. "Autonav will be absolutely necessary for future missions, beginning with Deep Impact," said Liu. "It is fully validated now, and ready to go."