The Grand Challenge was conceived in 2002 by Darpa director Anthony Tether. The first race, in March 2004, offered a $1 million purse. When no one finished, the purse was upped to $2 million. So far, Darpa has reported spending about $20 million to organize and promote the Grand Challenge program.
The agency has sponsored autonomous-vehicle research for more than a decade in hopes of meeting a Congressional mandate that one-third of all military vehicles be autonomous by 2015. Progress, however, has been so slow that Darpa decided to enlist outside help. Hence, the Grand Challenge.
"All of the winners achieved something that the naysayers had said was impossible just a few years ago," Intel's Bradski pointed out.
"Previously, Darpa spent at least a billion dollars on autonomous-vehicle development," he said, "but in less than three years the Grand Challenge has surpassed all previous efforts. The Grand Challenge encouraged researchers to take the big risk, because it was OK to fail. As a consequence, we have made incredible progress. I predict that not only military vehicles, but also consumer vehicles, will start incorporating some of our discoveries within the next three to five years."
Instead of inventing new sensors, the winners of the Grand Challenge incorporated the most reliable of existing technologies to tackle the autonomous-vehicle problem. Both of the winning teams credit their accomplishment to their system integration efforts and the artificial intelligence of their software, since their computers, sensors and vehicles were all off-the-shelf.
"Almost everybody used same German-made lidars [laser detection and ranging system]," Bradski said. "It scans horizontally, so you have a line of distance information about what is in front of you. CMU also had a gimbal-mounted lidar in a dome on top of the car that was shock-mounted and could provide 3-D range information by scanning up and down as well as horizontally."
In the end, simpler was better. The reliability of the off-the-shelf hardware and the ingenuity of the software that interpreted the data streams from the sensors paid off for the winners. In addition to lidars, some vehicles also had radar units, but Stanley used just five forward-facing lidars and a single video camera on the roof.
"We used a commercial vehicle because of their inherent reliability," said Bradski. "Our job was not to invent the car, but to create a robot to drive it. We didn't win because of some fantastic new sensor technology, but because of the reliability of our software."
The only nonstock part of Stanley was its drive-by-wire system, which enabled the computer to control the steering, brakes and accelerator. Stanley's drive-by-wire system was already under development as an experiment at Volkswagen, which sent teams of engineers to help the Stanford team set it up. Eventually, VW wants to use the system in commercial vehicles that will be able to avoid collisions automatically. For its part, Carnegie Mellon used a drive-by-wire system designed by Caterpillar, one of the team's sponsors.
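Conceptually, a drive-by-wire layer is just an interface that lets the planning software command the actuators a human driver normally operates. The sketch below is a hypothetical outline of such an interface, not VW's or Caterpillar's actual design; the command fields and the clamping behavior are assumptions for illustration.

```python
from dataclasses import dataclass

@dataclass
class DriveCommand:
    steering: float  # normalized wheel angle, -1.0 (full left) to 1.0 (full right)
    throttle: float  # 0.0 (off) to 1.0 (full)
    brake: float     # 0.0 (off) to 1.0 (full)

class DriveByWire:
    """Hypothetical actuator interface: the autonomy software writes commands,
    and this layer clamps them to safe ranges before driving the hardware."""

    def send(self, cmd: DriveCommand) -> None:
        cmd.steering = max(-1.0, min(1.0, cmd.steering))
        cmd.throttle = max(0.0, min(1.0, cmd.throttle))
        cmd.brake = max(0.0, min(1.0, cmd.brake))
        # ...hand the clamped command off to the actuator controllers here...
```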
Under the hood
The Stanford University team divided its artificially intelligent software into three major subsystems: a data acquisition module, a planning module and a "world model."
The data acquisition module also included obstacle verification algorithms that made sure the data passed up to the planner was valid. The planner connected the dots between the GPS coordinates and adjusted for obstacles, while the world model kept the entire system operating reliably.
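The article does not detail Stanford's verification code, but one common way to validate obstacle data before handing it to a planner is to require that a detection persist across several consecutive scans. The class below is a minimal sketch of that idea; the grid-cell representation and the three-scan threshold are assumptions for illustration, not Stanford's actual algorithm.

```python
from collections import defaultdict

class ObstacleVerifier:
    """Illustrative verification filter: an obstacle cell is passed to the
    planner only after it has been seen in several consecutive scans, which
    suppresses spurious returns from dust, glare or sensor noise."""

    def __init__(self, required_hits=3):
        self.required_hits = required_hits
        self.hits = defaultdict(int)   # grid cell -> consecutive detections

    def update(self, detected_cells):
        detected = set(detected_cells)
        # Forget cells that were not seen in this scan.
        for cell in list(self.hits):
            if cell not in detected:
                del self.hits[cell]
        for cell in detected:
            self.hits[cell] += 1
        # Only well-confirmed cells count as real obstacles.
        return {c for c, n in self.hits.items() if n >= self.required_hits}
```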
"Our world model contains all our knowledge about driving, such as how sharp a turn you can make without rolling over," said Bradski. "The world model ran entirely on probabilities it made no hard decisions. The planner laid down paths to achieve our goals, which was by fitting curves between way points, plus how to get around obstacles. Then kinematics scored the possible paths and chose the safest route. All that was done 10 times a second [for a latency of 1/10 second]."
The Carnegie Mellon team's software was more complicated, resulting in a latency closer to 1/4 second. "CMU spent more time on mapping, but we made no adjustments to our map, instead letting the planner decide the safest route from real-time sensor data," said Bradski.