The Defense Advanced Research Projects Agency in recent years has poured hundreds of millions into every aspect of "big" artificial intelligence: expert systems, neural networks, genetic algorithms, evolutionary programming, fractal geometry, chaos theory, cellular automata, artificial life. And that just scratches the surface on the software side; legions of cognitive hardware architectures have also been beneficiaries of Darpa largesse.
But thus far the far-flung investment has yielded little tangible return in solving the big-AI problem: getting machines to think like humans, learning from experience and applying logic and common sense to solve real-world problems. Given laymen's expectations of robots as fully cognitively functional assistants, that lack of quantitative progress has been a thorn in the agency's side.
Last year, Darpa began ratcheting up its cognitive-computing efforts for the 21st century, making the discipline a "strategic thrust" for its Information Processing Technology Office and charging IPTO with the heady task of chipping away at the big-AI problem. A case in point is Darpa's Perceptive Assistant that Learns (PAL) program, for which the agency this month granted two awards.
Darpa awarded $7 million to Carnegie Mellon University's School of Computer Science (Pittsburgh) for work on a PAL dubbed Radar, which formally stands for Reflective Agents with Distributed Adaptive Reasoning but is also a winking reference to the administrative assistant of the same name in the TV series M*A*S*H. As envisioned, Radar will read and manage e-mail messages, autonomously plan meetings, optimally allocate scarce office resources (such as meeting rooms), maintain its own Web site and automatically write reports.
SRI International (Menlo Park, Calif.), formerly the Stanford Research Institute, meanwhile received $22 million in Darpa funding for a PAL named Calo, a nod to the Latin word calonis, or "soldier's assistant," that formally stands for Cognitive Agent that Learns and Observes. Calo will handle a broader range of interrelated decision-support tasks, concentrating especially on those tasks that current military AI systems perform poorly.
Chess-playing programs like IBM's Deep Blue have shown the world that today's high-speed computers can accurately imitate human functions, noted IPTO director Ronald Brachman. Now Darpa, through PAL and other programs, will look to foster what IPTO describes as "systems that know what they're doing."
From a broader perspective, Darpa plans to attack the big-AI problem by providing its own quantitative measures of success. Part of that process will involve approaching AI in smaller chunks, with more task-specific platforms that prove AI's utility for real-world tasks.
"We still want to do the big thing (Darpa has a history of doing big things, like the Internet), so we don't want to scale back our hopes for the scale of impact we might have, because for national security and the Department of Defense we need to be way out ahead," said Brachman. "But we also need to be able to measure our progress and show its true value and impact and have it be scientifically credible."
A year ago, Brachman's IPTO issued a broad agency announcement (BAA) of its Cognitive Information Processing Technology (CIPT) initiative. The BAA called for a category of system that could "reason, use represented knowledge, learn from experience, accumulate knowledge, explain itself, accept direction and be aware of its own behavior and capabilities, as well as respond in a robust manner to surprises." Barraged with proposals, Darpa recently extended the deadline to May 31, 2004.
"We still like the way the BAA sounds, even a year later, so we've extended it for another year," said Brachman. "We have received literally hundreds of abstracts . . . some that were exactly what we expected, because the researcher was familiar with our areas, and some that are eye-opening and very exciting and are leading us in new directions. And then there are some that are so far 'out there' we don't know what to make of them."
Not all the potential contractors are happy about Darpa's attempt to quell the big-AI problem with performance metrics. "There's a class of people who are on board with these evaluation procedures, because they know it will help the field remain credible, and there's a class of people, to be perfectly honest, who find this an assault to their research sensibilities and don't ever want to be measured," said Brachman.
But he expressed confidence that there are enough of the pro-evaluation researchers to support all of Darpa's efforts, noting that the bigger problem could be to follow through on the daunting task of crafting metrics for quantifying such cognitive functions as learning.
The result of the phase-zero effort will be a set of evaluation and testing procedures that have been custom-tuned to each award granted, Brachman promised. For instance, software that has an element of learning might build in tags that enable it to report to Darpa what parts of its result came from handcrafted code and built-in knowledge, and what parts were learned by the software while it was operating.
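Such provenance tagging might look something like the following minimal sketch. The class names and reporting format are hypothetical illustrations, not Darpa's actual evaluation interface; the point is only that each reported conclusion carries a tag saying whether it came from handcrafted code or from run-time learning.

```python
# Illustrative sketch of provenance tagging: every conclusion a cognitive
# system reports is labeled "handcrafted" (built-in knowledge) or "learned"
# (acquired while operating), so an evaluator can see the split at a glance.
# All names here are invented for illustration.

from collections import Counter
from dataclasses import dataclass


@dataclass
class Conclusion:
    fact: str
    provenance: str  # "handcrafted" or "learned"


def provenance_report(conclusions):
    """Return the fraction of reported conclusions from each source."""
    counts = Counter(c.provenance for c in conclusions)
    total = sum(counts.values())
    return {source: n / total for source, n in counts.items()}


results = [
    Conclusion("meeting room B is free at 3 p.m.", "handcrafted"),
    Conclusion("user prefers morning meetings", "learned"),
    Conclusion("weekly report is due Friday", "learned"),
]
print(provenance_report(results))  # e.g. two-thirds of the output was learned
```

An evaluator could then track how the "learned" fraction grows over a program phase, which is one plausible way to quantify learning.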
Brachman's secret weapon, however, will not be self-endorsed evaluation metrics designed to counter critics, but a new generation of "mini AI" applications he hopes will prove so compelling that even the critics will want to use them.
"Suppose you could build a system that could absorb all of your to-do-list notes, say by scribbling on a PDA. But instead of just sitting there, the notes would be automatically integrated into a knowledge base that had a representation of the kinds of things that we do in the office every day, such as have meetings and travel and write proposals and read documents and send e-mail. A knowledge base could relate our separate to-do-list entries, even if they had no words in common," said Brachman.
Such a "personal knowledge pad" is one of a planned series of CIPT "silver bullets" that Darpa hopes will shoot down its critics. By year's end, Brachman promises two more narrowly focused calls for cognitive-computing mini-AI apps.
The PAL is an order of magnitude more complex than the knowledge pad, since the PAL attempts to fuse separate modalities, including voice mail, e-mail and handwriting on a PDA. The other mini-AI apps yet to come will likely be even more narrowly focused than the knowledge pad, managing e-mail only, for example.
According to Brachman, commercially available to-do-list programs "don't even come close" to what he has in mind for the knowledge pad, because "they simply don't understand anything about the task." Brachman calls this missing ingredient domain knowledge. He cited an example application that would associate a to-do-list note saying "Flight 2077 to San Jose on Aug. 2" with a separate entry saying "Call United"; much as a human secretary would do, the knowledge pad would infer, from its domain knowledge, that both notes refer to the same flight.
Thus the knowledge pad would not just be a passive relational database but would initiate secretarial actions on its own to fulfill the imperatives that its knowledge base infers from associating its entries, such as reminding its user that a call needs to be made to confirm the time of a pending flight.
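A toy sketch of this kind of domain-knowledge association might look like the following. The parsing rules and the airline lookup table are invented for illustration (the real knowledge pad remains a research concept), but they show how two entries with no words in common can be linked through a small store of domain facts.

```python
# Toy sketch of knowledge-pad inference: a flight-number entry and a
# "Call <airline>" entry share no words, but a domain-knowledge table
# (which airline operates which flight) links them and triggers a reminder.
# The table and parsing rules below are hypothetical.

import re

# Hypothetical domain knowledge: flight numbers mapped to carriers.
AIRLINE_OF_FLIGHT = {"2077": "United"}


def parse_entry(text):
    """Classify a to-do entry as a flight, a call, or something else."""
    m = re.search(r"Flight (\d+)", text)
    if m:
        return ("flight", m.group(1))
    m = re.search(r"Call (\w+)", text)
    if m:
        return ("call", m.group(1))
    return ("other", None)


def link_entries(entries):
    """Pair 'call airline' entries with flights that airline operates."""
    parsed = [parse_entry(e) for e in entries]
    flights = [num for kind, num in parsed if kind == "flight"]
    reminders = []
    for kind, who in parsed:
        if kind != "call":
            continue
        for num in flights:
            if AIRLINE_OF_FLIGHT.get(num) == who:
                reminders.append(
                    f"Confirm time of Flight {num}: related entry 'Call {who}'")
    return reminders


notes = ["Flight 2077 to San Jose on Aug. 2", "Call United"]
print(link_entries(notes))  # one reminder linking the two entries
```

Regular-expression matching stands in here for the language understanding and reasoning Brachman describes; the real research problem is doing this without hand-coded patterns.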
"You can start to realize how [the mini-AI app] could prioritize items and discover how to know when they are redundant, how to know when they are outdated and can be thrown away, how to know when to ask the user whether they have forgotten an item or simply failed to report its completion," said Brachman.
Darpa is betting that the knowledge pad, by automating solutions to everyday problems, will prove attractive to PDA makers. If the concept spreads, as the Internet did, to become as ubiquitous as the Web, today's AI critics could find themselves hooked on the mini-AI apps.
On the other hand, small-concept AI could backfire. This isn't, after all, Spielberg's AI.
"The knowledge pad is a microcosm of the whole AI problem, because I suspect that people will look at it and say, 'What's the big deal?' " Brachman said.
Citing his own flight-information inference example, he said, "Of course everybody knows that plane information goes together; it doesn't take a rocket scientist to understand that. We may have to respond that, 'Yes, it isn't rocket science; but there is actual inference going on, and understanding of language to a degree, and reasoning and anticipation and a bunch of things that are really at the heart of AI, but in a very limited setting so that we hope it is doable.' "
That sounds like an apology, but this time it will be accompanied by graphs, bar charts, figures of merit and every other type of "metric" ammunition the military-funded agency can muster.
All that's left to be seen is whether the assault will wear down, and perhaps win over, AI's army of critics.
See related chart