ARLINGTON, Va. A program that may push cognitive technology to a new level is being launched by the Department of Defense. The DOD, a longtime supporter and user of artificial-intelligence systems, aims to build what it is calling an "enduring personalized cognitive assistant," or Epca.
The system will be able to "reason, use represented knowledge, learn from experience, accumulate knowledge, explain itself, accept direction, be aware of its own behavior and capabilities as well as respond in a robust manner to surprises," according to a Defense Advanced Research Projects Agency (Darpa) Broad Agency Announcement. The BAA requests research proposals for Epca designs by Dec. 19.
Darpa expects to begin funding the new cognitive computer projects as early as the first quarter of 2003. Besides Epca, Darpa's Information Processing Technology Office will also consider funding any project fitting its characterization of a "new class of cognitive system."
"What we are really after with the enduring personalized cognitive assistant is to get people working on a multiyear path to bring all the pieces together," said director Ronald Brachman, who will co-head the initiative along with deputy director Zachary Lemnios. "It's not as if we need the world's best machine-learning algorithm. We just need something that is adequate, put together with a knowledge representation and perfected enough to see whether, when you unify all these elements, you can actually make it work."
Darpa has backed similar initiatives before. The first was for artificial intelligence, and the next for neural networks. The agency claims that both of those projects were successful, even though they did not achieve their stated goals.
"People say that neural networks and AI were not successful because we don't have humanoid robots walking around, but they don't realize that there are hundreds of applications of this technology that we use every day without thinking," Brachman said. "Machine-learning techniques are now built into a variety of commercial systems, finding credit card fraud, evaluating mortgage applications, detecting illegal telephone calls and recognizing speech." He maintained that "AI planning algorithms were successful in Desert Storm and are being used every day by the military in complicated logistic situations."
A prime reason for looking at AI designs again is the rapidly increasing speed of computer technology. Brachman cited IBM Corp.'s Deep Blue project, which bested world chess champion Garry Kasparov in 1997.
"The speed of computers enabled IBM's Deep Blue to beat a human grand champion; they didn't need to discover new algorithms, but just needed the raw crunching power to get over this last hump," Brachman said.
Brachman also cited major advances in understanding the workings of the human brain as further evidence that now is the right time to move to a new level of AI capability. He is reluctant, however, to set overly ambitious goals, like the creation of an artificial brain. Instead, Brachman opts for setting what he thinks are achievable goals that may lead to breakthroughs on a longer time scale.
"We have no illusions that after four years we will have solved artificial intelligence, or that we'll have a humanoid robot walking around our offices. However, we do believe that the strength of the community, under the right leadership, has a pretty good chance of being able to prove whether or not it's doable, and maybe five or six years after we are gone, if people stay the course, we will actually have something mind-blowingly significant," Brachman said.
The specifications for Darpa's enduring personalized cognitive assistant illustrate Brachman's commitment to achievable goals. "Enduring" means that the assistant remembers what it has already learned, and "personalized" means that it applies what it learns to specific problems. "We're not looking for superhuman behavior, like reading minds," he said, "but just commonsense reasoning that one would expect even from a child.
"The main focus in the first go-around will be the office. We are not going to try to build the end-all, be-all artificial human being; that is absolutely not what we are after. But human secretaries . . . have certain capabilities that do seem within the realm of possibility, and we don't know if we will be successful, so we're right there on the hairy edge," Brachman said.
Brachman envisions an enduring personalized cognitive assistant that could learn to help around the office by observing and interacting with office workers. Generally, human office assistants learn how an office operates over time, by interacting with others.
"Sometimes an assistant will merely watch you and draw conclusions. Sometimes you have to tell a new person, 'Please don't do it this way' or 'From now on when I say X, you do Y,' so it's a combination of learning by example and by being guided. To my knowledge, as simple as it sounds, that has never been tried before," Brachman said.
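Brachman's combination of learning by watching and learning by explicit guidance can be caricatured in a few lines of code. Everything below (the class and method names, the majority-vote rule, the sample triggers) is illustrative only and not part of the Darpa announcement:

```python
from collections import Counter

class OfficeAssistant:
    """Toy sketch: learns trigger -> action mappings two ways --
    by observation (majority vote over what it has seen) and by
    explicit instruction, with instruction overriding observation."""

    def __init__(self):
        self.observed = {}   # trigger -> Counter of actions seen
        self.told = {}       # trigger -> action given as an explicit rule

    def observe(self, trigger, action):
        # "Sometimes an assistant will merely watch you and draw conclusions."
        self.observed.setdefault(trigger, Counter())[action] += 1

    def instruct(self, trigger, action):
        # "From now on when I say X, you do Y" -- explicit guidance wins.
        self.told[trigger] = action

    def act(self, trigger):
        if trigger in self.told:
            return self.told[trigger]
        if trigger in self.observed:
            return self.observed[trigger].most_common(1)[0][0]
        return None  # no basis to act; a real system would ask for guidance

assistant = OfficeAssistant()
assistant.observe("mail arrives", "file in inbox")
assistant.observe("mail arrives", "file in inbox")
assistant.observe("mail arrives", "shred")
assistant.instruct("phone rings", "take a message")
print(assistant.act("mail arrives"))  # majority of observations wins
print(assistant.act("phone rings"))   # explicit rule wins
```

The point of the sketch is the precedence ordering: being told trumps having watched, which matches the "Please don't do it this way" correction Brachman describes.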
Beyond the Epca program, the BAA describes a vast area of research that Darpa will consider funding. The agency will keep accepting proposals for projects until June 6, 2003.
"We have not committed to a specific technology to make this work. If it takes somebody building an artificial analog-based computational device based on an understanding of the brain (let's say it's different from a neural net, something totally new that has nothing to do with AI or symbolic processing), then we want to hear about it. We don't want to discourage people who think along those lines," Brachman said.
Of the very broad call for proposals, Brachman noted that Darpa is particularly interested in "systems" aspects because, as he contends, complete architectures are the rarest thing to emerge from an academic environment. The reason: Researchers have to "publish or perish" and almost never have the time to integrate the various bits and pieces described in their papers into real systems.
For architecture, the BAA specifically cites three "cognitive" aspects Darpa hopes to see fleshed out in proposals: reactive, deliberative and reflective processes. Reactive processes are quick, direct responses to real-time inputs. Deliberative processes include planning and other structured reasoning tasks, including communications tasks that "deal thoughtfully with natural language." A system's reflective processes, in the BAA's view, take as their input the observations that the system makes of itself, a rudimentary form of self-awareness.
The other aspects of cognition that will likely spawn both hardware and software submodules, according to the BAA, include both long-term and short-term memories, perception, representation, reasoning, communications and actuation. By integrating those modules with a knowledge base under the control of reactive, deliberative and reflective processes, the BAA hopes that systems can be built that have "common sense." Such "cognitive systems might be best characterized as systems that know what they are doing."
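The layering the BAA describes, with reactive, deliberative and reflective processes sharing a knowledge base and a record of the system's own behavior, can be sketched as a toy program. All names and the trivial stand-in logic here are assumptions for illustration, not anything specified in the announcement:

```python
class CognitiveAgent:
    """Hypothetical sketch of the BAA's three-process layering:
    reactive (fast reflexes), deliberative (structured planning),
    and reflective (the system observing its own behavior)."""

    def __init__(self):
        self.reflexes = {"obstacle": "stop"}  # reactive rules
        self.knowledge = {}                   # shared knowledge base
        self.trace = []                       # record of the agent's own decisions

    def reactive(self, percept):
        # Quick, direct response to a real-time input.
        return self.reflexes.get(percept)

    def deliberative(self, goal):
        # A trivial plan lookup stands in for real planning.
        return self.knowledge.get(goal, ["ask-for-guidance"])

    def step(self, percept, goal):
        # Reactive layer answers first; otherwise deliberate toward the goal.
        action = self.reactive(percept) or self.deliberative(goal)[0]
        self.trace.append((percept, goal, action))
        return action

    def reflective(self):
        # Takes the system's observations of itself as input -- here,
        # just summarizing how often the reflex layer fired.
        reflex_steps = sum(1 for p, g, a in self.trace if p in self.reflexes)
        return {"steps": len(self.trace), "reflexive": reflex_steps}

agent = CognitiveAgent()
agent.knowledge["deliver-memo"] = ["find-recipient", "hand-over"]
agent.step("obstacle", "deliver-memo")  # reactive layer answers
agent.step("clear", "deliver-memo")     # deliberative layer answers
print(agent.reflective())
```

Even in this caricature, the reflective layer is nothing more than the agent reading its own trace, which is roughly the "systems that know what they are doing" framing the BAA uses.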
Although a lot of research has gone into uncovering the mechanisms of memory, sight and hearing, and into specific cognitive operations like speech recognition, no one is close to completing a blueprint that would show how to wire up all those computing elements. The BAA suggests that this is due to the lack of critical research that "must be done to determine how to take advantage of huge numbers of computing elements to produce intelligent processing of the sort that we would call cognitive."
The BAA asks the question, "Can the human and animal perceptual systems give us insights into how to find important low-frequency events in huge amounts of data?" And it cautions that working "smart" is the goal, not simply copying nature. Thus, instead of wiring up transistors that mimic real neural networks in the brain and hoping for cognition to emerge, the BAA calls for researchers to consider new architectures that can "harness raw computing power in the powerful ways that brains do . . . the real power of human information processing seems to come from higher-level capabilities that use abstraction, mental simulation and planning, hypothetical reasoning, powerful language-understanding and generation capabilities, and self-awareness."
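The BAA's question about finding important low-frequency events in huge amounts of data can be illustrated with the simplest possible baseline, a relative-frequency filter. The function name, threshold and sample data below are invented for the example; real perceptual systems would of course need far more than counting:

```python
from collections import Counter

def rare_events(stream, threshold=0.01):
    """Flag items whose relative frequency falls below a threshold --
    a crude stand-in for spotting low-frequency events in a data stream."""
    counts = Counter(stream)
    total = sum(counts.values())
    return sorted(event for event, c in counts.items() if c / total < threshold)

# 999 routine readings and one anomaly
data = ["nominal"] * 999 + ["alarm"]
print(rare_events(data))  # ['alarm']
```

The gap the BAA points at is everything this baseline lacks: knowing which rare events are *important*, which is where the abstraction, mental simulation and hypothetical reasoning it lists would have to come in.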
The BAA also suggests several intelligent interfaces that would adapt to users, rather than requiring users to adapt to them. For instance, in a software development environment based on natural language, a user could suggest new capabilities in a dialogue, and the system would then respond with what would have to change in its internal state to accommodate them. "In a debugging context the system could directly help us determine where and why its behavior strayed from the desired outcome," Brachman said.