Instead, the iCub's design has been based on the so-called embodied approach.
The robot learns skills through exploration and interaction with its environment and with other agents, particularly humans. It learns about the world through its experience of what its body can do and how its sensors help it accomplish specific tasks or goals.
To learn in this way, the iCub's cognitive architecture is based on what psychology and neurology have uncovered about the innate abilities of human newborns and their subsequent development. A "modulation circuit" enhances and refines existing skills by combining a memory of past sensor input and actions with the motivation to achieve or optimize a particular task.
A separate circuit lets the robot "rehearse" possible sets of actions and, based on its associative memory, see the expected results. Based on how successful those plans turn out to be in simulation, they can be kept, thrown out or amended. The expected outcomes can be compared with those that occur when actions are actually performed.
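The rehearse-then-compare idea can be sketched in a few lines. This is a toy illustration only; every name below is hypothetical, and the real iCub architecture is far richer than a lookup table:

```python
# Minimal sketch of the "rehearsal" circuit described above.
# An associative memory maps (state, action) pairs to expected outcomes;
# candidate plans are scored in simulation before any action is executed.
# All names are illustrative, not part of the real iCub software.

def rehearse(plans, memory, state, goal):
    """Return the candidate plan whose predicted end state best matches the goal."""
    best_plan, best_score = None, float("-inf")
    for plan in plans:
        predicted = state
        for action in plan:
            # Recall the expected result of this action; default to "no change"
            predicted = memory.get((predicted, action), predicted)
        score = -abs(goal - predicted)  # closer prediction -> higher score
        if score > best_score:
            best_plan, best_score = plan, score
    return best_plan

# Toy world: states and the goal are numbers; memory records action effects.
memory = {(0, "+1"): 1, (1, "+1"): 2, (0, "+2"): 2, (2, "+1"): 3}
plan = rehearse([["+1", "+1"], ["+2", "+1"]], memory, state=0, goal=3)
```

In the same spirit as the article's description, the chosen plan could then be executed and its actual outcome written back into memory wherever it diverges from the prediction.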
For the robot to learn like a human, it must have the functionality of a human. To that end, its designers say the iCub will be able to crawl on all fours and to sit up. Its hands are said to be able to perform dexterous manipulation, and its head and eyes are fully articulated. It has visual, vestibular, auditory and haptic sensory capabilities. In short, the iCub's strength, range of motion and senses are as close to those of a toddler as possible given today's technology.
The middleware for the iCub is a software architecture called Yarp ("yet another robot platform"), which supports modularity. New devices, functions and communication channels can be added by defining an interface that controls the device at a low level while exposing standardized high-level commands to bring it into play.
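The interface idea can be sketched generically. The classes and methods below are hypothetical and are not Yarp's actual API; they only illustrate how a standardized contract lets high-level code stay ignorant of low-level device details:

```python
# Generic sketch of the modularity idea behind middleware like Yarp:
# each device hides its low-level control behind a common interface,
# so high-level code issues the same commands regardless of hardware.
# Names here are illustrative only, not Yarp's real API.

from abc import ABC, abstractmethod

class MotorInterface(ABC):
    """Standardized high-level contract every motor driver must satisfy."""
    @abstractmethod
    def move_to(self, angle_deg: float) -> float:
        """Command the joint to an angle; return the angle actually reached."""

class SimulatedMotor(MotorInterface):
    """Low-level details (here, a trivial simulation) stay behind the interface."""
    def __init__(self):
        self.angle = 0.0
    def move_to(self, angle_deg: float) -> float:
        self.angle = max(-90.0, min(90.0, angle_deg))  # respect joint limits
        return self.angle

def nod(head_motor: MotorInterface) -> list[float]:
    """High-level behavior written once, reusable with any conforming driver."""
    return [head_motor.move_to(a) for a in (20.0, -20.0, 0.0)]

angles = nod(SimulatedMotor())
```

Swapping `SimulatedMotor` for a driver that talks to real hardware would leave `nod` untouched, which is the kind of module reuse the iCub team is after.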
According to the iCub team, this will encourage the long-term reuse of successful modules and the replacement of those that are less successful or that have been superseded, producing a kind of robotic "gene" that could live for many generations of bots.
The learning curve
However sophisticated, the iCub is not intended to be a finished product but is designed as a platform for further research. To that end, the RobotCub Consortium has sent out calls for research proposals, with robots going to the teams that submit the best proposals. In the first wave, the top three proposals came from Imperial College London, Pierre and Marie Curie University (Paris) and Lumière University (Lyon, France).
The Imperial proposal was submitted by Murray Shanahan of the computer science department, upon whose work some of the iCub cognitive architecture is based, and Yiannis Demiris of the department of electrical and electronic engineering. According to Demiris, the team intends to design and implement a cognitive architecture that will let the robot explore and learn to manipulate objects in its environment. The scientists favor models of the "self" and the environment that enable mental rehearsal of possible actions.
In particular, Demiris will be concerned with the learning. "I would like the iCub to have the capacity to [discover] the effects of its actions on objects," he said. "This learning by experimentation treats the robot as a little scientist, capable of forming hypotheses about the world and experimenting to confirm or disprove them." He will also endow the iCub with the ability to learn by imitation.
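The "little scientist" loop Demiris describes can be caricatured in code. This is purely illustrative, and not the team's actual architecture: a hypothesis predicts the effect of an action, and observed trials either support or disprove it:

```python
# Toy sketch of learning by experimentation: the robot forms a hypothesis
# about what its actions do to objects, then tests it against observations.
# Entirely illustrative; not the iCub team's actual implementation.

def experiment(hypothesis, trials):
    """Keep a hypothesis only if every observed trial matches its prediction."""
    return all(hypothesis(action) == observed for action, observed in trials)

# Hypothesis: pushing any object makes it slide. The second trial shows a
# tall object toppling instead, so the hypothesis is disproved.
trials = [("push_ball", "slides"), ("push_tower", "topples")]
confirmed = experiment(lambda action: "slides", trials)
```

A disproved hypothesis would be refined (for instance, conditioned on object shape) and retested, mirroring the confirm-or-disprove cycle described in the quote.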
In Paris, Olivier Sigaud at the Institute for Intelligent Systems and Robotics said his project is based on a simple scenario. "The iCub robot will sit at a table with a few objects within reach, and interact with these objects according to its own drives," he said. "Then a human user will modify the robot's behavior so that it meets simple constraints. The user can do this both by demonstrating the expected behaviors and cheering or rewarding the robot, depending on its actions."
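One minimal way to picture the reward half of that scenario is a preference update nudged by human feedback. This is a toy sketch under assumed names, not the project's actual learning algorithm:

```python
# Toy sketch of the scenario described above: the robot acts on its own
# drives, and a human's rewards gradually reshape which actions it prefers.
# Purely illustrative; not the actual algorithm used in the project.

def update_preferences(prefs, action, reward, lr=0.5):
    """Shift the robot's preference for an action toward the human's feedback."""
    prefs = dict(prefs)  # leave the caller's table unchanged
    prefs[action] += lr * (reward - prefs[action])
    return prefs

prefs = {"grasp": 0.0, "push": 0.0}               # initially indifferent
prefs = update_preferences(prefs, "grasp", 1.0)   # human cheers a grasp
prefs = update_preferences(prefs, "push", -1.0)   # human discourages a push
preferred = max(prefs, key=prefs.get)             # "grasp" now wins
```

Demonstration would enter the same picture as a second feedback channel, directly suggesting actions rather than scoring them after the fact.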
Peter Ford Dominey of the Laboratory for the Study of Machine Cognition in Lyon is also concerned with robot-human interaction, in the form of cooperation. "We have demonstrated that robots can learn to perform novel, non-prespecified, shared, cooperative tasks, alternating their actions with the human's, but the robot does not 'understand' the real goal of the activity," he said. "In comparison, we know that humans possess a profound and likely innate propensity to understand and communicate goal-related intentions." By 18 months, he said, "children understand the intentional goals of adults demonstrating cooperative games."
Through the use of what Dominey called "situated simulations," the Lyon team's robot will "acquire knowledge about the results of actions that will begin to provide a more profound sense of 'meaning,' including the ability to understand the intentions of others," he said.
All of the researchers agreed that the open-source path is important. "Commercial software requires and imposes a wall. One has to stop, worry about how and how much to pay, [then determine], 'Does it run on a second computer? Can I have a student use this?' etc.," Sigaud said. By contrast, "The iCub open-source simulator can be downloaded quickly and freely, and anyone from a clever high school student to a seasoned researcher can begin to use it immediately, to share their results and to benefit from the collective effort of the surrounding community."
Three other successful proposals to the consortium came from the Technical University of Munich (Germany), Pompeu Fabra University (Barcelona, Spain) and the Middle East Technical University (Ankara, Turkey).
Sunny Bains is a London-based scientist and freelance technology journalist.