All robots have brains of some sort. These can run the gamut from workstation-level computers to analog electronic circuits designed to give the robot basic bug-brained instincts. Here's a quick rundown of the types of controllers you're likely to find on a robot:
One of the most common forms of robot control is the microcontroller. A microcontroller is basically a computer on a chip that's been designed for embedded computing applications (like robot brains!). It typically has a CPU (central processing unit), erasable nonvolatile memory (usually EEPROM or flash) for storing control programs, some random-access memory (RAM) for storing temporary data, a clock for setting the speed at which the CPU barks its orders, and input/output (I/O) pins for getting data into and out of the chip.
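To make that list of parts concrete, here's a minimal sketch of the kind of program a microcontroller might run, written in MicroPython (a version of Python that runs on many hobbyist microcontroller boards). The pin numbers are placeholders, not any particular board's wiring:

from machine import Pin
import time

bumper = Pin(14, Pin.IN, Pin.PULL_UP)  # bump switch on input pin 14
led = Pin(2, Pin.OUT)                  # indicator LED on output pin 2

while True:
    # The switch pulls its pin low when pressed (active-low wiring).
    if bumper.value() == 0:
        led.value(1)   # light the LED while the bumper is pressed
    else:
        led.value(0)
    time.sleep(0.05)   # poll the sensor about 20 times a second

The program itself lives in the chip's erasable memory, the sensor reading passes through RAM, and the clock paces each trip around the loop.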
Microcontroller chips usually come mounted on microcontroller boards, often called modules. The boards contain support electronics and sockets for connecting wires to input and output devices (sensors, motors, other controllers, and so forth) and to a power source. In most cases, the control programs that run the robot are written on another computer and then downloaded into the microcontroller's memory via a standard computer cable. Some robot controllers, like the LEGO MINDSTORMS RCX computer brick, use an infrared transmitter (on the PC end) and a receiver (on the microcontroller end) to load programs into the robot.
A lot of today's robots keep most of their brains someplace else, accessing programs stored on a standalone computer (or network of computers) via a radio link. Before wireless data communication became widely available, robots of this type had to be tethered (connected via trunks of cables) to a computer, which greatly limited their range. The advantages of off-board computing include more computing muscle, less weight on the bot, and lower power requirements. The disadvantages are that the robot can roam only as far as the radio link reaches and that it's helpless without its computer.
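As a rough illustration of the off-board arrangement, here's a sketch of the robot's end of the link, written in ordinary Python using its standard socket library. The workstation address and the command names are made up for the example; a real robot would replace the print calls with motor control code:

import socket

BRAIN_HOST = "192.168.1.50"  # hypothetical off-board computer
BRAIN_PORT = 9000

with socket.create_connection((BRAIN_HOST, BRAIN_PORT)) as link:
    for line in link.makefile("r"):   # one command per line
        command = line.strip()
        if command == "FORWARD":
            print("driving forward")  # stand-in for motor control
        elif command == "STOP":
            print("stopping")

All the hard decision-making happens on the workstation; the robot just carries out simple orders, which is exactly why it needs so little on-board muscle.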
A radical form of robotic control, originally developed by former Los Alamos robotics researcher Mark Tilden, doesn't use a computer at all. Called BEAM (Biology, Electronics, Aesthetics, Mechanics) robotics, this scheme uses conventional analog electronic components (capacitors, resistors, transistors, and integrated circuits) to build what Tilden has dubbed nervous nets (a play on the neural networks of AI). BEAM technology is basically an updated version of what W. Grey Walter was doing in the late 1940s and '50s with his robot tortoises. Inspired by nature, BEAM bots often take the form of robo-critters and can exhibit remarkably lifelike behaviors given their simple components.
Frequently, robots have special controllers to manage power and speed. Not surprisingly, these are called power controllers and speed (or motor) controllers. They usually take the form of separate circuit boards with all of the electronics needed to perform their tasks, and they're often located next to their respective systems (the power source and the drive train). Wires carry data between these controllers and the robot's main microcontroller. Sometimes, rather than living on separate circuit boards, power and speed control are handled on the microcontroller board itself (either as part of the microchip's duties or by separate circuitry on the main board).
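Here's a sketch of how that conversation between boards might look from the main controller's side, using the third-party pyserial package. The port name and the two-byte command format are assumptions for illustration; every real motor controller defines its own protocol, so check the board's documentation:

import serial  # third-party pyserial package

MOTOR_PORT = "/dev/ttyUSB0"  # hypothetical serial line to the speed controller

def set_speed(link, motor, speed):
    # Hypothetical two-byte command: motor number, then speed (0-255).
    link.write(bytes([motor, speed]))

with serial.Serial(MOTOR_PORT, 9600, timeout=1) as link:
    set_speed(link, 1, 128)  # run motor 1 at half speed
    set_speed(link, 2, 0)    # stop motor 2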
Although sensors are a way for a robot to listen to its world, there are a number of ways of making the conversation a little less one-sided. These include the following:
Increasingly, robots are communicating with their world in real time. This is largely thanks to ever-cheaper short-range radio technology (such as Wi-Fi), developed for wireless computer networking. Using such a radio link, a robot can send real-time sensor data, video images, sounds, and more to a local computer (and that computer can pass it all along to remote locations via the Internet). This opens up all sorts of possibilities for interfacing robots with the rest of the world.
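A sketch of what that real-time chatter might look like on the robot's side, in plain Python: each sensor reading goes out as a small UDP packet to a listening computer. The address is a placeholder, and the random number stands in for an actual sensor:

import json
import random
import socket
import time

BASE_STATION = ("192.168.1.50", 9001)  # hypothetical listening computer

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
while True:
    reading = {
        "time": time.time(),
        "range_cm": random.uniform(5, 200),  # stand-in for a sonar ping
    }
    sock.sendto(json.dumps(reading).encode(), BASE_STATION)
    time.sleep(0.1)  # ten updates a second

UDP suits this kind of telemetry: each reading stands on its own, so losing the occasional packet over a flaky radio link does no real harm.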
Getting a robot to respond to fixed speech commands has become nearly commonplace. Advances over the past few years mean that reasonably sophisticated voice-command and speech-synthesis systems are starting to show up in high-tech toys. But there's still a long way to go.
There are two flavors of speech recognition. Speaker-dependent recognition, the "easier" of the two, requires that the user (or users) train the robot (or other digital device, such as a desktop computer or talking teddy bear) to recognize commands spoken in their own voice. The system will then respond only to the voices it has been trained on. Systems of this type have become so reliable that, at some hospitals, surgeons now guide robotic instruments during operations by voice alone. Speaker-independent recognition, the more difficult of the two, can accept commands from anyone, but it is less reliable.
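A fixed-command listener can be sketched in a few lines of Python with the third-party SpeechRecognition package (the microphone input also requires PyAudio). Note how little "understanding" is involved: the program just checks whatever it heard against a short list of known phrases:

import speech_recognition as sr

COMMANDS = {"forward", "back", "stop"}  # the robot's entire vocabulary

recognizer = sr.Recognizer()
with sr.Microphone() as mic:
    recognizer.adjust_for_ambient_noise(mic)
    audio = recognizer.listen(mic)

try:
    heard = recognizer.recognize_google(audio).lower()  # online recognizer
except (sr.UnknownValueError, sr.RequestError):  # unintelligible or offline
    heard = ""

print("executing:" if heard in COMMANDS else "not a known command:", heard)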
Voice synthesis, the capability of a machine to speak in a manner that humans can consistently understand, is still a work in progress. Text-to-speech programs have been around since before PCs, but after decades of development, they still come off sounding like machines with bad Swedish accents. Ironically, in robot applications, people seem to find a certain charm in a bot that sounds like one of the Cylon warriors from Battlestar Galactica. Of course, getting robots to understand the meaning of human speech, and to say anything meaningful in return, is still largely science fiction. So far, most robots that "speak" are actually playing back prerecorded sound files, pulled from a database and strung together on the fly in response to sensor input.
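That canned-speech trick is simple enough to sketch: map sensor events to prerecorded clips and play the right one when an event fires. The file names are placeholders, and the playback command (aplay, a common Linux audio player) is an assumption; substitute whatever your platform uses:

import subprocess

CLIPS = {  # hypothetical event-to-recording lookup table
    "bumper_hit": "ouch.wav",
    "low_battery": "feed_me.wav",
}

def speak(event):
    clip = CLIPS.get(event)
    if clip:
        subprocess.run(["aplay", clip])  # play the prerecorded file

speak("bumper_hit")  # e.g., triggered by a bump switch closing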