An autonomous sound-sensing robot developed by programmers at Cornell uses three microphones to first detect and then follow an audio source.
Programmers at Cornell University have developed a robot that listens for and then tracks an audio source. The robot, dubbed PeanutBot, uses three microphones to triangulate the direction of the source relative to the robot, then rotates and advances toward it, as demonstrated in a short (46-second) video.
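The project page documents PeanutBot's actual algorithm; as a rough illustration of the general idea, the bearing of a far-field source can be recovered from the differences in arrival time at the three microphones. The sketch below is an assumption-laden toy, not PeanutBot's code: the microphone layout, pairings, and plane-wave model are all hypothetical.

```python
import math

SPEED_OF_SOUND = 343.0  # m/s in air at room temperature
# Hypothetical layout: three mics at the corners of an equilateral triangle (metres).
MICS = [(0.10, 0.000), (-0.05, 0.0866), (-0.05, -0.0866)]

def bearing_deg(tau01, tau02):
    """Bearing (degrees) of a far-field source from two arrival-time differences.

    Plane-wave model: t_i = t0 - (r_i . u) / c, so for mic pair (i, j)
    c * (t_i - t_j) = (r_j - r_i) . u  -- two pairs give a 2x2 system in u.
    tau01 = t_0 - t_1, tau02 = t_0 - t_2 (seconds).
    """
    a11 = MICS[1][0] - MICS[0][0]; a12 = MICS[1][1] - MICS[0][1]
    a21 = MICS[2][0] - MICS[0][0]; a22 = MICS[2][1] - MICS[0][1]
    b1 = SPEED_OF_SOUND * tau01
    b2 = SPEED_OF_SOUND * tau02
    det = a11 * a22 - a12 * a21
    ux = (b1 * a22 - b2 * a12) / det   # Cramer's rule
    uy = (a11 * b2 - a21 * b1) / det
    # Only the direction of u matters; its magnitude is irrelevant to the bearing.
    return math.degrees(math.atan2(uy, ux))

if __name__ == "__main__":
    # Simulate a source 40 degrees off the robot's heading and recover the bearing.
    ux, uy = math.cos(math.radians(40)), math.sin(math.radians(40))
    tau01 = ((MICS[1][0] - MICS[0][0]) * ux + (MICS[1][1] - MICS[0][1]) * uy) / SPEED_OF_SOUND
    tau02 = ((MICS[2][0] - MICS[0][0]) * ux + (MICS[2][1] - MICS[0][1]) * uy) / SPEED_OF_SOUND
    print(round(bearing_deg(tau01, tau02), 1))  # 40.0
```

The sign and magnitude of the estimated bearing are exactly what a robot like PeanutBot needs for its rotate-then-advance behavior: turn until the bearing is near zero, then drive forward.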
PeanutBot was developed as part of a project with the Adaptive Communications and Signals Processing (ACSP) research group at Cornell, which specializes in studying various aspects of autonomous vehicle control. This was apparently the first ACSP project to study audio, rather than video, sensing techniques for the control of an autonomous vehicle.
Full details are available on the PeanutBot project home page, including a high-level design description, background mathematics, hardware and software design details, schematics, source code, and even an itemized parts list with costs (totaling just $361).