PORTLAND, Ore. Neophyte scuba divers tend to expect underwater vistas akin to those in tropical-island brochures: bright, clear water filled with colorful fish. The harsh reality is that most underwater scenes are poorly lit at best; dark, monotone and murky is the norm.
To compensate, the octopus' visual system has adapted to spotting prey in the worst waters. Indeed, who would be a better model for a low-visibility vision system than an animal that, when attacked itself, sprays its own low-visibility "ink"?
Accordingly, researchers sponsored by a National Science Foundation effort are intent on imparting the vision abilities of an octopus to undersea autonomous robots. By mimicking the octopus' ability to see well underwater with an analog silicon octopus retina ("o-retina"), the University at Buffalo group believes it can revolutionize space and undersea exploration, and improve visibility in hazardous environments and hard-to-reach places such as underground pipes.
"The octopus retina has been exhaustively studied over the last 40 years, so there is a lot of good data on its structure, and a lot of studies done to determine what kind of images an octopus can see and how their visual system works," said Albert Titus, an assistant professor of electrical engineering at the University of Buffalo and the principal scientist behind the research. "The octopus' eye is simpler than our eye, neurologically, but it is complex enough that it isn't so trivial either."
Both octopus and human eyes contain such structures as photoreceptors and complex ganglion cells. But the octopus, an invertebrate, has fewer cells, of fewer types and with fewer connections than "most vertebrate retinas," Titus said.
Titus' first task was to create an engineering model of the octopus retina that mimicked the structure of the creature's eye. The biology held some surprises that helped the group verify the model's accuracy.
"The main difference at the structural level is that the type of receptors are different," he said. "For instance, octopi have polarization sensitivity, which we don't have. They can see polarization, which we can't, which is important for survival underwater."
Another big difference is that certain optical illusions fool the octopus. For instance, an octopus cannot tell "right from left," Titus said; more accurately, it cannot distinguish between diagonal mirror images. In particular, octopi cannot tell a bar tilted toward 10 o'clock from one tilted toward 2 o'clock. In trials, octopi quickly discovered which of a vertical or horizontal bar contained a food treat, but with mirror-image diagonal bars their success rate remained a dismal 50 percent, no matter how many trials were run.
Consequently, one of Titus' main goals was to create a neural network that mimicked the wiring of the octopus retina so well that the model could distinguish horizontal from vertical bars, but not mirror-image diagonal bars. That was accomplished in the revision 1 chip, Titus said.
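The article does not publish the chip's wiring, but the behavioral quirk is easy to reproduce in a toy model (a sketch of my own devising, not the group's network): a vision system whose only features are axis-aligned projections, that is, the row sums and column sums of the image, can separate horizontal from vertical bars yet returns identical features for the two mirror-image diagonals.

```python
def bar(kind, n=7):
    """Return an n x n binary image with a one-pixel-wide bar of the given kind."""
    img = [[0] * n for _ in range(n)]
    c = n // 2
    for i in range(n):
        if kind == "horizontal":
            img[c][i] = 1
        elif kind == "vertical":
            img[i][c] = 1
        elif kind == "diag_2oclock":    # "/" -- bar tilted toward 2 o'clock
            img[n - 1 - i][i] = 1
        elif kind == "diag_10oclock":   # "\" -- its mirror image, 10 o'clock
            img[i][i] = 1
    return img

def features(img):
    """Axis-aligned projections only: row sums and column sums."""
    rows = [sum(r) for r in img]
    cols = [sum(c) for c in zip(*img)]
    return rows, cols

print(features(bar("horizontal")) == features(bar("vertical")))        # False
print(features(bar("diag_2oclock")) == features(bar("diag_10oclock"))) # True
```

Because the two diagonal stimuli produce literally identical features, any decision based on them can do no better than chance, matching the 50 percent success rate seen in the animal trials.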
"But we don't have the polarization working yet-that's one of the more difficult problems, that's what we will be looking at next," he said.
The chip is an analog CMOS device implemented in a 1.6-micron process at the MOSIS prototyping and low-volume fabrication service in Marina del Rey, Calif. The 2.2-mm² chip, housed in a 40-pin dual in-line package, harbors an 8 × 5 array of CMOS photoreceptors that transduce light in a manner similar to other silicon retina chips, according to Titus, but adds an octopus-style interconnection network.
"Our chips use CMOS photodetectors as our receptor cells, and then distribute this signal over a so-called resistive network that basically performs a convolution with a Gaussian function to basically do edge detection," said Titus. Signals travel down "two parallel paths-almost like the black and white squares on a chess board, where the bishop can move diagonally but the other [pieces] cannot. Likewise, some [photoreceptors] are wired one way and the others are not; that gives us the orientation selectivity of the octopus."
Today, only the retina and its integral neural network are cast in the CMOS hardware, with the rest of the visual-processing algorithms performed in software on a post-processing PC. There the image that the octopus would see can be derived from the retinal outputs of the chip in the manner of the ganglion cells, optic nerve and, ultimately, visual cortex of the octopus brain.
"Just like in our eyes, the retina is just the beginning stage [for the octopus]," Titus said. "So what this is intended for is to process the input image and then send it to the next stage of processing." The processing going up the levels to the brain "is not on-chip at this point, and probably would never be on the same chip. But our goal is to put it on another set of chips. Right now we have software running on a computer that receives the chip's data."
For the future, Titus' group will study various aspects of the visual system with an eye to implementing different kinds of retinas, especially with different functions not performed by human retinas.
"For instance, we are looking at doing depth perception [with one eye] by using motion" instead of stereoscopics, which requires two eyes, said Titus.
The current chip prototype integrates the signal from each retinal cell into a contrast-enhanced image with the edges of objects preidentified even before the signal is sent up the software optic nerve.
The neural network interconnecting the "octopus" eye's retinal cells "almost does a data compression kind of thing, the same way as the human eye," Titus said. "Our eyes get rid of a lot of information when we send it up through the pipeline, which is the optic nerve to the brain, because the optic nerve has significantly fewer connections than the retina has. So the retina does some kind of compression of the data, getting rid of stuff that is seemingly not needed. Then our brain uses this information to put the scene back together."
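The compression Titus describes, discarding what later stages do not need, can be illustrated with a thresholded sparse code (my sketch, not the chip's actual encoding): transmit only the positions where the edge response is significant, so the flat regions of the scene cost nothing on the "optic nerve."

```python
def sparse_code(responses, threshold=0.1):
    """Keep only (position, value) pairs whose magnitude exceeds threshold."""
    return [(i, v) for i, v in enumerate(responses) if abs(v) > threshold]

# 16 receptor-level edge responses: flat except around a step at position 8.
responses = [0.0] * 7 + [-0.4, 0.4] + [0.0] * 7
events = sparse_code(responses)
print(len(responses), "->", len(events))  # 16 -> 2
```

Only two events travel up the pipeline; the downstream software, like the brain in Titus' analogy, would reconstruct the scene from those events.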
The next step for Titus' group is implementing an artificial o-retina at a higher resolution housing many more analog pixels. "We are using an analog fabrication process where scaling to a larger array is not a big deal, because we mostly use local connections-there are not many global connections," said Titus.
Also on the new chip, Titus plans to implement the polarization selectivity of the octopus, which the animal uses to detect camouflaged fish. Polarization sensitivity also exists in butterflies, he said, though probably not for detecting camouflage.