Portland, Ore. - Japanese researchers working toward a cure for blindness showed off an artificial-retina prototype at the International Joint Conference on Neural Networks here recently. The Osaka University team looks forward to the day when any kind of blindness can be corrected by an implantable chip that replaces any defective area in the human vision apparatus, from the retina to the optic nerve to the wiring of the brain itself.
"We've built a three-chip set that performs the analog functions of a real retina," said Tetsuya Yagi, a professor in the electronic-engineering department at Osaka University. Similar work, he said, is going on in Germany and the United States.
Most prosthesis efforts to cure blindness use a video camera (a CCD chip with a lens) and a digital signal processor to transform pixels into a pattern of electrical stimulation that can be delivered by electrodes to remaining retinal cells, the optic nerve or the visual cortex itself. Recently the University of Southern California successfully implanted a 16-electrode retinal array that gave partial sight to patients blinded by retinitis pigmentosa (see www.eetimes.com/sys/news/OEG20030505S0013). That approach works, but the video camera and DSP are "outboard" paraphernalia: a camera on a pair of eyeglasses connected to a belt pack that contains the DSP and batteries.

Implants go analog
Instead of digitizing a video signal and using a DSP, "we use analog chips," said Yagi, who worked with Seiji Kameda under Japan's five-year national "artificial-vision system" effort. Kameda is now a postdoctoral fellow at Hiroshima University. They chose analog, Yagi said, "because analog uses so very little energy to operate and, more importantly, does not generate heat, which is very important when we start engineering implants."
Yagi's current chip, an array of loosely coupled photodiodes, provides a 1,840-pixel (40 x 46) image. It was fabricated in 0.6-micron CMOS using double polysilicon and three metal layers on an 8.9-mm2 die. This device and a second analog chip, a variable resistive network, mimic two of the five processing layers between the eye and the brain. In a companion FPGA, sample-and-hold circuits store analog values between the separate computational layers, making it possible to transfer analog voltage levels between chips.
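The smoothing behavior of a resistive network like Yagi's second chip can be approximated in software as iterative neighbor averaging, where each pixel repeatedly mixes with the mean of its four neighbors, much as charge spreads through lateral resistors. The sketch below is illustrative only; the coupling strength and iteration count are assumptions, not the chip's actual parameters.

```python
import numpy as np

def resistive_smooth(image, coupling=0.25, iterations=10):
    """Approximate a resistive-grid smoothing network.

    Each pass blends every pixel with the mean of its 4 neighbors,
    mimicking charge spreading through lateral resistors. The
    coupling and iteration values are illustrative assumptions.
    """
    img = image.astype(float)
    for _ in range(iterations):
        padded = np.pad(img, 1, mode="edge")  # replicate border pixels
        neighbors = (padded[:-2, 1:-1] + padded[2:, 1:-1] +
                     padded[1:-1, :-2] + padded[1:-1, 2:]) / 4.0
        img = (1 - coupling) * img + coupling * neighbors
    return img

# A single bright point on a 40 x 46 sensor spreads into a blur,
# the way a point stimulus diffuses across a resistive grid.
frame = np.zeros((40, 46))
frame[20, 23] = 1.0
smoothed = resistive_smooth(frame)
```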
Yagi has the other layers of the visual system in sight, with the next one already on the drawing board. Nor is he alone. Labs around the world are working to decode the neural patterns of electrical stimulation at each visual-processing layer, so that dead cells anywhere along the pathway can eventually be bypassed with specialized electronics that precisely replicate the missing signals.
Visual circuitry is among the few tissues of the nervous system where the correlation between physical structure and electrical properties is well understood. Without movement, the signal settles to neutral gray (because of persistent negative feedback), but in the presence of motion the retinal circuit outputs a smoothed, contrast-enhanced image. At the conference, Yagi displayed the output from each layer in separate video windows, showing the intermediate smoothed and contrast-enhanced images as well as the final subtracted signal, complete with ghosting whenever there was movement. "I plan to build the next layer on another, parallel, operating chip, eventually connecting to the visual cortex itself," he said.
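The stages Yagi demonstrated can be sketched in software: spatial smoothing at two scales subtracted for contrast enhancement (a center-minus-surround response), followed by a temporal difference so that a static scene cancels toward neutral gray while motion leaves a ghosted signal. This is a toy model under assumed parameters, not the chip's actual circuit.

```python
import numpy as np

def box_blur(img, radius):
    """Simple box blur, a software stand-in for resistive smoothing.
    The kernel shape and radii below are illustrative assumptions."""
    k = 2 * radius + 1
    h, w = img.shape
    padded = np.pad(img.astype(float), radius, mode="edge")
    out = np.zeros((h, w))
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + h, dx:dx + w]
    return out / (k * k)

def retina_output(frame, prev_enhanced):
    """Contrast-enhance (narrow blur minus wide blur), then subtract
    the previous enhanced frame: static input cancels to ~0 (neutral
    gray), while motion leaves a ghosted difference signal."""
    enhanced = box_blur(frame, 1) - box_blur(frame, 3)
    motion = enhanced - prev_enhanced
    return enhanced, motion
```

Feeding the same frame twice drives the motion channel to zero; shifting the frame one pixel revives it, which is the ghosting effect visible in Yagi's demonstration windows.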
By carefully characterizing and replicating the signal-processing capabilities at each layer of the retinal-processing circuit, Yagi and like-minded colleagues in Japan's artificial-vision program aim to someday help any blind patient by substituting prostheses for whatever part of the visual circuitry is damaged.
Elsewhere at the neural-nets conference, a German contribution from the University of Bonn detailed efforts to unravel the encoding used by the retinal-processing circuit. The German researchers described their progress in performing "fittings" of retinal implants, whereby the user helps the technician tune the artificial stimulation of the ganglion cells for an optimal representation. The method used an array of 256 tunable spatiotemporal filters to map visual patterns onto encoded output patterns that approximate normal retinal information-processing steps.
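One way to picture the Bonn approach is as a bank of 256 independently tunable filters, each combining a spatial weighting with a temporal decay, whose outputs drive the stimulation channels. The structure below is a speculative sketch: the patch size, the leaky-accumulator form and the parameters are stand-ins for whatever the fitting procedure actually tunes.

```python
import numpy as np

rng = np.random.default_rng(0)

class SpatiotemporalFilter:
    """One tunable filter: a spatial weight patch plus an exponential
    temporal decay. All parameters here are illustrative assumptions,
    not the published filter design."""
    def __init__(self, patch_shape=(4, 4), tau=0.5):
        self.weights = rng.normal(size=patch_shape)  # spatial tuning
        self.tau = tau      # temporal decay constant (assumed form)
        self.state = 0.0    # leaky temporal accumulator

    def step(self, patch):
        """Apply the spatial weights, then update the leaky state."""
        drive = float(np.sum(self.weights * patch))
        self.state = self.tau * self.state + (1 - self.tau) * drive
        return self.state

# A bank of 256 such filters, one per encoded output channel,
# each of which a technician could tune during a "fitting" session.
bank = [SpatiotemporalFilter() for _ in range(256)]
```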