Portland, Ore. -- By mimicking the way a fly's brain interprets images coming in through its eyes, an algorithm created by a researcher at Australia's University of Adelaide lets digital cameras "see" more clearly.
Today, all cameras must be adjusted to capture only part of the range of brightness available in a scene.
Scenes that involve large differences in brightness between their shadows and highlights are particularly difficult to capture. The photographer can adjust the camera to capture either shadows or highlights, but cannot optimally capture both simultaneously.
The human eye is similarly hampered, but it compensates by quickly adjusting the diameter of the pupil while scanning a scene--widening it to take in shadow details, then narrowing it to take in highlight details--so that people seldom notice that they cannot view both simultaneously.
Insect eyes, on the other hand, appear able to record both shadows and highlights at the same time. At the University of Adelaide, postdoctoral research fellow Russell Brinkworth tested this theory by recording signals directly from the brain cells of a fly, then crafting an algorithm to mimic the observed behaviors.
The result is an algorithm that accepts input from a camera's sensor, processes it and recovers information that would otherwise be lost, enabling the camera to record clear scenes with detail in both the shadowed areas and the highlights.
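Brinkworth's actual algorithm has not been published, so the sketch below is only an illustration of the general principle such fly-inspired processing relies on: each pixel is normalized against the average brightness of its local neighborhood (divisive normalization, a common model of photoreceptor adaptation). Dark regions are boosted and bright regions are compressed, so detail survives at both ends of the range. All function names and parameters here are invented for illustration.

```python
import numpy as np

def box_mean(img, radius):
    """Mean over a (2*radius+1)^2 window, edge-padded, via an integral image."""
    k = 2 * radius + 1
    padded = np.pad(img, radius, mode="edge").astype(np.float64)
    # Integral image with a leading row/column of zeros for easy window sums.
    S = np.zeros((padded.shape[0] + 1, padded.shape[1] + 1))
    S[1:, 1:] = np.cumsum(np.cumsum(padded, axis=0), axis=1)
    h, w = img.shape
    window_sums = S[k:k + h, k:k + w] - S[:h, k:k + w] - S[k:k + h, :w] + S[:h, :w]
    return window_sums / (k * k)

def local_adapt(img, radius=4, eps=1e-6):
    """Naka-Rushton-style compression: each pixel divided by (itself + local mean).

    Output lies in [0, 1). A pixel at its neighborhood's mean maps near 0.5,
    whether that neighborhood is deep shadow or bright highlight, so local
    contrast is preserved in both.
    """
    m = box_mean(img, radius)
    return img / (img + m + eps)

# Synthetic scene: a dim half and a bright half, each with small local detail.
scene = np.full((16, 16), 0.02)
scene[:, 8:] = 0.90
scene[4, 2] = 0.04   # faint detail in shadow
scene[4, 12] = 0.95  # faint detail in highlight
adapted = local_adapt(scene)
```

After adaptation, the shadow detail and the highlight detail both sit at mid-range output values rather than being crushed to black or clipped to white, which is the effect Brinkworth reports recovering from camera data.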
Chip version envisioned
Thus far, Brinkworth has demonstrated his algorithm only with a cumbersome laboratory setup that inserts a computer running his algorithm between the input sensor and the digital storage medium. He next wants to encode the algorithm into a chip that could be embedded into digital cameras and video recorders.
Brinkworth is one of 16 fledgling scientists presenting research for the first time via the Fresh Science national program, sponsored by the national and regional governments of Australia. His work was funded by the U.S. Air Force.