How best could we utilize the information coming from dual cameras to enhance the overall image quality?
Dual camera smartphones are here, faster and in larger volumes than analysts expected.
Smartphone manufacturers integrate a second camera for several reasons, primarily to improve image quality and to be able to extract depth information for applications such as DSLR-like shallow depth-of-field effect (Bokeh).
Adding a second camera brings new challenges: how to calibrate the two cameras with respect to each other, how to switch between them in a way that preserves a smooth user experience, and how to optimize the image quality of this new and innovative mobile imaging hardware using advanced algorithms and software tools.
In this article, we wish to focus on the latter: How do we best utilize the information coming from two cameras to enhance the overall image quality? One such approach is called Image Fusion.
Introducing Image Fusion
Image Fusion is the process of combining two or more input images into a single image. The main reason for combining the images is to get a more informative output image.
In mobile, dual camera Image Fusion comes into play in several ways. The first involves a dual camera that pairs a color sensor with a monochromatic sensor (with the Bayer filter removed). The monochromatic sensor captures 2.5 times more light and therefore achieves better resolution and SNR. By fusing the images from both cameras, the output image gains SNR and resolution, especially in low-light conditions.
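As a minimal sketch of this idea (not the actual product pipeline, and assuming the two frames are already perfectly aligned), the color frame can supply chrominance while the cleaner monochrome frame supplies luminance, using a standard BT.601 YCbCr conversion:

```python
import numpy as np

def fuse_mono_color(color_rgb: np.ndarray, mono: np.ndarray) -> np.ndarray:
    """Fuse an aligned monochrome frame with a color frame.

    The color image supplies chrominance; the monochrome image, which
    captured more light, supplies luminance. Inputs are floats in [0, 1];
    `mono` has shape (H, W), `color_rgb` has shape (H, W, 3).
    """
    r, g, b = color_rgb[..., 0], color_rgb[..., 1], color_rgb[..., 2]
    # BT.601 RGB -> YCbCr (Cb/Cr centered at 0.5)
    y = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 0.5 + 0.564 * (b - y)
    cr = 0.5 + 0.713 * (r - y)
    # Replace the noisy color luma with the cleaner monochrome signal
    y = mono
    # YCbCr -> RGB
    r2 = y + 1.403 * (cr - 0.5)
    g2 = y - 0.344 * (cb - 0.5) - 0.714 * (cr - 0.5)
    b2 = y + 1.773 * (cb - 0.5)
    return np.clip(np.stack([r2, g2, b2], axis=-1), 0.0, 1.0)
```

In practice the luma swap is preceded by the registration steps described below, and the blend is spatially adaptive rather than a hard replacement.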
The second involves a zoom dual camera – a wide field-of-view camera coupled with a narrow field-of-view telephoto camera. Here, Image Fusion also improves SNR and resolution, from 1x zoom up to the zoom factor at which the telephoto camera's field of view becomes dominant. In the example images below, the resolution improvement of the fused image over standard digital zoom is easy to see (the images were taken with a 3x optical zoom camera).
Image Fusion of Wide and Tele Frames Results in Higher Resolution
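As a rough illustration of where the tele frame contributes (the linear ramp and the 3x magnification default are assumptions for this sketch, not the product's actual tuning), the tele frame's weight can be modeled as ramping up between 1x zoom and the tele magnification:

```python
def tele_blend_weight(zoom: float, tele_mag: float = 3.0) -> float:
    """Illustrative weight of the tele frame at a requested zoom factor.

    Below 1x zoom only the wide camera contributes (weight 0); at the
    tele magnification the tele frame covers the whole output (weight 1);
    in between, fusion blends the two frames, which is where the SNR and
    resolution gains over plain digital zoom appear.
    """
    return min(max((zoom - 1.0) / (tele_mag - 1.0), 0.0), 1.0)
```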
The algorithmic flow of Image Fusion comprises four steps: rectification, global registration, local registration and parallax correction, and decision and fusion.
Image Fusion Algorithm Flow
- Rectification – as a first step, the algorithm rectifies the two input images and corrects for distortion, scale and shift that may be introduced by the optics and AF mechanism. For this purpose, the algorithm uses pre-computed rectification data stored in the sensors' OTP memory. After this step, corresponding points in the two images lie on epipolar lines that are parallel to either the x axis or the y axis, depending on the camera module configuration.
- Global registration – the second step is to perform global registration. The algorithm calculates and compensates for global differences between the two cameras which could be attributed to dynamic changes between images and specific properties of each camera.
- Local registration and parallax correction – following the global registration, a fine local registration step is performed, in which the parallax (i.e. shift in x and y dimensions, which is dependent on object distance) is determined for each pixel in the input images.
- Decision and fusion – in the last two steps, the algorithm fuses the two images according to the local parallax found in step 3. This stage comprises a sophisticated decision block, which handles occlusions, corrects for registration errors and adaptively eliminates artifacts from the final image, and a fusion block, which blends the two original images seamlessly into one output.
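The decision-and-fusion stage above can be sketched in a few lines of NumPy. This is a deliberately simplified toy (the soft mask and the threshold value are assumptions for illustration; the real decision block is far more sophisticated): where the registered frames disagree strongly, which signals occlusion or residual registration error, the decision falls back to the wide frame; elsewhere the detail-rich tele frame dominates.

```python
import numpy as np

def fuse_wide_tele(wide: np.ndarray, tele: np.ndarray,
                   err_thresh: float = 0.1) -> np.ndarray:
    """Toy decision-and-fusion step for two registered luma frames.

    `wide` and `tele` are float arrays in [0, 1], already rectified,
    globally registered and parallax-corrected (steps 1-3). The per-pixel
    disagreement drives a soft decision mask: 1 -> trust tele, 0 -> keep
    the wide frame to avoid occlusion and registration artifacts.
    """
    err = np.abs(wide - tele)
    w = np.clip(1.0 - err / err_thresh, 0.0, 1.0)
    return w * tele + (1.0 - w) * wide
```

A hard binary mask would produce visible seams at decision boundaries; the soft ramp is one simple way to keep the blend seamless.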
Performing Image Fusion presents several algorithmic challenges, among them occlusions, lens imperfections and transitions between overlapped and non-overlapped areas. Next, we will review each of these challenges.
Next page: Image Fusion Challenges