When one thinks about an application with two image sensors, the first thought is likely to be a 3D camera. However, there are numerous designs that can be improved by using the data from two image sensors.
One example is Black Box Car Driver Recorders (CDRs), which are typically mounted near the rear-view mirror and incorporate two cameras (Figure 1). One camera points out the windshield and the other at the driver. The camera video is stored on a local memory chip and can be retrieved if there is an accident or dispute.
Other applications for two cameras and their data include precision analytics in surveillance and pedestrian detection in automobiles. In these designs, the output of both cameras is used to create an algorithm that includes depth perception. With this data, a processor can very accurately “see” images and discern people from shadows or other objects.
All these designs require an Image Signal Processor (ISP). However, supporting two sensors is not straightforward for an ISP. Although most ISPs can support the throughput of two image sensors, the vast majority of ISP devices have been designed to interface to only one sensor.
Even ISPs that have two ports often cannot combine and process both images or, if they can, they tend to be very expensive.
Figure 1. Car driver recorder
In addition to the ISP interface being limited to a single image sensor, higher resolution image sensors pose yet another design challenge. Historically, image sensors up to 720p30 resolution have been connected to an ISP via a uniform CMOS parallel bus (Figure 2).
Figure 2. ISP Connection via CMOS parallel interface
At 720p60 resolution and higher, image sensors cannot transmit over a CMOS parallel bus with acceptable quality. Because the parallel bus must be clocked above 70 MHz, switching noise degrades the image quality. To overcome this issue, image sensor vendors are introducing serial, rather than parallel, buses to transmit the data. However, because many ISP devices are designed only with a parallel bus, it is necessary to convert the new sensor serial buses to this parallel bus (Figure 3).
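The 70 MHz figure follows directly from standard video timing. A quick back-of-the-envelope check (assuming the standard SMPTE 296M total of 1650 × 750 samples per 720p frame, blanking included) shows why 720p60 overruns a parallel bus:

```python
# Pixel-clock check for 720p60. The 1650 x 750 totals assume standard
# SMPTE 296M blanking; the active region is the usual 1280 x 720.
ACTIVE_W, ACTIVE_H, FPS = 1280, 720, 60
TOTAL_W, TOTAL_H = 1650, 750          # active + horizontal/vertical blanking

active_rate = ACTIVE_W * ACTIVE_H * FPS   # payload pixels per second
pixel_clock = TOTAL_W * TOTAL_H * FPS     # bus clock actually required

print(f"active pixel rate: {active_rate/1e6:.2f} Mpix/s")   # 55.30
print(f"required pixel clock: {pixel_clock/1e6:.2f} MHz")   # 74.25
```

At 74.25 MHz the parallel bus is well past the point where switching noise becomes a problem, which is why vendors moved to serial links at this resolution.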
Figure 3. Conversion to parallel bus
Finally, applications that require 3D algorithms need the two image sensors to be synchronized. This is not trivial, as each sensor manufacturer has its own methodology and format. For example, some image sensors use I/O pins for triggering, while others use I2C, SPI or a combination of both. Virtually every dual sensor design therefore faces the challenge of supporting multiple trigger modes to ensure that the various sensors run synchronously.
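As a sketch of what supporting "multiple modes" means in practice, the per-vendor trigger differences can be hidden behind one small interface so the bridge can fire both sensors back to back. Everything here is illustrative: the pin callback, the register address 0x301A and the class names are assumptions, not any vendor's actual API.

```python
# Conceptual sketch (not vendor code): hide each sensor's trigger
# mechanism behind one interface so two sensors start in lockstep.
class SensorTrigger:
    def arm(self): ...
    def fire(self): ...

class GpioTrigger(SensorTrigger):
    """Sensor started by pulsing a dedicated trigger pin."""
    def __init__(self, set_pin):
        self.set_pin = set_pin          # callable: level -> None
        self.armed = False
    def arm(self):
        self.armed = True
    def fire(self):
        assert self.armed
        self.set_pin(1)                 # rising edge starts streaming
        self.set_pin(0)

class I2cTrigger(SensorTrigger):
    """Sensor started by writing a 'streaming on' register."""
    STREAM_REG = 0x301A                 # hypothetical register address
    def __init__(self, i2c_write):
        self.i2c_write = i2c_write      # callable: (reg, value) -> None
        self.armed = False
    def arm(self):
        self.armed = True
    def fire(self):
        assert self.armed
        self.i2c_write(self.STREAM_REG, 0x1)

def start_synchronously(triggers):
    for t in triggers:                  # prepare both sensors first...
        t.arm()
    for t in triggers:                  # ...then fire back to back
        t.fire()
```

In hardware the two "fire" actions would be issued on the same control cycle; the point of the abstraction is that the top-level sequencing logic never needs to know which trigger style each sensor uses.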
A few ISP vendors have attempted to solve the problem of supporting two image sensors by providing two independent interfaces and two processing engines. The result, however, is a very expensive ISP device that not only contains far more image processing capability than is necessary, but is also more complex to configure and program for a software developer.
All these dual image sensor challenges can be overcome if a design configures the two sensors correctly, synchronizes them and merges the data into the right format before sending it to the ISP. As previously stated, many current ISPs can handle the throughput of two sensors. The key issues are receiving the images synchronously, in the right format and on the right bus. The most cost-effective design solution is a small FPGA plus frame buffer memory.
Figure 4. FPGA design solution
Figure 4 illustrates a cost-effective solution to synchronize, merge and output the correct format to an ISP. By leveraging a low-cost FPGA such as the Lattice MachXO2 and an inexpensive LP SDRAM device, the two image sensors can be bridged to an ISP. The FPGA design needs to incorporate the following capabilities. First, it has to accept the I2C or SPI register configuration settings from the ISP; this configuration is the same for both sensors. The FPGA then needs to send the serial configuration data (I2C or SPI) to both sensors and ensure they are properly configured. At this point, both sensors are configured identically, but they still need to be synchronized. The flexibility of the MachXO2 FPGA enables implementation of the unique control required by each particular sensor manufacturer. To ensure the clock driving each sensor is exactly the same, the FPGA also outputs the clock to both sensors. Once both sensors are set up and synchronized, they begin to transmit image data.
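The configuration pass-through step can be sketched in a few lines: a single register write arriving from the ISP side is replayed to both sensors, so their settings can never diverge. The I2C addresses and the callback signature below are hypothetical placeholders, not actual MachXO2 or sensor interfaces.

```python
# Sketch of the configuration fan-out: one ISP-side register write is
# replayed to both sensors. Addresses are illustrative placeholders.
SENSOR_ADDRS = (0x10, 0x18)            # hypothetical 7-bit I2C addresses

def make_bridge(i2c_write):
    """Return a handler for ISP-side writes plus a shadow register map."""
    shadow = {}                        # last value written to each register
    def on_isp_write(reg, value):
        shadow[reg] = value            # remember the setting
        for addr in SENSOR_ADDRS:      # identical config to both sensors
            i2c_write(addr, reg, value)
    return on_isp_write, shadow
```

The shadow map mirrors what a real bridge would keep so that readbacks from the ISP can be answered locally instead of being forwarded to a sensor.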
The FPGA then needs to de-serialize the high-speed serial image data in the I/O cells and logic fabric so it can convert the sensor data streams into a parallel format. The MachXO2 FPGA then watches for the control character, or sequence of commands, that marks the start of frame and start of line for each sensor. Once the sensor image data is detected, the FPGA can extract the raw image data and begin using the low power SDRAM to store frames. Of course, an LP SDRAM memory controller is required in the FPGA to read and write the image data appropriately. The next function the FPGA performs is to order the frames into the desired output format. For example, one popular format is a top-bottom configuration; another is side-by-side (Figure 5).
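The frame-ordering step itself is simple once both frames sit in the buffer. A minimal model, treating each frame as a list of pixel rows, shows the two layouts of Figure 5 (real hardware would achieve the same thing with SDRAM read addressing rather than Python lists):

```python
# Minimal model of the frame-ordering step: two equally sized raw
# frames (lists of pixel rows) combined into the Figure 5 layouts.
def merge_top_bottom(frame_a, frame_b):
    """Stack frame B's rows below frame A's (doubles the height)."""
    return frame_a + frame_b

def merge_side_by_side(frame_a, frame_b):
    """Append each row of B to the matching row of A (doubles the width)."""
    return [row_a + row_b for row_a, row_b in zip(frame_a, frame_b)]
```

In the FPGA the choice between the two layouts is just a change in the order frames are read back out of the LP SDRAM: whole-frame sequential reads for top-bottom, interleaved line reads for side-by-side.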
Figure 5. Output formats
Before the integrated sensor data can be transmitted, the sensor data in the FPGA has to be assembled into a Bayer pattern image format, which ensures the ISP can derive the correct RGB colors. With the image correct and the output format configuration known, the MachXO2 FPGA then outputs the formatted data over a parallel bus to the ISP. The external LP SDRAM buffers the incoming frames while the previous frame is driven out; typically, the LP SDRAM runs twice as fast as the output clock to the ISP. To ensure the ISP reads and recognizes the data, the FPGA output is designed to mimic a parallel image sensor output. That is, the FPGA generates a clock, frame valid, line valid and, usually, a 12-bit data bus to the ISP.
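Mimicking a parallel image sensor amounts to wrapping each pixel in frame valid/line valid framing. A toy model of that output stage is sketched below; the blanking length is illustrative, and real timing depends on what the target ISP expects:

```python
# Toy model of the parallel-bus mimicry: emit (frame_valid, line_valid,
# pixel) tuples the way a parallel image sensor port would, with
# horizontal blanking between rows. Blanking length is illustrative.
def parallel_stream(frame, h_blank=4):
    for row in frame:
        for px in row:
            yield (1, 1, px)           # active pixel: FV=1, LV=1
        for _ in range(h_blank):
            yield (1, 0, 0)            # horizontal blanking: LV drops
    yield (0, 0, 0)                    # FV deasserts after the frame
```

The ISP samples the data bus only while both valid signals are high, so as long as the FPGA honors this framing the ISP cannot tell it is talking to a bridge rather than a single parallel sensor.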
This design has already been implemented by Lattice Semiconductor and Aptina. By using a Lattice MachXO2 and two Aptina MT9M024 image sensors, the design is a very cost-effective solution that virtually any ISP can accept. All the key functions of configuring the sensors, synchronizing them and then appropriately formatting the output image data are performed. More information about this design is available at www.latticesemi.com/dualsensorbridge

About the author
Ted Marena has been the Director of Business Development at Lattice Semiconductor
since 2010. In this role, Ted leads a team responsible for partnering with other semiconductor companies to create compelling design solutions for customers.
Previously at Lattice, Ted was the Director of Field Applications and an Area Sales Manager. Before joining Lattice, Ted was an electrical design engineer. Ted earned a master's degree in business from Bentley College and a BSEE from the University of Connecticut.
If you found this article to be of interest, visit Programmable Logic Designline
where you will find the latest and greatest design, technology, product, and news articles with regard to programmable logic devices of every flavor and size (FPGAs, CPLDs, CSSPs, PSoCs...).