Have you ever thought about adding an image sensor or camera to your design, so your system can see what's happening in the world? It's an interesting idea, but the implementation can be tricky. The algorithms for managing image data can be complex, and cameras can generate an overwhelming amount of data. That can overload the microcontroller and prevent it from doing anything else.
We wanted to see if we could make the job of adding sight any easier, so we experimented with various devices. We found what we were looking for in the state configurable timer (SCT), a peripheral available on LPC1800 and LPC4300 microcontrollers.
The SCT is equipped with 16 events, 32 states, eight inputs, 16 outputs, and a match/capture capability, so it can easily input or output complex waveforms. Working with an eight-bit camera, we used the camera's timing signals to create the states and events to have the SCT sample the camera output data at the correct timing intervals. The SCT handles the state-controlled input-data stream, and the microcontroller's on-chip general-purpose DMA (GPDMA) helps it function as a full camera interface.
For our experiment, we used an LPC1800 series microcontroller equipped with a 180MHz Cortex-M3 core. Our camera interface used only 8 percent of the CPU's bandwidth, and since we used the SCT as the interface to the camera, we eliminated the need for an external camera interface chip. That made our design more compact and less expensive to produce.
The SCT camera interface
Here's a block diagram of the SCT camera interface.
The SCT camera interface.
The camera module is an OmniVision OV7670, but any camera with an eight-bit parallel output with RGB565 format and QVGA mode support should work. You can easily adapt the camera pinout to reuse the hardware and software in our application note.
The OV7670 is controlled using an I2C interface taking commands from the LPC1800 microcontroller. The camera's output is a byte-wide interface that sends out one byte per pixel clock. Since each pixel is represented by 16 bits of data in an RGB565 format, it takes two pixel clocks from the camera to send out one pixel of image data. The camera also sends the horizontal and vertical sync pulses, which determine the position of the pixel.
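Because each pixel arrives as two consecutive bytes, the receiving side must pair those bytes into one 16-bit RGB565 value. Here's a minimal sketch of that packing; it assumes the high byte (the one carrying red and the upper green bits) arrives first, which is the case for the OV7670's default RGB565 order but is configurable, so swap the arguments if your setup differs:

```c
#include <stdint.h>

/* Combine the two bytes the camera sends per pixel into one RGB565 value.
   Assumes the high byte arrives on the first pixel clock; the OV7670's
   byte order is configurable, so swap the arguments if yours differs. */
static uint16_t rgb565_pack(uint8_t first_byte, uint8_t second_byte)
{
    return (uint16_t)((first_byte << 8) | second_byte);
}

/* Extract the 5-6-5 color components from a packed pixel. */
static void rgb565_unpack(uint16_t px, uint8_t *r, uint8_t *g, uint8_t *b)
{
    *r = (px >> 11) & 0x1F;  /* 5 bits of red   */
    *g = (px >> 5)  & 0x3F;  /* 6 bits of green */
    *b = px         & 0x1F;  /* 5 bits of blue  */
}
```

This is why the camera needs two pixel clocks per pixel: the byte-wide bus simply can't carry 16 bits at once.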
Here's what the camera's output signals look like.
Camera output signals.
After the camera has been configured through the I2C interface, the SCT block samples the video data and transfers the data using GPDMA to the external SDRAM available on the MCB1800 development board. The LCD controller on the LPC1800 uses its dedicated DMA controller to pull the frame buffer data from SDRAM through the external memory controller for display on the board's LCD panel. We're using the LCD purely to visualize the captured image; in many cases, this would not be used.
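To land each line of the image at the right spot in the SDRAM frame buffer, the DMA destination address is derived from the line number. A sketch of that arithmetic for QVGA RGB565 (320 x 240 pixels, 2 bytes per pixel); the base address here is a stand-in for illustration, not the actual MCB1800 SDRAM mapping:

```c
#include <stdint.h>

#define FRAME_WIDTH   320u               /* QVGA */
#define FRAME_HEIGHT  240u
#define BYTES_PER_PX  2u                 /* RGB565 */
#define LINE_BYTES    (FRAME_WIDTH * BYTES_PER_PX)
#define FRAME_BYTES   (LINE_BYTES * FRAME_HEIGHT)

/* Hypothetical frame buffer base; on the MCB1800 this would be an
   address in the external SDRAM region. */
#define FRAME_BUF_BASE 0x28000000u

/* Destination address for the DMA transfer of a given line (0-based). */
static uint32_t line_dest_addr(uint32_t line)
{
    return FRAME_BUF_BASE + line * LINE_BYTES;
}
```

At 150KB per frame (640 bytes x 240 lines), the buffer comfortably exceeds the on-chip SRAM, which is why the external SDRAM is needed in the first place.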
In our design, the SCT is configured as a single 32-bit timer with eight unsynchronized inputs taken from the camera interface. The SCT and its prescaler are clocked from the bus clock.
Based on the timing diagram shown above, the video signal starts with a vertical sync signal and sends out the image data line by line during the horizontal reference signal high period. When all the lines have been transmitted, there is a vertical blanking period, and then a new vertical sync signal can start a new frame.
SCT camera module states and events.
The timing of the image transmission and the signaling of the camera interface give us three states to monitor in the SCT module, as illustrated above. This state machine runs in an autonomous fashion, utilizing DMA to move data and offloading the CPU.
- Waiting for the next frame (wantV): The rising edge of VSync triggers the SCT to transition to wantV. This is the initial state of image capture after reset.
- Waiting for the next line (wantH): Either the falling edge of the VSync signal or the falling edge of the Href signal triggers the SCT to transition to wantH.
- Receiving a line of data (inH): The rising edge of the Href signal triggers the SCT to transition to inH. During this state, each PCLK triggers an SCT event to request a DMA transfer. At the rising edge of VSync, the DMA is initialized to transfer the first line of the image to the SDRAM. At each falling edge of Href, the DMA transfer is initialized to transfer a new line of data to the SDRAM.
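The three states and their transitions can be modeled as a plain state machine. The sketch below is a host-side C model of the logic described above, not the actual SCT register configuration; the state and event names are our own shorthand for the signal edges:

```c
#include <stdint.h>

typedef enum { WANT_V, WANT_H, IN_H } cam_state_t;

typedef enum {
    EV_VSYNC_RISE,   /* start of a new frame     */
    EV_VSYNC_FALL,   /* vertical sync deasserted */
    EV_HREF_RISE,    /* start of an active line  */
    EV_HREF_FALL,    /* end of an active line    */
    EV_PCLK          /* one data byte is valid   */
} cam_event_t;

/* Advance the state machine on one signal edge; returns 1 when the
   event should trigger a DMA request for one byte of pixel data. */
static int cam_step(cam_state_t *s, cam_event_t ev)
{
    switch (*s) {
    case WANT_V:
        if (ev == EV_VSYNC_FALL) *s = WANT_H;
        break;
    case WANT_H:
        if (ev == EV_HREF_RISE) *s = IN_H;
        else if (ev == EV_VSYNC_RISE) *s = WANT_V;
        break;
    case IN_H:
        if (ev == EV_PCLK) return 1;      /* sample one byte via DMA */
        if (ev == EV_HREF_FALL) *s = WANT_H;
        break;
    }
    return 0;
}
```

In the real design, the SCT evaluates these edges entirely in hardware and raises the DMA request itself, which is what keeps the CPU load so low.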
See for yourself
If you'd like to see a working version of this design, we've published an application note (registration required), complete with software code, on the LPCWare community site. The design is also available as a demo board through NXP.
Another cool idea
The camera interface we describe here can be used in a number of applications. If you're curious about different ways to employ vision in a design, check out what our colleagues at Charmed Labs and Carnegie Mellon are doing. Using an LPC microcontroller as the basis of their design, they've developed a fast vision sensor that lets you teach a system to find objects and report its findings through several simple interfaces. More about their design, Pixy, is available on their Kickstarter project page.
In this case, a dual-core LPC4300 was used to process the captured lines of image data on the fly, thereby eliminating the need for a frame buffer and hence external memory. How about you? Can you think of any applications or products that would benefit from the gift of sight?