KYOTO, Japan -- IBM researchers have integrated a 10-Gbps silicon photodetector into a 130-nm, 1.5-V CMOS process, opening the way to chip-to-chip and board-to-board optical interconnects.
Eventually, optical photodetectors may be used for chip-level interconnects as well. By 2010, when processors are expected to exceed clock frequencies of 11.5 GHz, optical interconnects may be needed to avoid chip-to-chip and on-chip bottlenecks.
In a presentation here Wednesday (June 11) at the 2003 Symposium on VLSI Technology, Min Yang, a research staff member at the IBM Watson Research Center in Yorktown Heights, N.Y., described a monolithically integrated photodetector that achieves a much higher data detection rate, 10 Gbps, than previously reported silicon photodetectors.
Also, the IBM detector operates at 1.5 V, a fraction of the operating voltage of previously reported silicon photodetectors, Yang said.
A dozen or more of the photodetectors, which measure 16 by 15 microns, could be created on a single device for parallel data links. The detectors initially could be used for high-speed data communications between servers, at either the cabinet-to-cabinet or board-to-board level.
Because the detectors are fabbed in a conventional silicon CMOS process, the potential for cost reduction compared with gallium arsenide-based detectors is significant, she said. Today, optical detectors made on compound semiconductor substrates are bonded with silicon die for a multi-chip solution that is relatively costly, she said.
Key to the approach are deep lateral trenches, similar to the high-aspect-ratio trenches used in IBM's embedded DRAM technology. In fact, the cost of integrating the trenches could be reduced if customers also required on-chip embedded DRAM so that the cost of the trench-creation mask layers could be shared, Yang said.
Two mask layers were required for the lateral trench detector, plus one additional mask for the polysilicon resistors. Filling the trenches with highly doped polysilicon is "not trivial," she added.