Rick, a guy from Zvi Or-Bach's company talked about 3D chips as a strong way to fight defects. Self-assembly and Molecular Imprints suffer from high defect rates, but at least they have the resolution. I wonder how well a combination to solve the defect issue would work cost-wise?
Resistion: the way I understood it, you build two equivalent layers on top of one another, and for each gate (or cell) you choose which layer to use after manufacturing, using boundary scan for detection and e-beam for the repair. More details in .
Assuming defects are uncorrelated between layers (a big assumption), this greatly decreases your defect probability.
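As a back-of-the-envelope illustration (my own numbers, not from the thread): if each copy of a cell is defective with independent probability p, a redundant cell only fails when both copies are bad, so its failure probability drops to p². A minimal Python sketch:

```python
# Hypothetical illustration (numbers are made up): yield impact of a
# redundant layer, assuming defects are independent between layers.
p = 1e-6              # assumed per-cell defect probability
n_cells = 10_000_000  # assumed number of cells on the die

yield_single = (1 - p) ** n_cells        # no redundancy: any defect kills the die
yield_redundant = (1 - p**2) ** n_cells  # a cell fails only if both copies fail

print(f"single layer:         {yield_single:.6f}")     # ~0.000045
print(f"with redundant layer: {yield_redundant:.6f}")  # ~0.999990
```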
Thanks, alex_m1. The concept of this repair layer sounds interesting, but of course, it's still cheaper to have all the layers within defect tolerances to begin with (so then you could go on to heterogeneous integration).
@resistion: I think the case of 3-4 real logic layers + 1 repair layer might be interesting economically, and might offer enough defence against defects to make those methods (self-assembly, imprints) useful.
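A rough cost sketch of that trade-off, with all numbers invented for illustration: the extra repair layer adds processing cost but can multiply yield, so it pays off whenever the yield gain outweighs the added layer cost.

```python
# Rough cost-per-good-die comparison: N logic layers vs. N logic + 1 repair layer.
# All parameters are assumptions for illustration only.
layer_cost = 100.0      # assumed cost to process one layer ($)
n_logic = 4             # logic layers
yield_per_layer = 0.60  # assumed per-layer yield for a high-defect process
yield_repaired = 0.95   # assumed yield after e-beam repair via the spare layer

cost_plain = n_logic * layer_cost
yield_plain = yield_per_layer ** n_logic        # 0.6^4 ~ 0.13
cost_repair = (n_logic + 1) * layer_cost

print(f"no repair layer:   ${cost_plain / yield_plain:.0f} per good die")      # ~$3086
print(f"with repair layer: ${cost_repair / yield_repaired:.0f} per good die")  # ~$526
```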
It's the right idea; it looks like you are referring to the example of double patterning using two exposures for a single layer. More generally, overlay refers to the alignment between successive layers, so that they link up properly. Multiple exposures do carry higher risk than a single exposure, but these recent single-exposure technologies have their own sources of overlay error. Meanwhile, multi-patterning is heading toward self-aligned approaches.
When patterning a first layer, the features deviate from their nominal positions within a tolerance. When the next layer is patterned over the first, its features also deviate within a tolerance. If both tolerances are held, there should be no risk of broken connections, shorts between lines, etc.
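A common back-of-the-envelope check (a generic rule of thumb, not something stated above): if the two layers' placement errors are independent, they add in quadrature, and the combined 3-sigma overlay has to stay inside the layer-to-layer budget.

```python
import math

# Generic overlay-budget check: independent errors add in quadrature (RSS).
# All numbers are assumptions for illustration.
sigma_layer1 = 2.0     # 1-sigma placement error of first layer, nm (assumed)
sigma_layer2 = 2.5     # 1-sigma placement error of second layer, nm (assumed)
overlay_budget = 10.0  # allowed 3-sigma layer-to-layer overlay, nm (assumed)

sigma_total = math.sqrt(sigma_layer1**2 + sigma_layer2**2)  # combined 1-sigma
overlay_3sigma = 3 * sigma_total
status = "within" if overlay_3sigma <= overlay_budget else "exceeds"
print(f"3-sigma overlay: {overlay_3sigma:.1f} nm ({status} the {overlay_budget} nm budget)")
```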
"No matter what Intel says, Moore's Law is slowing down," said Bob Johnson, a semiconductor analyst for Gartner. "Only a few high-volume, high-performance apps can justify 20 nm and beyond." He sees problems ahead for logic chips in general. The smartphone market is nearing saturation, ultramobiles are canabalizing PCs, and "logic is running out of gas."
If logic is getting affected, that's a really big problem.
A 2 x 64-bit bus carrying 25 GB/s of LPDDR (Double Data Rate) traffic means a clock rate of 800 MHz, not at all unusual for LPDDR3; it works even with conventional lossy packages.
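Checking that arithmetic with my own worked numbers:

```python
# Worked check of the bandwidth claim: 2 x 64-bit DDR channels at 800 MHz.
channels = 2
bus_width_bits = 64
clock_hz = 800e6
transfers_per_clock = 2  # DDR: data moves on both clock edges

bytes_per_transfer = channels * bus_width_bits / 8   # 16 bytes per transfer
bandwidth = bytes_per_transfer * transfers_per_clock * clock_hz
print(f"{bandwidth / 1e9:.1f} GB/s")  # 25.6 GB/s, matching the ~25 GB/s figure
```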
Apple has been using them in the 5s since last fall; they had to shrink the package interconnect pitch to accommodate wider channels, but that's still a conventional PoP package. SK Hynix claims their LPDDR3 can run at double that clock rate, but I haven't seen an SoC-DRAM module in a conventional PoP package working at 1.6 GHz yet.
We do special lossless packages that clean up the eye diagram even at much higher clock rates, for very high bandwidth and low power loss, without having to drill any TSVs into live chips.
The reason they changed to wafers per day is not only that they cannot get enough wafers per hour, but also that the EUV machine is not (or cannot be) up 100% of the day. So even if the WPH is quite reasonable, the extended downtime still reduces the throughput, which is measured in wafers per day.
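To make that concrete with invented numbers (not vendor figures): effective daily throughput is WPH scaled by the hours in a day and the tool's availability.

```python
# Illustrative EUV throughput: wafers/day = WPH x hours/day x availability.
# Numbers are assumptions, not vendor figures.
wph = 100            # assumed wafers per hour while the tool is running
availability = 0.70  # assumed fraction of the day the tool is actually up

wafers_per_day = wph * 24 * availability
print(f"{wafers_per_day:.0f} wafers/day")  # 1680, vs. 2400 at 100% uptime
```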
What are the engineering and design challenges in creating successful IoT devices? These devices are usually small, resource-constrained electronics designed to sense, collect, send, and/or interpret data. Some of the devices need to be smart enough to act on data in real time, 24/7. Are the design challenges the same as with embedded systems, but with a few developer and IT skills added in? What do engineers need to know? Rick Merritt talks with two experts about the tools and best options for designing IoT devices in 2016. Specifically, the guests will discuss sensors, security, and lessons from IoT deployments.