Thanks a lot. Now I can visualize it better. Being a software engineer, I don't relate much to manufacturing processes, but we also face the problem of deciding when to let go of what we have, overhaul it, or redevelop it.
We used to think of 2.5D and 3D-IC technology in terms of moving up levels of integration to extend Moore's Law, or of mating different fab technologies. You invite us to think about what we have already invested in previous nodes, and then to use IC stacking so our design teams can focus their efforts on the new functions of new products.
Both are perfectly valid ways to look at the same thing, IC stacking. But I had not thought about using it to extend our previous design investments in the way you explain. Thanks for highlighting this for us, Bill!
Part of the point is to 'think different' and not copy what others have done. So examples might be meaningless since they have already been created.
Some 'think different' ideas:
1. Re-examine product roadmaps. Rather than focusing on slight variations for a follow-on product, create products that create market frenzy. Maybe the "Internet of Things" will achieve this goal, but my fear is that the manufacturing cost will be too high and the sales required for a positive ROI will be extremely high. Look at Apple's stock value since they created the iPod, iPhone, iPad, etc. Both consumers and shareholders were happy.
2. Re-examine product development. Rather than implementing an entire design in the smallest geometry available, use the newest technology only where it provides benefits that older technologies cannot. Apple used the latest technology where no substitute existed; where established solutions existed, they used them. This allowed them to focus more resources on risky technology and minimize risk.
3. Re-examine the flows and tools used. Implementation tools will always be required, but a new class of path-finding tools can help evaluate thousands of test cases to identify the optimum combination for your product's requirements. Explore alternative ways to differentiate yourself.
4. Re-define what integration means: from homogeneous silicon mounted on a PCB to a heterogeneous package based on interposers that improve performance, area, power, and size while lowering costs. Rather than compromising every function in homogeneous silicon, optimize each function in the process tuned for that function. Re-think interposers as the next-generation PCB platform.
5. Re-purpose old fabs as micro fabs (similar to micro/mini steel mills). Examples:
a. Much of the standards-based IP was designed and certified in much older processes (130nm). 'We' keep pushing older IP into the latest process nodes (20nm, FinFETs, etc.) when the only reason is homogeneous silicon: a solution that is expensive and also compromises each embedded function. Companies expend lots of resources to migrate and recertify, while increasing the risk and time to deliver IP. Rather than migrating working IP, develop brand-new IP functionality.
At many conferences, I have heard clamor for newer IP that cannot be developed because resources are tied up migrating existing IP; IP is forced to migrate just to generate sales dollars. If we re-think this and purchase IP that already works in older technologies (as a die), it frees up resources to create new functionality.
b. Fabs can be converted into "interposer" fabs using older generations of silicon processes: fewer masks, no double patterning, fully depreciated, etc.
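The path-finding idea in #3 above can be sketched with a toy design-space exploration: score every combination of per-function implementation choices (reused die on an older node vs. a block migrated to the newest node) and keep the cheapest combination that still meets the product's requirements. All names and numbers below are invented for illustration; real path-finding tools evaluate far richer models.

```python
from itertools import product

# function -> candidate implementations: (name, cost, perf, power_mW)
# Hypothetical figures, chosen only to make the trade-off visible.
candidates = {
    "cpu":  [("20nm", 9.0, 10, 800), ("28nm", 5.0, 7, 1100)],
    "usb2": [("130nm die", 0.5, 5, 90), ("20nm migrated", 3.0, 5, 60)],
    "sram": [("memory-process die", 2.0, 8, 300), ("20nm embedded", 6.0, 8, 250)],
}

def best_combination(requirements):
    """Return (cost, choices) for the cheapest combo meeting the specs."""
    best = None
    for combo in product(*candidates.values()):
        cost = sum(c[1] for c in combo)
        perf = min(c[2] for c in combo)      # weakest-link performance
        power = sum(c[3] for c in combo)
        if perf >= requirements["min_perf"] and power <= requirements["max_power_mW"]:
            if best is None or cost < best[0]:
                best = (cost, dict(zip(candidates, (c[0] for c in combo))))
    return best

print(best_combination({"min_perf": 5, "max_power_mW": 2000}))
```

Even this brute-force version shows the point of the commentary: when requirements are already met by an older-node die, the exploration picks the reused block over the migrated one.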
'Think different' can be applied to businesses: to how we design, what we design, what we purchase, how often we purchase, etc.
The fundamental idea about reuse does seem to make sense. However, I am not able to relate to it from this article. As engineers we need to develop better products, and we try to do that. If there are instances where someone used recycling/reuse to deliver a better product (like the Apple example given), at least some of those should be mentioned. Without that, it becomes just a theoretical article.
Yes, in the last decade the industry has really started to 'get' IP and how it can help accelerate product design. I could also argue it lowers risk and cost, but this depends on which vendor a customer decides to purchase from. One way to lower risks/costs/schedule is to purchase subsystems where many of the IP components are already designed, tested, certified, and released together by one vendor.
But this still requires integration into a homogeneous silicon design, so the IP must be ready on the process node of choice when it is required. Anyone who has designed SoCs/ASICs realizes that AMS-type functions are normally the long pole in the tent and can hold up tape-outs. I have talked to some IP developers, and the latest FinFETs have introduced a new level of design complexity.
The next phase for IP is to go back in time, to when 'golden' silicon units or off-the-shelf ICs were sold to anyone who wanted to integrate them into a PCB-based design. A PCB design with standard ICs could be turned more quickly since the ICs (blocks) were already in stock; it was a question of system design and placing/routing an x-layer PCB. A similar approach could be taken using silicon or glass interposers as next-generation PCB replacements. Whether bare die or some variation of small-outline packaging would be used is one of the business/technical challenges to answer. But I believe these will be resolved before an affordable silicon process using double/triple patterning exists.
We have plenty of statistics showing that older fabrication sites are closing, that design starts hit a peak long ago and have not returned, that there are many 'me too' products in the market with slight variations, etc. Various SemiWiki articles by Paul McLellan over the past few months have discussed how 450mm wafers and EUV might never become production worthy.
The "IoT" might help, but given the cost of creating new designs based on the latest silicon nodes/processes/technologies (choose the term you want), which require double/triple patterning, lots of masks, etc., it is a very expensive investment that requires large sales dollars to show any positive return. These development costs are limiting the number of HW-based products hitting the market.
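The ROI point above is simple break-even arithmetic: the larger the non-recurring engineering (NRE) cost of masks, patterning, and design effort, the more units must sell before cumulative margin covers it. The figures below are purely illustrative assumptions, not real foundry or design costs.

```python
# Back-of-the-envelope break-even: units sold must cover the NRE before
# the product earns any positive return. All numbers are invented.

def breakeven_units(nre_dollars, margin_per_unit):
    """Units that must sell before cumulative margin covers the NRE."""
    return nre_dollars / margin_per_unit

# Hypothetical comparison: a leading-edge monolithic SoC vs. a 2.5D
# assembly reusing existing dies on a depreciated-fab interposer.
leading_edge = breakeven_units(nre_dollars=150e6, margin_per_unit=10.0)
interposer = breakeven_units(nre_dollars=25e6, margin_per_unit=8.0)

print(f"leading edge: {leading_edge:,.0f} units")  # 15,000,000 units
print(f"2.5D reuse:   {interposer:,.0f} units")    # 3,125,000 units
```

Under these assumed numbers, even a lower per-unit margin on the reuse-based product leaves a far lower break-even volume, which is the heart of the argument.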
I mentioned going back in time to use older technologies that can easily meet requirements. Great examples are USB 2.0, SATA 3.0, PCI Express Gen1 (maybe Gen2), etc. All of these worked in 130nm silicon. The only reason they were migrated to smaller processes was homogeneous silicon integration, so resources were applied to something that was already solved and working. If these IP-migration resources were freed up, they could be applied to developing other (new) IP that does not yet exist. Innovation in the IP building-block area could then be used and integrated in a system approach using 2.5D or 3D packaging (changing our thinking from homogeneous to inhomogeneous silicon).
By migrating to 2.5D/3D packaging solutions, each function could remain on the silicon process that is ideal for its requirements. Currently, any time you integrate CPU, GPU, logic, AMS/RF, and dense memory, you compromise all but the one function the process is targeted for. Today, many dense memories (caches) are not integrated with their CPU/GPU but are inhomogeneous stacked-package solutions, which allows the memory to be fabricated in a process that is optimal for memory. The next-generation Hybrid Memory Cube (HMC) is a great example (http://www.hybridmemorycube.org/). Rather than building multi-$B fabs, mini/micro fabs could supply IP, or even silicon or glass interposers, from older (fully depreciated) fabs that do not require the latest process node.
I think that the linear silicon process scaling we have relied on for the past 30-40 years might finally have come to an end, due to the costs of solving the technical hurdles. Is it time to 'think differently'? (And I have been on this great ride since 1980, starting out at 5u. Yes, 'u'!)
Bill, I think I get what you are conveying in the article above. When I worked at Synopsys at the turn of the Millennium, soft- and hard-IP reuse was a big topic. As the following 15 years demonstrated, without silicon IP hardly any chip design would have met schedules, development budgets, or specifications. A lot of innovation has happened since: IC designers combine proven IP blocks with some proprietary circuitry in an SoC, add software, and offer a complete, high-value SOLUTION.
Chip designers are telling me that this business model is showing cracks, because IP design on today's advanced process technologies takes a lot of time, finds fewer takers, and therefore results in silicon IP getting rather expensive. On top of that, you are out of luck if you want to use a process technology that is not (or not yet) mainstream because you want to get the most out of your ideas and design skills: IP vendors can't help you any more.
Only very large companies can recruit the resources needed to complement a great idea with the necessary "peripheral IP" and get it to market in time to make a profit.
This trend will leave many application and design experts behind, for lack of the resources needed to keep up with the race to sub-10nm technology. It will drain a lot of precious expertise and resources from our industry... or further accelerate the consolidation trend.
To avoid this painful loss of expertise for our industry, I also believe that we need to think differently. Just as the automotive industry has managed to remain competitive by building a strong ecosystem of medium-size building-block suppliers and large car assemblers, the semiconductor industry needs to rethink its approach to electronic system design.
Die-level system building blocks, mounted side by side on an interposer or eventually vertically stacked within one package, will give the experts in medium-size companies plenty of opportunity to offer their designs at the die level. Large assemblers will be able to combine these building blocks into an entire system in a very short time. For example, I watched a BMW assembly line put together an entire 3-series car in ONE DAY.
I don't quite understand what you're going for. Should the semiconductor industry as a whole start "Thinking Different"? But difference for difference's sake is foolish. What are we ultimately trying to accomplish by it?