Zavi, I totally agree. Tapeout is a huge process, so extreme caution should be exercised in any decision to short-change part of the verification for an "early tapeout." Although there is obviously time-to-market value in putting silicon in the customer's hands as early as possible, even with an expectation of a second pass, the silicon must be extremely functional or else the customer can't do much with it.
So how do you decide which parts of the verification plan can be bypassed for an early tapeout? A good conservative approach is to assume that anything that has not been verified is not going to work. Which parts of the chip can you and the customer afford to not have working on the first pass?
Another criterion for early tapeout I have seen, which is also risky and unpredictable, is the idea that "as long as it's functional, we can deal with out-of-spec parametrics on the next pass." The problem there is that you don't just design for functionality in some abstract sense -- you design to meet certain functionality and parametric performance over a space of PVT corners. Suppose, for example, our design includes a voltage regulator, but due to short-cutting the verification, we didn't prove that we meet the max load current spec over all corners, with parasitics. When our silicon comes back, we find that the customer cannot run their application as intended because the regulator fails to meet its load current spec. Is that "just a parametric" issue, or is the IC not fully functional in the customer's system?
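To make the corner-coverage point concrete, a spec check of this kind can be sketched as below. The corner list, measured currents, and the 500 mA spec are all hypothetical illustrations, not from any real design:

```python
# Hypothetical sketch: checking a regulator's max-load-current spec over PVT corners.
# Corner names, simulated values, and the 500 mA spec are invented for illustration.

SPEC_MIN_LOAD_MA = 500  # regulator must supply at least this current at max load

# (process, supply voltage, temperature C) -> simulated max load current in mA
corner_results = {
    ("slow", 1.62, 125): 478,   # worst-case corner: fails the spec
    ("slow", 1.62, -40): 512,
    ("typ",  1.80,  25): 560,
    ("fast", 1.98, -40): 610,
}

# A design only "meets spec" if every corner passes; one failing corner
# is enough to break the customer's application in the field.
failures = {
    corner: i_ma
    for corner, i_ma in corner_results.items()
    if i_ma < SPEC_MIN_LOAD_MA
}

if failures:
    for (proc, vdd, temp), i_ma in failures.items():
        print(f"FAIL {proc}/{vdd}V/{temp}C: {i_ma} mA < {SPEC_MIN_LOAD_MA} mA")
else:
    print("All corners meet the load-current spec")
```

Skipping the sweep and simulating only the typical corner would have reported a comfortable pass here, which is exactly the trap described above.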
Whichever way that question is answered, the bigger issues are cost, lost time, possibly lost market window or lost design wins if customers have other options besides waiting for you to do a second pass. The difference between an early tapeout and a thoroughly verified tapeout can be the difference between a profitable product launch and a big money sink.
This is indeed a good idea - it allows fast fixing of bugs in silicon at minimal cost (at the cost of extra die area). It could also help fix DOA bugs when using the silicon as a verification platform.
Back in the late 1980s/early 1990s while at VLSI Technology, we always put a sample of spare gates into any ASIC design that we taped out. Some customers had tight schedules to hit their primary selling window (Christmas sales). To achieve the volume manufacturing that was required, it was critical to have safeguards in case something was overlooked during their verification. VLSI Tech (the fab) would hold some wafers prior to metallization in case bugs were found. Using this technique, our customers never missed their market, and I would bet that this technique was used over 50% of the time (yes, either they missed some of the requirements or the requirements had changed). Integrating spare gates with inputs tied to VSS allowed very fast changes with minimal mask charges and, more importantly, extremely fast re-spin of new silicon. But this approach required careful and diligent hand editing of the physical database to alter metal/contact layers, as well as a fab that was willing to split a lot. To my recollection, if a re-spin was required, it only required one mask change. Although designs requiring a metal re-spin were not considered a "first-pass" success, the approach was very successful from the customer's perspective.
One other economic consideration is early customer engagement. If the early silicon is good enough for the client to use, then they can often use it to clarify their requirements, start product development, and so on.
Your comparison with FPGA is very appropriate. We are also seeing the FPGA world move towards more simulation and less lab (although adoption of advanced techniques needs some persuasion!)
We encourage our customers to take a Requirements Driven Verification approach, which helps to ensure a high level of confidence in the main user requirements and use cases, thus reducing the likelihood of DOA silicon.
Thanks for this succinct presentation of the trade-offs that should be considered when planning for early tapeout. The key point here is to have a plan for this!
I have been part of design teams that have done this before - sometimes called the two-pass approach. For the first pass, some of the design and/or verification tasks may be incomplete, but the chip is spun early to get system integration started. For any features that have been implemented but not fully tested, you have to assume they will be broken and make sure you can live without them with bypass/disable functions. This enables early system testing to iron out issues at that level and speed up s/w development and integration. All good stuff.
However, as your DAC panelists concluded, this is not a substitute for advanced verification methods! Just ask the guys designing complex FPGAs today. They can't use the "burn-and-learn" approach of the past anymore. H/W debug in the lab is much too resource-intensive. A cost/benefit analysis will favor investing in advanced verification methods.
Firstly, how do we define an "early" tapeout? Which stage counts as early in the whole development cycle? The answer may differ across products and companies. Secondly, early tapeout can bring real benefits for verification, but they may be confined to a narrow scope: corner-case bugs rather than the common bugs that are easily caught by the normal flow. Acceleration of mixed-signal simulation and early development of BIOS, drivers, and software could be potential advantages. Tapeout is a huge process that involves both front-end and back-end efforts, so we should take the extra work for DE and PD into consideration when evaluating the economic gains and losses.
You're right - economics should dominate the decision. However, TTM (time to market) is often a key factor in those economics (imagine a new mobile phone missing its Christmas launch), so the fab cost might be justified on a TTM calculation.