cs20 is correct. You may be able to run much more on actual silicon, but debug visibility when something goes wrong is awful, and it only gets worse as the amount of IP and the number of processor cores on silicon grow. And with increasing design complexity comes a growing number of possible use cases: if you don't have access to customer code, who knows whether you've covered the use case they have in mind?
But we are discussing this from an engineering viewpoint, whereas such decisions are often made much more from an economic one. A new mask set costs a couple of million dollars, so avoiding a respin up front saves me a fixed amount of money, versus a theoretical saving if I can catch a bug in silicon that may not even be there. Economics will win out every time.
It is a good reference for teams that want to do an early tapeout.
The respin cost is too high today. Our target is "first-time success" to save cost and reduce TTM. Unfortunately, respins are the reality, so our realistic target is to reduce respin time and cost. The methods include: tapeout for partial respin and ECO; full verification to reduce the number of bugs found in silicon samples; and using an FPGA prototype to test in the "real world" and do HW/SW co-development.
What are the engineering and design challenges in creating successful IoT devices? These devices are usually small, resource-constrained electronics designed to sense, collect, send, and/or interpret data. Some of the devices need to be smart enough to act upon data in real time, 24/7. Are the design challenges the same as with embedded systems, but with a few developer and IT skills added in? What do engineers need to know? Rick Merritt talks with two experts about the tools and best options for designing IoT devices in 2016. Specifically, the guests will discuss sensors, security, and lessons from IoT deployments.