I am a fan of Richard Feynman. When I was a freshman at Caltech in 1964 he gave a couple of lectures to our physics class. At the time I was both too young and too naïve to appreciate the privilege I was given: it was just another class in a subject I was particularly interested in. I left Caltech the following year, a victim of my immaturity, but as the years passed, I rediscovered my interest in physics. I still have my well-worn copy of "The Feynman Lectures on Physics", Volume I, and, through the years, I have built a library of books written by him or about him. And so, during one of my recent trips to Silicon Valley, I was happy to find a book by Feynman I did not yet have. Its title is "The Character of Physical Law". It is a collection of seven lectures given by Feynman at Cornell University as the Messenger Lectures in 1964. At the time he had not yet been awarded the Nobel Prize in Physics; that would come a year later. Every time I read a book by Dr. Feynman I learn something new, or re-learn things from a different perspective. This book did not disappoint.
The Law of Symmetry
In physics the word "symmetry" does not have quite the same meaning as in everyday speech. Hermann Weyl, a German mathematician, proposed a definition of symmetry using common words. In the book Feynman paraphrases the definition as follows: a thing is symmetrical if there is something you can do to it so that, after you have finished doing it, it looks the same as it did before. Physical laws are symmetric under operations such as translation in space, translation in time, fixed rotation in space, and, of course, uniform velocity in a straight line. The latter is the principle of relativity. Remarkably, one operation under which physical laws are not symmetric is change of scale. It is not true that if you build something, and then build the same thing at half the size, the two will work exactly the same. When I first read the chapter on symmetry in physical law this fact registered as interesting but made no further impression. It was only a few days later that I became aware of the importance of the asymmetry of scale to the semiconductor industry, and to design automation in particular.
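Feynman's point about scale can be made concrete with a small calculation. As a sketch of my own (not from the book), consider the small-angle period of a simple pendulum, T = 2π√(L/g): moving the pendulum somewhere else leaves the period unchanged, but shrinking it to half its length does not simply preserve (or halve) its behavior.

```python
import math

def pendulum_period(length_m, g=9.81):
    """Small-angle period of a simple pendulum: T = 2*pi*sqrt(L/g)."""
    return 2 * math.pi * math.sqrt(length_m / g)

# Translation in space is a symmetry: the formula does not depend on
# where the pendulum hangs, so the period is unchanged.
T_full = pendulum_period(1.0)

# Change of scale is not: halving the length multiplies the period
# by 1/sqrt(2) (about 0.707), not by 1.0 and not by 0.5.
T_half = pendulum_period(0.5)
print(T_half / T_full)  # ~0.707
```

The half-size pendulum is a perfectly scaled copy, yet it keeps different time: exactly the asymmetry of scale Feynman describes.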
What it means to the electronics industry is that if we build the same circuit at a smaller scale, we cannot expect it to behave exactly as the original does. Of course, we all know that foundries have employed shrinks of up to 10% to improve performance and yields while keeping operational parameters essentially constant, so it would seem that semiconductor design and fabrication are immune to the asymmetry. But this is not the case. Even with a shrink as small as 10%, things never worked exactly the same; the difference has simply been so small as to be either imperceptible or negligible. The difference between the values of a circuit parameter measured in the two instances was relatively so small that it did not impact the functional or physical characteristics of the device during normal operation.
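A back-of-the-envelope sketch (my own illustrative numbers, not a foundry's) shows why even a uniform shrink changes parameter values. The resistance of a rectangular wire, R = ρL/(W·t), scales as 1/s under a uniform shrink by a factor s, because the length shrinks once while the cross-section shrinks twice:

```python
def wire_resistance(rho, length, width, thickness):
    """Resistance of a rectangular wire: R = rho * L / (W * t)."""
    return rho * length / (width * thickness)

RHO_CU = 1.7e-8  # resistivity of copper, ohm-metres

# An illustrative wire: 1 mm long, 1 um wide, 1 um thick.
r_original = wire_resistance(RHO_CU, 1e-3, 1e-6, 1e-6)

# Apply a uniform 10% shrink (scale factor s = 0.9) to all dimensions.
s = 0.9
r_shrunk = wire_resistance(RHO_CU, s * 1e-3, s * 1e-6, s * 1e-6)

# The ratio is 1/s, about 1.11: resistance rose roughly 11%.
print(r_shrunk / r_original)
```

At a 10% shrink the change is small enough to absorb, which matches the experience described above; but it is not zero, and it compounds with every further shrink.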
But let's take a circuit designed to operate successfully when manufactured with a 250 nm process. If we take the same circuit and manufacture it, without any change, at 130 nm, the changes in the values of its parameters are likely to be big enough to impact its functionality and the physical characteristics of its environment. Of course, as process dimensions shrink, the differences in parameter values grow in relative importance, so that going from one process node to the next will not be possible with just a shrink. Moore's Law has masked the problem. The law is very simple, yet the industry has attributed meanings to it that it was never intended to have. Moore simply predicted that it would be possible to double the number of transistors fabricated on the same area of silicon every 18 months (now modified to 24). The law never said anything about functionality, restrictions on the utilization of those transistors, or even the characteristics of the parameters required to make such a piece of silicon into a useful product. The law best applies to memory devices, but even there it does not assure scalability. This property was assumed, maybe only implicitly, by the industry, and the assumption held for a long time (in semiconductor-industry terms).
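The prediction itself is just arithmetic, which underscores how little the law actually promises. A minimal sketch of the doubling schedule, using the 24-month figure:

```python
def transistor_count(base_count, years, doubling_months=24):
    """Moore's Law as bare arithmetic: double every `doubling_months`."""
    return base_count * 2 ** (years * 12 / doubling_months)

# Ten years at a 24-month doubling period is five doublings: an area
# that held 1 million transistors now holds 32 million.
print(transistor_count(1_000_000, 10))  # 32000000.0
```

Nothing in that formula says the 32 million transistors will function, scale, or add up to a useful product; that is precisely the assumption the industry layered on top of it.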
I believe that we have reached that point going from 65 nm to 45 nm, and that every successive node migration will prove to be asymmetrical. The EDA industry has played a key role in masking the asymmetry of scale until now by developing new tools as they became necessary. But all of those tools were focused on solving problems encountered in the place-and-route phase of the design, or on implementing preventive and corrective measures during the preparation of the data for manufacturing, like resolution enhancement techniques (RET) and design-for-yield (DFY) measures.
We need a new approach
At the leading edge of the family of processes in use today, we need to explicitly recognize the asymmetry of scale by drastically changing the way we approach the design of an IC, not just its implementation. This means a change in front-end methodology and tools. Although it is possible to find skilled engineers who know how to implement a leading-edge device efficiently, their efficiency is significantly impacted by the way the device is designed. In other words, we need to change the way we design, not just the way we implement. A few industry trends indicate that we have subconsciously understood the need for change. Platform-based design is a method that allows engineers to limit the work required to implement a new design while taking advantage of the proven silicon manufacturing capabilities inherent in the platform. The push toward ESL capabilities, although flawed initially, is maturing, and a few companies are beginning to offer true system-level design tools, not just HDL implementation tools with a more abstract syntax. Design re-use is focusing on two methods: proven hard macros and re-configurable soft macros. Both of these free the user from simply re-implementing the same HDL description used in previous processes. Architects need tools that allow them to consider the characteristics of the process while they make decisions on the architecture and functionality of the product. Leaving these decisions to the implementation phase will be too time consuming and much more difficult, due to unrealistic assumptions made during the design phase.
The semiconductor foundries already know that business as usual is a doomed strategy, and have made their position clear by announcing that Restricted Design Rules (RDR) will replace the design rules used at present. The reason is simple: it is no longer possible to describe all of the feasible circuit characteristics implementable in a process through a list of rules and exceptions. The number of such rules has grown to the point that a human cannot handle them all. In fact, in many cases the rules are contradictory, making the choice of implementation unrealistic. The EDA industry must find a way to express the RDR such that they can be understood and observed during the design of the product, not just during its implementation. Design rules were based on the assumption that, when going from one process to the next, one only needed to add new rules to those already existing and, in some cases, modify a few of the existing ones. RDR make no such assumption. They simply tell engineers how to implement a design for the specific process. EDA vendors must understand the asymmetry of scale in order to forecast the needs of designers and be ready with a product when the market materializes. Waiting for a market to materialize before deciding to develop a product will be too late, and it will significantly impact revenue for both the EDA vendors and the semiconductor industry.
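The contrast between the two rule styles can be caricatured in a few lines of code. This is a deliberately toy sketch of my own; every rule name and number below is hypothetical, not taken from any real process deck.

```python
# Conventional design rules: open-ended constraints plus exceptions.
# As the exception list grows, rules can interact or even contradict.
def check_conventional(width_nm, spacing_nm):
    if width_nm < 65:
        return False  # minimum-width rule
    if spacing_nm < 70:
        return False  # minimum-spacing rule
    if width_nm > 100 and spacing_nm < 90:
        return False  # exception: wide wires need extra spacing
    return True

# Restricted design rules: only a small menu of legal configurations.
# There is nothing to contradict; a layout is on the menu or it is not.
ALLOWED_PITCHES_NM = {130, 140, 180}

def check_restricted(width_nm, spacing_nm):
    return (width_nm + spacing_nm) in ALLOWED_PITCHES_NM

print(check_conventional(70, 75))  # True
print(check_restricted(65, 65))    # True: pitch 130 is on the menu
```

The restricted form trades flexibility for predictability, which is the bargain the foundries are proposing; the open question for EDA is how to surface that menu to designers early in the flow, not just to the implementation tools.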