When the number of process steps has to double suddenly, it may no longer make economic sense to use a 30% shrink to advance to the next node. It may have to be at least a 35% shrink; the max of course is 50%. So 28 nm should be followed by 18 nm instead of 20 nm, for example.
At 22nm, there are somewhere between 50 and 100 silicon atoms (depending on how you measure) between the source and the drain.
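That rough atom count can be sanity-checked against silicon's standard dimensions (lattice constant ≈ 0.543 nm, Si-Si bond length ≈ 0.235 nm). A minimal sketch that just divides the channel length by each spacing:

```python
# Rough check of the "50 to 100 atoms" figure for a 22 nm channel.
# Two common ways to count: whole unit cells, or Si-Si bond lengths.

SI_LATTICE_CONSTANT_NM = 0.543  # edge of the silicon unit cell
SI_BOND_LENGTH_NM = 0.235       # Si-Si nearest-neighbor distance
CHANNEL_NM = 22.0               # source-to-drain distance

unit_cells = CHANNEL_NM / SI_LATTICE_CONSTANT_NM  # ~40 unit cells
bonds = CHANNEL_NM / SI_BOND_LENGTH_NM            # ~94 bond lengths

print(f"{unit_cells:.0f} unit cells, {bonds:.0f} bond lengths")
```

Depending on whether you count unit cells or individual atomic spacings, you land in the tens-to-a-hundred range the comment describes.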
Moore's law, for conventional silicon, will end when that distance shrinks such that quantum mechanics takes over. At that point, we'll either need a semiconducting material with smaller atoms, or another approach.
Very nice lecture. I think if you check the Lg for transistors in the last 2-3 years, there hasn't been much shrinkage below 25 nm. So maybe we are already at the limit, practically. But while that dimension might be frozen at some point, the other dimension still allows some play to increase density. But we'll probably hit that limit soon even by wrapping the gate around the fin. 10 nm size is possibly too far-fetched. It's probably ~3X OT? And there's source-drain tunneling.
But they are implementing double patterning a little earlier than this limit and complaining about the costs. For example, maybe they want to go from 60 nm half-pitch to 42 nm half-pitch and this requires double patterning. If it's double the cost, the cost per component doesn't change. So they need to go from 60 to maybe 39 nm, so at least they have some cost reduction.
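The arithmetic behind that half-pitch argument can be made explicit. A minimal sketch, assuming (as the comment does) that density scales as 1/half-pitch² and that double patterning exactly doubles wafer cost:

```python
# Relative cost per component after a half-pitch shrink, given a
# wafer-cost multiplier (2.0 here models double patterning).

def relative_cost_per_component(old_hp_nm: float, new_hp_nm: float,
                                cost_multiplier: float) -> float:
    density_gain = (old_hp_nm / new_hp_nm) ** 2
    return cost_multiplier / density_gain

print(relative_cost_per_component(60, 42, 2.0))  # 0.98 -- essentially a wash
print(relative_cost_per_component(60, 39, 2.0))  # 0.845 -- ~15% cheaper per component
```

That is why the 60-to-42 step merely treads water on cost per component, while 60-to-39 finally buys some reduction.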
Of course brute force scaling is not the only way forward. We still have all manner of "more than Moore" ways to improve microelectronic devices. And the end of Moore's Law doesn't mean a total end to scaling, just a slowing of the rate. That next node may now take a few years instead of just 18 months.
One argument for it all coming to an end is that FinFETs get us from 20nm, which was the end of the line for planar, to 5nm before they too break down, and there is no new switch even on the drawing board. It took FinFETs 20 years from initial studies to first manufacture. That could mean that it will take 20 or more years to get beyond 5nm.
I find it hard to accept the DARPA person's opinion that US national security is threatened if Moore's law comes to an end. There is plenty more to do beyond scaling that has not received the same attention! Someone already commented about "More Than Moore" and there are quite a few challenges remaining in circuit boards and substrates.
I'm the "DARPA person" in question. They pay us to be paranoid about national defense, and there really are people in the world who do not wish the United States well. (I hope that doesn't surprise anyone here.) So keep that bias in mind.
But I do think there's a real issue here, and it's the main reason I finally gave in and decided to perform some government service for the past couple of years. The issue is that for several decades, if you wanted to field military electronics, you developed it at great cost, but when it was complete, only peer nation states could afford to do likewise. Nowadays, commercial off the shelf electronics are very high performance, readily available, and inexpensive. So many more players besides peer nation states can make electronics with military implications. We still do develop electronics beyond COTS for U.S. military purposes, but there are times when using COTS is just the best we or anyone else can do.
When Moore's Law finally grinds to a halt, further advances in COTS will continue but at a far slower rate. Yes, there is low hanging fruit in SW, algorithms, 3D stacking, specialized processors, and other items I mentioned in my talk. But you cannot sustainably combine lots of onesies and replace an underlying exponential. One of the motivating ideas for my talk was that the U.S. must plan for the end of Moore's Law as though it will cause all players, not just peer nation states, to end up with the same HW capabilities. That would drastically reduce some of the advantages the U.S. has long enjoyed in certain militarily relevant arenas. -Bob Colwell
That is a really scary thought that I'd never considered before. It's especially sobering because although I am sure people will argue about when, I think we can all agree that eventually Moore's Law will run out of steam.
But Mr. Colwell, I would ask: is it Moore's Law that needs to continue, or just scaling? I know they are largely considered the same thing, but I see a subtle difference. Because I know that process engineers and technologists can continue scaling to smaller nodes well beyond where we are today. But the question is whether they can do it economically. If not, it wouldn't make sense for continued mass production of chips for use in smartphones, tablets, PCs, etc. But if it's a question of producing a limited number of chips for national security purposes, wouldn't the government want to continue funding that, even at great expense?
Consider the economics of scaling to 5 nm on 450 mm wafers. How many ICs will ever have enough volume to justify the development costs? I understand DARPA's concerns, and the fact that defense technology has always relied upon the commercial sector to follow Moore's Law on its own -- meaning that if not for commercial demand for ICs at the next generation process node (first in PCs, then later in smartphones, tablets, game consoles, etc), defense system developers would probably not have access to these IC technologies. Defense electronics has never had high enough volume to fill a fab -- at any process node.
But to what extent are U.S. electronic defense systems advantages due to CMOS scaling rather than due to other attributes -- new IP, new architectures, etc.? Even if Moore's Law slows and eventually comes to a halt, and everyone in the world has access to the same process technology, I hardly think that everyone in the world will suddenly have the capability to design and successfully build, test & deploy the types of systems -- and the SoC's that go into those systems -- that DARPA and the contractors in the DoD food chain are able to do. Not that they can't or won't start catching up, but my point is simply that CMOS scaling is just one variable -- and not even the most important variable -- that has enabled U.S. defense technology to maintain its leading edge.
Ah! Exactly, Frank. The reason COTS performance has become militarily scary is precisely because it's so cheap and easy to leverage it into systems with military application. Those who do, are taking a free ride on many billions of dollars of investment made by private industry in the chip designs, the algorithms, the tools, the fabs and the know-how...
I'd say the main way COTS has helped the U.S. is to lower the cost and SWaP (size, weight and power). Ideally, it would also have been an advantage for the U.S. if we were collectively more nimble than the rest of the world. In the commercial space, I propose that we often (but not always) are, but for military equipment we're slow. Sometimes even beyond slow. And it's not the designers -- I've met many of them and they're really sharp and dedicated folks. It's the specification, requirement, procurement, and acquisition system that exists around them, I think, that causes the slowness and I suspect much of the cost inflation.
@rpcy1- do you believe there is any chance that the specification, requirement, procurement, and acquisition system can be streamlined so as to not only increase speed but also reduce cost? Politicians are always talking about streamlining systems and cutting waste. But in reality, can that be done here?
At the risk of sounding cynical, I'd say this nation reacts really effectively, especially in times of national crisis, but the rest of the time, we seem pretty sluggish at anticipating new challenges and rising to meet them. My best estimate is therefore that there's little hope of a serious renovation to our military acquisitions methods, absent some precipitating event. And those are never pleasant.
I would assume that the economics for military applications are way different from commercial applications. An iPad must be purchasable for a few $100, and that requires huge volume. Military is more like a few hundred units at most, so the cost per part will be way higher. Now, if you can do everything the military does with an iPad, then what have we been wasting our money on? The cost to make custom hardware that can perform a specific function will probably always be beyond the purchasing power of many nations - the cost of a fighter plane is probably more than the GDP of many nations.
Couldn't agree more, Brian. What your argument suggests is that the price to the U.S. Government of a new military system, such as a fighter plane, is not a linear function of the price of the electronic components contained therein.
We have an unusual way of looking at some things here at DARPA. We try to find technology possibilities that lie somewhere between physically impossible and "pretty darned hard"; we call that "DARPA-hard." Military procurement and acquisition lies way beyond DARPA-hard!
I'm paranoid about national defense, too, but I don't see Moore's Law as the issue.
Moore's Law simply states that the number of transistors on integrated circuits doubles approximately every two years. At some point, we'll run into hard limits imposed by the laws of physics on how small a transistor can be, and there's some evidence we are approaching that limit. (We are arguably reaching a point where while it might be theoretically possible to shrink circuits smaller, in practice, you can't afford to do it.)
One of the developments of semiconductor electronics over the past few decades is commoditization. Circuitry gets smaller, faster, and cheaper. What used to require expensive proprietary hardware can now be done with off the shelf components. This affects all areas where such things are used, including national security and defence.
But the issue has always been less about what hardware you had than what you did with it. The race isn't hardware, it's applications. When the HW playing field is level and everyone has the same gear, you win by making smarter use of it. Moore's Law has nothing to do with that.
(I am curious about what areas you see where further scaling and component shrinkage might be critical. What sort of gear used in security and defence needs to be smaller to confer an advantage?)
For the others to catch up to the US in technology, the US would have to be essentially static for an extended period of time. You can be assured that is probably not the case. In fact, it is more likely the others will try to stay a few steps back from the US leading edge, which they fear is fraught with unknown risks they cannot learn about fast enough.
I'd be more worried about something all of mankind gets stuck on due to fundamental physics, something related to quantum mechanics or entropy.
If Moore's law is coming to an end with the current technology, it is high time that some different out-of-the-box technology evolved, e.g. nanotechnology, or using molecular biology to build the circuits of the future.
Agree; check out our UPSIDE program here at DARPA. The way I look at it, DARPA's PERFECT program extracts the maximum DoD-relevant performance out of what's left on Moore's Law, and UPSIDE asks the follow-up question "what else is out there besides?"
I think the next innovation is bound to happen in quantum computation. It is a natural extension of the current technology. Molecular and nanotechnology are just extensions of the quantum field of study.
In the last few decades, processing power has no doubt had a direct relation to the number of transistors. But that may have changed by now. Multi-core designs and parallel processing may have a greater effect today.
But multicore and parallel processing is not really an innovation, more of an affordability and luxury thing. The real innovation is how to make low-power devices, smart software, and the integration of both. More cores means more power consumption, but battery capacity is staying the same, at least as far as smartphones are concerned.
Hard to see how the end of Moore's Law threatens National Security when counterfeit chips have proliferated throughout military and are a $169 billion risk to the electronics supply chain (according to research firm IHS). Seems like whatever national advantage leading-edge semiconductors provided to national security ended some time ago. There is no real "trusted foundry" any more, nor any US-owned leading-edge foundry.
There wasn't always a Moore's Law, right? And US national security was certainly not the worse for it.
Perhaps the best thing that could happen for national security IS an end to Moore's Law. In that event, dirt cheap and ever more powerful weapons would not be available to everyone, and the US defense industry could resume operating as it did until, say, the 1960s. The difficulty and cost of developing significantly better weapons would increase, same as it always was before Moore's Law, fewer people would have access to them, and things would stabilize.
There are many many ways of improving technology that don't have to do with raw speed. Else, we would still be living in caves. For instance, we still haven't exploited parallel processing software to any great extent. Just one simple example. We still don't have quantum computers or quantum communications. Any number of areas for technology to go that isn't strictly denser ICs.
Change always brings about angst and opportunity. I believe there are many ways to innovate and a true end to Moore's Law will make people think in different ways. This, in itself, will set the thinkers ahead of the rest who are followers. When we stop thinking, we lose. Of course, as Max implies, politicians stopped thinking a long time ago and we are paying for that.
Bert22306, I just don't follow your logic here. I'm old enough to remember when there wasn't a Moore's Law. We did not have a high-tech military then: no smart bombs, no UAVs, no GPS, no computer-guided anything. Now we do. If our technology stops giving us an advantage, we have a real problem for which there was no counterpart back then.
I think there are both absolute and relative implications of Moore's Law in the military arena. In absolute terms, one can make very capable military systems (say, radio, radar, jammers, etc.) with COTS components. If Moore's Law stopped tomorrow, or went on another 15 years, these systems would still be formidable, in the sense that they first and foremost must deal with Nature, not just adversaries. High power at high frequencies, with very capable FPGAs to do the processing, and algorithms pulled off the internet...the barrier to entry is no longer very high.
My concern about the end of Moore's Law isn't so much about such "absolute" threats. I worry about the relative threats, where an advantage in electronics translates into an operational capability.
Yes, there are many ways of continuing to improve computers, but I claim the sum total of all of them aren't worth a damn compared to the aggregate beneficence of Moore's Law. As I said in my talk, you cannot substitute any number of incremental improvements for the death of an exponential.
I'm a chip architect at heart. Consider the period 1980 - 2010. From my personal experience, chip clocks went from 1MHz to 3.5GHz, a 3500x improvement. How much did architecture and microarchitecture add on top of that? I'll guesstimate 50x - 100x. Admittedly, that's not completely fair, because much of our architect effort went towards making that clock improvement possible, but still, I do think there's a signal in that noise: the underlying exponential came from silicon. Is there 3500x beyond the end of Moore's Law? No way.
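For what it's worth, that 3500x can be restated as a compound annual rate (a quick sketch using the endpoints given above):

```python
# Annualized growth rate implied by 1 MHz -> 3.5 GHz over 1980-2010.
factor = 3.5e9 / 1e6   # 3500x clock improvement
years = 2010 - 1980
cagr = factor ** (1 / years) - 1
print(f"{factor:.0f}x over {years} years is about {cagr:.0%} per year")
```

That works out to roughly 31% per year compounded; by the same math, the guesstimated 50x-100x architectural contribution over the same period is only about 14-17% per year.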
"We did not have a high-tech military then: no smart bombs, no UAVs, no GPS, no computer-guided anything. Now we do. If our technology stops giving us an advantage, we have a real problem for which there was no counterpart back then."
And yet, we still had the best military technology. It just cost us more, and the potential adversaries had a harder time funding anything that was technically equivalent.
We did have smart bombs before GPS, btw. They were not GPS guided, but they were guided by onboard gyros. More expensive? Perhaps, but it's also more expensive for adversaries. Those who can afford the extra expense have a leg up, and that would be us. My point is that any advantage Moore's law gives us, in terms of cheap and ever better weapons, is an advantage it gives everyone else too. It makes it that much easier for the bad guys to carry shoulder-launched missiles, to build clever IEDs, to monitor our communications, and on and on.
The relative threat is everyone's concern, and Moore's Law, if it does anything at all with respect to defense, evens the playing field. In defense, that's considered bad, not good.
Bert22306, I'm not sure what you're referring to by stating that the U.S. "still had the best technology." Perhaps you mean big comms systems, radar systems, and satellites? I'd agree with you on those. (By "smart bombs" I mean laser-guided, not GPS. Laser guided bombs were not available until 1972 or so.) Certainly not in rifles, tanks, radios, and fighters. (Source: Boyd, by Robert Coram, and About Face, by Col. David Hackworth.) But that isn't really the point I'm going after.
What I'm saying is that back then, non-peer-state folks did not have the ability to instantly and cheaply call each other from a mobile handheld device (smartphones). They didn't have the ability to make IEDs and set them off remotely. They didn't have ground to air missiles with competent seekers. Radar was out of the question (as opposed to today: standard equipment on high end cars.) There was no internet to use for wide coordination of multiple people and cheap dissemination of directives and other information. Non-peer-states were not going to create jammers. We have put very powerful technology into the hands of people who wish us ill. I think your basic point is that, so far, Moore's Law may have helped them more than it has helped us, and I think that's probably right.
For various reasons, and despite the acquisition horrors, there are many examples where the U.S. has taken better advantage of the Moore's Law COTS bounty than non-peer-state actors. (And probably some that came out the other way.) We must also worry about peer threats, but I don't see that as COTS-related. My concern is that in general, stationary targets get hit, and the end of exponentially-improving electronics makes it way harder to keep moving.
What I mean is, our military platforms and weapon systems, before Moore's Law was a factor (say, up until the 1960s), were still the best there were, RELATIVE TO what other countries had at the time. That's all that matters, for defense. It's always relative to what your adversaries have. We had the best jets, subs, aircraft carriers, and surface combatants. Mainly because we could afford the R&D to develop them, and the acquisition costs.
Your contention was that an end to Moore's Law is a threat to our national security. My contention is that an end to Moore's Law would instead be a great advantage for our national security, BECAUSE national security is a matter of relative military capabilities. An end to Moore's Law would prevent our enemies from building ever more powerful weapons on the cheap. Instead, we would be back to a situation in which building better weapon systems is difficult and costly. This is an advantage for countries that can afford it, and a disadvantage for countries that cannot afford the expense.
When you say this: "What I'm saying is that back then, non-peer-state folks did not have the ability to instantly and cheaply call each other from a mobile handheld device (smartphones). They didn't have the ability to make IEDs and set them off remotely. They didn't have ground to air missiles with competent seekers."
My response is, PRECISELY! Moore's Law is to blame for this. It hurt our national security, it didn't help it. Suddenly, the playing field was leveled. It gave our adversaries all manner of cheap high tech weapons. An end to Moore's Law would slow down our adversaries' ability to acquire ever more powerful weapons, which in relative terms helps those who can afford the cost of expensive innovation.
How can this be good for the US military? As I understand your argument, you're saying that military power will follow very closely the economic power of states. Given that most forecasters predict China will have the biggest economy by far by mid-century or sooner, with the USA running a distant third by some estimates, the US military will also lag significantly behind the militaries of the economic leaders. This can only imply that global US influence will shrink significantly by mid-century.
Absolutely. If you're saying that an even richer nation in the future will be able to spend more on weaponry than we can afford, and obtain the advantage that way, then the argument would switch over. That makes sense. However that's not what the original contention had been.
My argument is that ever since Moore's Law became a factor, it has created a challenge to our national security. So unless the economic realities change drastically (and they well might, as you suggest), we can't convincingly claim that Moore's Law is essential to our national security. Can't have it both ways.
I'm not saying a richer nation can outspend us on their way to a technological advantage. Somebody could potentially outspend us and gain an advantage in numbers, perhaps, but before you just assume that, take a serious look at the world economics that would imply and ask how realistic that would be, at least any time soon. If their acquisition system is less constipated than ours, that would also help them get an edge sooner. In this forum, I haven't tried to address the question of whether one can spend one's way past Moore's Law and thereby dominate. We largely ride Moore's Law COTS components too, though not exclusively. But historically we've ridden the leading edge better, and that's what we're about to lose, since there won't be a leading edge any more.
The only reason I mentioned all the high tech wonders that non-peer-state folks are essentially getting for free (meaning no investment in the intellectual property development or the manufacturing infrastructure required of such devices) is that that situation has let such potential competitors "in the door". Yes, we are using technology that is derived from the same Moore's Law developments, as well as more exotic stuff that only peer states can reasonably attempt.
Finally, I don't recall saying Moore's Law is "essential to our national security." I'm saying the impending end of it will have implications for the U.S. Dept of Defense, and we can (a) handle those implications in intelligent ways or (b) stick our heads in the sand and simply hope everything turns out all right. The gist of the talk I gave was that I'm actively pushing for (a).
Here's another thought: maybe the end of Moore's Law would actually help US national security. Why? Because unless there is a major shift in the tide, China will soon eclipse the US as the world's largest economy and will be able to outspend any country on the planet in defense. If there is a technological advantage to be acquired by purchasing faster chips, it would then go to the country with the deepest pockets -- China, to the detriment of the US. We would, in effect, be outspent in the chip race by the Chinese in the same way the Soviets were outspent in the arms race by Uncle Sam.
Too many contradictions in these arguments, Tom. If Moore's Law continues to hold, it means that very soon everyone can afford these faster chips. Deep pockets or no. If Moore's Law stops or slows down, the faster technology will become expensive, available only to those with deep pockets. That seems to be what's happening already.
The effect on "national security" has to be that the richer a country is, the better off it is militarily *without* Moore's Law!!
Besides, I think that the term "national security threat" is being thrown about too loosely, used to justify way too many questionable things lately.
Bert: well, I respectfully disagree with your conclusion. Moore's law leads to faster chips, but the military tends to use the most advanced chips -- which are more expensive even under Moore's Law. That favors the richer country, which economists say will be China in the not-too-distant future. If Moore's law ends, and chips stop getting faster at that rate, that advantage is narrowed or -- in the extreme -- even eliminated.
Of course, the wealthier country could still benefit from other advanced technologies that would speed up processing outside the chip. And all major countries already have enough firepower to blow up the planet several times over, so this is probably academic.
On the civil liberties front, the demise of Moore's Law could limit the mass processing of big-data to the point where there is no instant analysis of individuals based on "total information awareness" programs. That would cut into the expansion of intelligence programs like the NSA's.
The real meaning of Moore's law was cost reduction -- specifically, the cost per component (e.g., transistor) in an IC, every 1-2 years -- not innovation (which is where More than Moore can thrive). An end to Moore's Law would be nice if it meant an end to companies' drive toward cost reduction, which inhibits spending on innovation. That would be nice, but it's hard to imagine. Some other law of systematic cost reduction would probably take over.
Moore's Law is not dead, but it is noticeably aging, slowing down and getting cranky and hard to deal with.
In my one interview with Gordon Moore a decade ago I asked him if CMOS scaling would end. He said it would as we approach transistors the size of a few atoms (which we are doing now). He said advances would slow and get more expensive before this happened (as they are now).
Yes, there are advances in architecture (especially 3-D ICs) yet to come. And who knows maybe someone will come up with a new device platform (graphene?) that holds the promise of several decades of exponential improvements.
But for now Moore's Law is clearly slowing down and an end is in sight in perhaps 10-20 years.
People smarter than I am see it, such as Henry Samueli, founder of Broadcom and a former EE professor, who is out talking to his customers about it.
This is not a subject of debate. It is a reality smart people are starting to plan for.
I would respectfully disagree with Mr. Kurzweil on that. And I have a suggestion -- when somebody says "quantum computer", you should auto-translate that into "quantum accelerator." You would not like a computer that was truly and only based on quantum principles, because such machines are probabilistic. Would you really want to edit a document, save it to permanent storage, and then get a probabilistic version of it back tomorrow? Would you like your bank to keep track of your account balances that way? Me neither. There are a lot of computing tasks that just aren't appropriate for quantum technology, not now, and quite possibly not ever. So I do not foresee a wholesale transition to some sort of quantum-based technology.
Mr. Kurzweil is basing his prognostications on humanity's long term overall cleverness, and I would agree that our history is quite amazing and something one should not readily bet against. But all industries and technologies can stall for considerable periods of time. After 100+ years, most of us still get around in metal boxes propelled by exploding hydrocarbons. It's not crazy to contemplate our industry doing that, and I would claim it's imperative for the Dept of Defense to think about it.
I expect that when we run out of steam on the current Moore's Law march with CMOS, there will be an industry transition to Quantum Dots. Unlike Quantum Computing, Quantum Dots are not probabilistic computing, and can be done at the single-atom level. Not sure that it will buy you much more than 10 or 20 years after CMOS.
The other thing to keep in mind is that Moore's Law is not a Law. We are constantly reversing cause and effect here. CMOS has continued to scale exponentially because the semiconductor industry managed its investments in scaling and drove its behavior to achieve those rates, not the other way around.
Finally, I would posit that with or without continued Moore's Law progress in electronic technology, there is already enough COTS HW out there that a technological lead doesn't matter any more. The world is already fighting that lead quite effectively with computer viruses, cyber attacks, IEDs, etc.; peer nations are already at such an asymmetric disadvantage that it doesn't matter any more.
I agree with your assessment of a quantum computer. However, technology being designed to implement a quantum computer could easily transfer to classical computers. In particular, the silicon photonics that Intel, IBM, and several others are working on could facilitate a paradigm shift such as the one suggested by Kurzweil.
What if an optical chip-to-chip data bus was as fast as the on-chip data bus? Just as we today connect a component implemented in one area of the die to another component in another area of the die, a chip-speed optical bus would allow inter-die components. This would be a paradigm shift. Instead of distributed computing using clusters of individual and independent computers, we could treat individual dies as a single virtual die, or in other words a very large SoC. In addition, this virtual SoC would be scalable to fit in whatever power envelope was available. There are many possibilities, but this one seems quite feasible in the near future.
Back in March 2013, EET had an article regarding Moore's Law and I will partially quote here:
"Recently researchers at Massachusetts Institute of Technology (MIT) compared the accuracy of each competing law in both its short- and long-term predictions. MIT claims their findings will improve the accuracy of future predictions about technological change, candidate technologies and policies for global change. Overall the best long-term predictor is Wright's Law, which improves its accuracy over Moore's law by framing its horizon in terms of units-of-production instead of absolute time. For instance, Moore's Law predicts that every 18 months the density of semiconductors will double, whereas Wright's Law predicts that as the number of units manufactured increases the cost-of-production decreases (no matter how long that might take). Thus Wright's Law--named after aeronautical engineer, Theodore "T.P." Wright--offers more accurate long-term predictions since it automatically adapts to economic growth rates. Growth of prediction errors for competing laws to Moore's Law shows Wright's Law the best at long-time horizons, Goddard's Law as the worse at short time horizons, and Sinclair-Klepper-Cohen the worst for long-time horizons. Wright's and other alternatives, such as Goddard's (which postulates that progress is driven only by economies of scale) and Sinclair-Klepper-Cohen's (which combines Wright's and Goddard's), were compared to the actual cost and production units in 62 different technologies, including computers, communications systems, solar cells, aircraft and automobiles. Historical data allowed accurate comparisons using "hind-casting" whereby a statistical model was developed to rank the performance of each postulated law over time. MIT claims its results show that with careful use of historical data, future technological progress is forecastable with a typical accuracy of about 2.5 percent per year. 
The research was conducted by MIT professor Jessika Trancik, professor Bela Nagy at the Santa Fe Institute, professor Doyne Farmer at the University of Oxford and professor Quan Bui at St. John's College (Santa Fe, N.M.)." Pasted from <http://www.eetimes.com/electronics-news/4408525/Moore-s-Law-trumped-by-Wright-s-Law?cid=Newsletter+-+EETimes+Daily>
MIT is correct when looking at progress in hindsight. They are mistaken when extrapolating into the future. All the switch from Moore's Law to Wright's Law does is change an uncertainty in time to an uncertainty in units. There are also blind spots with respect to technology transitions, which Wright's Law has trouble accommodating, since it assumes the units are continuous over the same learning environment. Given that the semiconductor industry purposely creates discontinuities with a completely new process every few years, and that Moore's Law is a cause, not an effect (see my comment, above), it's small wonder that Moore's Law is used in that industry rather than Wright's Law.
Finally, maybe you can make the case that your units-based forecast is more certain than your time-based forecast. That would really inform which one to use. But I have never seen a forecast that was uncertain in only one of those dimensions, so Wright's Law may supply the theoretical underpinnings for your model, but it isn't going to replace current heuristics.
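The distinction between the two laws can be made concrete with a small sketch. The functional forms below (cost falling exponentially in time for Moore, cost falling as a power law in cumulative units for Wright) are the standard ones; the parameter values and function names are made up purely for illustration.

```python
import math

def moore_cost(t_years, c0=1.0, doubling_months=18):
    """Moore-style forecast: cost per component halves every
    fixed interval of *time*, regardless of how many units ship."""
    return c0 * 0.5 ** (12 * t_years / doubling_months)

def wright_cost(cum_units, c0=1.0, learning_rate=0.2):
    """Wright-style forecast: cost falls by a fixed fraction
    (the learning rate) every time *cumulative production* doubles,
    no matter how long that doubling takes."""
    b = -math.log2(1 - learning_rate)  # progress exponent
    return c0 * cum_units ** -b

# With these illustrative parameters, Moore predicts a 4x cost
# reduction after 3 years no matter what happens to demand, while
# Wright predicts a 36% reduction after cumulative output
# quadruples, however long that takes.
print(moore_cost(3.0))   # 0.25
print(wright_cost(4.0))  # 0.64 (= 0.8 ** 2)
```

The comment above about discontinuities is the weak spot of both fits: a new process node resets the learning curve, so neither closed form extrapolates cleanly across a technology transition.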
I am not certain that in the 21st century the definition of "Moore's Law" is directly driven by (or dependent on) "national security" concerns and/or "defense [offense?]" spending. A few good examples of the fact that Toto is no longer in Kansas can be seen in such COTS programs as the SpaceX Dragon spacecraft, Dish Network/DirecTV/SiriusXM satellites, and even the Atom/Snapdragon processors. Ditto for scientific endeavors such as particle accelerators/colliders vis-a-vis weapons. To find a direct causal relationship between military budgets and what Google has done to the internet would be a far stretch. In previous decades, Moore's Law had a military driver, but today's autonomous vehicles do not need a centralized conductor charting the course ahead. Unless, of course, we can ultimately attribute such 'progress' to John/Jane Q. Public (aka consumers).
Before you can declare Moore's Law dead, consider what it really is: a prediction. Then there's law. Take Ohm's law. It seems to be irrefutable: V = IR. It seems to be universal, even in space. Then there are "laws" that governments pass. They can be revoked, just as Moore's "law" can. Laws passed by governments are really more rules than laws. If it can be revoked, it's not a law in the first place.
I believe that Moore's law as *conventionally* stated will stop (transistors shrinking and # of transistors doubling every N years).
However, I feel that there will be innovations that continue the improvements in performance, power consumption and functionality (recall that these are the *end* objectives we are really interested in; scaling to smaller dimensions has just been the *means* of achieving this *end*).
And I feel that these innovations may enable using the same or even slightly *longer* channel-length transistors (say, 45 nm) than where we are currently headed (longer channel lengths imply better yields), and yet deliver better performance and lower power. These innovations may take the form of a different material than silicon, etc.
Could not agree more. I say bring it on! Our over-reliance on transistor-level improvements over 30 years or so made us LAZY, but what's 30 years in the history of human progress? Nothing. We ought to look at other levels, including:
- Algorithmic: for many decades now, our way of thinking about problem solving has been biased towards von Neumann implementation platforms built from semiconductor chips. Let's look beyond that and devise new algorithms for wider platforms and paradigms.
- Architectural: hardware is not just about transistors; it's about computing and communication designs and architectures. I do not think we have explored the realm of possibilities here adequately; there is still a lot to be done.
- Physical: binary electronics using semiconductors is one of many possibilities for computing, storage and communication. Here again, we have barely scratched the surface.
To solve our computing, storage and communications needs, we must train a new breed of scientists and engineers. Out with the modularization, fragmentation and specialization of training and teaching, and in with holistic education.
Performance progress has traditionally been led by advances in semiconductor process, but if we are approaching a plateau (and I'm optimistic that progress will merely become challenging, both scientifically and financially, rather than grinding to a halt), then we still have algorithms and architecture in which to innovate. The challenges to advancing semiconductor process also challenge incumbent RTL-based design, since larger system-level design perspectives will be required. Yes, that horse has been beaten for a long while, but there is growth (starting from an admittedly small base) as more and more people "get it" and turn to ESL.
"So, what happens if the whole world has equal access to technology? Does stability depend on one country having a bigger stick than everyone else?"
The truth is precisely the opposite, peace takes place when there is an equilibrium of power, and that happens when there is equal access to technology, not when one country has a bigger stick than everyone else.
The American expert is right in seeking US advantage in high technology, but that is not necessarily in the interest of world peace :-) Others have to seek the same advantage and at some stage, they will realize that their interests lie in collaborating and cooperating rather than constantly seeking an advantage over the others. It's a process and we are nowhere near maturity....