I'm concerned about the GALS (Globally Asynchronous, Locally Synchronous) approach. Aren't you going to have metastability problems with multiple clock islands? Wouldn't one be better off with all islands having the same clock frequency, but with relaxed timing between islands?
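To put some numbers behind the metastability worry: the usual engineering answer is that synchronizers don't eliminate metastability, they just make its mean time between failures (MTBF) astronomically long. A minimal sketch using the standard MTBF formula, with made-up device constants (tau, T0 are process-dependent and purely illustrative here):

```python
import math

def synchronizer_mtbf(resolve_time_s, tau_s, t0_s, f_clk_hz, f_data_hz):
    """Standard metastability MTBF estimate for a synchronizer:
    MTBF = e^(t_r / tau) / (T0 * f_clk * f_data), where t_r is the
    time allowed for the flop to resolve; tau and T0 are device constants."""
    return math.exp(resolve_time_s / tau_s) / (t0_s * f_clk_hz * f_data_hz)

# Hypothetical device constants, for illustration only:
tau, t0 = 20e-12, 1e-9
f_clk, f_data = 500e6, 100e6

one_stage = synchronizer_mtbf(1e-9, tau, t0, f_clk, f_data)  # ~1 ns of slack
two_stage = synchronizer_mtbf(3e-9, tau, t0, f_clk, f_data)  # + one extra period
print(f"1-stage MTBF: {one_stage:.2e} s, 2-stage MTBF: {two_stage:.2e} s")
```

The exponential dependence on resolve time is why a two-flop synchronizer at each island boundary is normally considered sufficient, whether the islands run at different frequencies or at the same nominal frequency with uncontrolled phase.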
@tpfj: "Really what I'm saying is the laws of physics that are currently kicking our asses will unfortunately still apply to asynchronous circuits, and hence should then suffer in similar ways."
I completely agree with you on this point. Indeed, the really worthwhile advantages come from other side effects of this situation, not from a direct speed advantage.
The main message I intended to convey is that our lives have been easy because, thanks to synchronous design, we have not needed to pay attention to the internal geometry of the datapath for a long while; but now the paradigm has changed and the clock is starting to be a problem in itself.
The huge clock network in a modern IC can waste as much as 50% of the total power. This is why some of the most practical applications of async circuits are in RFID cards, where async MCUs are heavily used.
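A rough sanity check on that 50% figure, using the textbook dynamic power relation P = alpha * C * Vdd^2 * f. All the capacitance and activity numbers below are hypothetical, chosen only to show how a clock net that toggles every cycle can rival the rest of the logic even with far less capacitance:

```python
def dynamic_power(alpha, cap_farads, vdd_volts, freq_hz):
    # Switching power of a set of nodes: P = alpha * C * Vdd^2 * f,
    # where alpha is the fraction of cycles on which the nodes toggle.
    return alpha * cap_farads * vdd_volts**2 * freq_hz

# Illustrative numbers for a hypothetical chip:
vdd, f = 0.9, 1e9
p_clock = dynamic_power(1.0, 2e-9, vdd, f)   # clock net toggles every cycle
p_logic = dynamic_power(0.1, 20e-9, vdd, f)  # logic toggles ~10% of cycles
frac = p_clock / (p_clock + p_logic)
print(f"clock share of dynamic power: {frac:.0%}")
```

With ten times less capacitance but ten times the activity factor, the clock ends up burning as much dynamic power as all the logic combined, which is the mechanism behind the figures quoted above.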
In addition, the lack of a global clock at a fixed frequency reduces EMI by orders of magnitude in async parts. For this reason, async design is also used in low-power wireless communication modems.
I agree, the trend is collapsing. Surprisingly, Moore's own component, i.e. circuit density, is the one holding out the longest. We lost speed a long time ago, we lost power fairly recently, and we are now rapidly losing silicon $ cost (just look at current process-node fab and related mask costs). I don't believe that asynchronous circuits will solve this either. Sure, they will scoop up all those currently wasted picoseconds on the non-critical synchronous paths, as well as any delay savings from the flops that are absent from an asynchronous equivalent design. But realistically, how much of an advantage do you think this will give you over a well-balanced (i.e. many paths near critical speed) synchronous equivalent? I doubt it is double. Also, remember that a synchronous circuit gives you inherent parallelism through pipelining. How do you achieve the same parallelism asynchronously? Really, what I'm saying is that the laws of physics that are currently kicking our asses will unfortunately still apply to asynchronous circuits, which should hence suffer in similar ways.
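The "I doubt it is double" intuition can be sketched numerically. In a synchronous pipeline the clock period must cover the slowest stage plus flop overhead, while an idealized average-case asynchronous pipeline lets each token move at actual stage speed. The stage delays and overheads below are hypothetical, and handshake overhead is ignored, which flatters the async side:

```python
# Hypothetical stage delays (ns) for a 4-stage datapath.
stage_delays = [0.8, 1.0, 0.6, 0.9]
flop_overhead = 0.15  # setup + clock-to-Q, ns

# Synchronous pipeline: period set by the worst-case stage.
sync_period = max(stage_delays) + flop_overhead

# Idealized async pipeline: steady-state throughput approaches the
# average stage delay (no flop overhead, zero-cost handshakes assumed).
async_period = sum(stage_delays) / len(stage_delays)

speedup = sync_period / async_period
print(f"sync period {sync_period:.2f} ns, async {async_period:.2f} ns, "
      f"speedup {speedup:.2f}x")
```

Even with the async side idealized, the gain here is well under 2x; a well-balanced synchronous design (all stage delays near the maximum) shrinks it further, which is exactly the point being made above.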
"Several measures of digital technology are improving at exponential rates related to Moore's law, including the size, cost, density and speed of components. Moore himself wrote only about the density of components (or transistors) at minimum cost."
@tpfj: "Moore's Law says nothing about speed and everything about circuit density"
Moore's law is all about jumping from one process node to the next and the overall exponential advantage. Density, speed, and power consumption are all tied to the transistor shrinking process.
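This coupling of density, speed, and power through the shrink is classic Dennard scaling, and a small sketch shows the idealized numbers (ideal constant-field scaling assumed; in reality Dennard scaling broke down in the mid-2000s, which is exactly why speed and power were "lost"):

```python
# Ideal Dennard scaling: shrink linear dimensions by factor k per node.
k = 0.7  # ~0.7x linear shrink gives ~2x density per node

density, delay = 1.0, 1.0
for node in range(1, 4):
    density /= k**2  # transistors per unit area grow as 1/k^2
    delay *= k       # gate delay shrinks as k (speed improves)
    # Under ideal scaling, power density stays constant; post-Dennard,
    # it rises instead, forcing clock speeds to stall.
    print(f"node {node}: density x{density:.2f}, gate delay x{delay:.2f}")
```

Three nodes of ideal scaling give roughly 8.5x the density and a third of the gate delay; the point of the comment above is that once leakage broke this picture, only the density term kept tracking the curve.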
Moore's law is meaningless beyond CMOS or derivatives, as it badly collapses when data communication replaces data processing as the most expensive process in terms of power consumption, propagation delay, occupied layout area and, of course, monetary cost.
Remember that just by adding cores/logic, you cannot increase overall performance in general-purpose programming.
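The usual way to quantify this is Amdahl's law: the serial fraction of a workload caps the speedup no matter how many cores you add. A minimal sketch (the 90%-parallel workload is an illustrative assumption):

```python
def amdahl_speedup(parallel_fraction, n_cores):
    # Amdahl's law: speedup = 1 / ((1 - p) + p / n).
    # The serial fraction (1 - p) bounds the speedup at 1 / (1 - p).
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / n_cores)

for cores in (2, 8, 64, 1024):
    print(f"{cores:5d} cores -> {amdahl_speedup(0.9, cores):.2f}x")
```

Even at 1024 cores, a 90%-parallel program stays below 10x, which is why piling on cores does not rescue general-purpose performance.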
Back in the nineties, CPUs were sold by highlighting their clock speed, as this really was the parameter that gave an advantage in day-to-day applications, e.g. office suites, graphics, boot-up time.