But multicore and parallel processing are not really innovations so much as matters of affordability and luxury. The real innovation is in making low-power devices and smart software, and in integrating the two. More cores mean more power consumption, but battery capacity is staying the same, as far as smartphones are concerned.
Ah! Exactly, Frank. The reason COTS performance has become militarily scary is precisely because it's so cheap and easy to leverage into systems with military application. Those who do are taking a free ride on many billions of dollars of investment made by private industry in the chip designs, the algorithms, the tools, the fabs, and the know-how...
I'd say the main way COTS has helped the U.S. is to lower the cost and SWaP (size, weight and power). Ideally, it would also have been an advantage for the U.S. if we were collectively more nimble than the rest of the world. In the commercial space, I propose that we often (but not always) are, but for military equipment we're slow. Sometimes even beyond slow. And it's not the designers -- I've met many of them and they're really sharp and dedicated folks. It's the specification, requirement, procurement, and acquisition system that exists around them, I think, that causes the slowness and I suspect much of the cost inflation.
Agree; check out our UPSIDE program here at DARPA. The way I look at it, DARPA's PERFECT program extracts the maximum DoD-relevant performance out of what's left on Moore's Law, and UPSIDE asks the follow-up question "what else is out there besides?"
Consider the economics of scaling to 5 nm on 450 mm wafers. How many ICs will ever have enough volume to justify the development costs? I understand DARPA's concerns, and the fact that defense technology has always relied upon the commercial sector to follow Moore's Law on its own -- meaning that if not for commercial demand for ICs at the next-generation process node (first in PCs, then later in smartphones, tablets, game consoles, etc.), defense system developers would probably not have access to these IC technologies. Defense electronics has never had high enough volume to fill a fab -- at any process node.
But to what extent are U.S. electronic defense systems' advantages due to CMOS scaling rather than to other attributes -- new IP, new architectures, etc.? Even if Moore's Law slows and eventually comes to a halt, and everyone in the world has access to the same process technology, I hardly think that everyone in the world will suddenly have the capability to design and successfully build, test, and deploy the types of systems -- and the SoCs that go into those systems -- that DARPA and the contractors in the DoD food chain are able to. Not that they can't or won't start catching up, but my point is simply that CMOS scaling is just one variable -- and not even the most important variable -- that has enabled U.S. defense technology to maintain its leading edge.
Couldn't agree more, Brian. What your argument suggests is that the price to the U.S. Government of a new military system, such as a fighter plane, is not a linear function of the price of the electronic components contained therein.
We have an unusual way of looking at some things here at DARPA. We try to find technology possibilities that lie somewhere between physically impossible and "pretty darned hard"; we call that "DARPA-hard." Military procurement and acquisition lies way beyond DARPA-hard!
I would assume that the economics for military applications are way different from those for commercial applications. An iPad must be purchasable for a few hundred dollars, and that requires huge volume. Military runs are more like a few hundred units at most, so the cost per part will be way higher. Now, if you can do everything the military does with an iPad, then what have we been wasting our money on? The cost to make custom hardware that can perform a specific function will probably always be beyond the purchasing power of many nations -- the cost of a fighter plane is probably more than the GDP of many nations.
In the last few decades, processing power has no doubt had a direct relation to the number of transistors. That may have changed by now: multicore design and parallel processing may have a greater effect today.
If Moore's Law is coming to an end with current technology, it is high time for some different, out-of-the-box technology to evolve -- e.g., nanotechnology, or using molecular biology to build future circuits.
That is a really scary thought that I'd never considered before. It's especially sobering because although I am sure people will argue about when, I think we can all agree that eventually Moore's Law will run out of steam.
But Mr. Colwell, I would ask: is it Moore's Law that needs to continue, or just scaling? I know they are largely considered the same thing, but I see a subtle difference. I know that process engineers and technologists can continue scaling to smaller nodes well beyond where we are today. The question is whether they can do it economically. If not, it wouldn't make sense for continued mass production of chips for use in smartphones, tablets, PCs, etc. But if it's a question of producing a limited number of chips for national security purposes, wouldn't the government want to continue funding that, even at great expense?