Why not optical computing? Well, it comes down to cost, and of course, ecosystem (chip and server design, manufacturing, software, etc.). Electronic computing via CMOS and its successors (Si-Ge, III-V compound hybrids, graphene, and other options) has several years ahead of it, as I see it. For example, at the 8nm Si CMOS node, there will be a sea of 50 billion transistors available for a SoC processor at a manufacturing cost of tens of dollars. I am betting that innovations in circuits, computer architecture (especially in the development and use of specialized co-processors) and software will enable these to be well utilized and meet the aggressive energy efficiency goals that the IAP is aligning with.
I agree with Rick's comment that silicon photonics may find a role in the near future for communication between nodes or racks within the data center, but here again cost has been a barrier, and multi-lane SERDES technologies continue to perform (at low cost!).
@Jim: Indeed, I have heard for a decade that copper is out of gas (like Moore's Law) but engineers keep pushing it a little further as a lower cost option than optics.
Still, I am intrigued by all the startups (about six by my count) that have been doing work on silicon photonics and are now getting bought up by the likes of Cisco and Mellanox, who claim products are coming in 2014.
I was not yet aware of the study that concluded "roughly half of the gains in computational performance since 1985 were realized by advances in computer architecture and software, with the other half by advances in semiconductor manufacturing technology." Intuitively, that feels about right. Moore's Law has been critical to optimizing the power/performance/die area curve, but architecture and software advances have been equally critical, and likely will remain so.
I just talked to Imec last week about their 3D memory stacking technology. Also, what about invoking "sleep mode" on unused transistors? Is there a future in that? Memoir mentions some small gains (5%) in power savings using it here.
@JanineLove. Yes, "sleep mode" as you say, aka "power gating", i.e., turning off gates when not needed, has been employed by chip designers for some time now, as have clock gating and voltage scaling (dialing back frequency and voltage to reduce power consumption). Sleep mode is also utilized for servers in data centers during off-hours. Our IAP colleagues at Harvard, Prof. David Brooks and Prof. Gu-Yeon Wei, are doing innovative circuit work on integrating voltage regulators on-chip to reduce power consumption and enable ultra-fast voltage scaling.
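To see why dialing back voltage and frequency pays off so well, recall that dynamic CMOS switching power goes as P = α·C·V²·f. Here is a minimal Python sketch of that relationship; the activity factor, capacitance, voltage, and frequency values are purely illustrative, not taken from any real chip:

```python
# Dynamic CMOS switching power: P = alpha * C * V^2 * f
# (alpha = activity factor, C = switched capacitance,
#  V = supply voltage, f = clock frequency).
# All numbers below are illustrative placeholders.

def dynamic_power(alpha, c_farads, v_volts, f_hz):
    """Dynamic switching power in watts."""
    return alpha * c_farads * v_volts ** 2 * f_hz

# Nominal operating point: 1.0 V at 2 GHz.
nominal = dynamic_power(alpha=0.1, c_farads=1e-9, v_volts=1.0, f_hz=2e9)

# Scaled operating point: drop voltage 20% and frequency 50%
# (frequency headroom roughly tracks supply voltage).
scaled = dynamic_power(alpha=0.1, c_farads=1e-9, v_volts=0.8, f_hz=1e9)

print(f"nominal: {nominal:.3f} W, scaled: {scaled:.3f} W")
print(f"power reduction: {1 - scaled / nominal:.0%}")
```

Because voltage enters quadratically and frequency linearly, even a modest voltage drop combined with a frequency reduction cuts dynamic power by far more than the performance lost, which is exactly why fast on-chip voltage regulation is so attractive.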
Thanks for the comment AZskibum. Frankly, I think most semiconductor technologists tend to give Moore's Law and Dennard scaling most of the credit for the advances in computing performance, but as you and the Stanford study note, the computer architects and software guys deserve about half the credit, and from here on out they will be shouldering most of the load!
What are the engineering and design challenges in creating successful IoT devices? These devices are usually small, resource-constrained electronics designed to sense, collect, send, and/or interpret data. Some of the devices need to be smart enough to act upon data in real time, 24/7. Are the design challenges the same as with embedded systems, but with developer and IT skills added in? What do engineers need to know? Rick Merritt talks with two experts about the tools and best options for designing IoT devices in 2016. Specifically, the guests will discuss sensors, security, and lessons from IoT deployments.