@Rick: Maybe it's just me, but I believe that Intel is starting to take advantage of its mostly unsung asynchronous-logic IP and know-how dominance.
This idea sparked in my head after reading the descriptions in this column of two of the papers that are going to be presented. These are...
"The approach leverages process variations that create unique delays when passing through a circuit array."
Building a deterministic delay by creating custom logic arrays and balancing the gate capacitive loads is a well-proven asynchronous-logic technique -- and of course, coding information with custom delays is inherently asynchronous.
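Just to make the idea concrete, here's a minimal Python sketch of how per-die process variation in path delays can yield a chip-unique identifier, in the spirit of an arbiter-style delay PUF. This is my own toy model, not the circuit from the paper; the stage count, delay values, and challenge format are invented for illustration.

import random

NUM_STAGES = 64          # delay stages in the array (assumed)
NOMINAL_DELAY = 100.0    # nominal per-stage delay, in picoseconds (assumed)
VARIATION_SIGMA = 2.0    # per-chip random process variation, in picoseconds (assumed)

def make_chip(seed):
    """Each fabricated chip gets its own fixed, random per-stage delays."""
    rng = random.Random(seed)
    return [(NOMINAL_DELAY + rng.gauss(0, VARIATION_SIGMA),   # top-path stage
             NOMINAL_DELAY + rng.gauss(0, VARIATION_SIGMA))   # bottom-path stage
            for _ in range(NUM_STAGES)]

def response_bit(chip, challenge):
    """Race two signals through the array; each challenge bit decides which
    path segment each signal takes. The faster arrival sets the output bit."""
    top, bottom = 0.0, 0.0
    for (d_top, d_bot), c in zip(chip, challenge):
        if c:  # a challenge bit of 1 swaps the two paths at this stage
            d_top, d_bot = d_bot, d_top
        top += d_top
        bottom += d_bot
    return 1 if top < bottom else 0

challenge = [random.randint(0, 1) for _ in range(NUM_STAGES)]
chip_a, chip_b = make_chip(1), make_chip(2)
# Same challenge, different dies -> likely different response bits,
# because each die's process variations give it a unique delay signature.
print(response_bit(chip_a, challenge), response_bit(chip_b, challenge))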
"a 256 node on-chip network offering 20.2 Tbits/second of aggregate bandwidth. It uses packet-switching techniques to set up a link and circuit switching to stream data between nodes."
The more I think about this, the more I recall an old-school flagship asynchronous company: Fulcrum Microsystems. The company was born in 2000 out of Caltech and used asynchronous IP, based mostly on the work of the async-logic pioneer and guru Alain Martin, to build high-performance data-switching chips. The point is that Intel acquired Fulcrum in 2011... and I've been anxiously waiting for the moment when Intel would show a product resembling Fulcrum technology!!
There's no doubt that Intel continues to leverage its IP dominance and extensive know-how, and that Wang's presentation highlights some of the areas where Intel is strongest. I'm not sure there is anything to suggest that they are doing so to any greater degree, however, than they have to this point.
@zewde yeraswork: I'm not sure there is anything to suggest that they are doing so to any greater degree, however, than they have to this point.
I think the question is one of focus. Intel seems to have developed notions of what areas of research they should pursue, and plan to concentrate on them.
They also sound like they will be harder-nosed about results, and ask regularly "Is this particular research effort getting anywhere? Should we continue to fund it, or should we decide it won't pan out, kill the project, and apply the people and resources elsewhere?"
When you are dealing with the sort of advanced technologies Intel is, simply doing the research can be enormously expensive, before you ever get to something that might be used in a product. You need the possibility of a really big win to justify placing a big bet, and Intel will be thinking about what kind of win might result from any effort.
But this does sound like R&D with more focus on D than R.
The very first company that Intel allowed to use its foundry was Achronix, a startup FPGA company.
Certainly it's a first step for Intel in trying to share its foundry costs with other companies, but I always thought it was weird that the first company they did this with was a small startup, until I heard that the company also happens to specialize in asynchronous logic. I wonder if there was some technology sharing involved.
Rick: "FYI, Intel has been shipping Fulcrom's networking silicon for some time. It is also said to be rolling some of Fulcrum's technology into sa future data center interconnect technology."
Thank you very much for this info. About data center market, I strongly believe that Fulcrum interconnect technology may suppose a heavy weapon to help Intel facing the ARM 64bits challenge in the server arena.
Balancing creative thinking, control, and innovation is a world-class challenge. Until you know the outcome, exploring new ideas may be either undisciplined play or the foundation of the next breakthrough. Both technical and business innovation often grow out of the most unexpected places, and fast failures may be early steps on the journey toward the next breakthrough. I wish them luck as Intel Labs Vows to Run Tight Ship: "The 1,000-person group aims to be very disciplined, take risks, fail fast, improve innovation yield, encourage new ideas... and be tightly aligned with Intel's business units."
What are the engineering and design challenges in creating successful IoT devices? These devices are usually small, resource-constrained electronics designed to sense, collect, send, and/or interpret data. Some of the devices need to be smart enough to act upon data in real time, 24/7. Are the design challenges the same as with embedded systems, but with some developer and IT skills added in? What do engineers need to know? Rick Merritt talks with two experts about the tools and best options for designing IoT devices in 2016. Specifically, the guests will discuss sensors, security, and lessons from IoT deployments.