The module is all photonic, right off the silicon: there's a self-guided fibre-optic link on chip, the fibre is mated into the module at the factory, and a new low-cost Corning optical connector replaces the higher-cost CFP.
I seriously, and I mean SERIOUSLY, doubt that the module is "all photonic" and silicon-only, as the hype implies.
The lasers are probably flip-chipped and HAVE to be compound-semiconductor devices.
Nothing new here that wasn't done by Honeywell and others (like Cray and Finisar) in the mid-1990s with POLO etc., and DARPA has been doing all of this for over a decade in its all-optical computer and photonic backplane efforts. There's no mention of any of that history in the article because it's all pumped up about Intel's "breakthrough", lest shareholders get angry at the R&D spent on this so far.
Then there's the question of interoperability and second sourcing. Will this be open sourced? Licensed? There's no mention of wavelength, reach, type of fiber, etc., in 11 pages of drivel that's mostly, and very disappointingly, about a Facebook CPU CIRCUIT BOARD standard, plus an unnecessary picture of two people shaking hands taking up space where a diagram of the optical module's guts should be.
Come on, EE Times - more meat, less sawdust filler straight off the press release.
elPresidente makes some good points in his post:
Silicon is not a good material for creating light. People are trying, but the efficiency of these light sources is so low today that they are not practical for a real communication system, so there is a III-V material in this system somewhere acting as the photon pump. The rest of the elements - modulators, waveguides and receivers - can all be in the silicon, as Rick mentioned. Intel has in the past talked about making a hybrid chip by bonding a silicon wafer to an InP wafer: http://www.intel.com/pressroom/archive/releases/2006/20060918corp.htm. In the Luxtera case we build a micropackaged light source at wafer scale (~1mm x 2mm) that gets placed on top of the chip, shining down into the device to act as the photon pump for the whole device. Luxtera silicon photonics are used in many of the AOC devices that TheMeasurementBlues mentioned. It has been proven to be a very effective, low-cost, high-reliability way to make optical links, competing against and beating VCSEL alternatives, with many added benefits in reach, reliability and the ability to integrate with large VLSI devices.
I also agree that this announcement was very light on details; however, it seems to be a very strategic move by Intel. Open Compute has gone from nothing 18 months ago to one of the most influential computing shows in the world, and there are not very many compute shows any more. By announcing at Open Compute, they get their intentions and message out to the world more effectively than they would by doing this launch at IDF. Intel stated at the show that more details would be coming later in the year.
While all of this was great for silicon photonics, what I believe people are really missing is why you would want to build a system this way. With all of the virtualization that is occurring and the repurposing of racks hour by hour based on load, there are tremendous bottlenecks in the system that create wasted HW resources (and electricity) due to hitting a wall on compute, memory or I/O. With this type of architecture, you can now assign memory, CPU and storage within the rack, or even to adjacent racks.

While Intel showed a typical rack configuration - CPU cards at the top, memory DIMMs in the middle and storage at the bottom - that's just because people would have a hard time visualizing it differently. In reality you could do this across multiple racks and have a memory rack, a CPU rack and a storage rack. Now you can add and assign CPU compute, memory or storage however you like, as these links will only have a few ns of latency and could reach 300-500m. Plus, with all of the ARM solutions discussed at the show, you can update to a new processor vendor or process node at whatever cadence they become available; the same is true for memory or storage. Everything is virtualized over the high-speed fabric, and the fabric is the only constant part of the system.

A good example is FCoE, which works very well, although it hasn't been tremendously successful, mostly for non-technical reasons. If you don't believe in this architecture, look at arguably the compute thought leader of the last 30 years: this is how IBM builds its P series systems, and they know a few things about how to manage big data.
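The latency and reach figures above are worth a back-of-envelope sanity check. Assuming a typical group index of ~1.47 for silica single-mode fibre, light propagates at roughly 5 ns per metre, so "a few ns" only holds for very short intra-rack links; at a 300-500m reach, propagation alone is in the microseconds. A quick sketch (the function name and chosen lengths are illustrative, not from the article):

```python
# One-way fibre propagation delay, assuming a group index of ~1.47
# (typical for silica single-mode fibre; not stated in the article).

C = 299_792_458      # speed of light in vacuum, m/s
GROUP_INDEX = 1.47   # assumed group index of the fibre

def fibre_delay_ns(length_m: float) -> float:
    """One-way propagation delay in nanoseconds over length_m metres of fibre."""
    return length_m * GROUP_INDEX / C * 1e9

# Illustrative link lengths: in-rack, adjacent-rack, cross-data-centre
for length in (2, 20, 300, 500):
    print(f"{length:>4} m -> {fibre_delay_ns(length):8.1f} ns")
```

This suggests the "few ns" figure applies to the serialization/transceiver overhead or to very short links, while the full 500m reach costs about 2.5 microseconds of propagation each way.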