Sorry about that. My point is that the SerDes is not the majority of the power consumption. Fundamentally, DRAMs are low power BECAUSE their performance is so low. Once you unshackle them with a high-performance interface like RLDRAM or HMC, the power goes up just as one would expect. Wide parallel IO is more energy-efficient for 'short' interconnect, and as the frequency rises the definition of 'short' gets shorter. SerDes is more efficient for longer interconnect. Of course it is possible to overdesign any interface, and when that happens efficiency suffers.
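For a rough sanity check of that crossover, here is a minimal back-of-envelope sketch in Python. Every pJ/bit figure below is an illustrative assumption, not vendor data:

# Back-of-envelope: interface power = bandwidth x energy-per-bit.
# All pJ/bit numbers are illustrative assumptions, not measured values.

def interface_power_w(bandwidth_gbps, pj_per_bit):
    # W = (bits/s) * (J/bit)
    return bandwidth_gbps * 1e9 * pj_per_bit * 1e-12

BW = 960  # Gb/s aggregate, e.g. 64 lanes at 15 Gb/s

# Assumed behavior: parallel IO is cheap over short traces but its cost
# grows quickly with trace length; SerDes pays a fixed serialization
# overhead but stays roughly flat with reach.
print(f"parallel, short reach: {interface_power_w(BW, 2.0):.1f} W")
print(f"parallel, long reach:  {interface_power_w(BW, 12.0):.1f} W")
print(f"serdes:                {interface_power_w(BW, 6.0):.1f} W")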
The best interface is the one that does the job and nothing more. HMC is designed for chaining of cubes; when it is used in an unchained application, there is unnecessary overhead that reduces efficiency.
Well, I don't understand what you are arguing here, but my response was to the question of why the IC is so large, and I think one needs a lot of real estate to be able to handle the amount of heat generated by such a memory.
And I am glad Micron is not living by the mumbo-jumbo superstitious norms of the dark ages, and that they act as engineers who live in the 21st century when they build their products.
The power consumption you mention is comparable to a 36-chip registered DIMM, and HMC delivers far more bandwidth while consuming less board space, routing, and physical volume, even including a modest heat sink. 18W is really no big deal and well within the means of conventional cooling technology for a package that size.
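To put rough numbers on "far more bandwidth at comparable power", here is a quick bandwidth-per-watt sketch; both the DIMM and HMC figures are assumptions for illustration, not datasheet values:

# Rough GB/s-per-watt comparison; all numbers are assumptions.
hmc_bw_gbs,  hmc_power_w  = 80.0, 18.0  # dual-link HMC aggregate (assumed), ~18 W
dimm_bw_gbs, dimm_power_w = 12.8, 18.0  # DDR3-1600 RDIMM peak, comparable power (assumed)

print(f"HMC:  {hmc_bw_gbs / hmc_power_w:.1f} GB/s per watt")
print(f"DIMM: {dimm_bw_gbs / dimm_power_w:.1f} GB/s per watt")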
Amusing that they released a package with 666 balls. Even Intel avoided that: when the Pentium came out they rounded the clock rate from 66.666 down to 66, but when the P-III came out at 666.666 they rounded up to 667. Is HMC the devil's memory? LOL. What is the power for the 4-link device running at max performance? 36W?
Well, the package that Pico used on their board is the "16 x 19.5, 666-ball FBGA" version of the Micron HMC. Based on the (now open) power estimation tool from Micron, the device power ranges from 9W (one link, default corner, 2GB density) to 18.5W (fast corner, dual link). That is a lot of heat that needs to be taken care of.
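A quick heat-flux calculation shows why that matters for a package this small; a sketch that takes only the package footprint and the power range above as given:

# Heat flux for the 16 x 19.5 mm, 666-ball FBGA at the quoted power range.
area_cm2 = (16 * 19.5) / 100.0  # 312 mm^2 = 3.12 cm^2
for power_w in (9.0, 18.5):
    print(f"{power_w:>4} W -> {power_w / area_cm2:.1f} W/cm^2")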
Why all the guessing? The HMC 1.0 spec is public on the HMC Consortium website, and 2.0 for the next generation was just released. Compare the pinout of HMC to, for example, an FPGA with the same number of transceivers. Granted, the FPGA has a bunch of parallel pins, but if you subtract those out, the 31x31 grid for 64 lanes is not out of line. The package on the Pico Computing board appears to be the small one (32 lanes, or 2 links). It's not clear from the board layout whether the devices are chained together or directly connected. There's a good picture of the board on the Xilinx Xcell blog from 11-19-2014.
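As a quick sanity check on the ball count (the signal/supply split here is my assumption, not from the spec):

# Ball budget for a 31 x 31 grid carrying 64 SerDes lanes.
total_balls  = 31 * 31     # 961 balls
signal_balls = 64 * 2 * 2  # assume each lane is full duplex: TX pair + RX pair
other_balls  = total_balls - signal_balls  # power, ground, sideband, spares
print(f"{total_balls} total, {signal_balls} signal, {other_balls} for everything else")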
As for power consumption: SerDes can be power hungry if you have a high-drive, highly configurable transceiver, but for something tightly controlled like the memory interface on HMC or the serial interface on the MoSys Bandwidth Engine, the SerDes power is not a significant contributor. When you architect a DRAM to be really high performance, it is going to consume more power in total but still be more efficient per bit moved. Even without an NDA you can use the Micron power estimator tools to compare their standard commodity DRAM devices to RLDRAM. At the maximum power of each, the RLDRAM is roughly 4x the JEDEC DRAM, simply because of the performance it delivers.
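Here is a toy illustration of "more total power but more efficient per bit"; the power and bandwidth numbers are placeholders, not output from the Micron estimator:

# Energy per bit = power / bit rate. Numbers below are illustrative only.
devices = {
    "commodity DDR3 (x8)": {"power_w": 1.0, "bw_gbs": 1.6},  # assumed
    "RLDRAM3 (x36)":       {"power_w": 4.0, "bw_gbs": 8.0},  # ~4x the power (assumed)
}
for name, d in devices.items():
    pj_per_bit = d["power_w"] / (d["bw_gbs"] * 8e9) * 1e12
    print(f"{name}: {pj_per_bit:.0f} pJ/bit")

With these assumed numbers the RLDRAM burns 4x the watts yet lands at a lower pJ/bit, which is the point: efficiency should be judged per bit delivered, not per device.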
I believe the main issue with these devices is heat. Even though they are "low power", the heat generated on each device by 64 transceivers running at 15Gbps needs a lot of area to dissipate. Micron has a power estimation program for the HMC devices, and if you have an NDA with them you can run it and do some calculations yourself.
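For scale, simple arithmetic on those transceiver figures gives the raw bandwidth the package has to sustain:

# Aggregate raw link bandwidth of 64 transceivers at 15 Gb/s each.
lanes, rate_gbps = 64, 15
total_gbps = lanes * rate_gbps
print(f"{total_gbps} Gb/s raw, ~{total_gbps / 8:.0f} GB/s before protocol overhead")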
Since the chips are stacked and the interface is serial, why the huge package? Since the reason clearly isn't pin count, I can only conclude that it is power or signal integrity... Are the HMC folks saying why?