
DDR4 Heir-Apparent Makes Progress

3/11/2014 10:15 AM EDT
Microsoft?
msporer   3/14/2014 7:48:04 PM
Microsoft is listed in this article and was once a developer member, but now they don't show up on HMCC.org. I wonder why not?

Re: DDR4 replacement??
BitHead77   3/14/2014 5:08:11 PM
2GB would make an interesting LLC (Last Level Cache): you could use a large cache line size (512 bytes to 4K bytes) to optimize the transfers between DDR and HMC. I suspect you would still see most of the power savings, since the DDR memory lines would be idle most of the time.
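A quick back-of-envelope sketch of why a large line size helps here (my own illustrative numbers, not from the post): the number of lines in a 2GB cache, and with it the tag-storage overhead, shrinks as the line grows.

    # Illustrative only: tag overhead for a 2 GB HMC used as a last-level cache.
    # Line sizes and the per-line tag/state width are assumptions.
    CACHE_BYTES = 2 * 1024**3      # 2 GB HMC acting as the LLC
    TAG_BITS = 48                  # assumed tag + state bits per line

    for line_bytes in (64, 512, 4096):
        lines = CACHE_BYTES // line_bytes
        tag_mb = lines * TAG_BITS / 8 / 1024**2
        print(f"{line_bytes:>5} B lines: {lines:,} lines, ~{tag_mb:.0f} MB of tags")

With 64-byte lines you would need roughly 192MB just for tags; at 4KB lines that drops to about 3MB, and each miss moves a DDR-friendly burst.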

Another application could be as a cache on a rotating-media disk drive; the HMC could be glued right on top of the disk controller.

Re: DDR4 replacement??
alex_m1   3/12/2014 5:36:22 PM
HMC -- 70 Percent Less Power, 8x the Transfer Rate of DDR4

http://www.dailytech.com/Micron+Samples+Hybrid+Memory+Cube+With+8x+the+Transfer+Rate+of+DDR4/article33446.htm

Re: DDR4 replacement??
TarraTarra!   3/12/2014 5:12:18 PM
I take it that HMC is lower in power, which would reduce opex. However, the cost of the memory would be higher. Is there any data on how much power, say, 16GB of HMC would burn vs. DDR4? I don't think the acquisition cost of HMC could come close to commodity DDR4 memory. Also, the more exotic (for now) manufacturing techniques like TSVs are bound to hurt yield, further increasing cost.

I believe the cost of HBM will get lower, but to get there it will need significant volumes and improvements in manufacturing. That can only happen if the large CPU vendor (Intel) gets on board. Without that, this cannot replace DDR4.
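Not real data, but a hedged sketch of the comparison being asked for, taking the roughly 70 percent power reduction quoted in the DailyTech link above and an assumed figure for a 16GB DDR4 DIMM:

    # All figures are assumptions for illustration, not measurements.
    DDR4_WATTS_PER_16GB = 4.0              # assumed active power of a 16 GB DDR4 DIMM
    HMC_POWER_REDUCTION = 0.70             # the "70 percent less power" marketing claim

    hmc_watts = DDR4_WATTS_PER_16GB * (1 - HMC_POWER_REDUCTION)
    print(f"16 GB: DDR4 ~{DDR4_WATTS_PER_16GB:.1f} W vs. HMC ~{hmc_watts:.1f} W")

A watt or three saved per 16GB is real money at datacenter scale, but it has to be weighed against the acquisition-cost premium raised above.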

Re: DDR4 replacement??
TarraTarra!   3/12/2014 5:03:31 PM
The math is off. You are mixing operating costs and savings that accrue over time with the acquisition cost of a server.

How much lower in power per GB would HMC be compared to DDR4? Note that DDR4 already reduces power fairly well relative to DDR3.
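To make the capex-versus-opex point concrete, here is a hedged back-of-envelope calculation; every number is a placeholder assumption, not from the article:

    # Hypothetical: value of a memory power saving over a server's service life.
    WATTS_SAVED = 10.0      # assumed memory-subsystem saving per server (HMC vs. DDR4)
    PUE = 1.5               # assumed datacenter overhead for cooling and distribution
    KWH_PRICE = 0.10        # assumed $/kWh
    YEARS = 4               # assumed server depreciation period

    kwh_saved = WATTS_SAVED * PUE * 24 * 365 * YEARS / 1000
    dollars = kwh_saved * KWH_PRICE
    print(f"~{kwh_saved:.0f} kWh saved, worth about ${dollars:.0f} over {YEARS} years")

Under those assumptions the lifetime energy saving is on the order of $50 per server, so the HMC acquisition premium would have to be small for the opex argument alone to carry it; that is the distinction being made here.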

Re: how does this relate to servers?
DougInRB   3/12/2014 3:49:49 PM
As TanjB pointed out, the bandwidth per GB just doesn't make sense with so little memory in a fabric. Why would I want to put 32GB on a server using HMC when I can get the same amount on a single DIMM, at lower cost and in less physical space?

Until they actually get more GB per HMC, this looks like a great product for high-speed switches and high-performance FPGA-attached hardware accelerators, but not for servers.

Look at Dell's and others' rack servers. You can drop 512+GB of memory into them today, and many do.

Re: how does this relate to servers?
alex_m1   3/12/2014 2:41:29 PM
My guess is that a large enough part of the server market could manage with 16GB, which comes down to 4 chained HMCs. Seems possible.

Even 32GB might work.
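The arithmetic behind that guess, under the assumption of 4GB cubes (with the 2GB cubes discussed elsewhere in this thread, the counts double):

    # Chained-HMC capacity sketch; cube density is an assumption.
    def cubes_needed(target_gb, cube_gb):
        # ceiling division: cubes required to reach the target capacity
        return -(-target_gb // cube_gb)

    for cube_gb in (2, 4):
        for target_gb in (16, 32):
            print(f"{target_gb} GB with {cube_gb} GB cubes: {cubes_needed(target_gb, cube_gb)} cubes")

So 16GB is 4 cubes only if the cubes are 4GB; with 2GB cubes it takes 8, which is already the per-channel chaining limit mentioned below.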

Re: how does this relate to servers?
DougInRB   3/12/2014 2:22:23 PM
Even though you can chain them together, you are limited to 8 HMC parts per channel. So the CPU will need multiple channels to support a large-memory server. That's no problem: they already support multiple channels, and DDR4 requires far more signals per channel than HMC does.

The real problem is the size of the HMC. They are huge (31mm x 31mm)! You can't cram enough of them onto a motherboard or DIMM to get a server with 1.5TB of DRAM, like you can with the 64GB DDR4 DIMMs that will be available later this year.
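A rough sketch of that footprint argument; the board-area comparison ignores routing and spacing and is only meant to show the scale:

    # Crude comparison for 1.5 TB of DRAM: 64 GB DDR4 DIMMs vs. 2 GB HMC cubes.
    TARGET_GB = 1536
    DIMM_GB, HMC_GB = 64, 2
    HMC_MM2 = 31 * 31                  # 31 mm x 31 mm package, from the post

    dimms = TARGET_GB // DIMM_GB       # 24 DIMMs, a normal 2-socket server board
    cubes = TARGET_GB // HMC_GB        # 768 cubes
    area_cm2 = cubes * HMC_MM2 / 100
    print(f"{dimms} DIMMs vs. {cubes} HMC cubes (~{area_cm2:.0f} cm^2 of packages alone)")

At 8 cubes per channel, 768 cubes would also mean 96 host channels, which makes the same point from the signal-count side.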

Re: how does this relate to servers?
alex_m1   3/12/2014 7:13:42 AM
Multiple HMCs can be chained together to appear as a single, mega-humongous memory.

Re: how does this relate to servers?
TanjB   3/12/2014 1:09:39 AM
The point is that the chip stack is overkill for a large memory space. 100GB/s of bandwidth per 2GB cube is far more than you need when building a server with a lot of memory: with 50 modules like that, what host chip would have the interconnect for them, or even need it? And you can't get much bigger cubes, because capacity is capped at DRAM die capacity times the number of TSV layers possible. So the whole thing looks optimized for small-memory scenarios.

Where is the server equivalent, or is this simply not coming to a server any time soon?
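TanjB's mismatch in numbers (bandwidth and capacity per cube are from the post; the 100GB server is illustrative):

    # Aggregate bandwidth when capacity is built out of small, very fast cubes.
    CUBE_GB, CUBE_GBPS = 2, 100     # per the post: 2 GB and ~100 GB/s per cube
    TARGET_GB = 100                 # illustrative large-memory server

    cubes = TARGET_GB // CUBE_GB
    agg_tbps = cubes * CUBE_GBPS / 1000
    print(f"{cubes} cubes -> ~{agg_tbps:.0f} TB/s of aggregate bandwidth")

Fifty cubes would offer around 5TB/s in aggregate, far more than any host of that era could sink, which is why the capacity-to-bandwidth ratio looks tuned for small memories.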
