Whether through direct shrinking, more bits per cell, or more vertical layers, the added complexity and associated cost are countering the normally expected decrease in cost per bit, and bit quality is degraded. Of course, managing so many bits in one place also places heavy demands on the controller. To be fair, the controller cost should therefore be counted in the system cost. Then flash won't look so cheap anymore.
@resition That is an unusually thorough review, quite interesting.
SSD controllers have a history of being stripped down to basics. A couple of years ago they were typically built around an ARC processor; these days they have stepped up to ARM. The ECC is a major IP block, but it is probably more a matter of having rights to a competent design than of a large gate count.
In the PCIe era we are going to see "ordinary" drives in the 1TB range with throughputs in the 1.5 to 2.0 GB/s range, about 3x faster than SATA-3. The NVMe command set is inherently efficient to handle, but the controller will still need to track sector mappings, plan block erases in advance, infer sequentiality in user access patterns, and monitor wear levelling, as well as move data. So we might see some pressure to move to the 28 to 35 nm range, probably on an LP process, in the next wave of controllers, especially as the floodgates seem to have opened for 28nm capacity and new chips seem to come out every day. With a more mature design pipeline and less cost and delay in production, it won't be long.
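To give a feel for the bookkeeping burden on the controller, here is a deliberately toy sketch (my own illustration, not any vendor's design) of two of the duties mentioned above: tracking logical-to-physical sector mappings and steering writes by erase count for wear levelling. The `ToyFTL` class and its methods are hypothetical names; real FTLs add garbage collection, bad-block management, and much more.

```python
# Toy flash-translation-layer sketch (illustrative only, hypothetical design):
# maps logical pages to physical (block, page) slots and tracks per-block
# erase counts so new writes can be steered to the least-worn block.

class ToyFTL:
    def __init__(self, num_blocks, pages_per_block):
        self.map = {}                      # logical page -> (block, page)
        self.erase_counts = [0] * num_blocks
        self.next_free = [0] * num_blocks  # next free page index per block
        self.pages_per_block = pages_per_block

    def write(self, logical_page):
        # Wear levelling: direct the write to the least-erased block
        # that still has a free page.
        candidates = [b for b in range(len(self.erase_counts))
                      if self.next_free[b] < self.pages_per_block]
        block = min(candidates, key=lambda b: self.erase_counts[b])
        self.map[logical_page] = (block, self.next_free[block])
        self.next_free[block] += 1

    def erase(self, block):
        # Drop mappings into the block, free its pages, count the erase.
        self.map = {l: p for l, p in self.map.items() if p[0] != block}
        self.next_free[block] = 0
        self.erase_counts[block] += 1

ftl = ToyFTL(num_blocks=4, pages_per_block=2)
for lp in range(6):
    ftl.write(lp)
```

Even this stripped-down version shows why the workload scales with capacity: every write touches the mapping table, and every erase decision consults the wear statistics.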
I wonder what Novachips is using for the HLSSD controller? That seems pretty fancy.