Whether by direct shrinking, more bits per cell, or more vertical layers, the added complexity and associated cost are countering the decrease in cost per bit that scaling normally delivers, and bit quality is degraded along the way. Managing so many marginal bits in one place puts a lot of demand on the controller, so to be fair the controller cost should be counted in the system cost. Then flash won't look so cheap anymore.
@resition That is an unusually thorough review, quite interesting.
SSD controllers have a history of being stripped down to basics. A couple of years ago they were typically built around an ARC processor; these days they have stepped up to ARM. The ECC is a major IP block, but it is probably more a problem of having rights to a competent design than of a large number of gates.
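To make the ECC point concrete, here is a toy single-error-correcting Hamming encoder/decoder in C. It only illustrates the kind of syndrome logic an ECC block performs; real SSD controllers use far stronger BCH or LDPC codes over whole flash pages, and nothing here reflects any vendor's design. The function names and the 12-bit codeword size are invented for the example.

/* Toy Hamming(12,8) code: corrects any single flipped bit.
 * Only a sketch of the principle behind an ECC block. */
#include <stdio.h>
#include <stdint.h>

/* Spread 8 data bits into codeword positions 3,5,6,7,9,10,11,12
 * (1-indexed), leaving positions 1,2,4,8 free for parity. */
static uint16_t hamming_encode(uint8_t data)
{
    uint16_t cw = 0;
    int d = 0;
    for (int pos = 1; pos <= 12; pos++) {
        if (pos == 1 || pos == 2 || pos == 4 || pos == 8)
            continue;                          /* parity slot */
        if (data & (1 << d++))
            cw |= 1 << (pos - 1);
    }
    /* Parity bit p covers every position whose index has bit p set. */
    for (int p = 0; p < 4; p++) {
        int par = 0;
        for (int pos = 1; pos <= 12; pos++)
            if ((pos & (1 << p)) && (cw & (1 << (pos - 1))))
                par ^= 1;
        if (par)
            cw |= 1 << ((1 << p) - 1);
    }
    return cw;
}

/* Recompute the checks; a nonzero syndrome is the 1-indexed
 * position of a single flipped bit, which we flip back. */
static uint16_t hamming_correct(uint16_t cw)
{
    int syndrome = 0;
    for (int p = 0; p < 4; p++) {
        int par = 0;
        for (int pos = 1; pos <= 12; pos++)
            if ((pos & (1 << p)) && (cw & (1 << (pos - 1))))
                par ^= 1;
        if (par)
            syndrome |= 1 << p;
    }
    if (syndrome)
        cw ^= 1 << (syndrome - 1);
    return cw;
}

int main(void)
{
    uint16_t cw = hamming_encode(0xA5);
    uint16_t corrupted = cw ^ (1 << 6);        /* flip bit at position 7 */
    printf("corrected ok: %s\n",
           hamming_correct(corrupted) == cw ? "yes" : "no");
    return 0;
}

The gate count for something like this is trivial; the hard part, as noted above, is licensing or designing a code strong enough for modern flash error rates, which is why ECC shows up as an IP block rather than a silicon-area problem.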
In the PCIe era we are going to see "ordinary" drives in the 1TB range with throughputs of 1.5 to 2.0 GB/s, about 3x faster than SATA-3. The NVMe command set is inherently efficient to handle, but the controller still needs to track sector mappings, plan block erases in advance, infer sequentiality in user access patterns, monitor wear levelling, and move the data itself. So we might see some pressure to move to the 28..35 nm range, probably on an LP process, in the next wave of controllers, especially as the floodgates seem to have opened for 28nm capacity and new chips seem to come out every day. With a more mature design pipeline and less cost and delay in production, it won't be long.
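For a sense of what "tracking the sector mappings" and "wear levelling" mean in code, here is a minimal sketch in C of an FTL's core bookkeeping: a flat logical-to-physical page map plus a greedy pick of the least-worn block with free pages. The geometry constants and function names are invented for illustration; a real FTL layers caching, journaling, and garbage collection on top of this.

/* Minimal FTL bookkeeping sketch: hypothetical geometry and names. */
#include <stdint.h>

#define NUM_BLOCKS      1024
#define PAGES_PER_BLOCK 256
#define NUM_PAGES       (NUM_BLOCKS * PAGES_PER_BLOCK)

static uint32_t l2p[NUM_PAGES];              /* logical -> physical page */
static uint32_t erase_count[NUM_BLOCKS];     /* per-block wear counter */
static uint32_t next_free_page[NUM_BLOCKS];  /* write pointer per block */

/* Pick the non-full block with the lowest erase count: the simplest
 * possible wear-leveling policy. */
static uint32_t pick_block(void)
{
    uint32_t best = UINT32_MAX;
    for (uint32_t b = 0; b < NUM_BLOCKS; b++) {
        if (next_free_page[b] >= PAGES_PER_BLOCK)
            continue;                        /* full; needs erase first */
        if (best == UINT32_MAX || erase_count[b] < erase_count[best])
            best = b;
    }
    return best;                             /* UINT32_MAX: none free */
}

/* Flash can't overwrite in place, so every host write lands on a fresh
 * physical page and the old mapping is dropped (GC reclaims it later). */
static void write_logical_page(uint32_t lpn)
{
    uint32_t b = pick_block();
    if (b == UINT32_MAX)
        return;                              /* a real FTL would run GC */
    uint32_t ppn = b * PAGES_PER_BLOCK + next_free_page[b]++;
    l2p[lpn] = ppn;
    /* ...DMA the host data into physical page ppn here... */
}

/* Reads just follow the map. */
static uint32_t read_physical_page(uint32_t lpn)
{
    return l2p[lpn];
}

Even this stripped-down version needs 1MB of map state for a quarter-million pages, which hints at why the DRAM and the mapping logic dominate controller design as drives grow.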
I wonder what Novachips is using for the HLSSD controller? That seems pretty fancy.