@Kinnar: but in the case of a normal, general-purpose server everything goes to one location; in that case, write endurance will need to be considered.
A normal, general purpose server has only one location? I doubt it.
And even if it is on the read-optimized SSD, see my comments about write limits on NAND flash. How long do you think it's likely to take before write endurance becomes a concern? Offhand, a rather long time.
I'd be concerned about timely updates, but that would likely be a matter of caching and asynchronous batch writes.
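For what it's worth, that pattern is easy to sketch: buffer the small writes (e.g. access-log entries) in memory and let a background thread flush them to disk in batches, so reads on the hot path never wait on the disk. A minimal Python sketch; the file name and batch size are made-up illustrations:

```python
# Asynchronous batched writes: the hot path only enqueues; a background
# thread drains the queue and writes to disk in batches.
import queue
import threading

log_buffer = queue.Queue()
BATCH_SIZE = 100

def flusher(path):
    batch = []
    while True:
        entry = log_buffer.get()
        if entry is None:          # sentinel: flush what's left and stop
            break
        batch.append(entry)
        if len(batch) >= BATCH_SIZE:
            with open(path, "a") as f:
                f.writelines(batch)
            batch.clear()
    if batch:                      # final partial batch
        with open(path, "a") as f:
            f.writelines(batch)

t = threading.Thread(target=flusher, args=("access.log",), daemon=True)
t.start()

# Hot path: enqueueing is cheap and never touches the disk directly.
def record_access(url):
    log_buffer.put(f"GET {url}\n")
```

The trade-off is the usual one for write-behind caching: entries still in the queue are lost on a crash, which is acceptable for statistics but not for transactional data.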
This is a very good solution! Separating read and write operations in a cloud environment will be less feasible, but working within the local operating system's storage stack it would be possible to separate the read and write locations onto different disk drives. That would be a matter of some research and development, though.
But when data is read from a server that is defined as read-intensive, it simultaneously needs to write something to the server, whether that is keeping log statistics or updating content. Yes, for a data warehouse there will be separate locations for storing this data, but in the case of a normal general-purpose server everything goes to one location, and in that case write endurance will need to be considered.
For NAND flash, I believe the current limits are about 100,000 writes per cell before it goes bad. And the controller circuitry is designed to transparently move data from failing cells, mark them bad, and remap, so that what you see is graceful degradation as total drive capacity decreases. In most cases, I'd expect the flash drive to be removed and replaced with a bigger, faster, better-performing unit before degradation is even noticeable. (That's 100,000 writes per cell. How long will it take for any particular cell to be written to 100,000 times? How many cells are in a drive? How long will it take for wear to be noticeable? Offhand, a long time.)
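To put rough numbers on that, here's a back-of-envelope estimate. Every figure below is an illustrative assumption (ideal wear leveling, made-up capacity and workload), not a spec for any real drive:

```python
# Rough NAND wear-out estimate under assumed, ideal wear leveling.
capacity_gb = 256            # assumed drive capacity
pe_cycles = 100_000          # assumed program/erase cycles per cell
host_writes_gb_per_day = 50  # assumed daily write workload
write_amplification = 2      # assumed controller overhead factor

# Total host data the drive can absorb before cells wear out:
total_writes_gb = capacity_gb * pe_cycles / write_amplification
lifetime_days = total_writes_gb / host_writes_gb_per_day
print(f"~{lifetime_days / 365:.0f} years before wear-out")  # ~701 years
```

Even with generous write amplification, the numbers come out in centuries for this workload, which is why replacement-before-wear-out is the common outcome.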
Because of that, if I'm a server admin, I'm less concerned with write endurance than write speed.
Unless you are doing OLTP with lots of database updates, reading the data quickly will be far more important than writing it, so flash optimized for fast reads can be attractive.
For read-intensive operations the number of writes needed would be far less than typical, and the drive would probably become obsolete before reaching the point of failure. The bigger problem I can see is with the focus on read performance coming at the cost of write performance. Performing updates to files could cause the writes to interfere with the high-performance reads.
There is a write before there is anything to read. However, no doubt, an article on the web is probably read a million times while there is only one write. Fast reads will definitely improve the user experience. What's the speed improvement versus the write penalty?
Depending on the penalty, when a web service is being developed with both reads and writes, one of the many challenges lies in how the cloud system is designed so that reads and writes go to different servers.
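One common way to do that is to split at the query router: mutations go to a primary, reads round-robin across replicas. A toy Python sketch; the host names are made up for illustration:

```python
# Read/write splitting at the routing layer: writes hit the primary,
# reads are spread across read-optimized replicas.
import itertools

PRIMARY = "db-primary.internal"
REPLICAS = ["db-replica-1.internal", "db-replica-2.internal"]
_replica_cycle = itertools.cycle(REPLICAS)

def route(query: str) -> str:
    """Pick a backend host for a SQL statement."""
    verb = query.strip().split()[0].upper()
    if verb in {"INSERT", "UPDATE", "DELETE"}:
        return PRIMARY
    return next(_replica_cycle)

print(route("SELECT * FROM posts"))    # db-replica-1.internal
print(route("INSERT INTO posts ..."))  # db-primary.internal
```

The catch, of course, is replication lag: a read routed to a replica right after a write may see stale data, which is exactly the "timely updates" concern raised above.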