FPGAs are hardly different from ASICs in this context, besides being reprogrammable. Either way, with FPGAs or ASICs we're talking about implementing designs that specifically target the acceleration of particular processing tasks in a datacenter.
GPUs, on the other hand, have a lot of library support for parallel high-performance computing.
An FPGA or an ASIC could potentially offer lower-power and/or lower-cost acceleration -- once you have decided exactly what to implement in that silicon and have developed & debugged the library functions to allow your existing software stack to access the new silicon.
With GPUs, all of this exists already, which is why I titled my post "GPUs before ASICs."
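To make the "library support already exists" point concrete: GPU array libraries such as CuPy deliberately mirror NumPy's API, so a data-parallel workload often ports to the GPU by swapping a single import, with no new silicon or driver work. A minimal sketch (written with NumPy so it runs anywhere; with a GPU present, `import cupy as np` would execute the same lines on the device):

```python
import numpy as np  # on a GPU box, `import cupy as np` runs the same code on the device

# A large element-wise + reduction workload -- the kind of data-parallel
# task that maps well onto thousands of GPU cores.
x = np.arange(1_000_000, dtype=np.float64)
y = np.sqrt(x) * 2.0   # element-wise kernel
total = y.sum()        # parallel reduction
print(y[4], total > 0)
```

The point is not this particular computation but that the software stack (compilers, drivers, tuned kernels, Python bindings) already sits under that API, which is exactly what an ASIC or FPGA accelerator would still have to build.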
Seems like the argument is for software processing units: identify the core functions that currently limit software bandwidth, like garbage collection, and fix them with specialized hardware functions...
Let's not call these hardware elements ASICs; let's call them SPUs.
@Rick: not so fast! @AZskibum does have a point in applications similar to those that FBook is pushing through its Wedge reference design. Wedge does include micro servers that can benefit from GPUs, thermal effects notwithstanding.
Yes Rick, and with others also modifying their server and storage profiles, the market for data centers, warehouses, servers, and storage looks pretty exciting for the coming years. The goal will be to make on-chip communication work by integrating silicon-based photonic devices for faster communication and computation.