Google should be careful about getting into chip design, as it's not their forte. They're good at software, and should think twice before getting so deep into the hardware business. In hardware, losses are huge.
If you look at high end blades in some big database systems, they're combining off-the-shelf multi-core x86 chips with FPGA-based hardware to accelerate distributed database processing. Google is bound to have problems that can be solved in a similar fashion.
But they're also really big, and the cost of FPGAs, in parts, power, and speed limits, may be suggesting that custom silicon is the answer. And that kind of thing gets even more interesting if you build it into your CPUs, saving power, eliminating any bus or communications bottlenecks between the two processor areas, etc.
Or maybe it's just plain old ARM chips they can buy from multiple sources.
If you consider all the things Google is investing in besides their bread & butter search & data centers, there are lots of reasons for them to have an in-house IC design organization. I agree, this small team is just the tip of the iceberg.
It's those deep learning algorithms that could take Google into the future with AI and all that comes with it. They already use a broad range of algorithms for their searches. Now with DeepMind and with parallel processing cores, possibly running on silicon that they themselves make, the sky is the limit.
I also believe it's best for cloud providers to work with and complement microprocessor and other silicon design producers, especially ARM 64 design houses, helping to add some necessary production volume to that business equation. That holds even if the designs are held captive by customers, so long as the development makes complementary financial sense. In a consolidating industry, drawing that line somewhere just makes sense on the cost-versus-expertise question, for sustainable industry, trade, employment, and gross domestic product, even at you-know-who.
I have also speculated about an x86 decode engine hardened into ARM silicon, with instruction lookup tables, which are certainly available from the other guess-who; others are known to be working on breaking that aspect of the Intel monopoly once again. That would certainly require the deep pockets of public cloud providers to keep Intel's legal and financial guilds at bay.
On data analytics and batch processing: from constant auditing, this analyst has determined that multiple acceleration approaches and techniques are nascent realities of heterogeneous compute platforms capable of entering high-end Xeon territory.
@rick I don't think you can accelerate MapReduce itself much, only the operations it performs, which change by application.
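A minimal sketch of why: in MapReduce the framework's shuffle/group step is the same for every job, while the map and reduce functions are supplied per application, so those are the only parts hardware could meaningfully target. (All names here are illustrative, not Google's actual code.)

```python
from collections import defaultdict

def mapreduce(records, map_fn, reduce_fn):
    """Fixed framework: the shuffle/group step never changes;
    only map_fn and reduce_fn vary by application."""
    groups = defaultdict(list)
    for record in records:
        for key, value in map_fn(record):   # application-specific
            groups[key].append(value)       # framework shuffle/group
    return {k: reduce_fn(k, vs) for k, vs in groups.items()}  # application-specific

# Word count: the per-application part an accelerator would have to target.
def wc_map(line):
    return [(word, 1) for word in line.split()]

def wc_reduce(key, values):
    return sum(values)

counts = mapreduce(["a b a", "b c"], wc_map, wc_reduce)
# counts == {"a": 2, "b": 2, "c": 1}
```

Speeding up the fixed framework code buys little; each new application would need its own accelerated `map_fn`/`reduce_fn`, which is exactly the point.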
Google uses a variety of algorithms for search. Some of those are machine learning and especially deep learning algorithms, which are quite new and are kind of a breakthrough in artificial intelligence. For those, ASICs could become cheaper and lower-power than GPUs, see . Those same algorithms are also useful/critical in Glass, robots, phones, and other places that require AI.
Another option is cache-server (memcached) acceleration. Recently FPGAs have shown great promise there, and an ASIC could do better.
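For context, the workload being offloaded is tiny per request: a hash lookup plus an eviction policy, repeated millions of times a second. A minimal LRU sketch of that get/set hot path (purely illustrative, not memcached's implementation, which is in C):

```python
from collections import OrderedDict

class Cache:
    """Minimal memcached-style LRU cache. The get/set hash-lookup
    and eviction below are the hot path FPGA work offloads."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.store = OrderedDict()

    def set(self, key, value):
        if key in self.store:
            self.store.move_to_end(key)      # refresh recency
        self.store[key] = value
        if len(self.store) > self.capacity:
            self.store.popitem(last=False)   # evict least-recently-used

    def get(self, key):
        if key not in self.store:
            return None                      # cache miss
        self.store.move_to_end(key)          # mark as recently used
        return self.store[key]

cache = Cache(2)
cache.set("a", 1)
cache.set("b", 2)
cache.get("a")      # touch "a" so it is most recently used
cache.set("c", 3)   # over capacity: evicts "b", the LRU entry
```

Because the per-request logic is this simple and regular, it maps well onto fixed-function hardware, which is why the FPGA results are promising.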
There are also other search algorithms that could benefit, but accelerating those through hardware is quite an old idea (and one could use an FPGA/GPU), so we should ask: why now?
Right, it could serve many masters there. And as others have suggested, Google's group might best be used to define the architectures (HW, FW, SW) and then partner with providers to implement their visions. Some of those could be proofs of concept, others released as open source, and others kept proprietary, though I think that last segment would be small. The more Google can get its ideas used by the world (thus building scale and driving down costs), the more ads it can sell into the world.