HALF MOON BAY, Calif. — Moore’s law is not keeping pace with the growing needs of the still-young market for cloud services, a Google manager told executives at the annual Industry Strategy Symposium here. He called for a suite of innovations in processors, memories, interconnects, and packaging.
“The slowdown in Moore’s law and the growth of cloud services has brought us to an inflection point,” said Prasad Sabada, a senior director of operations who oversees sourcing for Google’s data center hardware. “The game is changing again and we need the industry to respond in a meaningful way.”
Specifically, he called for processors optimized to reduce latency in context switching and other operations key to Google’s real-world workloads. “We’ve seen many processors optimized for Spec [the SPEC synthetic benchmark suite], but at Google, our workloads differ significantly from Spec.”
Google also wants memory chips with lower latency. “We can get as much bang for the buck improving memory latency as processor performance,” said Sabada, pointing to promising work on new memory architectures.
Nearly a year ago, rival Facebook came out in support of Intel’s 3D XPoint memories, which promise lower latency and greater endurance than today’s NAND flash. Intel started limited sampling of the chips late last year.
In interconnects, today’s typical “processor bus has a lot of overhead accessing I/O and accelerator devices” and is not suited to emerging memory architectures, he said. In addition, optical interfaces such as silicon photonics are needed to link servers in the data center.
Sabada called out IBM’s OpenCAPI interface as one effort Google supports. He did not mention two separate efforts launched last year, CCIX and Gen-Z, open interfaces for accelerators and storage-class memories, respectively.