SAN JOSE, Calif. – Intel Fellow and wunderkind Matthew Adiletta gave a history, dating back to 2006, of how the company developed its Atom-based microserver. But he failed to shed much light on the future of the new processor, which will face competition from a half-dozen 64-bit ARM server SoCs in 2014.
“Intel will soon launch Centerton, a 64-bit, [dual-core Atom processor] with ECC and server features—it’s just the first step and the road map coming will be compelling,” said Adiletta, who helped develop Intel’s first network processor and now directs a lab that pioneered the Atom server work. Adiletta spoke on an Intel call on Thursday to brief the press on server strategy.
Hewlett-Packard said earlier this year that Centerton will be the first of several processors it will use in a new low-power server family. Intel said it will roll out a 22 nm version of the chip, called Avoton, in 2013.
Industry reports said Avoton will sport a new out-of-order Atom core along with a DDR3 memory controller, multiple Gbit Ethernet MACs and support for serial ATA and PCI Express 2.0. Longer term, Intel has said it is working on a new cluster interconnect for all its server processors using technology acquired from Cray and others.
Adiletta confirmed Intel plans out-of-order Atom cores and more integrated SoCs with lower idle power levels, but would not give details.
“A lot of performance can be gained very quickly as we add sophistication to Atom cores,” Adiletta said. “We are finally doing well now with SoC integration and tool suites for integration with the right IP blocks,” he said.
Initial Intel projections that low power microservers might grow to ten percent of the overall server segment are a “reasonable first approximation based on the parallel workloads this is well suited for, but the software is evolving,” he said. “Frankly I don’t think we know but if it gets to be more [than ten percent]…and I think our customers don’t know either,” he added.
Adiletta’s work toward microservers began in 2006 when Pat Gelsinger, then manager of Intel’s server group, asked him to meet with a Wall Street CTO who used blade servers. “I quickly appreciated the need for density for ease of cabling, management and quickly getting compute online,” Adiletta said.
In 2007, he started comparing performance per watt characteristics, developing integrated CPU cards using Atom and Core 2 Duo processors. When management asked for external validation of his findings, he took the Atom board to Andy Bechtolsheim, a serial entrepreneur who co-founded data center companies including Sun Microsystems and Arista Networks.
“It was a fun meeting: Andy reviewed all the information and then shook his head,” Adiletta said. “He said it hurt his head to think of all the opportunities if we could realize this,” he recalled.
Researchers at Carnegie Mellon used the boards in 2009 for some of their studies of fast arrays of so-called wimpy nodes. By then, Intel clearly saw that some server workloads, such as Hadoop jobs that run many small, highly parallel chunks of code, would benefit from an Atom server.
“More chefs in the kitchen helps up to a point depending on what’s served,” he said.
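A minimal sketch of the workload shape he is describing, assuming nothing about Intel's own code: many small, independent chunks that an embarrassingly parallel map spreads naturally across many modest cores.

from multiprocessing import Pool

def count_words(chunk: str) -> int:
    # One small, independent unit of work -- the kind that suits a wimpy core.
    return len(chunk.split())

if __name__ == "__main__":
    chunks = ["the quick brown fox jumps"] * 1000   # stand-in for file splits
    with Pool(processes=8) as pool:                 # one worker per small core
        totals = pool.map(count_words, chunks)      # embarrassingly parallel map
    print(sum(totals))                              # aggregate, reduce-style

As long as the chunks stay independent, adding cores adds throughput, which is the "more chefs" point: it helps right up until the dish itself stops dividing cleanly.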
By 2009, Intel had started working on Centerton and its follow-ons. “Intel has a head start in microservers; we are very bullish and have an excellent road map,” he said.
Intel can't keep telling two conflicting stories. On one hand, it says the 22 nm rollout need not be so fast because demand is soft and there is too much 32 nm inventory to clear out. On the other hand, if Intel wants design wins against ARM, it needs to roll out 22 nm across the board much faster, even if that means retiring 32 nm earlier than expected.
Guess I was thinking about the "mobile" Atom, the one that's arriving in 2015.
But this also shows how out of touch Intel is with reality. I thought they were going to make Atom a "prime" chip for their fabs, giving it as much priority as their Core chips so they could compete better with ARM.
So why are they releasing the mobile chip so much later than the server one? I guess the server part will be much less efficient? But then, how is it supposed to compete with ARM in servers? Yeah, I don't see this ending very well for Intel in either market.
The question is whether 32- or 64-bit ARMs on a blade are a viable high-margin business and can compete with Xeon in servers on system management and targeted acceleration.
And it's not really an issue of a wimpy ARM but of a crippled ARM, given the need for the architectural enhancements that can make a StrongARM®. An ARM architectural license is advantageous over a design license here.
A cautionary note: Intel is an ARM customer. Design houses that want to compete against Intel in servers must own control of their processor architecture. On Intel's home field, counting on ARM could turn into a severe fandango. The option to hedge must be considered, knowing that payroll must be met and that staff have house payments and children to send to college.
Bet on your design capabilities and on the leading advantages the ARM architecture offers on the path toward network-in-processor integration.
The ARM community places a scalar ARM at half the performance of a dual-issue Intel core. Superscalar 64-bit ARM is speculated to close the processing gap with Atom on frequency. Two 32-bit 1.1 GHz quad-core ARMs equal one 2.0 GHz hexa-core Xeon 2620 in an Intel-loaded molecular docking benchmark: http://www.lowpowerservers.com/?p=141
This analyst suspects multiple ARMs on a blade will reach into high-end Xeon performance and price rungs. Versus the Xeon 2620, the analyst calculates a Calxeda EnergyCore value of $171 to $200 per component.
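A minimal sketch of how such a per-component value can be backed out, assuming the 2:1 benchmark equivalence cited above; the Xeon list price is my assumption (the E5-2620 launched near $406), not a figure from the analyst.

# Hedged sketch: back out a price-parity "value" for one ARM SoC from the
# benchmark equivalence cited above. The Xeon list price is an assumption;
# the 2:1 SoC ratio comes from the molecular docking result.
XEON_2620_PRICE = 406.0   # USD, assumed launch list price
ARM_SOCS_PER_XEON = 2     # two quad-core ARM SoCs ~ one hexa-core Xeon 2620

def arm_soc_value(xeon_price: float, socs_per_xeon: int) -> float:
    """Value of one ARM SoC at equal benchmark throughput."""
    return xeon_price / socs_per_xeon

print(arm_soc_value(XEON_2620_PRICE, ARM_SOCS_PER_XEON))  # 203.0 USD

That simple price-parity split gives $203, just above the quoted $171-to-$200 range.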
Consequently, the octa-core Atom presents only an initial low-power barrier meant to protect higher-power Xeon products and price rungs, certainly into E3, into E5 and even 46xx for massively dense deployments, where NIC-in-SoC across a fabric in VM mode is aimed at resolving Xeon's power utilization issue.
ARM's power and clocking islands are an advantage versus the Intel architecture, and insight is mixed on how long it will take Intel to gain parity in chip power management.
Time will tell whether dense ARM achieves its aim of Xeon-parity performance, but the concept is spreading.
ARMs on a blade is a viable high-margin business.
Spending more money to be slower ... that dog don't hunt. Servers are about latency and watt-hours, not watts. If you do a calculation on a processor that uses half the power but takes more than twice the time, you have slowed the application and paid more for the privilege. That's where microservers fall down. They'll probably find some applications, just as blade servers have, but this looks like a preemptive strike by Intel.
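A toy calculation makes the point; the power and runtime figures below are invented for illustration only.

# Energy per job is power x time, so half the power at more than twice the
# runtime costs MORE energy. All numbers here are invented for illustration.

def energy_wh(power_watts: float, runtime_hours: float) -> float:
    """Watt-hours consumed to complete one job."""
    return power_watts * runtime_hours

big_core = energy_wh(95.0, 1.0)   # hypothetical Xeon-class part
wimpy    = energy_wh(47.5, 2.2)   # half the power, 2.2x the runtime
print(big_core, wimpy)            # 95.0 vs 104.5 Wh: slower AND costlier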
Servers are essentially reliable data pumps, so the 'core' debate is a red herring.
Things like ECC will matter, as will low-energy memory, and there also has to be a sweet spot in servers, where you match pump throughput to the average user, not to some brag-contest peak.
That sweet spot is where you will find the lowest J/GB served, and that is what customers will pay for.
It is less about bragging rights and more about efficiency.
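As a rough sketch of that metric, with invented numbers: at steady state, joules per gigabyte served is just node power divided by delivered throughput.

# J/GB served = watts (J/s) divided by throughput (GB/s). The node with the
# lowest figure at the *average* load wins, whatever its peak numbers are.
# Both configurations below are invented for illustration.

def joules_per_gb(power_watts: float, throughput_gb_per_s: float) -> float:
    return power_watts / throughput_gb_per_s

xeon_node = joules_per_gb(200.0, 2.0)   # 100 J/GB, hypothetical big node
atom_node = joules_per_gb(30.0, 0.5)    #  60 J/GB, hypothetical microserver
print(xeon_node, atom_node)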