@KB3001: Indeed, Intel's multi-hundred-dollar Xeon processors may someday be a thing of the past (at least in volume servers) with the advent of multiple players selling integrated chips for the low tens of dollars.
There is definitely a market for non-Intel servers, as customers have long tired of the Intel monopoly. AMD's x86 has fallen far behind, remarkably leaving ARM's licensees as the most viable contenders. But there is no need for a 32-bit server in China or anywhere else. It was the only ARM ISA available when Calxeda got started, but it could never live as a product, so it was at best a proof of concept, a stepping stone to 64-bit. We will start to see ARM 64-bit servers emerge slowly in 2014, but too slowly for a company like Calxeda, which had to survive solely on that ramp. The needs of the server market are changing too: Intel's historical strength of single-thread performance is not a must-have for many of the new web server workloads. So I see ARM eventually making a strong business in servers, but, yes, it is all about software, and that will take time.
You are challenging a perceived assertion with an assertion :) Logic tells us that for LAMP servers, at least, custom SoC solutions based on ARM processors should beat Intel's on PPW. But even if we do not accept that, what about performance per dollar? For the good of the end customer we need multiple providers. Intel's profit margins in the server market are ungodly, and only competition can address that.
In response, some assessment independent of the Intel network.
Tarra Tarra, on fabrics: they're ahead of data center and enterprise adoption of something new by two to three years. Two to three years is the lithography advantage of exponential switch ports in silicon. Moore's axiom at work.
Intel has it in the works, but only fabricators can afford switch-port integration ahead of silicon economics.
Consider cost: the price of 10GbE switch ports within an SoC's total die area versus a dedicated top-of-rack switch controller. Then please justify the price of integrating them into a server processor SoC.
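That cost comparison can be sketched as simple per-port arithmetic. All figures below are illustrative assumptions, not vendor pricing:

```python
# Back-of-the-envelope cost-per-port comparison: dedicated
# top-of-rack switch vs switch ports integrated on a server SoC.
# Every dollar figure here is a made-up placeholder.

def cost_per_port(total_cost: float, ports: int) -> float:
    """Cost attributable to each 10GbE port."""
    return total_cost / ports

# Hypothetical dedicated top-of-rack switch: $5,000 for 48 ports.
tor = cost_per_port(5000.0, 48)

# Hypothetical integrated option: suppose switch logic adds $40 of
# die area and packaging cost per server SoC, one port per node.
integrated = cost_per_port(40.0, 1)

print(f"top-of-rack: ${tor:.2f}/port, integrated: ${integrated:.2f}/port")
```

Under these assumed numbers integration looks cheaper per port, but the real question the commenter raises is whether the SoC die area spent on switching earns its keep versus dedicated switch silicon.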
Fabric integration at 32-bit ARM code density, for the LAMP stack and cool storage, makes a strong ARM: in this application, a very fast network switch.
ARM 32-bit can solve large problems at good performance per watt now, and 64-bit can too, if not held back by the Intel network. The target application is after-hours batch processing, or anything involving an ARM control plane with a coprocessor application-processing plane in real time.
DMcCunney, benchmarks in a microprocessor renaissance should not measure Intel performance through a system revolution. Data centers and IT like their favorite benchmarks. ARM displacement of the Intel standard has nothing to do with the Intel sales paradigm and everything to do with the evolutionary ARM sales paradigm.
That is all about end customer requirements.
And the hype wagon, myself included, did press for ARM server before the whole product, because ARM server supports a microprocessor renaissance and the development jobs that revolve around it.
If you're placed outside the Intel development circle by Intel sales, ARM server is a good place to invest your development time and effort.
The opportunity in ARM server is software first and foremost, then system hardware integration.
Think Novell Network VAR, but now for the data center.
The ARM software systems opportunity is optimizing raw ports of Intel Linux operating systems for ARM customers' specific applications, then capturing that IP for reuse and resale. That's where the ARM channel money is.
Calxeda has invested $105 million in its development and is executing a roadmap toward 64-bit. Will ARM capital support Calxeda?
Rick, ARM is in denial, for all the Intel-network-insurgent reasons and the kiss-up-to-Intel-to-sell-something reasons.
Marvell recognized a year ago that 2014 was not the year for ARM server.
Marvell was also first to recognize the production economic necessity of a standard reference platform in 2011.
Probably the most intriguing thing about Calxeda shutting down is what happened to the fabric. To elaborate, Calxeda had two innovations: one was taking existing power-efficient 32-bit cores from the mobile space and applying them to server workloads, but the second was the ability to connect a large set of such processors using their "scale-out" fabric interconnect. I always thought that was the more important part of their story and the distinguishing feature.
I can only speculate here:
- Maybe the big OEMs did not adopt Calxeda's fabric, pretty much negating their advantage. I am not sure if HP's Moonshot used their fabric; I suspect it did not.
- Did their fabric not provide the scalability or performance needed for large clusters?
- Maybe proprietary fabrics are not welcome in the data center and existing solutions are sufficient (I don't see how, though).
Sorry to see them go and they were definitely ahead of their time with some innovative ideas. Well fought Calxeda! Well fought!
For sure, HPC benchmarks will show that even though ARM is much better at the overall PPW metric, when it comes to solving large problems / large data operations, Intel still beats ARM-based processors in PPW metric
PPW is not the be-all and end-all. What matters is lowest power at good performance. Calxeda was able to demonstrate a PPW benefit for static web-tier workloads, but what they probably found was that it was not a large enough market to justify a business case.
But that is not to say that custom 64-bit ARM cores from AppliedMicro, Broadcom, etc. will not come out performing significantly better.
I think you're correct, and the server market won't be monolithic. Whether Intel architecture or ARM architecture gets the nod will depend upon the expected role of the server. If what I want is a compute server, doing serious number crunching where I need results sooner rather than later, Intel may get the nod. Likewise, if I'm doing something like hosting a large Oracle database where I have a terabyte of tables in RAM and I want the fastest possible performance on queries and updates, Intel may get the nod.
If I'm someone like Google or Facebook, building out data centers with thousands of servers where the key is distributed processing, no one server will carry a really heavy load, and time to complete any particular function is less critical, I may opt for power savings over raw performance. As my load grows, I just add more servers, but more servers require more power, with increased costs for the power to run the servers and the power to run the cooling systems that keep the servers within an acceptable temperature range.
It will ultimately come down to money, and which approach offers the most bang for the buck, but what bang is will vary by customer and use case.
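The "bang for the buck" trade-off above can be sketched as a toy cost model over a service life: purchase price plus electricity for the whole fleet. Every number below is an assumption for illustration only, not data from any real deployment:

```python
# Toy total-cost model for scale-up vs scale-out fleets serving
# the same aggregate distributed load. All inputs are hypothetical.

def fleet_cost(servers: int, unit_price: float, watts_each: float,
               years: float = 3.0, usd_per_kwh: float = 0.10) -> float:
    """Purchase price plus electricity (cooling folded into watts)."""
    hours = years * 365 * 24
    energy_usd = servers * watts_each / 1000.0 * hours * usd_per_kwh
    return servers * unit_price + energy_usd

# Hypothetical fleets: fewer big Xeon boxes vs many low-power nodes.
scale_up = fleet_cost(servers=100, unit_price=6000.0, watts_each=500.0)
scale_out = fleet_cost(servers=800, unit_price=500.0, watts_each=30.0)
```

Which fleet wins is entirely driven by the assumed prices, wattages, and electricity rate, which is exactly the commenter's point: the answer varies by customer and use case.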
I expect a lot of amusement when 64 bit ARM CPUs start becoming available in quantity, with all manner of benchmarks purporting to show the superiority of one architecture over another. The key questions will be "What do the benchmarks measure?" and "How relevant are particular numbers to the intended application?"
DMcCunney: good analysis! I am of the opinion that the hype wagon went way before the horse in the case of low-power servers replacing Intel-based ones in the compute nodes. I remember seeing a demo of a 1RU server with 32 ARM processors last year, where I challenged them to run the Linpack BLAS C version and compare against Intel-based servers using the Green500 performance-per-watt (PPW) metric. Unfortunately I could not convince them to show the comparison! For sure, HPC benchmarks will show that even though ARM is much better at the overall PPW metric, when it comes to solving large problems / large data operations, Intel still beats ARM-based processors in PPW.
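The Green500-style comparison described above boils down to sustained Linpack throughput divided by power draw. A minimal sketch, using made-up placeholder numbers rather than results from any real ARM or Intel system:

```python
# Green500-style metric: sustained Linpack GFLOPS per watt.
# The benchmark figures below are illustrative assumptions only.

def ppw(gflops: float, watts: float) -> float:
    """Performance per watt for a measured Linpack run."""
    return gflops / watts

# Hypothetical 1RU ARM box: modest throughput at very low power.
arm = ppw(gflops=150.0, watts=300.0)

# Hypothetical Xeon server: much higher throughput at higher power.
xeon = ppw(gflops=400.0, watts=600.0)

# On this toy data the Xeon wins PPW on the large Linpack problem,
# matching the commenter's expectation for HPC workloads.
assert xeon > arm
```

The point of the metric is that a chip with lower absolute power can still lose on PPW if its throughput on large problems falls off faster than its power does.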
I imagine that "emerging market" also includes geographically emerging market such as China.
You may assume that "emerging market" is "anywhere in the world where servers will be deployed in quantity."
China is a huge market, period, so it makes perfect sense that Calxeda execs should pay a visit to Beijing. It would be a place to meet possible customers that might buy their solutions and, more important, to build essential relationships with the Chinese government to ensure they were allowed to sell their products in China.