SAN JOSE, Calif. – Russian engineers are developing software to run x86 programs on ARM-based servers. If successful, the software could help lower one of the biggest barriers ARM SoC makers face in getting their chips adopted as alternatives to the Intel x86 processors that dominate today’s server market.
Elbrus Technologies has developed emulation software that currently delivers 40 percent of native ARM performance. The company believes it could reach 80 percent of native ARM performance or greater by the end of 2014. Analysts and ARM execs described the code as a significant but limited option.
A growing list of companies—including Applied Micro, Calxeda, Cavium, Marvell, Nvidia and Samsung—aim to replace Intel CPUs with ARM SoCs that pack more functions and consume less power. One of their biggest hurdles is that their chips do not support the wealth of server software that runs on the x86.
The emulation code from Elbrus Tech could help lower that barrier. The team will present a paper on its work at the ARM TechCon in Santa Clara, Calif., Oct. 30-Nov. 1.
The team’s software uses 1 Mbyte of memory. “What is more exciting is the fact that the memory footprint will have weak dependence on the number of applications that are being run in emulation mode,” Anatoly Konukhov, a member of the Elbrus Tech team, said in an e-mail exchange.
The team has developed a binary translator that acts as an emulator, and plans to create an optimization process for it.
"Currently, we are creating a binary translator which allows us to run applications," Konukhov said. "Implementation of an optimization process will start in parallel later this year--we're expecting both parts be ready in the end of 2014."
"The major concern for us is lack of software developers with binary translation expertise," he added. "This is also the reason for us to estimate project release in late 2014."
The Elbrus Tech software uses a parallel compilation process and stores translations in volatile memory to decrease overhead when starting up. The binary translator will have "several levels of optimization for 'cold' and 'hot' regions of code," said Konukhov.
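The tiered approach Konukhov describes—cheap translation for "cold" code, heavier optimization once a region proves "hot," with results kept in an in-memory cache—can be sketched as follows. This is a minimal illustration of the general technique, not Elbrus Tech's actual design; the class, method names and threshold are all invented, and real translators emit machine code rather than strings.

```python
# Hypothetical sketch of a binary-translation cache with "cold"/"hot" tiers.
# All names and the threshold are invented for illustration; Elbrus Tech's
# internal design has not been published.

HOT_THRESHOLD = 50  # executions before a block is promoted and re-optimized


class TranslationCache:
    def __init__(self):
        # guest address -> [translated code, execution count, tier]
        self.cache = {}

    def lookup(self, guest_pc):
        entry = self.cache.get(guest_pc)
        if entry is None:
            # Cold path: quick, unoptimized translation the first time
            # a block is seen, to keep startup overhead low.
            code = self.translate_cold(guest_pc)
            self.cache[guest_pc] = [code, 1, "cold"]
            return code
        code, count, tier = entry
        count += 1
        if tier == "cold" and count >= HOT_THRESHOLD:
            # Hot path: spend more translation time on code that is
            # executed frequently, where optimization pays off.
            code = self.translate_hot(guest_pc)
            tier = "hot"
        self.cache[guest_pc] = [code, count, tier]
        return code

    def translate_cold(self, guest_pc):
        # Stand-in for a fast, low-quality translation pass.
        return f"fast-translated block @{guest_pc:#x}"

    def translate_hot(self, guest_pc):
        # Stand-in for a slower, optimizing translation pass.
        return f"optimized block @{guest_pc:#x}"
```

Because the cache lives in volatile memory and is keyed by guest address, repeated runs of the same regions reuse prior translations, which is one way a translator can keep its footprint and startup cost roughly independent of how many applications are emulated.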
Work on the software started in 2010. Last summer, Elbrus Tech got $1.3 million in funding from the Russian investment fund Skolkovo and MCST, a veteran Russian processor and software developer. MCST also is providing developers for the project.
I think this is a nice project for some research work but not so much for a startup company. I wonder how they aim to monetize making the emulator run x86 apps. The internet world is becoming more and more ISA- and OS-agnostic, where the only winning combination is performance and power.
I think this is interesting but not a game changer. Emulation has been around a long time.
For instance, I still have and use a Palm OS PDA. Original Palm devices used Motorola Dragonball processors. As devices required more power, they outgrew the Dragonball, and Palm and competitors shifted to ARM devices. To avoid making the vast amount of existing Palm software incompatible, Palm implemented a hardware compatibility layer. Motorola instructions were intercepted and converted on the fly to ARM instructions. In fact, it wasn't possible to write fully native apps. You wrote and compiled to Motorola code, but you could write "ARMlets" in native code to speed up critical operations.
Granted, the ARM CPUs were a *lot* faster than the Dragonball chips they supplanted (my ARM device runs at 200 MHz; the last Dragonball device ran at 33 MHz), so the overhead of emulation wasn't a factor.
It will be here, but that may not be critical. It will depend on exactly what X86 apps will run under it, and I would expect the emulation to improve over current levels, so while it might not be as fast as native code, it might be fast *enough*.
And it would be a stopgap in any case: it's a fair assumption that if ARM gains traction in the server space, proprietary X86 apps will be rewritten/recompiled for ARM to sell into that space.
Several very good comments here. As some note, in the area of cloud platforms, the software assets are owned by the data center, so this type of emulation technology is less relevant. However, imagine an enterprise server with a mix of apps written in a high-level (and therefore relatively easy to recompile and run natively) language and an (annoying) application where only the x86 binary is available. There, this type of translation is useful. And the performance hit will lessen. Trust me.
x86 emulation has been attempted in the past, including on big-iron DEC Alpha; it always fell short on overhead. Although it can yield an interesting engineering lesson, especially the Nx486 to 586 example of reverse engineering the x86 instruction set for implementation in a RISC hard-wired decoder.
Not seeing the compelling case anywhere. Power efficiency is a fiction if it takes more total energy to do your computation.
So it emulates at 40% of ARM performance, which is, in turn, a fraction of x86 performance. By the time you get done with the calculation, you have run for so much longer that the total energy (operating-cost dollars) is greater than just running on x86.
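That break-even arithmetic can be sketched numerically. All wattage and speed figures below are invented for illustration; only the 40-percent emulation figure comes from the article.

```python
# Back-of-envelope energy comparison: energy = power * time.
# All wattages and the 50% native-speed ratio are assumed, not measured;
# the 40% emulation efficiency is the figure quoted in the article.

x86_power_w = 90.0        # assumed x86 server CPU power draw
arm_power_w = 20.0        # assumed ARM SoC power draw
native_runtime_s = 100.0  # job runtime natively on x86

# Assume the ARM chip runs native code at 50% of x86 speed, and the
# emulator currently delivers 40% of native ARM performance.
arm_native_runtime_s = native_runtime_s / 0.5   # 200 s
emulated_runtime_s = arm_native_runtime_s / 0.4  # 500 s

x86_energy_j = x86_power_w * native_runtime_s         # 90 W * 100 s = 9000 J
emulated_energy_j = arm_power_w * emulated_runtime_s  # 20 W * 500 s = 10000 J

print(f"x86 native: {x86_energy_j:.0f} J")
print(f"emulated:   {emulated_energy_j:.0f} J")
```

Under these assumed numbers the emulated job burns more total energy despite the lower power draw, which is the commenter's point; note that the same arithmetic flips in ARM's favor if emulation efficiency rises toward the 80 percent the company is targeting.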
On the other hand, at a cutting-edge supercomputer research lab in the US, we were given by a team from Moscow State University a floppy disk which you would stick into your 286 and it could derive Feynman diagrams to tree level from a Lagrangian. Yeah, early 1990s it was...
This does reduce the barrier to adoption for ARM in the server space, though. I agree that the future is less sticky as far as ISAs go, with most cloud applications running on higher-abstraction platforms that hide the underlying h/w. But it is a compelling story to be able to say to the datacenter operators that they can run their existing software (albeit at lower performance) and all the new stuff with great power efficiency.
I just don't understand how an x86 emulation on ARM can help. ARM lags in performance, but makes it up in its market segment by superior power efficiency. Emulating x86 on such slower chips is bound to be slow-squared. The only reason to do it would be legacy applications, but the whole point of the post-PC era is that it's no longer the Wintel monopoly: backwards compatibility requirement is over.
The new software is portable---either because it's Open Source (like Android) or because it is written in Java or HTML5, or is really a light client running against a cloud-based back end.
Look at Transmeta---they couldn't make the emulation win, even though they emulated on a custom-fit microarchitecture rather than a general ISA like ARM. Intel is just too good at improving their product for those that needed it.
The bottom line is, those sorry chaps who cannot move away from x86 will use Intel. Those who can port will make a decision based on cost, performance, power considerations, etc. I'll just say that I got my $30 Raspberry Pi running yesterday. It's slowish, but it runs off battery and did I mention that the entire computer costs $30?