When I visited the Elbrus Laboratories in the early 1990s I was shown big-iron Russian supercomputers built from 1970s- and 80s-vintage technologies. I was told, "we built these out of necessity from vacuum tubes, and they work." Russian/Soviet science and technology has come a long way, and still has a ways to go. It's gratifying that Elbrus is tackling ARM software and is presenting at the SV venue.
On the other hand, at a cutting-edge supercomputer research lab in the US, a team from Moscow State University gave us a floppy disk which you could stick into your 286, and it would derive Feynman diagrams to tree level from a Lagrangian. Yeah, early 1990s it was...
We too saw some very innovative software from Russian coders in the 1990's and even in the 80's on the BBC Micro.
The Acorn Risc Machine (ARM) computers of the mid-80s onwards had many emulators, including x86/Microsoft emulations, and they worked well enough for many companies to run legacy software on. You can still pick these up on eBay for very little money (go for the RiscPC with the latest version of RISC OS). I am sure they would surprise a few people; no, they would surprise a lot of people. Especially given that everything was developed on a shoestring. Spend a few hundred pounds on a trip back in time and think where things could be now if Acorn had got the right investment!
I think the need for x86 compatibility is overblown and rapidly diminishing. Only legacy closed source applications have a need for such binary compatibility. Cloud infrastructure runs largely on open source technologies such as Linux and couldn't care less whether the instruction set is x86 or ARM. Moving an application from x86 to ARM is usually only a "make" away.
I just don't understand how x86 emulation on ARM can help. ARM lags in performance but makes up for it in its market segment with superior power efficiency. Emulating x86 on such slower chips is bound to be slow-squared. The only reason to do it would be legacy applications, but the whole point of the post-PC era is that it's no longer the Wintel monopoly: the backwards-compatibility requirement is over.
The new software is portable---either because it's Open Source (like Android) or because it is written in Java or HTML5, or is really a light client running against a cloud-based back end.
Look at Transmeta---they couldn't make the emulation win, even though they emulated on a custom-fit microarchitecture rather than a general ISA like ARM. Intel is just too good at improving their product for those who needed it.
The bottom line is, those sorry chaps who cannot move away from x86 will use Intel. Those who can port will make a decision based on cost, performance, power considerations, etc. I'll just say that I got my $30 Raspberry Pi running yesterday. It's slowish, but it runs off battery, and did I mention that the entire computer costs $30?
Yeah, and this is exactly the reason why THIS happens: when I run FLASH (= Adobe misery) my PC almost catches fire, all due to the damned protocol and layer stackers out there. Why don't you software programmers do a proper job?
Whoa, what's bitten you? Flash running your PC hot has exactly NOTHING to do with this. Flash is just a piece of junk that needs to die. Even Adobe realizes this--now if only web developers did too...
Recompiling source to native code is the most efficient thing to do, and that's what I was talking about. Emulating a different instruction set in real time is not efficient.
This does reduce the barrier to adoption for ARM in the server space, though. I agree that the future is less sticky as far as ISAs go, with most of the cloud applications running on more abstracted platforms that hide the underlying h/w. But it is a compelling story to be able to tell datacenter operators that they can run their existing software (albeit at lower performance) and all the new stuff with great power efficiency.
Not seeing the compelling story anywhere. Power efficiency is a fiction if it takes more total energy to do your computation.
So it emulates at 40% of ARM speed, which is, in turn, a fraction of x86 speed. By the time you are done with the calculation, you have run for so much longer that the total energy (operating-cost dollars) is greater than just running on x86.
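The back-of-the-envelope math behind this claim is easy to make concrete. Every number below is an illustrative assumption (the power draws and the ARM-vs-x86 native speed ratio are made up; only the 40% emulation figure comes from the comment). The point is just that energy is power times runtime, so a low-wattage chip can still lose once emulation stretches the runtime far enough.

```python
# Back-of-the-envelope energy comparison: native x86 vs. x86-on-ARM emulation.
# All figures are illustrative assumptions, not measured values.

X86_POWER_W = 60.0        # assumed x86 server chip power draw
ARM_POWER_W = 20.0        # assumed ARM server chip power draw
ARM_NATIVE_SPEED = 0.5    # assumed: native ARM code runs at 50% of x86 speed
EMULATION_SPEED = 0.4     # from the comment: emulation runs at 40% of ARM native

def energy_joules(power_w: float, runtime_s: float) -> float:
    """Total energy is power multiplied by time."""
    return power_w * runtime_s

# Normalize so the task takes 1 second natively on x86.
x86_runtime = 1.0
arm_emulated_runtime = x86_runtime / (ARM_NATIVE_SPEED * EMULATION_SPEED)  # 5x longer

x86_energy = energy_joules(X86_POWER_W, x86_runtime)           # 60 J
arm_energy = energy_joules(ARM_POWER_W, arm_emulated_runtime)  # 100 J

print(f"x86 native:   {x86_energy:.0f} J")
print(f"ARM emulated: {arm_energy:.0f} J")
# With these assumed numbers the emulated run burns more total energy,
# which is the commenter's point: watts alone don't decide it, watt-seconds do.
```

Change the assumed power draws and the conclusion can flip; the framework, not the specific outcome, is what the comment is arguing from.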
x86 emulation has been attempted in the past, including on big-iron DEC Alpha; it always fell short on overhead. Although it can yield an interesting engineering lesson, especially the Nx486-to-586 example of reverse-engineering the x86 instruction set for implementation in a hard-wired RISC decoder.
Several very good comments here. As some note, in the area of cloud platforms the software assets are owned by the data center, so this type of emulation technology is less relevant. However, imagine an Enterprise server where there is a mix of apps written in high-level (and therefore relatively easy to recompile and run natively) languages and an (annoying) application where only the x86 binary is available. There, this type of translation is useful. And the performance hit will lessen. Trust me.
I think this is interesting but not a game changer. Emulation has been around a long time.
For instance, I still have and use a Palm OS PDA. Original Palm devices used Motorola Dragonball processors. As devices required more power, they outgrew the Dragonball, and Palm and competitors shifted to ARM devices. To avoid making the vast amount of existing Palm software incompatible, Palm implemented a compatibility layer: Motorola instructions were intercepted and converted on the fly to ARM instructions. In fact, it wasn't possible to write fully native apps. You wrote and compiled to Motorola code, but you could write "ARMlets" in native code to speed up critical operations.
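The intercept-and-translate scheme described above can be sketched in miniature. The opcodes and mapping table below are invented purely for illustration (they are not real 68k or ARM instructions), and real dynamic binary translators do far more work (register mapping, condition codes, self-modifying code). But the translate-once-and-cache shape is the same, and it is why hot code pays the translation cost only on first execution.

```python
# Toy sketch of on-the-fly instruction translation with a cache.
# Opcode names are invented for illustration; not real 68k/ARM encodings.

TRANSLATION_TABLE = {
    "M68K_MOVE": "ARM_MOV",
    "M68K_ADD": "ARM_ADD",
    "M68K_CMP": "ARM_CMP",
}

# Cache keyed by the straight-line block of source opcodes, so a hot loop
# is translated once and replayed from the cache thereafter.
translation_cache = {}

def translate_block(block):
    """Translate a tuple of source opcodes to target opcodes, caching the result."""
    if block not in translation_cache:
        translation_cache[block] = [TRANSLATION_TABLE[op] for op in block]
    return translation_cache[block]

hot_loop = ("M68K_MOVE", "M68K_ADD", "M68K_CMP")
first = translate_block(hot_loop)   # translated now
second = translate_block(hot_loop)  # served from the cache
print(first)  # ['ARM_MOV', 'ARM_ADD', 'ARM_CMP']
```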
Granted, the ARM CPUs were a *lot* faster than the Dragonball chips they supplanted (my ARM device runs at 200 MHz; the last Dragonball device ran at 33 MHz), so the overhead of emulation wasn't a factor.
It will be here, but that may not be critical. It will depend on exactly what X86 apps will run under it, and I would expect the emulation to improve over current levels, so while it might not be as fast as native code, it might be fast *enough*.
And it would be a stopgap in any case: it's a fair assumption that if ARM gains traction in the server space, proprietary X86 apps will be rewritten/recompiled for ARM to sell into that space.
I think this is a nice project for some research work but not so much for a startup company. I wonder how they aim to monetize making the emulator run x86 apps. The world of the internet is becoming more and more ISA- and OS-agnostic, where the only winning combination is performance and power.
Fergie 65: on the performance hit lessening under emulation, exactly. If you mean that the performance hit will lessen for applications recompiled for specific x86 instruction calls, exactly. Hardware substitution learning always starts by addressing the core instruction translation required for the target application.
FYI, Here's some analysis that was apparently cut from the current version of the story:
“It’s very possible this could be useful technology in a 2014 time frame,” said Kevin Krewell, senior analyst with market watcher Linley Group (Mountain View, Calif.).
Startup Transitive Technology helped Apple and SGI transition to the x86 from PowerPC and MIPS respectively before Transitive was acquired by IBM. Emulation “could be used for migration, but not as a long term strategy,” Krewell said.
Emulation is typically used when the new architecture has higher performance than the old one, which is not the case--at least today--moving from the x86 to ARM. “By the time this software is out in 2014 you could see chips using ARM’s V8, 64-bit architecture,” Krewell noted.
“That said, you will lose some of the power efficiency of ARM when doing emulation,” Krewell said. “Once you lose 20 or more percent of efficiency, you put ARM on par with an x86,” he added.
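Krewell's parity arithmetic is simple to check. The 25% perf-per-watt advantage assumed for native ARM code below is an illustrative figure, not something stated in the article; the only number taken from the quote is the 20% efficiency loss.

```python
# Sketch of Krewell's parity argument: if native ARM code holds roughly a
# 25% perf-per-watt advantage over x86 (an assumed, illustrative figure),
# then an emulation overhead that costs 20% of that efficiency erases it.

X86_PERF_PER_WATT = 1.0           # baseline
ARM_PERF_PER_WATT = 1.25          # assumed 25% advantage for native ARM code
EMULATION_EFFICIENCY_LOSS = 0.20  # "once you lose 20 or more percent..."

arm_emulated_ppw = ARM_PERF_PER_WATT * (1.0 - EMULATION_EFFICIENCY_LOSS)

print(f"ARM perf/W under emulation: {arm_emulated_ppw:.2f}")
# 1.25 * 0.8 = 1.00 -- on par with x86, as the quote says.
```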
Does anyone happen to know where I might be able to find this paper that was presented last week at ARM TechCon? I am currently taking a graduate level computer architecture course and I would love to present this article to the class. Any information anyone might be able to offer would be much appreciated.