When I visited the Elbrus Laboratories in the early 1990s, I was shown big-iron Russian supercomputers built out of 1970s- and '80s-vintage technologies. I was told they built these out of necessity, even from vacuum tubes, and "they work". Russian/Soviet science and technology has come a long way, and still has a ways to go. It's gratifying that Elbrus is tackling ARM software and is presenting at the SV venue.
I think the need for x86 compatibility is overblown and rapidly diminishing. Only legacy closed source applications have a need for such binary compatibility. Cloud infrastructure runs largely on open source technologies such as Linux and couldn't care less whether the instruction set is x86 or ARM. Moving an application from x86 to ARM is usually only a "make" away.
I just don't understand how an x86 emulation on ARM can help. ARM lags in performance but makes up for it in its market segment with superior power efficiency. Emulating x86 on such slower chips is bound to be slow-squared. The only reason to do it would be legacy applications, but the whole point of the post-PC era is that it's no longer the Wintel monopoly: the backwards-compatibility requirement is over.
The new software is portable---either because it's Open Source (like Android) or because it is written in Java or HTML5, or is really a light client running against a cloud-based back end.
Look at Transmeta---they couldn't make the emulation win, even though they emulated on a custom-fit microarchitecture rather than a general-purpose ISA like ARM. Intel is just too good at improving their product for those who need it.
The bottom line is, those sorry chaps who cannot move away from x86 will use Intel. Those who can port will make a decision based on cost, performance, power consumption, etc. I'll just say that I got my $30 Raspberry Pi running yesterday. It's slowish, but it runs off a battery, and did I mention that the entire computer costs $30?
This does reduce the barrier to adoption for ARM in the server space, though. I agree that the future is less sticky as far as ISAs go, with most cloud applications running on higher-abstraction platforms that hide the underlying hardware. But it is a compelling story to be able to tell datacenter operators that they can run their existing software (albeit at lower performance) and all the new stuff with great power efficiency.
On the other hand, at a cutting-edge supercomputer research lab in the US, we were given by a team from Moscow State University a floppy disk which you would stick into your 286, and it could derive Feynman diagrams to tree level from a Lagrangian. Yeah, early 1990s it was...
Not seeing the compelling story anywhere. Power efficiency is a fiction if it takes more total energy to do your computation.
So it emulates at 40% of native ARM speed, which is, in turn, a fraction of x86 speed. By the time you get done with the calculation, you have run for so much longer that the total energy (operating-cost dollars) is greater than just running on x86.
x86 emulation has been attempted in the past, including on big-iron DEC Alpha; it always fell short on overhead. Although it can yield an interesting engineering lesson, especially the Nx486-to-586 example of reverse engineering the x86 instruction set for implementation in a RISC hardwired decoder.