A cap of 4GB RAM may be a limitation when supporting multiple VMs running sophisticated services. The power savings may come down to how many VMs can be supported on an ARM-based server vs. how many can be supported by an x86-based server.
Here are a few more publicly announced Cortex-A licensees: Broadcom, NEC Electronics, NVIDIA, STM, Toshiba, Mindspeed Technologies, Freescale, Matsushita, Samsung, PMC-Sierra, Ziilabs.
Why are there no solutions like Tilera's 36- or 64-core products? Are ARM's per-core licensing costs too high? 32-bit address limitations? There are a lot of multi-core non-ARM products out there: Cavium Networks (MIPS), Netlogic (MIPS), Freescale (PPC), Azul (Java VM), Plurality ...
2 or even 4GB of RAM is sufficient for many, many server tasks. I think a system based on that SoC is on par with the Pentium III Tualatin quad-processor systems built in 2001-2002. Those systems also had just 2 or 4GB of RAM, but nevertheless served well.
Definitely interesting, but it seems problematic that you have four cores that can address only a total of 4GB of memory, i.e. 1GB per core. That might be limiting for a lot of server apps. In practice it might be worse, with the system only able to handle 2-3GB (that's been my experience with 32-bit, non-PAE Linux).