Not so well. TSMC is on track for FinFET, and many companies are taking ARM architecture licenses (which effectively creates multiple CPU design groups). Other than its process technology advantage, Intel has nothing. Too much process kills innovation...
What new key enablers are coming out of the INTC and MSFT R&D labs? When INTC moved development out of Santa Clara, innovation died...
Having said that, we want to make sure US companies are at the top of the list... so it is vital for Intel manufacturing to be number 1.
Good point. Even TI didn't see a business in playing third to Qualcomm and Nvidia (and Exynos, and SE, and...).
I imagine Intel must have been the first one to knock on Amazon Kindle's door when TI ended OMAP for tablets.
Intel cannot push conflicting propaganda. On one hand, according to Intel, the 22 nm rollout need not be so fast because there is not enough demand, or because too much 32 nm inventory still needs to clear out. On the other hand, if Intel wants design wins against ARM, it needs to roll out 22 nm across the board much faster, even if that means dumping the old 32 nm parts earlier than expected.
It doesn't matter what instruction set it has. What matters is whether it is built to be low power. Seeing DDR3 is a red flag: if you want to build the next-gen microserver, you need a wider, more power-efficient, short-wire interface from the CPU to the DRAM stack.
Server software is typically bottlenecked on memory. If you want more power-efficient servers, you have to lower the nanojoules per byte on those interfaces.
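To put rough numbers on that, interface power is just energy-per-bit times bit rate. The pJ/bit figures below are illustrative assumptions, not measurements:

```python
# Back-of-envelope: interface power = energy per bit * bit rate.
# Both pJ/bit figures are assumed for illustration, not measured values.
def interface_watts(pj_per_bit: float, gbytes_per_sec: float) -> float:
    bits_per_sec = gbytes_per_sec * 8e9          # GB/s -> bits/s
    return pj_per_bit * 1e-12 * bits_per_sec     # pJ/bit -> J/bit, times rate

ddr3_w    = interface_watts(60.0, 12.8)  # assume ~60 pJ/bit over a long DDR3 channel
stacked_w = interface_watts(8.0, 12.8)   # assume ~8 pJ/bit on a short, wide stacked link
print(ddr3_w, stacked_w)                 # ~6.1 W vs ~0.8 W at the same bandwidth
```

Same bandwidth, roughly an order of magnitude less interface power: that is the case for short-wire, wide interfaces in a microserver.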
Yawn. We already know Intel's 64-bit chip was delayed to 2015, a year after 64-bit ARM chips start shipping. And by the time they launch their dual-core version, there will be quad-core ARM versions. Sorry, Intel. You missed the boat. Again. Better luck in 2018.
Guess I was thinking about the "mobile" Atom, that's arriving in 2015:
But this also shows how out of touch with reality Intel is. I thought they were going to make Atom a "prime" chip for their fabs and whatnot, and give it as much priority as their Core chips, so they could compete better with ARM.
So why are they releasing the mobile chip much later than the server one? I guess the server one will be much less efficient? But then, how is it supposed to compete with ARM in servers? Yeah, I don't see this ending well for Intel in either market.
Whether 32- or 64-bit ARMs on a blade are a viable, high-margin business that can compete with Xeon in servers comes down to system management and targeted acceleration.
And it's not really an issue of a wimpy ARM but of a crippled ARM, given the need for architectural enhancements that can make a StrongARM®. An ARM architectural license is advantageous over a design license.
A cautionary note: Intel is an ARM customer. Design producers who want to compete in servers against Intel must own control of the processor architecture. On Intel's home field, counting on ARM could turn into a severe fandango. Options to hedge must be considered, knowing that payroll must be met, staff have house payments, and there are children to send to college.
Bet on your design capabilities and on the leading advantages the ARM architecture offers on the path toward network-in-processor integration.
The ARM community places scalar ARM at half the per-clock performance of Intel's dual-issue parts. Superscalar 64-bit ARM is speculated to close the processing gap in frequency versus Atom. Two 32-bit 1.1 GHz ARM quad-cores equal one 2.0 GHz hexa-core Xeon 2620 in an Intel-loaded molecular docking benchmark: http://www.lowpowerservers.com/?p=141
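A crude sanity check on that equivalence, modeling throughput as cores times clock (which ignores IPC, SIMD, and memory effects, so take it as illustration only):

```python
# Naive aggregate-throughput model: GHz·cores per setup.
arm_ghz_cores  = 2 * 4 * 1.1   # two quad-core ARMs at 1.1 GHz -> 8.8 GHz·cores
xeon_ghz_cores = 1 * 6 * 2.0   # one hexa-core Xeon 2620 at 2.0 GHz -> 12.0 GHz·cores

# If the two setups really tie on that docking benchmark, the ARM side is
# doing the same work with ~27% fewer GHz·cores than the naive model predicts.
print(arm_ghz_cores / xeon_ghz_cores)   # ~0.73
```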
For multiple ARMs on a blade, this analyst suspects performance will reach into high-end Xeon product performance and price rungs. Versus the Xeon 2620, this analyst calculates Calxeda EnergyCore value at $171 to $200 per component.
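One hypothetical way to back into figures in that range, assuming (my numbers, not the analyst's) a Xeon 2620 price near $400 and the two-quads-per-hexa equivalence above:

```python
# Hypothetical reconstruction of the value math; both inputs are assumptions.
xeon_price_usd     = 400.0   # assumed Xeon 2620 price, for illustration only
arm_parts_per_xeon = 2       # from the two-quads ~= one-hexa benchmark result

parity_value = xeon_price_usd / arm_parts_per_xeon   # $200 per ARM component
discounted   = parity_value * 0.85                   # ~15% haircut -> ~$170
print(parity_value, discounted)
```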
Subsequently, the octa-core Atom presents only an initial low-power barrier meant to protect higher-power Xeon products and price voids, certainly into E3, into E5, and even into 46xx for massively dense deployments, where NIC-in-SoC across a fabric in VM mode is aimed at resolving Xeon's power utilization issue.
ARM power and clocking islands are an advantage versus Intel architecture, and insight is mixed on how long it will take Intel to reach parity in on-chip power management.
Time will tell whether dense ARM achieves its aim of Xeon performance parity, but the concept is spreading.
ARMs on a blade is a viable, high-margin business.
Spending more money to be slower... that dog don't hunt. Servers are about latency and watt-hours, not watts. If you do a calculation on a processor that uses half the power but takes more than twice the time, you have slowed the application and paid more for the privilege. That's where microservers fall down. They'll probably find some application, just as blade servers have, but this looks like a preemptive strike by Intel.
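The arithmetic is simple (the numbers here are made up to match the "half the power, more than twice the time" case):

```python
# Watt-hours, not watts: energy = power * time.
base_power_w,  base_time_h  = 100.0, 1.0   # reference server CPU (illustrative)
micro_power_w, micro_time_h =  50.0, 2.2   # half the power, 2.2x the time

base_wh  = base_power_w  * base_time_h     # 100 Wh
micro_wh = micro_power_w * micro_time_h    # 110 Wh -> slower AND more energy
print(base_wh, micro_wh)
```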
Servers are essentially reliable data pumps, so the 'core' debate is a red herring.
Things like ECC will matter, as will low-energy memory, and there also has to be a sweet spot in servers where you match pump throughput to the average user, not to some brag-contest peak.
That sweet spot is where you will find the lowest J/GB served, and that is what customers will pay for.
It is less about bragging rights and more about efficiency.
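A quick unit check on that J/GB metric, with made-up numbers: watts divided by GB/s is joules per GB, so the sweet spot is whichever operating point minimizes that ratio.

```python
# J/GB served = average wall power (W) / sustained throughput (GB/s).
avg_power_w     = 250.0   # assumed average server power (illustrative)
throughput_gbps = 4.0     # assumed sustained data pumped, GB/s

joules_per_gb = avg_power_w / throughput_gbps   # 62.5 J/GB
print(joules_per_gb)
```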
What are the engineering and design challenges in creating successful IoT devices? These devices are usually small, resource-constrained electronics designed to sense, collect, send, and/or interpret data. Some of them need to be smart enough to act on data in real time, 24/7. Are the design challenges the same as with embedded systems, but with a little developer and IT skill added in? What do engineers need to know? Rick Merritt talks with two experts about the tools and best options for designing IoT devices in 2016. Specifically, the guests will discuss sensors, security, and lessons from IoT deployments.