@Wilco: Ah, but speed has not been on the same track as performance in PCs for years. Clock speed is staying roughly constant to keep current leakage from boiling over. One of the new performance metrics is cores per socket, where Intel is at 4-20 and Apple is at... one? Two?
Anyway, it's still juicy speculation: an A8a that runs MacBooks and an A8b that runs iPhones and iPads. Maaaaaayyyybe!
Well, do you remember the last time Intel provided a large speed boost? Nehalem to Haswell gave approximately 4.5% speedup per year over 5 years. That's peanuts compared to Apple's claimed improvement of about 75% per year over the last 6.5 years. I have a Sandy Bridge system and am not planning to upgrade for many more CPU generations; there is simply no measurable improvement to be had.
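To put those two annual rates in perspective, here is a quick back-of-the-envelope compounding calculation (the 4.5% and 75% figures are the claims from the comment above, taken at face value purely for illustration):

```python
# Compound each claimed annual speedup into a total improvement factor.
intel_total = 1.045 ** 5    # Nehalem -> Haswell: ~1.25x over 5 years
apple_total = 1.75 ** 6.5   # Apple's claimed pace: ~38x over 6.5 years

print(f"Intel total: {intel_total:.2f}x")
print(f"Apple total: {apple_total:.0f}x")
```

In other words, if both claims hold, Intel's compounded gain over that stretch is roughly 25%, while Apple's compounds to well over an order of magnitude, which is why the per-year framing matters so much.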
You're right that Apple has to add some of the missing functionality to their current SoC. However there are already ARM SoCs with SATA, PCI, GbE etc, so I don't think it is that much work, especially for a company with as much money and talent as Apple. And yes, rolling your own is actually better for the bottom line. If you wanted say 10 million chips, and they cost $300 to buy from someone else but you had the ability to make them yourself for $100 with an investment of 1 billion, what would you do? What if it were 100 million chips? Still keep writing those purchase orders?
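The buy-vs-build arithmetic in the comment above can be sketched as a simple break-even calculation (the $300 purchase price, $100 unit cost, and $1 billion investment are the hypothetical figures from the comment, not real Apple numbers):

```python
def buy_cost(n_chips, unit_price=300):
    # Total cost of purchasing finished chips from a third party.
    return n_chips * unit_price

def make_cost(n_chips, unit_cost=100, nre=1_000_000_000):
    # Total cost of rolling your own: one-time investment plus per-unit cost.
    return nre + n_chips * unit_cost

# Break-even volume: the one-time investment divided by the per-chip savings.
break_even = 1_000_000_000 // (300 - 100)  # 5,000,000 chips

# At 10 million chips: buy = $3.0B, make = $2.0B -> $1B saved.
# At 100 million chips: buy = $30B, make = $11B -> $19B saved.
```

So under these illustrative numbers the investment pays for itself at 5 million chips, well under either volume mentioned, which is the commenter's point about the bottom line.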
This is really the big question. It looks like new Apple devices, including Macs, will have 64-bit processors. This would also unify iOS and OS X. We may be in for a sudden surprise with powerful new applications from Apple.
@Wilco: I don't think it's a fair characterization to say Intel has been standing still in performance while Apple has been making 2x leaps.
Also, in addition to performance, there are a lot of compatibility issues, peripheral support, and other plumbing Apple would have to install to make its own Mac SoCs. Not that they couldn't do that someday, but it would not be trivial and may not be worth the effort when you can just keep writing purchase orders for far fewer person-hours of work.
Rick, you do realize that getting from current top-end ARM to Haswell performance is a relatively small gap? It's less than a factor of 2 in IPC and less than a factor of 2 in frequency. Basically Intel has stood almost still in the last few years while ARM CPUs have doubled performance year after year.
Apple's new 64-bit CPU may well be a 4-way OoO and very close to Haswell in terms of IPC. However it's unlikely frequency will match as their main goal is low power (hence high IPC at a low clock). We'll soon get 20nm 64-bit ARM server CPUs which are aiming for high performance, so things will get very interesting in 2014.
@jdes: ah, virtualization! Good thought. It is sweeping the server and comms world in a huge way, but so far we have only seen dabbling in mobile virtualization and what it could mean on some hefty 64-bit CPUs.
Hopefully it does not mean a phone instance my carrier charges my company for and one it charges me for!
If the 64-bit address space will allow true speech recognition, then I say it is high time that it becomes the norm.
Maybe then we can truly consign these silly little virtual keyboards to the technology graveyard, where the old slide rules and typewriters went to retire!
And maybe then programmers will have no excuse left for why we are still holding the masses back from the true and speedy communication that has been yearning to come out of the shadows for the past 3+ decades.
Drones are, in essence, flying autonomous vehicles. The pros and cons surrounding drones today may well foreshadow the debate over the development of self-driving cars. In the context of a strongly regulated aviation industry, "self-flying" drones pose a fresh challenge. How safe is it to fly drones in different environments? Should drones be required to maintain visual line of sight, as piloted airplanes are? Join EE Times' Junko Yoshida as she moderates a panel of drone experts.