The article is about Intel getting into the wireless device market, not wireless technology itself; Intel wants into the handhelds and tablets that have traditionally been ARM-based. I can't say what the power/performance trade-off numbers are between Atom and the new and future generations of ARM cores, but I have an Atom-based laptop that is not very impressive (as a laptop, mind you). If the OS and system design deliver speed and long battery life, then Intel will have a winner; if not, they won't achieve the success they're aiming for.
Quad-core APUs for smartphones will start shipping on 28nm in 2012, and they still can't handle full-HD 3D graphics. Smartphone applications keep pushing for better and faster chips, even at low power. Your cost analysis is only right for the foundries' node migrations; it isn't necessarily right to compare Intel's 22nm against a foundry's 32nm. Intel delayed the introduction of immersion litho by two generations (compared to the foundries) just to save cost, and it has followed Moore's law in performance/cost/schedule so far. Besides, at the foundries multi-patterning is already required in too many critical layers from 28/22nm on; you don't need to be at 14nm.....
The word "monopoly" is used twice in this article, both times aimed at Intel, while ARM's monopoly in the wireless sector is rewarded with phrases like "maintain leading position". This article is not meant to inform readers but rather to pre-empt Intel and pave the way for future lawsuits from ARM and AMD against Intel. I buy and support Intel products because of the jobs issue: the jobs created in the US by ARM and AMD are a drop in the bucket compared to the jobs Intel creates. If you want to support job growth in the US, buy tech products that contain Intel chips.
It's a lower-margin business, so it made no sense to use advanced process technology just for lower power consumption. 22nm will be at least 40% more expensive than 32nm, and 14nm will cost even more once EUV litho kicks in. Intel should run its PC/server CPUs on the new technology nodes and use older-generation technology at depreciated fabs for the lower-margin products. Why can't Intel push the design architecture to get power as low as ARM's?
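The wafer-cost numbers in the comments above can be sanity-checked with a rough back-of-envelope calculation. This sketch assumes ideal area scaling (transistor density grows with the square of the node ratio) and equal yield at both nodes, which are simplifications I'm introducing, not figures from the article; real density gains per node are usually smaller.

```python
# Back-of-envelope: if a 22nm wafer costs 40% more than a 32nm wafer
# (the figure claimed above), is cost per transistor still lower?
# Assumption (mine, not the article's): ideal area scaling and equal yield.

def cost_per_transistor_ratio(old_nm, new_nm, wafer_cost_increase):
    """Ratio of new-node to old-node cost per transistor."""
    density_gain = (old_nm / new_nm) ** 2  # transistors per unit area
    return (1 + wafer_cost_increase) / density_gain

ratio = cost_per_transistor_ratio(32, 22, 0.40)
print(f"22nm cost per transistor vs 32nm: {ratio:.2f}x")
# → 22nm cost per transistor vs 32nm: 0.66x
```

Under these idealized assumptions, even a 40% pricier wafer yields roughly a one-third reduction in cost per transistor, which is consistent with the claim that Intel has kept tracking Moore's law on performance/cost so far; the economics only break down when the density gain shrinks or multi-patterning inflates wafer cost faster than scaling recovers it.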