Slightly off topic, but central to the whole CPU upgrade issue:
I have a Core i7 (quad-core) 3 GHz PC running WinXP and it takes about 3-4 minutes before it has fully booted. The same PC boots in around 2 minutes with Ubuntu 8.04.
I also have an Amiga 4000 with a 28 MHz 68040 (single core) that runs AmigaOS and boots in about 38 seconds. For opening word-processing documents, web browsing, and many other common tasks it is as fast as the Wintel machine. Obviously CAD is a different issue.
Both systems are running multitasking OSes, but there is a dramatic contrast in USABLE performance there.
Of course AmigaOS is much simpler, but the point I'm making is that the OS is now at a point where it is an encumbrance, requiring something like a 1000x processor improvement to deliver the same average user experience.
Now obviously I've taken a bit of artistic license here, but the general gist holds true: my Wintel box should be booting in under 10 seconds, not sometimes taking 10 seconds just to open an Explorer window.
Windows, Linux, and a lot of application developers have failed the user: to halve their development time they have quadrupled the time the user spends (or would have needed to spend), while dramatically increasing the cost, both to users and to the environment, over what a simpler machine would now cost.
My Wintel PC consumes some 90 W (in a cubic foot of space) and the Amiga around 20 W, but since the Amiga could probably be put in an ASIC consuming only 10 W (in a few cubic inches), you can see where my environmental cost lies.
Wouldn't it have been more interesting to tie this article to the one on Intel releasing multi-core parts with INTEGRATED GRAPHICS? Where does Nvidia go when the graphics are all done in the processor?
I think Jensen Huang reads his future accurately: if Intel is bringing the graphics on board, then Nvidia must bring the processor in as well to compete.
The ARM market is too fragmented: too many CPU versions, too many vendors, and too many peripheral/packaging options. I don't see Intel quaking in their boots just yet.
But I would like to see Intel and AMD dropping their prices. Prices have remained the same for over a year; it seems AMD and Intel may have reached some sort of détente.
Hope Nvidia comes out with a 16 core server processor running at 4GHz+. Then we can talk.
Unfortunately, the prevailing trends of software development make "light and efficient" code rather unlikely. As the generation weaned on heavy use of type inference and generics has matured, the focus has shifted from performance to rapid development cycles and an attitude of "fast enough"; hence the explosion of fast-and-loose scripting languages such as Python and Ruby. They're not _bad_ languages per se, but we mustn't fool ourselves that we're anywhere near bare-metal performance at that point.
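To make that concrete, here's a rough, hypothetical micro-benchmark (the loop size and function names are just illustrations, and the exact ratio varies by machine): the same summation written as an interpreted Python loop versus the C-implemented built-in sum(). The gap gives a feel for how far "fast enough" scripting code sits from the metal.

    # Hypothetical micro-benchmark: interpreted loop vs. the built-in sum(),
    # which runs its loop in C. Numbers will differ per machine.
    import timeit

    N = 1_000_000

    def python_loop():
        total = 0
        for i in range(N):      # every iteration goes through the interpreter
            total += i
        return total

    def builtin_sum():
        return sum(range(N))    # the loop happens inside the C runtime

    if __name__ == "__main__":
        loop_t = timeit.timeit(python_loop, number=10)
        sum_t = timeit.timeit(builtin_sum, number=10)
        print("interpreted loop: %.2fs, built-in sum: %.2fs" % (loop_t, sum_t))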
In parallel, the expectations of the consumer have grown. It used to be "good enough" to have email, a single-window browser with linear history, and a CD Player app. Now SD and HD video is the lion's share of network traffic, every new desktop is running composited on 3D surfaces, and people are discovering that they can _do_ more than just email and browse.
No, the revolution is now in scaling the parallelism and reprogramming decades of single-threaded habits. Next it will be in efficiency of power and materials (this parallelism has synergy with both). Then it will be durability and ubiquity. The next fifteen years are going to be awfully interesting.
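As a toy sketch of what "reprogramming single-threaded habits" looks like in practice (the worker count and chunking here are arbitrary, illustrative choices), the same summation from above can be spread across cores with the standard-library process pool:

    # Illustrative sketch: split one big sum into chunks and farm them out
    # to worker processes. Worker count and chunk size are arbitrary.
    from concurrent.futures import ProcessPoolExecutor

    N = 10_000_000
    WORKERS = 4

    def partial_sum(bounds):
        lo, hi = bounds
        return sum(range(lo, hi))

    if __name__ == "__main__":
        step = N // WORKERS
        chunks = [(i * step, (i + 1) * step) for i in range(WORKERS)]
        with ProcessPoolExecutor(max_workers=WORKERS) as pool:
            total = sum(pool.map(partial_sum, chunks))
        # Same answer as the single-threaded version, work spread over 4 cores.
        print(total == sum(range(N)))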
(Please don't misunderstand; I feel your pain. Straight to my C bones.)
I think this, all of the other high-end ARM activity and the Intel Atom speak to the question of "when is 'it' good enough?"
Cars realistically don't need to travel any faster than about 80 miles per hour (here in the US, anyway). Cars can easily be built that go much faster, but an efficient engine that will propel its load at around 80 MPH is good enough for most users. Excess horsepower and torque are just luxuries needed by few. Specialized applications still require something different, but that doesn't change the "good enough for the masses" factor.
Average CPUs are probably at or past that point and lower-end CPUs like the Atom and newer ARMs are really close. The vast majority of users need security and desktop/web productivity applications. That being the case, we're about to see a much more distinct division in the CPU market.
We'll have Atom and ARM processors for typical productivity use, processors for server use, and compute-intensive processors for gaming and analysis. On the periphery of that set, we'll have embedded processors below and specialized number-crunching processors above.
A key requirement in the "good enough" segment will be OS efficiency. I have a ten-year-old Celeron laptop that originally came with Windows 98. Obviously, it won't run any of the newer Windows versions, nor will it handle a fully loaded Linux distribution. However, it did quite well on Win 98 and is still serviceable with a stripped-down Linux.
If the OS vendors keep their code light and efficient, the mass market will quickly open up to ARM processors, and we'll have a competitive environment like we haven't seen in the CPU wars for many years.
nVidia is not the only one attacking this market. Marvell and Qualcomm, which obtained ARM architecture licenses long ago and have well-established CPU core development teams, are probably well on their way to cracking open the x86-dominated PC/laptop/server market. In the end, I think the only winner is the consumer.
I think Intel should consider licensing the ARM architecture and implementing an ARM processor in its leading-edge fab technology.