IBM's been doing this for years: http://en.wikipedia.org/wiki/Blue_Gene
Anyone remember Tilera (http://www.tilera.com), recently purchased by EZChip? Another multi-core MIT spin-off.
HW is good, but exploiting it "correctly" requires rather smart software that doesn't exist yet. Tilera and the other multi-core folks have been playing with this for years and still aren't there... Even today, with ARM interconnects, SW can only do so much to exploit the HW. Until HW and SW get together (i.e., make a hole in the fence), we'll never fully utilize all the CPU power we have. We'll just continue to waste it with more cores running crappy software...
As the saying goes: "It's the software, stupid"...
You're right if they fail, but just like the other guys, these guys think they have the software right this time. If they do have something new and validate their software through testing, then they will release the Verilog hardware description, so at least they are doing things in the right order!
It's funny that people always whinge about how software is the problem, that SW isn't taking advantage of the HW. I'd like to propose the opposite: very little SW needs highly parallel HW. As a person who works with highly parallel systems every day, I always cringe when someone asks why their desktop system doesn't run 4x faster on a 4-core system. The answer is mainly "because you don't need it to". Or a little less sarcastically: you have tiny sections of workload that may be inherently parallel enough to take advantage of highly parallel hardware, but most of your time is spent waiting on the user, or IO, or memory.
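That "most of your time is spent waiting" point is just Amdahl's law. A minimal Python sketch (the function name and the 25% parallel fraction are illustrative, not from anyone's actual measurements):

```python
def amdahl_speedup(p, n):
    """Overall speedup when a fraction p of the work runs
    perfectly parallel on n cores and the rest stays serial."""
    return 1.0 / ((1.0 - p) + p / n)

# A desktop app where only 25% of runtime is parallelizable:
# 4 cores buy you roughly a 1.23x speedup, nowhere near 4x.
print(round(amdahl_speedup(0.25, 4), 2))
```

Even with infinitely many cores, that hypothetical app tops out at 1/(1 - 0.25) ≈ 1.33x, which is why adding cores does so little for typical interactive workloads.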
There are plenty of workloads that can effectively utilize a 36-core chip today, AND DO SO. Heck, 36 cores isn't even a big system these days - it's doable in a simple commodity dual-socket server. Not to mention Phi or GPUs.