The big problem with this argument is that we don't know how the human brain works yet, so how on earth can we simulate it? The second problem is that even if we did know how it worked, the human brain dissipates about 25 W, roughly one millionth of what an exaFLOP computer would require. This limitation means that very few of these artificial "brains" will be built until our technology can rival the efficiency of the brain. Even assuming Moore's law applied (doubling efficiency every 18 months), closing a millionfold gap takes about 20 doublings, so it would be roughly 30 years from the moment we have a "brain computer" to the moment it was as efficient as a human brain. Moral of the story ... don't hold your breath!
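The back-of-envelope math above can be checked in a few lines. This is only a sketch under the comment's own assumptions (a millionfold efficiency gap and an 18-month doubling period); the wattage figures are illustrative, not measured values.

```python
import math

# Assumption from the comment: the brain runs at ~25 W, while an exaFLOP
# machine needs roughly a million times that power for similar "work".
brain_watts = 25.0
machine_watts = 25.0e6          # ~10^6 times the brain's budget (assumed)
ratio = machine_watts / brain_watts

# Moore's-law-style assumption: efficiency doubles every 18 months.
doublings = math.log2(ratio)    # doublings needed to close the gap
years = doublings * 1.5         # 18 months = 1.5 years per doubling

print(f"{doublings:.1f} doublings -> ~{years:.0f} years")  # ~30 years
```

Note that 2^20 is already over a million, which is why ~20 doublings (about 30 years) suffice for a 10^6 gap.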
Step 1: You make a beefburger, it doesn't moo.
Step 2: You make a series of bigger and bigger beefburgers; they still don't moo.
Step 3: You put multiple beefburgers in the same bun; it still won't moo.
Conclusion: It's not the speed, size, or number of beef cells you have; it's how they are connected together that determines whether it will moo or not.
In other words, true AI won't be achieved by greater processing power alone. We need to solve the hard problem of working out the right architecture first.
I see a key distinction between "expected / known" and "unexpected / unknown" problem solving. Computers are complete champions at arithmetic - most of us happily hand over such tasks because the computer doesn't get tired, distracted, or careless. On the other hand, open-ended inference problems are much more efficiently solved by humans - perhaps using the computer as an information-retrieval engine to gather appropriate data. As computers "learn new tricks", that boundary will continue to shift.
Do you have something else in mind?
The amount of parallelism within a single thread is often fairly limited and modern processors with multiple issue and speculative execution are bumping up against diminishing returns. If you go to multiple threads, it simply becomes a tradeoff of whether it is more efficient to make a single "core" execute more and more threads or simply replicate the core.
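The diminishing returns mentioned above are often summarized by Amdahl's law: if only a fraction of the work can run in parallel, speedup flattens out no matter how many cores you add. A minimal sketch, where the 90% parallel fraction is an illustrative assumption, not a measurement:

```python
# Amdahl's law: speedup from n-way parallelism when a fraction p of the
# work is parallelizable and the rest (1 - p) stays serial.
def amdahl_speedup(p: float, n: int) -> float:
    return 1.0 / ((1.0 - p) + p / n)

# Even with 90% of the work parallel, speedup saturates well below n:
for n in (2, 4, 8, 16, 64):
    print(f"{n:3d} cores -> {amdahl_speedup(0.9, n):.2f}x")
```

With p = 0.9 the speedup can never exceed 10x, however many cores (or threads per core) you throw at it, which is one way to frame the core-replication vs. multithreading tradeoff.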
More transistors mean more parallelism. If the basic model of computing as sequences of logical and arithmetic functions is retained, one can carve up that parallelism at different levels, but the results remain largely an issue of optimization rather than some radically better vision.
I've always argued along the lines of "We can build planes but we can't build birds." We achieve the same function, but through different means. Machine intelligence will be different from human intelligence. Planes don't fly with the grace of birds. That gray squishy thing in our skulls is not just driven by its interconnections; it is also influenced by the chemicals flowing through it. We forget things; computers wouldn't. But is that forgetfulness part of how we function? I look forward to true AI, but I don't expect it to be something that can really pass a Turing test, any more than I expect a plane to perch in a tree.
Wondering how the problem hasn't already been solved, with all these great geniuses thinking about it.
Title should read: "Self-proclaimed multi-core experts panel doesn't know jack about artificial intelligence".
Switching to multi-core design is really an admission of intellectual bankruptcy... well... we can't really figure out how to make a better microprocessor, it's too academically difficult to think about with the types of people we hire these days, so let's just do the obvious thing and plunk down as many of them on a chip as we can possibly fit.