SAN JOSE, Calif. – Will the rapidly increasing processing power enabled by many-core processors bring about machines with super-human intelligence, an event sometimes referred to as the singularity?
That was the question put to a panel of some of the best minds in multicore processor theory and design assembled on Tuesday (March 27) by analyst Jon Peddie at the Multicore DevCon, part of DESIGN West being held here this week. A small but enthusiastic audience was there to listen.
Peddie set up the debate by referencing science fiction writer Vernor Vinge, who had predicted that computing power would be equivalent to human processing power by about 2023. One particular aspect of the singularity concept is that once machines, either singly or collectively, exceed human intelligence, there may be an explosion of machine-driven advancement that humans would, by definition, be unable to fathom, making the singularity a kind of event horizon.
Another extrapolation of computing progress had 2045 as the year in which it might be possible to buy a machine with the processing power of a human brain for $2,000.
Pradeep Dubey of Intel's parallel computing labs illustrated the progress by saying that a petaflops supercomputer can already simulate a cat's brain. A human brain has 20 to 30 times more neurons, he said, with a synapse-to-neuron ratio close to 10,000 to 1, as he later qualified by email. Dubey said the complete simulation of a human brain is only five or six years away. "Exaflops could simulate a human brain," he said.
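As a back-of-envelope illustration of Dubey's scaling argument, here is a sketch using the figures quoted above; the assumption that simulation cost scales roughly linearly with neuron count is made here purely for illustration and is not something Dubey stated.

```python
# Rough check of the petaflops-to-exaflops scaling claim (illustrative).
CAT_SIM_FLOPS = 1e15      # petaflops-class machine simulates a cat brain
NEURON_SCALE = 25         # midpoint of the quoted 20-30x neuron factor

# If cost grows roughly with neuron count (synapses scaling with neurons
# at the quoted ~10,000:1 ratio), the required compute would be:
human_sim_flops = CAT_SIM_FLOPS * NEURON_SCALE
print(f"naive neuron scaling: {human_sim_flops:.1e} FLOPS")   # ~2.5e16

# An exaflops machine (1e18 FLOPS) would then have headroom of:
print(f"exaflops headroom: {1e18 / human_sim_flops:.0f}x")    # ~40x
```

On those admittedly crude assumptions, an exaflops machine clears the bar with room to spare, consistent with Dubey's remark.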
Dubey said there are currently three approaches: simulate the process with a neuron- and synapse-level model; ignore brain architecture and treat the problem as a data and statistics problem; or build hardware that mimics neurons and synapses.
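To give a flavor of the first approach, here is a minimal, textbook leaky integrate-and-fire simulation; this is toy-scale illustrative code, not anything Intel described.

```python
import numpy as np

# Toy neuron- and synapse-level model: leaky integrate-and-fire dynamics.
N, DT, TAU = 200, 1e-3, 20e-3        # neurons, 1 ms step, membrane time constant
V_THRESH, V_RESET = 1.0, 0.0

rng = np.random.default_rng(0)
weights = rng.normal(0, 0.05, (N, N))  # random synaptic weights (dense here;
                                       # real brains are vastly larger and sparser)
v = np.zeros(N)                        # membrane potentials
spikes = np.zeros(N, dtype=bool)

for step in range(500):
    external = rng.normal(0.05, 0.02, N)           # background input current
    synaptic = weights @ spikes                    # input from last step's spikes
    v += (DT / TAU) * (-v) + external + synaptic   # leak plus integration
    spikes = v >= V_THRESH                         # threshold crossing fires a spike
    v[spikes] = V_RESET                            # fired neurons reset
```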
However, simulation of the brain is not the same as thinking, nor does it generate the emotional intelligence we see in human beings, said Ian Oliver, director of Codescape development tools at processor licensor Imagination Technologies Group plc. "We probably have the wrong memory model. The human brain is non-deterministic. It operates on the edge of chaos," he said.
Oliver pointed out that the use of genetic algorithms to derive FPGA designs through evolution had produced much more brain-like architectures, but that those designs were not readily usable in the real world; as such, computer and human intelligence appeared to be distinct.
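For readers unfamiliar with the technique Oliver cited, a genetic algorithm evolves a population of candidate designs through selection, crossover and mutation. A toy sketch follows, with a bitstring standing in for a design; a real flow would score a synthesized circuit rather than count bits.

```python
import random

# Toy genetic algorithm in the spirit of the evolved-FPGA work mentioned
# above. The "design" is just a bitstring and fitness counts set bits.
GENOME_LEN, POP_SIZE, GENERATIONS = 64, 50, 100

def fitness(genome):
    return sum(genome)  # stand-in for measuring real circuit behavior

def mutate(genome, rate=0.01):
    return [b ^ (random.random() < rate) for b in genome]

def crossover(a, b):
    cut = random.randrange(1, GENOME_LEN)
    return a[:cut] + b[cut:]

population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
              for _ in range(POP_SIZE)]

for gen in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    survivors = population[:POP_SIZE // 2]            # selection
    children = [mutate(crossover(random.choice(survivors),
                                 random.choice(survivors)))
                for _ in range(POP_SIZE - len(survivors))]
    population = survivors + children

print("best fitness:", fitness(max(population, key=fitness)))
```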
Mike Rayfield, vice president of the mobile business unit at Nvidia, argued that the number of processor cores is a red herring. But Intel's Dubey countered that more cores are better, saying that massive data engines can capture correlations, if not causality. He pointed out that machines can already do some things far better than humans, which is the reason they exist. "We can build planes but we can't build birds," he said.
The big problem with this argument is that we don't know how the human brain works yet, so how on earth can we simulate it? The second problem is that even if we did know how it worked, the human brain dissipates about 25 W, roughly one millionth of what an exaflops computer would require. This limitation means that very few of these artificial "brains" will be built until our technology can rival the efficiency of the brain. Even assuming a Moore's-law pace (doubling efficiency every 18 months), closing a millionfold gap takes about 20 doublings, or roughly 30 years from the moment we have a "brain computer" to the moment it is as efficient as a human brain. Moral of the story ... don't hold your breath!
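The arithmetic behind that estimate, assuming the millionfold gap and 18-month doubling stated above:

```python
import math

# How long does a 1,000,000x power-efficiency gap take to close if
# efficiency doubles every 18 months (a Moore's-law-style assumption)?
GAP = 1e6                 # brain (~25 W) vs. exaflops machine (~25 MW)
DOUBLING_YEARS = 1.5

doublings = math.log2(GAP)            # ~19.9 doublings
years = doublings * DOUBLING_YEARS    # ~29.9 years
print(f"{doublings:.1f} doublings -> about {years:.0f} years")
```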
Step 1: You make a beefburger; it doesn't moo.
Step 2: You make a series of bigger and bigger beefburgers; they still don't moo.
Step 3: You put multiple beefburgers in the same bun; it still won't moo.
Conclusion: It's not the speed, size or amount of beef cells you have; it's how they are connected together that determines whether it will moo or not.
In other words, true AI won't be achieved by greater processing power alone. We need to solve the hard problem of working out the right architecture first (the sketch below makes the point concrete).
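A classic textbook example, not from the panel, illustrates the wiring point: identical threshold units can or cannot compute XOR depending purely on how they are connected, not on how many or how fast they are.

```python
# Identical threshold neurons; only the wiring differs.
def unit(inputs, weights, bias):
    """A simple threshold neuron: fires if the weighted sum exceeds 0."""
    return int(sum(i * w for i, w in zip(inputs, weights)) + bias > 0)

def xor_two_layer(x1, x2):
    # Wiring that works: two hidden units (OR and NAND) feeding an AND.
    h1 = unit([x1, x2], [1, 1], -0.5)     # OR
    h2 = unit([x1, x2], [-1, -1], 1.5)    # NAND
    return unit([h1, h2], [1, 1], -1.5)   # AND

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", xor_two_layer(a, b))

# No single threshold unit can compute XOR, whatever its weights: the
# function is not linearly separable (Minsky & Papert, 1969). Same
# "beef", different connections, different behavior.
```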
I see a key distinction between "expected / known" and "unexpected / unknown" problem solving. Computers are undisputed champions at arithmetic: most of us happily hand over such tasks because the computer doesn't get tired, distracted, or careless. On the other hand, open-ended inference problems are much more efficiently solved by humans, perhaps using the computer as an information-retrieval engine to gather appropriate data. As computers "learn new tricks," that boundary will continue to shift.
Do you have something else in mind?
The amount of parallelism within a single thread is often fairly limited, and modern processors with multiple issue and speculative execution are bumping up against diminishing returns. If you go to multiple threads, it simply becomes a tradeoff of whether it is more efficient to make a single "core" execute more and more threads or to simply replicate the core.
More transistors means more parallelism. If the basic model of computing as sequences of logical and arithmetic functions is retained, one can carve up that parallelism at different levels, but the results remain largely an issue of optimization rather than some radically better vision.
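The diminishing returns described above are usually quantified with Amdahl's law: with a serial fraction s, speedup on n cores is 1 / (s + (1 - s)/n), capped at 1/s however many cores you add. A quick illustration:

```python
# Amdahl's law: the ceiling on parallel speedup imposed by serial work.
def amdahl_speedup(serial_fraction, n_cores):
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_cores)

for n in (2, 4, 16, 64, 1024):
    print(f"{n:5d} cores: {amdahl_speedup(0.05, n):5.1f}x speedup")

# Even with only 5% serial work, 1024 cores deliver under 20x, not 1024x.
```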
I've always argued along the lines of "We can build planes but we can't build birds." We achieve the same function, but through different means. Machine intelligence will be different from human intelligence. Planes don't fly with the grace of birds. That gray squishy thing in our skulls is not just driven by its interconnections; it is also influenced by the chemicals flowing through it. We forget things; computers wouldn't. But is that forgetfulness part of how we function? I look forward to true AI, but I don't expect it to be something that can really pass a Turing test, any more than I expect a plane to perch in a tree.
Wondering how they haven't already solved the problem, with all these great geniuses thinking about it.
Title should read: "Self-proclaimed multicore experts panel doesn't know jack about artificial intelligence."
Switching to multicore design is really an admission of intellectual bankruptcy... well... we can't really figure out how to make a better microprocessor, it's too academically difficult to think about with the types of people we hire these days, so let's just do the obvious thing and plunk down as many of them on a chip as we possibly can fit.