LONDON – Up to a million ARM processor cores are to be linked together to simulate the workings of the human brain in a U.K. research project. Chips designed at Manchester University and manufactured in Taiwan form the building blocks of a massively parallel computer called SpiNNaker (Spiking Neural Network Architecture). The specialized chips, based on an older ARM instruction set architecture, were delivered to the university last month and have since passed functionality tests.
SpiNNaker is a joint project between the universities of Manchester, Southampton, Cambridge and Sheffield and has been funded with a £5 million (about $8 million) government grant. Professor Steve Furber of the University of Manchester has been studying brain function and architecture for several years, but is also well known as one of the co-designers of the Acorn RISC Machine, a microprocessor that is the forerunner of today's ARM processor cores.
"We have small simulations running now, and will be scaling up over the next 18 months," said Professor Furber.
There are about 100 billion neurons with 1,000 trillion connections in the human brain. Even a machine with one million of the specialized ARM processor cores developed at Manchester would only allow modeling of about 1 percent of the human brain, the researchers said.
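For scale, the round numbers quoted above work out as follows (a back-of-envelope sketch using only the article's figures, not measured data):

```python
# Back-of-envelope arithmetic from the article's round numbers.
neurons_in_brain = 100e9   # ~100 billion neurons
cores = 1e6                # target SpiNNaker machine size
modeled_fraction = 0.01    # ~1 percent of the brain

modeled_neurons = neurons_in_brain * modeled_fraction  # 1e9 neurons
neurons_per_core = modeled_neurons / cores

print(f"Neurons modeled: {modeled_neurons:.0e}")
print(f"Neurons per ARM core: {neurons_per_core:.0f}")  # prints 1000
```

So even a full-size machine would carry on the order of a thousand virtual neurons per core.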
Neurons in the brain transmit information as analog electrical spikes. In the SpiNNaker machine these will be modeled as packets of descriptive data, and the neuronal processing of the spikes is performed by models of virtual neurons running on the ARM processors. The architecture and the use of packetized digital data mean that SpiNNaker can transmit spikes as quickly as the brain does, but with far fewer physical connections.
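A minimal sketch of the packet idea, with hypothetical class names (SpiNNaker's actual packet format and neuron models are not described in the article): each spike becomes a small packet naming the neuron that fired, and a virtual neuron integrates incoming packets until it crosses a threshold and emits a packet of its own.

```python
from dataclasses import dataclass

# Hypothetical illustration: an analog spike reduced to a small
# descriptive packet identifying the source neuron.
@dataclass(frozen=True)
class SpikePacket:
    source_neuron: int

class VirtualNeuron:
    """A toy leaky integrate-and-fire neuron, standing in for one of
    the neuron models a core would simulate."""
    def __init__(self, threshold=1.0, leak=0.9, weight=0.3):
        self.potential = 0.0
        self.threshold = threshold
        self.leak = leak
        self.weight = weight

    def receive(self, packet: SpikePacket):
        # Each incoming packet decays the old potential and adds a nudge.
        self.potential = self.potential * self.leak + self.weight

    def step(self):
        # Fire (emit a new packet) once the threshold is crossed.
        if self.potential >= self.threshold:
            self.potential = 0.0
            return SpikePacket(source_neuron=42)
        return None

neuron = VirtualNeuron()
spikes = 0
for _ in range(10):
    neuron.receive(SpikePacket(source_neuron=0))
    if neuron.step() is not None:
        spikes += 1
print(spikes)  # prints 2: the neuron fires every fourth input spike
```

The point of the sketch is that only small discrete packets ever travel between cores; the analog dynamics live entirely inside the per-neuron model.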
An initial test chip was designed by Professor Furber's team in 2009, but the latest implementation includes 18 ARM processors per silicon die, which comes packaged with a memory die and has a power budget of about one watt. The chip has been manufactured by UMC (Hsinchu, Taiwan) in 130-nm CMOS. It has a complexity of about 100 million transistors, although most of these are in 55 blocks of 32-Kbyte SRAM distributed across the die, Professor Furber said.
The accompanying memory die is a 1-Gbit DDR SDRAM from Micron Technology Inc. (Boise, Idaho) that operates at up to 166 MHz. These were sourced as known-good die and then packaged with the SpiNNaker ARM die in a 300-ball BGA package, Professor Furber said.
"We don't know how the brain works as an information-processing system, and we do need to find out. We hope that our machine will enable significant progress towards achieving this understanding," said Professor Furber, in a statement.
ARM has supported the SpiNNaker project since being approached in 2005, providing its processor and physical IP to the team.
From what I understand, this is more about learning how to build a 1,000-trillion-link neuron communication system, and less about understanding how the brain as a whole works.
But we are improving in our understanding of how the brain works. One example of this improvement is the technique of "deep learning," which is inspired by neuroscience. It uses a single algorithm that can learn diverse tasks (mainly in perception) just by being fed a lot of data. The same basic algorithm can learn speech recognition, image recognition, text understanding and other tasks with very good accuracy, competing with the best algorithms designed by experts.
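To make the "one algorithm, many tasks" point concrete, here's a toy sketch. It uses a plain perceptron rather than a deep network, so it only illustrates the reuse idea, not deep learning itself: the identical learning rule, with no task-specific code, learns two different logic functions just from their examples.

```python
# One learning rule, two tasks: a perceptron trained only from data.
def train(samples, epochs=20, lr=0.1):
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, target in samples:
            pred = 1 if w[0]*x[0] + w[1]*x[1] + b > 0 else 0
            err = target - pred
            # Classic perceptron update: nothing here mentions AND or OR.
            w[0] += lr * err * x[0]
            w[1] += lr * err * x[1]
            b += lr * err
    return w, b

def predict(w, b, x):
    return 1 if w[0]*x[0] + w[1]*x[1] + b > 0 else 0

AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
OR  = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]

for name, task in [("AND", AND), ("OR", OR)]:
    w, b = train(task)
    ok = all(predict(w, b, x) == t for x, t in task)
    print(name, "learned:", ok)  # prints True for both tasks
```

Deep networks apply the same principle at scale: the architecture and training rule stay fixed while only the data changes from task to task.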
I "recall" reading some medical literature that sleep provides the normal defragmenting and refresh process by which the brain reemphasizes desirable memories.
Such theoretical assessments align with technical processes familiar from computer programming (memory leaks, stack pointer overruns, etc.). Even if the human brain has a large memory capacity, what matters for a successful life is the ability to recollect at will, rather than being overwhelmed by a flow of regretful and sad memories.
Your suggestion does circumvent the unsupported notion that the brain processes information. However, your proposal that the networks be taught is very un-brain-like. Brains are not taught; rather, they teach themselves.
Yes, you're totally correct that we need hard data. So where is the hard data for the assertion that the brain is an information-processing system? I'm aware of none, but if you know of some, please post a link.
I'm sure they will. In the early '80s, a new technology called LCD screens came out. They were very low resolution and only grayscale (or amber-scale).
I suggested that progress would improve them to color and higher resolutions. The response at the time was "IN YOUR DREAMS".
Well, look at it now!
Thanks for your response.
This is a well-known idea @wirenom... people have tried to use programmable hardware that is trained or evolves on its own by learning... it hasn't produced anything useful yet. Most research papers actually do this in software, which misses the point (as software is ultimately executed on a digital computer)... My bet is that in 10 years you will see something interesting in this field... Kris
P.S. All of the above development is based on CMOS analog circuits; nanotechnology is not yet useful for that functionality, as it is still trying to deliver a single transistor, but it will get there eventually.
Although I am impressed, and overwhelmed, by the sheer size of this marvel, I can't help thinking of another approach.
Why expend such massive amounts of digital silicon to duplicate an analog function?
I read all the time of nanotechnology and how small op amps and digital gates are being made.
Why not manufacture op-amp-based neurons with about 100 conductive synapses each? Each could be programmed to perform in predefined ways.
Then put them together by the thousands, or millions, and start teaching them?
Wouldn't the end product (knowledge gained) be far more valuable than knowing how a million CPUs did something?
I don't see much material in the news or blogs concerning neural chips or research, so I just thought I'd ask experts like you.