Image recognition insights are often buried in the unreported details of detection thresholds, false positives, and false negatives. First (detection thresholds): how does the neuron's sensitivity compare with a human's? Second (false positives): does the neuron ever fire on an image in which a human viewer cannot see a cat (and no cat is believed to be present)? If so, are there any cues as to what makes that image seem catlike? What mimics a cat and confuses the neuron? Finally (false negatives): does the neuron ever miss cats that seem to be within its normal detection sensitivity? These instances may offer cues about the parameters the neuron depends upon that we don't rely on as heavily. These are the cats that are "camouflaged" from the neuron.
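To make those three questions concrete, here is a minimal sketch of how one might measure false positive and false negative rates for a "cat neuron" at a given detection threshold. All names, distributions, and numbers below are illustrative assumptions, not details of Google's actual experiment.

```python
import numpy as np

# Hypothetical setup: `activations` are the neuron's responses to 1,000 images,
# `is_cat` is human ground truth. The response distributions are made up.
rng = np.random.default_rng(0)
is_cat = rng.random(1000) < 0.3
activations = np.where(is_cat,
                       rng.normal(0.8, 0.2, 1000),   # responses to cat images
                       rng.normal(0.3, 0.2, 1000))   # responses to non-cat images

threshold = 0.55            # the detection threshold under study
fires = activations > threshold

false_positive_rate = np.mean(fires[~is_cat])  # fires, but human sees no cat
false_negative_rate = np.mean(~fires[is_cat])  # a visible cat is "camouflaged"
print(f"FP rate: {false_positive_rate:.2f}, FN rate: {false_negative_rate:.2f}")
```

Sweeping `threshold` and plotting the two rates against each other would give the usual ROC-style trade-off, which is exactly the kind of unreported detail the comment is asking about.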
I was also thinking of Skynet, but feel we don't have to worry until the Google network can *herd* cats. If that happens, it's game over for us carbon-based units (to borrow a phrase from another sci-fi dynasty). ;-)
All kidding aside, this is an interesting development in cognitive computing and I expect we'll see many more to come from the likes of Google.
My understanding is that the network does not know the word "cat."
But if you show the network any image that YOU think is of a cat, one neuron, or pattern of neurons, always fires in response. And if you show it an image of anything that YOU think is not of a cat, that neuron, or sub-network, does not respond.
Nonetheless, it is learning by itself, like a neuromorphic system. And the experiment could be extended to cross-matching the images with spoken words.
Interesting! Learning by itself, what a cat is!
It's like a baby learning and relating images with words. I suppose the neural system was able to "hear" the audio of the videos and relate "cat" to the images of cats? If no tags were used... how was the relation established?
It does seem we're in the era of cognitive computing.
I think Google makes the point that most people have built networks with tens of millions of connections, while they have created a network supporting a billion connections.
It may be that scale is important in neuromorphic systems and the good stuff only starts to happen when you get above 1 billion connections.
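Some back-of-the-envelope arithmetic shows what that jump in scale means. The layer sizes below are purely illustrative, chosen only so the product lands at the "tens of millions" figure mentioned above.

```python
# A fully connected layer of n inputs to m units has n * m connections.
# Hypothetical layer sizes for a typical pre-2012 network:
small_net = 4_000 * 2_500            # = 10,000,000 connections
print(f"small net: {small_net:,} connections")

# Google's network reportedly supported ~1 billion connections:
scale_factor = 1_000_000_000 / small_net
print(f"Google's network: ~{scale_factor:.0f}x larger")
```

So the claimed network is roughly two orders of magnitude beyond what most groups had built, which is where the "good stuff only starts to happen" speculation comes in.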
Neural networks trained on a set of data identify a pattern! There is nothing radically new here; it's just that Google could afford a large 16,000-CPU cluster to support the huge number of neural connections.
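The commenter's point, that a trained network picking out a pattern is old news, can be shown with a single artificial neuron. The sketch below trains one sigmoid unit by gradient descent to fire on one of two synthetic clusters; the data and every parameter are made up for illustration.

```python
import numpy as np

# Two well-separated 2-D clusters: a "cat-like" class and a "non-cat" class.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(1.0, 0.3, (50, 2)),
               rng.normal(-1.0, 0.3, (50, 2))])
y = np.array([1] * 50 + [0] * 50)

# One neuron: weights w, bias b, trained with plain gradient descent
# on the logistic (cross-entropy) loss.
w, b, lr = np.zeros(2), 0.0, 0.5
for _ in range(200):
    p = 1 / (1 + np.exp(-(X @ w + b)))   # sigmoid "firing rate"
    grad = p - y                          # gradient of the logistic loss
    w -= lr * (X.T @ grad) / len(y)
    b -= lr * grad.mean()

accuracy = np.mean((p > 0.5) == y)
print(f"training accuracy: {accuracy:.2f}")
```

What was novel in the Google work was not this mechanism but doing it unsupervised, at a scale of around a billion connections, on unlabeled video frames.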