
Microsoft, Google Beat Humans at Image Recognition

Deep learning algorithms compete at ImageNet challenge
2/18/2015 08:15 AM EST
Image recognition, pattern matching, etc.
junko.yoshida   2/18/2015 9:33:12 AM
Things like image recognition and deep learning are clearly among the hottest areas many tech companies are working on.

We just posted a story about NeuroMem, a startup that is using massively parallel computing for pattern matching. (http://www.eetimes.com/document.asp?doc_id=1325690)

Has anyone at IBM, or Microsoft, talked about the limitations of sequential computing architectures when running algorithms that are so essentially parallel in nature?

Re: Image recognition, pattern matching, etc.
R_Colin_Johnson   2/18/2015 12:02:39 PM
Everybody doing neural networks with Deep Learning today, including IBM and Microsoft, is using the parallel architectures that fit the problem so well. IBM is building its own specialized parallel machines for Deep Learning with neural networks and has designed its own Corelets language to program them. IBM's architecture mimics many aspects of the way the human brain works--like the digital spiking outputs--but most of the contestants in the ImageNet Challenge, including Microsoft, are just simulating the synaptic weights between neurons with floating-point numbers on standard parallel architectures, and getting good results. (You'll never build an electronic brain that way, though.) Several others are using DARPA money to emulate the brain in great detail, and the E.U. has the most ambitious undertaking, called The Human Brain Project. What do you think is more important, short-term goals like the ImageNet Challenge, or long-term goals like electronic brains for robots?
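The "simulating synaptic weights with floating-point numbers" approach can be made concrete with a minimal sketch. All of the weights and layer sizes below are made up for illustration; this is not any contestant's actual system, just the basic idea of a neural layer as a table of floating-point weights:

```python
# Minimal sketch: a neural "layer" is a matrix of floating-point
# synaptic weights applied to an input vector, then squashed by a
# nonlinearity. Deep nets stack many such layers.
import math

def forward(weights, inputs):
    """One layer: weighted sum of inputs per neuron, sigmoid output."""
    outputs = []
    for neuron_weights in weights:  # one row of weights per neuron
        total = sum(w * x for w, x in zip(neuron_weights, inputs))
        outputs.append(1.0 / (1.0 + math.exp(-total)))  # sigmoid
    return outputs

# Two neurons, three inputs (toy numbers for illustration).
layer = [[0.5, -0.2, 0.1],
         [0.3,  0.8, -0.5]]
print(forward(layer, [1.0, 0.0, 1.0]))
```

Training a deep network then amounts to nudging those floating-point weights until the final layer's outputs match the labels, which is why the workload maps so naturally onto parallel hardware: every neuron's weighted sum can be computed independently.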

Re: Image recognition, pattern matching, etc.
Terry.Bollinger   2/18/2015 1:44:43 PM
Both long-term massive projects and short-term challenges play valuable roles in research, but for developing a deep theory of cognition with accompanying devices to implement that theory, I'd place my money on the short-term challenge approach. If those challenges are also available to individuals working within big projects or big research organizations, so much the better, since that situation allows creative people to make good use of the deep resources of such organizations.

Short-term challenges are important because the deepest insights into hard problems usually arise from individuals following their own curiosity, often with flagrant disregard for the status quo. Challenges that allow such individuals to focus on hard problems give them a chance to follow their different drummers. And sometimes it works! Every so often, they scout out some unique path no one knew existed, and so end up giving everyone access to an entirely new and unexpected world of opportunities.

John Bell with his quantum inequality and Kary Mullis with the polymerase chain reaction are both good examples of this different-drummer effect.

Why do large organizations have trouble getting similarly creative? One reason is collective bias, that is, the tendency for everyone in a large group or community to reward each other, often subtly and unintentionally, for following the same traditional drummer.

For example, as Junko noted, understanding human cognition and intelligence currently suffers from a sort of aren't-computers-great bias that unconsciously encourages most of us to default to sequential problem-solving methods rather than parallel ones.

But our biases go far deeper than that: We are also precision- and perfection-biased, expecting every computation to give an exact number and the same result every time. Alas, from DNA up through human intelligence, biological systems just don't work that way. Bacteria, for example, provably use sloppiness and diversity as a valuable component of their calculation processes, in ways that we are just barely starting to understand. This same intentional use of unexpected diversity can also be seen in many other biological processes, notably including neural voting processes.

Our overall understanding of such biological examples is expanding rapidly these days, and with that expansion I fully expect to see some of those folks who listen to different drummers come up with startling new insights into the fundamental nature of intelligence and cognition. I think the next few years will be very interesting indeed.

Re: Image recognition, pattern matching, etc.
R_Colin_Johnson   2/18/2015 2:08:21 PM
Terry.Bollinger: You make some very good points--it's almost as if computers are not a good model on which to build electronic brains. Brains are sloppy--an individual can't remember long numbers easily, for instance--and companies like MicroBial Robotics are basing their brain-like aspirations not on silicon, but on real biological materials. Maybe we should stop trying to make computers biological-like, and just exploit them for what they do best? Then let the biologists use synthetic-life principles to build the robotic brains of the future.

Re: Image recognition, pattern matching, etc.
Terry.Bollinger   2/18/2015 2:25:09 PM
Colin, yes, the hybrid area is getting very interesting, isn't it? There are all sorts of interesting research threads these days looking into how we can exploit existing biological systems, from the molecular level up through sophisticated organisms.

One interesting problem is communications: Folks like John Mattick in Australia, who are the deepest into the ongoing revolution in how to interpret DNA -- the "it's NOT junk DNA" revolution -- are aware of the implications for cognitive research, but are still fundamentally biochemists. Their native languages and those of electrical engineers and computer scientists just don't mesh well, and part of the reason is the very different model of computation used by biology.

Re: Image recognition, pattern matching, etc.
R_Colin_Johnson   2/18/2015 3:44:49 PM
Terry.Bollinger: You are so right--the language gap is huge. The E.U.'s Human Brain Project is trying to bridge the gap, as are companies like Intel, Microsoft and Autodesk, the latter of which has just introduced a tool called Project Cyborg that directly competes with Microbial Robotics' toolsets, with which it has already created ViruBots and BactoBots. Intel is just getting started with its Smart Wet Lab Assistant, and Microsoft is a little further along with its own toolset for the Genetic Engineering of Cells. So with Microbial Robotics coming from the biological side and Intel, Microsoft and Autodesk coming from the electronic side, there is bound to be a common language emerging soon.

Not really unexpected
Bert22306   2/18/2015 5:38:36 PM
I'm a firm believer that if humans can build an algorithm, then computers will handily beat humans at running it. This is partly why I believe self-driving cars are in our future.

Interesting comments about how we want computers to give strictly repeatable results, whereas nature did not design the human brain that way. Matter of fact, that was one of the early constraints placed on "AI." Those paying for development of AI systems expected deterministically repeatable outcomes for a given set of inputs, which meant that AI turned out to be, more or less, complicated "rule-based" programs.

I think though that creative thinking depends largely on non-deterministic outputs for given sets of inputs. That's how some people come up with novel approaches to solving a problem, instead of everyone just doing the same thing everyone did in the past. Much like evolution depends on less than perfect error correction algorithms in genetics.
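The evolution analogy can be sketched in a few lines of code. This toy (1+1) evolutionary search is purely illustrative -- the target bit string, mutation rate, and iteration count are all made-up values -- but it shows how imperfect copying, rather than exact error correction, is precisely what lets a population discover a solution no single deterministic step would produce:

```python
# Toy evolutionary search: "sloppy" copying with random bit-flips,
# keeping any copy that scores no worse than the current solution.
import random

random.seed(42)
TARGET = [1, 0, 1, 1, 0, 0, 1, 0]   # arbitrary goal "genome"

def fitness(genome):
    """Number of positions that match the target."""
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.1):
    """Imperfect copying: each bit flips with probability `rate`."""
    return [1 - g if random.random() < rate else g for g in genome]

genome = [0] * len(TARGET)
for _ in range(500):
    child = mutate(genome)              # sloppy copy
    if fitness(child) >= fitness(genome):
        genome = child                  # keep the copy if it's no worse

print(genome, fitness(genome))
```

With perfect copying (a mutation rate of zero), the loop never moves off the starting genome; it is exactly the imperfection that supplies the novelty, which is Bert's point about creativity depending on non-deterministic outputs.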

Re: Not really unexpected
R_Colin_Johnson   2/18/2015 5:52:45 PM
Bert22306: Good points all. We might expect our robots to perform tasks deterministically--the same way every time--but for tasks like image recognition, none of these algorithms returned the same results. What's more, the database is skewed to be "easy" for algorithms. In fact, Sun told me that all of these algorithms are good at identifying big categories, such as "farm animals," but not good at distinguishing a cow from a sheep, much less different breeds of cows or sheep. Of course, any given learning algorithm could be trained to look for almost any type of distinguishing characteristics, but we are a long way off from "beating humans" at distinguishing arbitrary items.

Heuristic Programming!
mhrackin   2/19/2015 1:23:06 PM
Way back in the 1950s, Prof. Marvin Minsky of MIT (who should be recognized as the father of AI theory) described what he called "Heuristic Programming." This was essentially what is described above: the concept is a program that "learns" by adaptively analyzing its results and then adapting weighting factors in its recognition algorithms. Over time, as the program "gains experience," it becomes better at its tasks--"learning" that mimics human thinking (in a very idealized way). One of my roommates was a student of Prof. Minsky in 1962-3. This was the same time my roommate had access to the PDP-1 in the basement, where we would spend MANY hours playing "Space War."
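The "adapts weighting factors by analyzing its results" idea survives almost unchanged in the simplest modern learning rule, the perceptron. The sketch below is illustrative only (not Minsky's actual formulation, and the learning rate and task are made up): the program nudges its weights whenever its answer is wrong, so it "gains experience" over repeated passes:

```python
# Perceptron sketch of adaptive weighting: wrong answers nudge the
# weights toward the correct output; right answers leave them alone.

def train_perceptron(samples, passes=20, lr=0.5):
    """samples: list of ((x1, x2), label) pairs with labels 0 or 1."""
    w = [0.0, 0.0]
    bias = 0.0
    for _ in range(passes):
        for (x1, x2), label in samples:
            guess = 1 if w[0] * x1 + w[1] * x2 + bias > 0 else 0
            error = label - guess        # 0 when the guess is correct
            w[0] += lr * error * x1      # adapt the weighting factors
            w[1] += lr * error * x2
            bias += lr * error
    return w, bias

# Learn logical AND, a linearly separable toy task.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)
for (x1, x2), label in data:
    assert (1 if w[0] * x1 + w[1] * x2 + b > 0 else 0) == label
```

The Deep Learning systems in the article are, at heart, many stacked layers of this kind of adaptively weighted unit, trained with a more sophisticated update rule (backpropagation) on far more data.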

Re: Heuristic Programming!
R_Colin_Johnson   2/19/2015 1:42:57 PM
mhrackin: Yes, Marvin Minsky--the father of AI--was there at the beginning with his "Heuristic Programming" description, along with others like Donald Hebb, who outlined the neural networks that have been modernized into the "Deep Learning" variety used today. BTW, my first programming experience was on a PDP-8, where my class project was to write binary microcode for a specified instruction set.
