Yes, I agree. Too much money does not bring happiness, but it does bring out the worst in people. I'm single now, but I have been married to two rich women--and their whole families were obsessed with one-upping their friends.
Avinash Jois: "what does a 0,04% improvement ...mean"
It means that the multi-billion-dollar king of search engines does not want to be one-upped by Microsoft. Also consider that the test databases were intentionally made "easy." For instance, the algorithms were not asked to distinguish a cow from a bull, much less what kind of cow.
mhrackin: Yes, Marvin Minsky--the father of AI--was there at the beginning with his "Heuristic Programming" description, along with others like Donald Hebb, who outlined the neural networks that have been modernized into the "Deep Learning" variety used today. BTW, my first programming experience was on a PDP-8, where my class project was to write binary microcode for a specified instruction set.
Way back in the 1950s, Prof. Marvin Minsky of MIT (who should be recognized as the father of AI theory) described what he called "Heuristic Programming." This was essentially what is described above: the concept is a program that "learns" by adaptively analyzing its results and then adjusting the weighting factors in its recognition algorithms. Over time, as the program "gains experience," it becomes better at its tasks--"learning" that mimics human thinking (in a very idealized way). One of my roommates was a student of Prof. Minsky in 1962-63. This was the same time my roommate had access to the PDP-1 in the basement, where we would spend MANY hours playing "Space War."
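That adaptive-weighting idea can be sketched as a toy perceptron-style learner. To be clear, this is a modern illustration of the concept, not Minsky's original formulation; the function names and the tiny dataset are purely my own invention:

```python
# Sketch of "learning by adjusting weighting factors": a linear classifier
# nudges its weights after every wrong answer, gradually "gaining experience."

def train(samples, epochs=20, lr=0.1):
    """samples: list of (features, label) pairs with label in {-1, +1}."""
    n = len(samples[0][0])
    w = [0.0] * n   # weighting factors, initially neutral
    b = 0.0         # bias term
    for _ in range(epochs):
        for x, y in samples:
            guess = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else -1
            if guess != y:  # wrong result: adapt the weights toward the answer
                w = [wi + lr * y * xi for wi, xi in zip(w, x)]
                b += lr * y
    return w, b

def predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else -1

# Tiny, linearly separable example (an AND-like task):
data = [([0, 0], -1), ([0, 1], -1), ([1, 0], -1), ([1, 1], 1)]
w, b = train(data)
print([predict(w, b, x) for x, _ in data])  # → [-1, -1, -1, 1]
```

The point is only the shape of the loop: run, compare the result to the desired one, and adjust the weighting factors--exactly the "gains experience" behavior described above, in miniature.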
Bert22306: Good points all. We might expect our robots to perform tasks deterministically--the same way every time--but for tasks like image recognition, none of these algorithms returned the same results. What's more, the database is skewed to be "easy" for the algorithms. In fact, Sun told me that all of these algorithms are good at identifying big categories, such as "farm animals," but not good at distinguishing a cow from a sheep, much less different breeds of cows or sheep. Of course, any given learning algorithm could be trained to look for almost any type of distinguishing characteristic, but we are a long way from "beating humans" at distinguishing arbitrary items.
I'm a firm believer that if humans can build an algorithm, then computers will handily beat humans at running it. This is partly why I believe self-driving cars are in our future.
Interesting comments about how we want computers to give strictly repeatable results, whereas nature did not design the human brain that way. Matter of fact, that was one of the early constraints placed on "AI." Those paying for development of AI systems expected deterministically repeatable outcomes for a given set of inputs, which meant that AI turned out to be, more or less, complicated "rule-based" programs.
I think, though, that creative thinking depends largely on non-deterministic outputs for a given set of inputs. That's how some people come up with novel approaches to solving a problem, instead of everyone just doing the same thing everyone did in the past--much like evolution depends on less-than-perfect error-correction algorithms in genetics.
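A toy way to see how non-determinism produces novel answers: a randomized hill climber can end up at different solutions on different runs, whereas a purely deterministic rule-based search repeats the same answer forever. Everything below (the function shape, the step size, the seeds) is an illustrative assumption, not anything from the article:

```python
import random

# Illustration: random "mutations" let a search land on different optima
# on different runs, rather than repeating one deterministic answer.

def fitness(x):
    # Two peaks: a local optimum near x = -3 and a better one near x = 3.
    return max(1 - (x + 3) ** 2, 2 - (x - 3) ** 2)

def stochastic_climb(steps=1000, seed=None):
    rng = random.Random(seed)
    x = rng.uniform(-10, 10)          # random starting "idea"
    for _ in range(steps):
        candidate = x + rng.uniform(-1, 1)  # imperfect copy / random tweak
        if fitness(candidate) > fitness(x):
            x = candidate             # keep the tweak only if it helps
    return x

# Different seeds (different runs) can settle on different peaks:
results = sorted({round(stochastic_climb(seed=s), 1) for s in range(5)})
print(results)  # final positions cluster near -3 and/or 3
```

The imperfect copying step is the loose analogy to genetics: most random tweaks are discarded, but the occasional useful one moves the population somewhere a deterministic procedure would never have gone.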
Colin, yes, the hybrid area is getting very interesting, isn't it? There are all sorts of interesting research threads these days looking into how we can exploit existing biological systems, from the molecular level up through sophisticated organisms.
One interesting problem is communications: folks like John Mattick in Australia, who are the deepest into the ongoing revolution in how to interpret DNA--the "it's NOT junk DNA" revolution--are aware of the implications for cognitive research, but are still fundamentally biochemists. Their native languages and those of electrical engineers and computer scientists just don't mesh well, and part of the reason is the very different model of computation used by biology.