Ray Kurzweil offers his take in the Wall Street Journal today, riffing on his "Singularity" work.
Bottom line from his perch is machines won't take over; we'll harness them effectively. We shall see.
Watson's use of 2,880 processors and an immense amount of storage to mimic just one part of what the human brain does is a testament to the brain's design, because it gives us another look at the scale of complexity needed to master the Jeopardy! problem. Watson isn't designed to drive, so it emulates only part of what we do, while requiring orders of magnitude more power to do it. Next time you read about the latest "powerful" computer, you have permission to snort milk through your nose.
You missed the point of Watson altogether.
No one on the Watson team will say "we are replacing the brain"; they know this is just the first step in a (very) long journey.
Rather, it opened the door to a brand new chapter in human-machine interaction and in the computer's ability to absorb and use information.
The real point of artificial intelligence is to pursue an understanding of the ultimate "computing machine": us humans. And though we may not achieve the ultimate, we learn a lot about novel ways to use automation and radically improve our lives.
The real payoff is a better world for us humans.
Moore's Law has several decades to go before we see cheap computers that rival the human brain in processing power. I think it naive to expect human-level performance from any system still orders of magnitude away from human scale. OTOH, we are probably getting a preview of Google 2.0.
Computers are never going to mimic the human brain, any more than airplanes mimic birds. Birds and planes both fly, but they do so by different methods, and so it will be when we have computers doing human tasks.
Watson's performance was indeed impressive. However, I don't think he was that much "smarter" than Jennings or Rutter. Once the "ring in" was enabled (when Trebek spoke the last word of the clue), Watson had an insurmountable advantage: he could respond in microseconds, while the humans' responses were slowed by nervous-system and finger-muscle latency, which adds at least several milliseconds.
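The scale of that timing advantage is easy to see with a toy simulation. The latencies below are illustrative assumptions, not measured figures: the machine actuates its buzzer in tens of microseconds, while a human needs on the order of 150 ms of reaction plus motor time.

```python
import random

def machine_win_rate(machine_latency=50e-6, human_latency=0.150, trials=10_000):
    """Toy model of the buzz-in race once the 'ring in' signal is enabled.

    machine_latency: assumed machine actuation delay in seconds.
    human_latency:   assumed mean human reaction + finger-muscle delay.
    """
    wins = 0
    for _ in range(trials):
        machine = machine_latency + random.uniform(0, 10e-6)   # small jitter
        human = max(0.0, random.gauss(human_latency, 0.020))   # ~20 ms spread
        if machine < human:
            wins += 1
    return wins / trials
```

Under these assumptions the machine buzzes first in essentially every trial, which matches what viewers saw whenever Watson was confident of its answer.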
WE KNOW MUCH OF WHAT WE NEED TO KNOW TO BUILD HUMAN LEVEL AI – Part 1
The above article says, "we really don't have a clue as to how the human brain (or those of most other animals) really 'works'."
WRONG! Although we do not understand many of the details, we have good, plausible theories for how the brain accomplishes much of what it does. And there is no reason to think we have to copy very many of those details to copy its important functions.
Brain science has made increasingly accelerated advances in the last twenty years. As a result there is much more knowledge in the field than even most researchers are aware of. One example of such recent progress is the paper "Learning a Dictionary of Shape-Components in Visual Cortex:...", by Thomas Serre of Prof. Tomaso Poggio's group at MIT. It describes a system that provides human-level performance in one limited, but impressive, type of human visual perception (http://cbcl.mit.edu/publications/ps/MIT-CSAIL-TR-2006-028.pdf).

The Serre-Poggio system learns and uses patterns in a generalization and composition hierarchy. This allows efficient multiple use of representational components, and of the computations matching against them, in multiple higher-level patterns. It allows the system to learn in compositional increments. It also provides surprisingly robust invariant representation. Such invariant representation is extremely important because it allows efficient non-literal matching, pattern recognition, and context-appropriate pattern imagining and instantiation. Such non-literal match and instantiation tasks have --- until recently --- been among the major problems in trying to create human-like perception, cognition, imagination, and planning.
WE KNOW MUCH OF WHAT WE NEED TO KNOW TO BUILD HUMAN LEVEL AI – Part 2
Although it is different from the Serre-Poggio system, the system described in Geoff Hinton's Google Tech Talk at http://www.youtube.com/watch?v=AyzOUbkUf3M demonstrates a character-recognition architecture that shares many of these same beneficial characteristics --- including a hierarchical, scalable, and invariant representation/computation scheme that can be efficiently and automatically trained. The Hinton scheme is quite general, and can be applied to many types of learning, recognition, and context-sensitive imagining. The architecture described by Jeff Hawkins et al. of Numenta, Inc. in "Towards a Mathematical Theory of Cortical Micro-circuits" (http://www.ploscompbiol.org/article/info:doi/10.1371/journal.pcbi.1000532) also shares the concepts of hierarchical memory and invariance, and provides a potentially powerful and general computational model that attempts to describe the functioning of the human cortex in terms of its individual layers.
Similar amazing advances have been made in understanding other brain systems ---including those that control and coordinate the behavior of, and between, multiple areas in the brain --- and those that focus attention and decide which of competing actions to take or consciously consider.
WE KNOW MUCH OF WHAT WE NEED TO KNOW TO BUILD HUMAN LEVEL AI – Part 3
These advances, and many more, provide enough understanding that we can actually start experimenting with designs for powerful artificial minds. It's not as if we have exact blueprints. But we do have a good overview, and good ideas on how to handle every problem I have ever heard mentioned in regard to creating roughly brain-like AI. As Deb Roy, of MIT, once agreed with me after one of his lectures, there are no problems between us and roughly human-level AI that we have no idea how to solve. The major problem that remains is the engineering one of getting all the pieces to fit and work together well, automatically, and within a commercially viable computational budget. That will take experimentation.
In fact, the major remaining barrier to achieving human-level AI is not a lack of theories for how to build such an artificial brain, but rather the lack of hardware with the extreme computational and representational power of the supercomputer between the ears of every intelligent human. The human brain performs the equivalent of read-modify-writes to over 100 trillion different memory locations a second. I am not aware of any current supercomputer that can accomplish this. Most processor-to-main-memory channels can only do a read-modify-write at roughly 10 MHz, so 10 million memory channels and 500-2,000 terabytes of RAM would be required to match the human brain at this capability.
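Taking the comment's own numbers at face value (they are the commenter's estimates, not measurements), the arithmetic checks out:

```python
# Commenter's assumed figures, not measured values.
brain_rmw_per_sec = 100e12     # ~100 trillion read-modify-writes per second
channel_rmw_per_sec = 10e6     # ~10 MHz per processor-to-main-memory channel

channels_needed = brain_rmw_per_sec / channel_rmw_per_sec
print(f"{channels_needed:,.0f} memory channels needed")  # prints 10,000,000
```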
I don't think we have to match the human brain byte for byte, but I do think computers several orders of magnitude more powerful than those currently used for AI research will be required. Once we have such computers, it will not take many years of experimentation to duplicate the functions of the human brain.
For more information on these theories go to http://www.int4um.com/ , from which several of the paragraphs above have been copied.
Sorry, I have to disagree with the conclusions drawn from those papers. How does the brain store what it does? Analog format? Digital? Binary? What sort of resolution? What sampling rate? How does the brain search and retrieve its contents? How does the brain form and re-form links and pathways?
Seems to me any computer system we build will be a digital embodiment of the models we have postulated. But that still doesn't mean that we know how the brain is doing what it does.
I agree with BicycleBill. And furthermore, Watson had the advantage of data accumulated and entered by how many people (hundreds? thousands? millions?) over how many years? If I could download Wikipedia into my brain, I would have access to a heck of a lot of trivia. But the people Watson was competing with had only their individual experiences to learn from. A human being is way more fascinating than Watson.
And even if we figure out the brain, that's still just the "what", not the "who". Perception as a subjective act requires a subject perceiver.
Is this a surprise result? This reminds me of the 19th century when men on horses would lose to steam engines. Jeopardy is challenging but in the end is nothing more than fast database access. Put the computer in a monster truck it controls and challenge it to race to the grocery store against a kid on a horse and watch the multimillion dollar marvel crash and burn while the kid is enjoying a Coke at the store.
I am not at all impressed with Watson's so-called "accomplishments". Watson accomplished nothing: everything Watson knew was compiled by humans. If Watson could learn language, or develop the manual dexterity of a child, on its own, I would be much more impressed. As it is, beating humans at Jeopardy is no more impressive than a car moving faster than humans or horses. Big deal.
Watson's achievement on Jeopardy, while seemingly impressive, is little more than massive computing power applied to a context-free micro-world problem. Such micro-worlds are divorced from the everyday world in which humans operate, and despite the speed with which information was retrieved, they do not imply any understanding of the context in which that information might be useful.
Like the airplane, Watson represents a next-generation tool which can and will be of immense value to its users; however, it is still a very long way from any true intelligence.
The brain is extraordinarily good at contextual pattern recognition, especially after being exposed to a 'pattern'.
There are plenty of tests to prove this. I just took one wherein the interior letters of every word in a paragraph were scrambled, with only the first and last letters left in place. I found I could read it correctly almost as fast as I could when the words were correctly spelled, and I had not seen the paragraph before the test.
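That scrambling is easy to reproduce. Here is a minimal sketch in Python (illustrative only) that shuffles each word's interior letters while pinning the first and last in place:

```python
import random

def scramble_interior(word):
    """Shuffle a word's interior letters; first and last stay in place."""
    if len(word) <= 3:            # nothing to shuffle
        return word
    interior = list(word[1:-1])
    random.shuffle(interior)
    return word[0] + "".join(interior) + word[-1]

sentence = "reading scrambled paragraphs remains surprisingly easy"
print(" ".join(scramble_interior(w) for w in sentence.split()))
```

Run it on any paragraph and you can try the test yourself; most readers find the result legible on the first pass.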
Recognizing roads and the like among other visual clutter is similar to the above test, in that the road has boundary conditions in the pattern (which are just as important as the whole pattern itself) that allow us to recognize it.
All of us can usually pick out the sound of a voice we recognize, whether or not we actually know the person or can see them. The pattern is not just the words, their cadence, or the turns of phrase used, but also the tonal characteristics of that person's voice.
And we all know that placing a datum in context allows easier human memory storage and recall.
The day that we truly understand all the low-level and overall workings of the brain will be the day that may be the beginning of the end for us. Beings will then eventually be "created" (physically, in a computer, or whatever). The more I think about it, the more I hope mankind never truly understands how the brain works.
Almost certainly, the wrong person(s) would get their hands on this knowledge/technology to do bad things with it. How could you ever defend against a never-ending stream of beings that have been "programmed" to do bad things to certain individuals, races, or humanity itself? I don't like even thinking about it. The world would never be the same again (if it even manages to exist for long).
I am simultaneously impressed and not impressed with Watson. As a tool to search for the data asked for in questions it is impressive, even though it takes a room full of equipment to do the feat. I am not impressed by the anthropomorphizing that is happening over what is a glorified talking search engine. You hear about weak AI and strong AI and all that rot; if anything, Watson is very weak AI.

While watching the Jeopardy games I could reliably predict which clues Watson would get easily and which would cause Watson to choke. If the clue was a fill-in-the-blank (first-order), Watson won. If it was not (like the president/airport/city connection clue), Watson failed and the humans won. People assume Watson understands what it is saying, but this is simply the Chinese Room experiment: Watson is chugging through lots of algorithms, more like a brute-force expert system with statistics searching a database. AI experts and engineers need to let the public know that Watson is a great tool and an advancement in speech processing and data search, but it is not AI in the sense of understanding what it is saying or what is being said to it.
Odd that no one has mentioned Gerald Edelman's work (starting back in the '80s and continuing to this day) on cortical microcolumns, "darwinian" evolution of computation patterns, and processing reified in structure.
Edelman and team stress that true intelligence (what many commenters rightfully highlight) depends on an integrated worldview (not just fast encyclopedia access). Edelman's many generations of "Darwin" robots are specifically designed to integrate "search" with "senses" and "goals" in the physical world.
One of the Darwins, in particular, I *would* expect to see at the country store, enjoying the Coke with that kid.
"Watson"? Great for Web 2.0, and I welcome it.
I think we still need more growth/change in the computing paradigm before we can "store" memory and "algorithms" fully reified in the dynamic structure the way it's done in the brain.
But we're close. I expect to see it in my lifetime, and being old enough to have used punchcards, I don't think it's going to be that long....
Edelman's books are really great. You can start by skimming Neural Darwinism, The Remembered Present, and Wider Than the Sky, and deciding what grabs you. His work with Mountcastle, "The Mindful Brain", is seminal, as is Mountcastle's work in the ...
Half a century of work, and we're finally getting some wisdom on this topic.
Hmm! Kind of how long it takes a single human to develop wisdom, eh? :-)
Watching the Watson Jeopardy caused a lengthy discussion afterwards, in our household.
Most of us already saw the segments on "NOVA Science Now" and on regular "NOVA". Comparisons were made throughout those programs about using Watson-like computers for medical diagnosis.
After seeing Watson fall down on the first Final Jeopardy, the thing that bothered me was how badly he/it blew it. Would you trust something that doesn't know Toronto is not a U.S. city to diagnose your illness? I wouldn't!
Of course medical diagnosis has the advantage that the computer doesn't REPLACE the human; it only provides suggestions, which the living/breathing doctor can accept or reject. So maybe it makes sense there.
But in other applications, I think Watson just proves that even the best computers cannot be trusted to make actual decisions. Filtering through millions of resources, and making suggestions to humans, OK. Beyond that, NO!
The experience also pointed out the need for some rules-based programming. Apparently, the programmers didn't bother to tell Watson that the Final Jeopardy category (unlike the Regular and Double Jeopardy categories) needs to be taken literally. The Final-J category is almost always spot-on, without puns or other obfuscations. That rule could have prevented Watson from blundering so badly on that clue.
It was also interesting to see Watson's second and third choices, and the few cases where he/it was still revising his choices after the 3 seconds were up. I wish they had shown that for Final Jeopardy too.
It's also interesting that taking more time (for Final-J) doesn't seem to make Watson any "smarter". You would have thought that taking 30 seconds instead of 3 would make its response better, but it didn't in this case.
Either that, or the extra time allowed Watson to OVER-think the problem.
I always have to smile when I hear someone claim that we know a lot about how the brain works and may soon know how to duplicate it. Our level of knowledge about the brain, transposed to the field of electronics, is comparable to someone who can take thermal images of a processor while it performs different operations, so they can tell which areas activate when certain inputs are stimulated. Scientists can also inject signals at certain nodes and see how the system changes. They understand that there are connections between neurons, the way someone might understand that the metal layers on a chip connect its different circuits. But who would ever claim that a person who could do these things to analyze a chip really "understands" electronics? Such knowledge barely scratches the surface and would surely be insufficient to duplicate the system.
We have no clue how even the most basic building blocks (neurons) work. Has anybody probed a neuron and made a truth table describing its operation? Do they work like transistors? Gates? Lookup tables? Complex state machines? Their functionality isn't understood beyond the most basic level: that they have inputs and outputs and come in different shapes and sizes.
Still, amid the utter lack in understanding of the most complex system ever discovered, the ignorant feel completely justified in their claims that blind chance is all that was needed to produce it.
In any casual or final analysis, Bill's right: Watson is a passingly interesting exposition of technology supported by many groups of technicians, but nothing more.
Consider that not only have we no inkling of brain dynamics and control, i.e., function, we have no useful definition of life itself (arguably a necessary prerequisite).
Moreover, disciplines in the so-called life sciences have no common basis (language, theory, rules of thumb, hand signals, etc.) for discussing the processes of life that each addresses. The only candidate theory with (marginal) scientific credibility is the eighty-year-tired modern synthesis (survival of the fittest + Mendelian [i.e., simplistic, non-epigenetic] genetics).
When revealed, the (abstracted) mechanism of life/mind will likely be met with vertiginous inklings of obviousness cyclically truncheoned by ballistic spasms of disbelief, particularly among certain EEs, geneticists and condensed-matter physicists.
Thanks, Bill, for your clear thinking and nice read!
Nice writing. :-)
Agree w/you about the "modern synthesis". It's crap, mostly. You might be (very) interested in a recent article by Woese and Goldenfeld (leading names in bio and physics, respectively) titled "Life is Physics", with constructive suggestions to finally, please, overturn the creaking, death-rattle modern synthesis.
If you find even that too mainstream, I'd suggest Stephen Talbott's excellent technical series on all this over at netfuture.org
You AI gurus, pls excuse my simplification ...
The 2 principles of AI are Pattern Recognition and Rules. I.e., if you see SOMETHING, do THIS. If SUMMAT ELSE, do THAT.
Us humans do a lot of 'learning' like that too. For such tasks, a machine can be made to appear 'human' and do a better job. It's only a matter of time before the ability to match a zillion patterns against a few zillion rules makes the machine 'better'. Chess and Jeopardy are easy examples. Strategy has patterns and rules too.
What removes the 'A' from AI is formulating these rules in the first place. (Which suggests most humans aren't intelligent either!)
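A minimal sketch of that pattern-plus-rules scheme, with entirely made-up patterns and actions, looks like this:

```python
# Toy "if you see SOMETHING, do THIS" engine. The patterns and actions
# are invented for illustration; a real system would have zillions of both.

rules = [
    (lambda clue: "U.S. city" in clue, "name an American city"),
    (lambda clue: "____" in clue,      "fill in the blank"),
    (lambda clue: True,                "fall back to a database search"),
]

def act(clue):
    """Return the action of the first rule whose pattern matches the clue."""
    for pattern, action in rules:
        if pattern(clue):
            return action

print(act("U.S. city whose largest airport is named for a WWII hero"))
# prints "name an American city"
```

Writing those three rules took a human; no part of the program formulated them itself, which is the point above.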
- Put the computer in a monster truck it controls and challenge it to race to the grocery store against a kid on a horse and watch the multimillion dollar marvel crash and burn while the kid is enjoying a Coke at the store.
This is a much better test. Intelligent beings enjoy challenges but need rewards too; something better than a Coke. They also have vanity, and a monster truck is hardly an appropriate vehicle for a super brain. Perhaps ...
Interesting that no one in this forum has mentioned the notion of consciousness. That's the big enigma: consciousness. And the ultimate question is: could a machine ever be conscious? Don't try to answer this question hastily; please first read about consciousness. So many scientists are involved in researching this topic, and it's not clear if we'll ever be able to figure out how it emerges (from the brain?). For me, understanding and seeing clearly in my mind "what is" consciousness is like trying to see what is at the center of a black hole, or inside the atom-sized big bang in the first nanoseconds after the bang... Bravo, bravissimo to the IBM team for their Watson. These guys are real talents. Still, the chasm between Watson and human consciousness remains as profound and dark as it has been since time immemorial.
Not long ago I saw a film about 5 Russian soldiers pushing a 50-ton tank uphill (during WWII) because they had to; it was a matter of life or death. Nowadays more than 50 people tried to push the same weight at the same site, and they failed. Coming to my point: emotions, feelings, what are they made of? Hormones flowing in the blood, brain waves... or...?? This is the second huge enigma: the interaction between the emotional and the logical in Homo sapiens. Happy Watson, feeling nothing. Still, I don't envy you...
When I was six years old, my mother could hand me some money and tell me to get a quart of milk. Just think of how much human effort would be required to program a robot for this simple task. I learned by following my mother around. I knew that milk was food, that it would be in a refrigerator at a grocery store, where a grocery store was, how to walk there and back, how to purchase it, to put the milk into the refrigerator at home and to hand my mother the change. I could even bring back the right type of milk because I knew what type was usually in the refrigerator at home.
The best thing about human brains is that millions of them are reproduced every year by the simple human reproductive chain. For a Watson to be able to reproduce another Watson may still take centuries. Another unique feature of the human brain is that it evolves by itself and can master any art, science, culture, or technology if it is brought up in that surrounding. The programming and learning process is automatic. The Watson machine is just no match for that.