You AI gurus, please excuse my simplification ...
The two principles of AI are pattern recognition and rules, i.e., if you see SOMETHING, do THIS; if SUMMAT ELSE, do THAT.
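The pattern-plus-rules idea can be caricatured in a few lines. This is a toy sketch (the patterns and actions are invented for illustration, not taken from any real system):

```python
# Toy illustration of "if you see SOMETHING, do THIS" rule matching.
# Patterns and actions are made up; real systems match far fuzzier inputs.
RULES = {
    "opponent threatens queen": "move queen to safety",
    "opponent castles": "advance kingside pawns",
}

def decide(observation):
    """Return the action for a recognized pattern, or a default."""
    return RULES.get(observation, "no rule matched: do nothing")

print(decide("opponent castles"))        # advance kingside pawns
print(decide("something unforeseen"))    # no rule matched: do nothing
```

The hard part, as noted below, is not executing such rules but coming up with them in the first place.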
Us humans do a lot of 'learning' like that too. For these tasks, a machine can be made to appear 'human' and do a better job. It's only a matter of time before the ability to match a zillion patterns against a few zillion rules makes the machine 'better'. Chess and Jeopardy are easy examples. Strategy has patterns and rules too.
What removes the 'A' from AI is formulating these rules in the first place. (Which suggests most humans aren't intelligent either!)
- Put the computer in a monster truck it controls and challenge it to race to the grocery store against a kid on a horse and watch the multimillion dollar marvel crash and burn while the kid is enjoying a Coke at the store.
This is a much better test. Intelligent beings enjoy challenges but need rewards too; something better than a Coke. They also have vanity and a monster truck is hardly an appropriate vehicle for a super brain. Perhaps ...
In any casual or final analysis, Bill's right: Watson is a passingly interesting exposition of technology supported by many groups of technicians, but nothing more.
Consider that not only have we no inkling of brain dynamics and control, i.e., function, we have no useful definition of life itself (arguably a necessary prerequisite).
Moreover, disciplines in the so-called life sciences have no common basis (language, theory, rules of thumb, hand signals, etc.) for discussing the processes of life that each addresses. The only candidate theory having (marginal) scientific credibility is the eighty-year-tired modern synthesis (survival of the fittest + Mendelian [i.e., simplistic, non-epigenetic] genetics).
When revealed, the (abstracted) mechanism of life/mind will likely be met with vertiginous inklings of obviousness cyclically truncheoned by ballistic spasms of disbelief, particularly among certain EEs, geneticists and condensed-matter physicists.
Thanks, Bill, for your clear thinking and nice read!
I always have to smile when I hear someone claim that we know a lot about how the brain works and that we may soon know how to duplicate it. Our level of knowledge about the brain, transposed to the field of electronics, could be compared to someone who can take thermal images of a processor while it is doing different operations, so they can tell which areas activate when you stimulate certain inputs. Scientists can also inject signals at certain nodes and see how the system changes. They understand that there are connections between neurons, like someone might understand that the metal layers on a chip are there to connect the different circuits. But who would ever claim that a person who could do these things to analyze a chip really "understands" electronics? Such knowledge barely scratches the surface and would surely be insufficient to duplicate the system.
We have no clue how even the most basic building blocks (neurons) work. Has anybody probed a neuron and made a truth table describing its operation? Do they work like transistors? Gates? Lookup tables? Complex state machines? Their functionality isn't understood beyond the most basic level: that they have inputs and outputs and come in different shapes and sizes.
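For contrast, the standard *artificial* neuron abstraction (the McCulloch-Pitts / perceptron model: a weighted sum pushed through a threshold) is trivially simple, which rather underlines the commenter's point about how much is being glossed over:

```python
# The classic artificial-neuron abstraction: fire (output 1) iff the
# weighted sum of inputs exceeds a threshold. Real biological neurons
# are vastly more complex; this simplicity is exactly the point.
def artificial_neuron(inputs, weights, threshold):
    activation = sum(x * w for x, w in zip(inputs, weights))
    return 1 if activation > threshold else 0

# With these weights and threshold it behaves like a 2-input AND gate:
for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", artificial_neuron([a, b], [1.0, 1.0], 1.5))
```

Whether real neurons reduce to anything like this is, as the comment says, an open question.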
Still, amid our utter lack of understanding of the most complex system ever discovered, the ignorant feel completely justified in their claims that blind chance is all that was needed to produce it.
Watching the Watson Jeopardy match caused a lengthy discussion afterwards in our household.
Most of us already saw the segments on "NOVA Science Now" and on regular "NOVA". Comparisons were made throughout those programs about using Watson-like computers for medical diagnosis.
After seeing Watson fall down on the first Final Jeopardy, the thing that bothered me was how badly he/it blew it. Would you trust something that doesn't know Toronto is not a U.S. city to diagnose your illness?? I wouldn't!
Of course medical diagnosis has the advantage that the computer doesn't REPLACE the human; it only provides suggestions, which the living/breathing doctor can accept or reject. So maybe it makes sense there.
But in other applications, I think Watson just proves that even the best computers cannot be trusted to make actual decisions. Filtering through millions of resources, and making suggestions to humans, OK. Beyond that, NO!
The experience also pointed out the need for some rules-based programming. Apparently, the programmers didn't bother to tell Watson that the Final Jeopardy category (unlike all the Regular and Double Jeopardy categories) needs to be taken literally. The Final-J category is almost always spot-on, without puns or other obfuscations. That could have prevented Watson from making such a blunder on that clue.
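A hand-written guard of the kind suggested here might look something like the following. This is purely hypothetical (Watson's real pipeline is not public in this form), with a toy gazetteer standing in for real knowledge:

```python
# Hypothetical rules-based sanity check: in Final Jeopardy, take the
# category literally and drop candidates that fail the literal test.
US_CITIES = {"Chicago", "New York", "Boston"}  # toy gazetteer, not Watson's

def final_jeopardy_answer(candidates, category):
    """candidates: list of (answer, confidence) pairs, best first."""
    if category == "U.S. CITIES":
        candidates = [(a, c) for a, c in candidates if a in US_CITIES]
    return candidates[0][0] if candidates else "no confident answer"

# Toronto scores highest but is rejected by the literal category test:
print(final_jeopardy_answer([("Toronto", 0.32), ("Chicago", 0.30)],
                            "U.S. CITIES"))  # Chicago
```

Even one rule like this, applied only to the Final Jeopardy category, would have caught the famous blunder.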
It was also interesting to see Watson's second and third choices, and the few cases where he/it was still revising his choices after the 3 seconds were up. I wish they had shown that for Final Jeopardy too.
It's also interesting that taking more time (for Final-J) doesn't seem to make Watson any "smarter". You would have thought that taking 30 seconds instead of 3 would make its response better, but it didn't in this case.
Either that, or the extra time allowed Watson to OVER-think the problem.
Odd that no one has mentioned Gerald Edelman's work (starting back in the '80s and continuing to this day) on cortical microcolumns, "darwinian" evolution of computation patterns, and processing reified in structure.
Edelman and team stress that true intelligence (what many commenters rightfully highlight) depends on an integrated worldview (not just fast encyclopedia access). The many generations of Edelman's "Darwin" robots are specifically designed to integrate "search" with "senses" and "goals" in the physical world.
One of the Darwins, in particular, I *would* expect to see at the country store, enjoying a Coke with that kid.
"Watson"? Great for Web 2.0, and I welcome it.
I think we still need more growth/change in the computing paradigm before we can "store" memory and "algorithms" fully reified in the dynamic structure the way it's done in the brain.
But we're close. I expect to see it in my lifetime, and being old enough to have used punchcards, I don't think it's going to be that long....
Edelman's books are really great. You can start by skimming Neural Darwinism, The Remembered Present, and Wider Than the Sky, and deciding what grabs you. His work with Mountcastle, "The Mindful Brain," is seminal, as is Mountcastle's work in the ...
Half a century of work, and we're finally getting some wisdom on this topic.
Hmm! Kind of how long it takes a single human to develop wisdom, eh? :-)
I am simultaneously impressed and unimpressed with Watson. As a tool for searching for data posed as questions, it is impressive, even though it takes a room full of equipment to pull off the feat. I am not impressed by the anthropomorphizing of what is a glorified talking search engine. You hear about weak AI and strong AI and all that rot; if anything, Watson is very weak AI.

While watching the Jeopardy games I could reliably predict which questions Watson would get easily and which would cause Watson to choke. If the question was a fill-in-the-blank (first order), Watson won. If it was not, like the president/airport/city connection question, Watson failed and the humans won.

People assume Watson understands what it is saying, but this is simply the Chinese Room experiment: Watson is chugging through lots of algorithms, more like a brute-force expert system with statistics searching a database. AI experts and engineers need to let the public know that Watson is a great tool and an advancement in speech manipulation and data search, but it is not AI in the sense of understanding what it is saying or what is being said to it.
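The "statistics searching a database" behavior can be caricatured in a few lines. This toy (entirely made up, nothing like Watson's actual architecture) answers a clue by ranking stored fact keys on keyword overlap; no understanding is involved anywhere:

```python
# Caricature of statistical retrieval: rank stored fact keys by keyword
# overlap with the clue and emit the best match's answer. Pure string
# statistics; the program "understands" nothing it says.
FACTS = {
    "first president of the United States": "George Washington",
    "largest planet in the solar system": "Jupiter",
}

def answer(clue):
    words = set(clue.lower().split())
    best, score = "I don't know", 0.0
    for key, value in FACTS.items():
        overlap = len(words & set(key.lower().split())) / len(key.split())
        if overlap > score:
            best, score = value, overlap
    return best, score

print(answer("Who was the first president of the United States?"))
```

First-order clues like this one score high; a multi-hop clue (the president/airport/city kind) has no single key to overlap with, which is exactly where this approach chokes.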
Almost certainly, the wrong person(s) would get their hands on this knowledge/technology to do bad things with it. How could you ever defend against a never-ending stream of beings that have been "programmed" to do bad things against certain individuals, races, or humanity itself? I don't like even thinking about it. The world would never be the same again (if it even manages to exist for long).
The day that we truly understand all the low-level and overall workings of the brain will be the day that may be the beginning of the end for us. Beings will then eventually be "created" (physically, in a computer, or whatever). The more I think about it, the more I hope mankind never truly understands how the brain works.
The brain is extraordinarily good at contextual pattern recognition, especially after being exposed to a 'pattern'.
There are plenty of tests to prove this. I just took one wherein all the words in a paragraph were misspelled except for first- and last-letter placement. I found I could read it correctly almost as fast as when the words were correctly spelled, and I had not seen the paragraph before the test.
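The test described (the well-known scrambled-interior-letters effect) is easy to reproduce; a minimal sketch:

```python
import random

def scramble(word):
    """Shuffle a word's interior letters, keeping first and last in place."""
    if len(word) <= 3:
        return word  # nothing to shuffle
    middle = list(word[1:-1])
    random.shuffle(middle)
    return word[0] + "".join(middle) + word[-1]

random.seed(1)  # reproducible scrambling
text = "reading scrambled paragraphs remains surprisingly easy"
print(" ".join(scramble(w) for w in text.split()))
```

Run it on any paragraph and try reading the output: the anchored first and last letters usually give the brain enough pattern to reconstruct each word on the fly.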
Recognizing roads and the like amid other visual clutter is similar to the above test, in that the roads have some boundary conditions in the pattern (which are just as important as the whole pattern itself) that allow us to recognize them.
All of us can usually pick out the sound of a voice we recognize, whether or not we actually know or can see the person. The pattern is not just the words, their cadence, or the turn of phrase used, but also the tonal characteristics of that person's voice.
And we all know that placing a datum in context allows easier human memory storage and recall.