Ray Kurzweil offers his take in the Wall Street Journal today, riffing on his "Singularity" work.
Bottom line from his perch is machines won't take over; we'll harness them effectively. We shall see.
Watson's use of 2,880 processors and an immense amount of storage to mimic just one part of what the human brain does is a testament to the brain's design, because it gives us another look at the scale of complexity needed to master the Jeopardy! problem. Watson isn't designed to drive, so it only emulates part of what we do, while requiring orders of magnitude more power to do it. Next time you read about the latest "powerful" computer, you have permission to snort milk through your nose.
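To put "orders of magnitude more power" in rough numbers, here is a back-of-the-envelope sketch. Both figures are assumptions for illustration: ~20 W is a commonly cited estimate for the brain's power draw, and Watson's server cluster is often put in the tens of kilowatts.

```python
brain_watts = 20        # commonly cited estimate for the human brain (assumed)
watson_watts = 80_000   # rough, assumed figure for Watson's server cluster

ratio = watson_watts / brain_watts
print(f"Watson draws roughly {ratio:,.0f}x the brain's power budget")
# -> Watson draws roughly 4,000x the brain's power budget
```

Even with generous error bars on both numbers, the gap stays at three to four orders of magnitude.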
You missed the point of Watson altogether.
No one on the Watson team will claim that "we are replacing the brain"; they know this is just the first step in a (very) long journey.
Rather, it opened the door on a brand new chapter in human-machine interaction and the ability of the computer to absorb and use information.
The real point in artificial intelligence is to pursue an understanding of the ultimate "computing machine": us humans. And, though we may not achieve the ultimate, we learn a lot about novel ways to use automation and radically improve our lives.
The real payoff is a better world for us humans.
Moore's Law has several decades to go before we see cheap computers that rival the human brain in processing power. I think it naive to expect human-level performance from any system still orders of magnitude away from human scale. OTOH, we are probably getting a preview of Google 2.0.
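The "several decades" claim can be sanity-checked with simple arithmetic. The figures below are illustrative assumptions, not established values: ~10^16 ops/sec for brain-equivalent computation, ~10^11 ops/sec for a cheap computer today, and a two-year Moore's Law doubling period.

```python
import math

brain_ops = 1e16      # assumed estimate of brain-equivalent ops/sec
cheap_pc_ops = 1e11   # assumed ops/sec for a cheap computer today
doubling_years = 2    # rough Moore's Law doubling period

doublings = math.log2(brain_ops / cheap_pc_ops)
years = doublings * doubling_years
print(f"{doublings:.1f} doublings needed, ~{years:.0f} years")
# -> 16.6 doublings needed, ~33 years
```

Five orders of magnitude is about 17 doublings, so under these assumptions parity in cheap hardware is indeed a few decades out.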
Computers are never going to mimic the human brain, any more than airplanes mimic birds. Birds and planes both fly, but they do so by different methods, and so it will be when we have computers doing human tasks.
Watson's performance was indeed impressive. However, I don't think he was that much "smarter" than Jennings or Rutter. Once the "ring in" was enabled (when Trebek spoke the last word of the clue), Watson had an insurmountable advantage: he could respond in microseconds. The humans' responses were slowed by nervous-system and finger-muscle latency, which adds at least several milliseconds.
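The buzzer argument can be made concrete with a tiny simulation. The latency figures are assumptions chosen only to illustrate the timing gap: human reaction times of roughly 150-300 ms after the enable signal, versus an electronic trigger within 5-10 ms.

```python
import random

random.seed(0)

def human_buzz_ms():
    # assumed: human reaction ~150-300 ms after the "ring in" enable signal
    return random.uniform(150, 300)

def watson_buzz_ms():
    # assumed: electronic trigger fires within ~5-10 ms of the enable signal
    return random.uniform(5, 10)

trials = 10_000
watson_wins = sum(watson_buzz_ms() < human_buzz_ms() for _ in range(trials))
print(f"Watson buzzes first in {watson_wins / trials:.0%} of trials")
# -> Watson buzzes first in 100% of trials
```

Under these assumptions the distributions don't even overlap, so knowledge only matters on clues where Watson's confidence is too low to ring in at all.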
WE KNOW MUCH OF WHAT WE NEED TO KNOW TO BUILD HUMAN LEVEL AI – Part 1
The above article says, "we really don't have a clue as to how the human brain (or those of most other animals) really 'works'."
WRONG! Although we do not understand many of the details, we have good, plausible theories for how the brain accomplishes much of what it does. And there is no reason to think we have to copy many of those details in order to copy its important functions.
Brain science has made increasingly rapid advances in the last twenty years. As a result, there is much more knowledge in the field than even most researchers are aware of. One example of such recent progress is the paper “Learning a Dictionary of Shape-Components in Visual Cortex:...” by Thomas Serre of Prof. Tomaso Poggio’s group at MIT. It describes a system that provides human-level performance on one limited, but impressive, type of human visual perception (http://cbcl.mit.edu/publications/ps/MIT-CSAIL-TR-2006-028.pdf).

The Serre-Poggio system learns and uses patterns in a generalization and composition hierarchy. This allows efficient reuse of representational components, and of the computations that match against them, across multiple higher-level patterns. It allows the system to learn in compositional increments. It also provides surprisingly robust invariant representation. Such invariant representation is extremely important because it allows efficient non-literal matching, pattern recognition, and context-appropriate pattern imagining and instantiation. Such non-literal match and instantiation tasks have, until recently, been among the major problems in trying to create human-like perception, cognition, imagination, and planning.
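One core mechanism behind this kind of invariant representation can be shown in a toy sketch (this is an illustration of the general max-pooling idea, not the actual Serre-Poggio model): pooling the maximum response over a local window makes the output insensitive to small shifts of the input feature.

```python
# Toy illustration: max pooling over local detector responses yields a
# representation that is invariant to small translations of the feature.

def max_pool(responses, window):
    """Take the max over consecutive, non-overlapping windows."""
    return [max(responses[i:i + window])
            for i in range(0, len(responses), window)]

# A "feature detector" fires (1) at one position in the input.
pattern         = [0, 0, 1, 0, 0, 0, 0, 0]
shifted_pattern = [0, 0, 0, 1, 0, 0, 0, 0]  # same feature, shifted by one

print(max_pool(pattern, 4))          # -> [1, 0]
print(max_pool(shifted_pattern, 4))  # -> [1, 0]  (identical after pooling)
```

Stacking such pooling stages in a hierarchy is what lets higher layers match patterns "non-literally," without requiring an exact pixel-level repeat of what was learned.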
WE KNOW MUCH OF WHAT WE NEED TO KNOW TO BUILD HUMAN LEVEL AI – Part 2
Although it is different from the Serre-Poggio system, the system described in Geoff Hinton’s Google Tech Talk at http://www.youtube.com/watch?v=AyzOUbkUf3M demonstrates a character-recognition architecture that shares many of these same beneficial characteristics, including a hierarchical, scalable, and invariant representation/computation scheme that can be efficiently and automatically trained. The Hinton scheme is quite general and can be applied to many types of learning, recognition, and context-sensitive imagining. The architecture described by Jeff Hawkins et al. of Numenta, Inc. in “Towards a Mathematical Theory of Cortical Micro-circuits” (http://www.ploscompbiol.org/article/info:doi/10.1371/journal.pcbi.1000532) also shares the concepts of hierarchical memory and invariance, and provides a potentially powerful and general computational model that attempts to describe the functioning of the human cortex in terms of its individual layers.
Similarly impressive advances have been made in understanding other brain systems, including those that control and coordinate behavior within and between multiple brain areas, and those that focus attention and decide which of competing actions to take or consciously consider.