Watson's achievement on Jeopardy, while seemingly impressive, is little more than massive computing power applied to a context-free micro-world problem. Such micro-worlds are divorced from the everyday world in which humans operate, and despite the speed with which information was retrieved, success in them does not imply any understanding of the context in which that information might be useful.
Like the airplane, Watson represents a next-generation tool that can and will be of immense value to its users; however, it is still a very long way from any true intelligence.
I am not at all impressed with Watson's so-called "accomplishments". Watson accomplished nothing. Everything Watson knew was compiled by humans. If Watson could learn language, or the manual dexterity of a child, on its own, I would be much more impressed. As it is, beating humans at Jeopardy is no more impressive than a car moving faster than humans or horses. Big deal.
Is this a surprise result? This reminds me of the 19th century, when men on horses would lose to steam engines. Jeopardy is challenging, but in the end it is nothing more than fast database access. Put the computer in a monster truck it controls and challenge it to race to the grocery store against a kid on a horse, and watch the multimillion-dollar marvel crash and burn while the kid is enjoying a Coke at the store.
I agree with BicycleBill. And furthermore, Watson had the advantage of data accumulated and entered by how many people - hundreds, thousands, millions? - over how many years? If I could download Wikipedia into my brain, I would have access to a heck of a lot of trivia. But the people Watson was competing with only had their individual experiences to learn from. A human being is way more fascinating than Watson.
And even if we figure out the brain, that's still just the "what", not the "who". Perception as a subjective act requires a subject perceiver.
Sorry, I have to disagree with the conclusions drawn from those papers. How does the brain store what it does? Analog format? Digital? Binary? What sort of resolution? What sampling rate? How does the brain search and retrieve its contents? How does the brain form and re-form links and pathways?
Seems to me any computer system we build will be a digital embodiment of the models we have postulated. But that still doesn't mean that we know how the brain is doing what it does.
WE KNOW MUCH OF WHAT WE NEED TO KNOW TO BUILD HUMAN LEVEL AI – Part 3
These advances, and many more, provide enough understanding that we can actually start experimenting with designs for powerful artificial minds. It’s not as if we have exact blueprints. But we do have a good overview, and good ideas on how to handle every problem I have ever heard mentioned in regard to creating roughly brain-like AI. As Deb Roy, of MIT, once agreed with me after one of his lectures, there are no problems between us and roughly human-level AI that we have no idea how to solve. The major problem that exists is the engineering problem of getting all the pieces to fit and work together well, automatically, and within a commercially-viable computational budget. That will take experimentation.
In fact, the major remaining barrier hindering the achievement of human-level AI is not a lack of theories for how to build such an artificial brain, but rather the lack of hardware with the extreme computational and representational power of the supercomputer between the ears of every intelligent human. The human brain performs the equivalent of read-modify-writes to over 100 trillion different memory locations a second. I am not aware of any current supercomputer that can accomplish this. Most processor-to-main-memory channels can only do a read-modify-write at roughly 10 MHz, so 10 million memory channels and 500 – 2000 terabytes of RAM would be required to match the human brain at this capability.
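A quick back-of-the-envelope check of where that 10-million-channel figure comes from, taking the numbers above as assumptions rather than measurements:

```python
# Rough sanity check of the memory-bandwidth estimate in the comment above.
# Both figures are the commenter's assumptions, not measured values.

brain_rmw_per_sec = 100e12   # assumed ~100 trillion read-modify-writes per second
channel_rmw_hz = 10e6        # assumed ~10 MHz read-modify-write rate per memory channel

channels_needed = brain_rmw_per_sec / channel_rmw_hz
print(f"Memory channels needed: {channels_needed:,.0f}")  # -> 10,000,000
```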
I don’t think we have to match the human brain byte-for-byte to equal its capabilities, but I do think computers several orders of magnitude more powerful than those currently used for AI research will be required. Once we have computers with such power, it will not take that many years of experimentation to duplicate the functions of the human brain.
For more information on these theories go to http://www.int4um.com/ , from which several of the paragraphs above have been copied.
WE KNOW MUCH OF WHAT WE NEED TO KNOW TO BUILD HUMAN LEVEL AI – Part 2
Although it is different from the Serre-Poggio system, the system described in Geoff Hinton’s Google Tech Talk at http://www.youtube.com/watch?v=AyzOUbkUf3M demonstrates a character recognition architecture that shares many of these same beneficial characteristics --- including a hierarchical, scalable, and invariant representation/computation scheme that can be efficiently and automatically trained. The Hinton scheme is quite general, and can be applied to many types of learning, recognition, and context-sensitive imagining. The architecture described by Jeff Hawkins et al. of Numenta, Inc. in “Towards a Mathematical Theory of Cortical Micro-circuits” (http://www.ploscompbiol.org/article/info:doi/10.1371/journal.pcbi.1000532 ) also shares the concepts of hierarchical memory and invariance, and provides a potentially powerful and general computational model that attempts to describe the functioning of the human cortex in terms of its individual layers.
Similar amazing advances have been made in understanding other brain systems ---including those that control and coordinate the behavior of, and between, multiple areas in the brain --- and those that focus attention and decide which of competing actions to take or consciously consider.
WE KNOW MUCH OF WHAT WE NEED TO KNOW TO BUILD HUMAN LEVEL AI – Part 1
The above article says, "we really don’t have a clue as to how the human brain (or those of most other animals) really 'works'."
WRONG! Although we do not understand many of the details, we have good plausible theories for how the brain accomplishes much of what it does. And there is no reason to think we have to copy that many of the details of how the brain works in order to copy its important functions.
Brain science has made increasingly rapid advances in the last twenty years. As a result there is much more knowledge in the field than even most researchers are aware of. One example of such recent progress is the paper “Learning a Dictionary of Shape-Components in Visual Cortex:...”, by Thomas Serre of Prof. Tomaso Poggio’s group at MIT. It describes a system that provides human-level performance in one limited, but impressive, type of human visual perception (http://cbcl.mit.edu/publications/ps/MIT-CSAIL-TR-2006-028.pdf ). The Serre-Poggio system learns and uses patterns in a generalization and composition hierarchy. This allows efficient reuse of representational components, and of the computations that match against them, in multiple higher-level patterns. It allows the system to learn in compositional increments. It also provides surprisingly robust invariant representation. Such invariant representation is extremely important because it allows efficient non-literal matching, pattern recognition, and context-appropriate pattern imagining and instantiation. Such non-literal match and instantiation tasks have --- until recently --- been among the major problems in trying to create human-like perception, cognition, imagination, and planning.
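To make the idea of a composition hierarchy with invariance concrete, here is a toy sketch of my own (not the actual Serre-Poggio code), assuming the usual pattern of alternating template-matching and max-pooling layers: local patches are matched against learned templates, and pooling keeps only the strongest response in each neighborhood, so a feature is recognized regardless of its exact position.

```python
import numpy as np

def match_templates(patches, templates):
    # "Simple"-style layer: how strongly each local patch matches each learned template.
    return np.array([[float(np.dot(p, t)) for t in templates] for p in patches])

def pool_max(responses, pool_size):
    # "Complex"-style layer: keep only the strongest response in each neighborhood,
    # discarding exact position -- this is what gives the representation its invariance.
    return np.array([responses[i:i + pool_size].max(axis=0)
                     for i in range(0, len(responses), pool_size)])

# Toy example: a 1-D "image" cut into 8 patches of 4 values, matched against 3 templates.
rng = np.random.default_rng(0)
patches = rng.normal(size=(8, 4))
templates = rng.normal(size=(3, 4))

layer1 = match_templates(patches, templates)  # local feature detection: 8 positions x 3 features
layer2 = pool_max(layer1, pool_size=4)        # position-tolerant summary: 2 regions x 3 features
print(layer2.shape)                           # (2, 3)
```

Stacking more such pairs of layers composes these pooled features into progressively larger, more abstract patterns, which is the sense in which the representation is hierarchical and compositional.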
Watson's performance was indeed impressive. However, I don't think he was that much "smarter" than Jennings or Rutter. Once the "ring in" was enabled (when Trebek spoke the last word of the clue), Watson had an insurmountable advantage. He could respond in microseconds. The humans' responses were slowed by their nervous-system and finger-muscle latency, which adds at least several milliseconds.
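A rough illustration of that buzzer race, using figures that are only assumptions consistent with the comment above (microseconds for Watson, several milliseconds at minimum for the humans):

```python
# Illustrative buzzer-race comparison; the latency figures are assumptions, not measurements.
watson_buzz_s = 5e-6   # assume Watson can trigger the buzzer ~5 microseconds after enablement
human_buzz_s = 5e-3    # assume a human needs at least ~5 milliseconds of nerve/muscle latency

advantage = human_buzz_s / watson_buzz_s
print(f"Watson reaches the buzzer roughly {advantage:,.0f}x faster")  # -> ~1,000x
```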
What are the engineering and design challenges in creating successful IoT devices? These devices are usually small, resource-constrained electronics designed to sense, collect, send, and/or interpret data. Some of the devices need to be smart enough to act upon data in real time, 24/7. Specifically, the guests will discuss sensors, security, and lessons from IoT deployments.