It’s too early to say just how far neural networks will take us, argued the most bullish member of the panel, Ilya Sutskever, co-founder and research director of OpenAI and a former research scientist at Google Brain.
“These models are hard to understand. Machine vision, for example, was incomprehensible as a program, but now we have an incomprehensible solution to an incomprehensible problem,” he said.
Although the backpropagation algorithms at the core of neural networks have been around for years, the hardware to run them at scale has become available only recently. New architectures in the works for neural nets promise that “in the next few years, we’ll see amazing computers that will show much progress,” added Sutskever.
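The core idea behind backpropagation is simple to sketch: run a forward pass, then push the loss gradient back through the model with the chain rule. The toy below, which is purely illustrative and not drawn from the panel, trains a single sigmoid neuron; a deep network repeats the same chain-rule step layer by layer.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Tiny illustrative dataset: classify whether x is positive.
data = [(-1.0, 0), (-0.5, 0), (0.5, 1), (1.0, 1)]

w, b, lr = 0.0, 0.0, 1.0
for _ in range(200):
    grad_w = grad_b = 0.0
    for x, y in data:
        p = sigmoid(w * x + b)   # forward pass
        grad_w += (p - y) * x    # chain rule: dLoss/dw for logistic loss
        grad_b += (p - y)        # chain rule: dLoss/db
    w -= lr * grad_w             # gradient-descent update
    b -= lr * grad_b

preds = [int(sigmoid(w * x + b) > 0.5) for x, _ in data]
```

After training, `preds` matches the labels; the expensive part in practice is exactly these multiply-accumulate loops, which is why the hardware discussion below matters.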
Speaking in a separate panel, Doug Burger, distinguished engineer working on FPGA accelerators at Microsoft’s Azure cloud service, agreed. “Despite being at the peak of the hype curve, neural networks are real … there’s something deep and fundamental here [that] we don’t fully understand yet.”
Startups, academics, and established companies are working on processors to accelerate neural nets, many built around reduced-precision matrix-multiply units, he noted. “That will play out over three or four years, and what will come after that is really interesting to me.”
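The reduced-precision trick Burger refers to can be sketched in a few lines: quantize float operands to small integers (say, int8), do the multiply-accumulate in integer arithmetic, then rescale. This is a generic sketch of symmetric quantization, not any particular vendor's design, and the function names are hypothetical.

```python
def quantize(vec, bits=8):
    """Map floats to signed ints via a per-vector scale (symmetric quantization)."""
    qmax = 2 ** (bits - 1) - 1                    # 127 for int8
    scale = max(abs(v) for v in vec) / qmax or 1.0
    return [round(v / scale) for v in vec], scale

def dot_int8(a, b):
    """Dot product computed on int8 operands, result rescaled back to float."""
    qa, sa = quantize(a)
    qb, sb = quantize(b)
    return sum(x * y for x, y in zip(qa, qb)) * sa * sb

a = [0.12, -0.53, 0.98, 0.07]
b = [0.45, 0.33, -0.21, 0.88]
exact = sum(x * y for x, y in zip(a, b))          # full-precision reference
approx = dot_int8(a, b)                           # reduced-precision result
```

The integer result lands within a small quantization error of the float answer, while the hardware only needs cheap 8-bit multipliers instead of floating-point units.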
Fellow panelist Norman Jouppi agreed. The veteran microprocessor designer, who led the team behind Google’s TPU accelerator, called neural nets “one of the biggest nuggets” in computer science today.
Michael I. Jordan, a machine-learning expert at UC Berkeley, was the bear on the AI panel. Computer science remains the overarching discipline, not AI, and neural nets are a still-developing part of it, he argued.
“It’s all a big tool box,” he said. “We need to build the infrastructure and engineering [around neural nets, and] we are far away from that. We need to have systems thinking with math and machine learning.”
Like other speakers, he pointed to human reasoning capabilities outside the scope of neural nets. “Natural language processing is very hard. Today, we are matching strings to strings, but that’s not what translation is.”
For example, he noted enthusiasm in China over chatbots. The automated conversation agents can engage humans, but without support for abstractions and semantics, they can’t say anything that’s true about the world.
“We are in an era of enormous learning, but we are not [at AI] yet,” he concluded. Nevertheless, he agreed that neural nets are significant enough that they need to become a part of a revised computer science curriculum.
— Rick Merritt, Silicon Valley Bureau Chief, EE Times