SAN FRANCISCO — At a conference hosted by Silicon Valley Forum last week, industry experts projected the future of AI, machine learning, and deep learning.
“This notion that evolution ends with humans is silly,” keynoter Steve Jurvetson, partner and managing director of DFJ, told attendees. “I think what humans really mean is we don’t want to compete with something smarter than us in our lifetime. I think you can shift our selfish sense of supremacy to a symbolic trajectory of progress.”
Some speakers said progress in AI would give systems the ability to talk to each other quickly and simply, while others believe the ability to reason and make inferences will be the true differentiator in intelligence. Regardless, Citrix Startup Accelerator’s chief technologist Michael Harries said, any entrepreneurs who aren’t familiarizing themselves with AI have “rocks in their heads.”
According to Modar Alaoui, AI’s immediate future lies in ambient intelligence in smartphones and smart cars. Alaoui is the founder and CEO of Eyeris, which develops artificial intelligence for facial recognition. Several speakers said they would like to see artificially intelligent robots or computers that learn without being told, then “self-tune” after solving a problem.
“Robots have an ability to adapt to their environment; they have the ability to learn. But the ability to go on and extend that model is really intelligence. I think we will see that, but that’s the jump we haven’t made,” Kevin Albert, CEO and co-founder of robotics startup Pneubotics, said during a panel discussion.
Jeff Hawkins, CEO and co-founder of Numenta, a firm that has developed a computational framework for AI, said:
Intelligence shouldn’t be measured by any particular task. What characterizes intelligence is extreme flexibility… building a flexible learning system. [Some AI is] focused on being human-like; our work here is not being human-like at all. It’s about understanding the general principles of intelligence that we can apply to all kinds of problems.
While there are a variety of ways to approach the development and fine-tuning of artificial intelligence -- including training machines “like children,” according to panelists -- Hawkins believes reverse engineering the neocortex is the fastest way to intelligent machines. Neuroscience has shown that language and touch work on the same principles, and Hawkins expects a machine’s abilities to unfold in a similar way once scientists are able to tap inherent potential.
“Once we understand those principles of the neocortex, we can modify them -- we don’t need to be true to evolutionary biology,” Hawkins said. “We still have so much to learn about the basics of how biology works. Progress is incremental but also exponential. We’re going to finish this off in less than five years, I believe.”
If the thought of enlightened machines in the next five years is too much, Hawkins assured attendees that artificial intelligence isn’t inherently dangerous. The ability to self-replicate is dangerous, however.
A new skill set may be required as work on AI continues, with the nature of programming itself shifting. AI developers are working with unknown subsystems that are “big black-box Legos,” IBM Seeker of Awesomeness John Wolpert said.
“You do more teaching than you do programming. The technical skills required are more social; I want more liberal arts majors for this stuff. You still need lots of skills, but not to work with the systems,” Wolpert said.
Shahin Farshchi, partner at Lux Capital, said he would like to see a “level of obsessive focus on products and solving specific problems that we see in software.” Specific use cases for deep learning and artificial intelligence could hasten the development process.
— Jessica Lipsky, Associate Editor, EE Times