Machine learning takes center stage at Google IO
MOUNTAIN VIEW, Calif. – Google announced a bold and broad strategy centered on machine learning at its annual developer event. So far, its support in the OEM community appears modest at best.
The search giant has shifted from a “mobile-first” to an “AI-first” strategy, said chief executive Sundar Pichai in a keynote at Google IO. “In an AI-first world, we are rethinking all our products,” he said, announcing a new group, Google.ai, which will develop tools and applications to make machine learning more widely available.
Pichai also announced Google’s second-generation TensorFlow Processing Unit (TPU) as key to its future data center architecture. “We are rethinking our computing architecture again…We want Google Cloud to be the best cloud for machine learning,” he said, echoing hopes of rivals such as Amazon, Facebook and Baidu.
Demos of new features in Google’s translation and photo services were among the most impressive efforts using machine learning. “All Google came from understanding text and Web pages, so the fact we can understand speech and images [with neural networks] has profound implications for us,” he said.
Separately, Google previewed the next version of Android as well as a new variant, Android Go, for entry-level smartphones with as little as 512 Mbytes of memory. It also hinted at plans for an OEM program for Google Home that will spawn third-party products by the end of the year that compete with rival Amazon’s Echo and Dot.
The move comes at a time when all big data centers are reinventing themselves for a world in which neural networks are delivering new capabilities to recognize speech, images and more.
Amazon has already sold as many as 10 million devices using its Alexa voice interface and has created a strong OEM program for it. Last month, Facebook showed ways it will deliver augmented reality and other features to smartphones using machine learning. Microsoft is testing machine learning services accelerated on the Catapult FPGAs it is now putting on all its new servers.
The keynote was short on new OEM products. Among the exceptions:
- Asus will launch a smartphone using Google’s Tango platform for augmented reality
- HTC and Lenovo will ship Daydream VR headsets this year that come with custom electronics built in
- Samsung’s Galaxy S8 will get a software upgrade to support Daydream
- LG announced a line of appliances that will support Google Home and Assistant including a washing machine and dryer, refrigerator, oven range, air purifier, air conditioner and robotic vacuum
No doubt OEMs are trying to sort out which of several emerging machine learning services they want to support.
Apple was out early with Siri, but Amazon has the most traction with Alexa. Google seems to be stepping on the gas with its Assistant and Home. Microsoft’s Cortana is among the other competitors, along with options emerging from China’s big three data center operators.
Android gets bigger and smaller
As part of its AI-first strategy, Google is developing a mobile variant of its machine learning framework. TensorFlow Lite will soon be part of the framework’s open-source code.
Google is also developing neural network APIs for the mobile version that will let developers tap into DSPs and other mobile accelerator cores and chips. Last month, Qualcomm, Nvidia and others showed support for Facebook’s new Caffe2 framework for mobile devices.
Separately, Google released a beta preview of Android O, the next version of the mobile OS, pitching it as a major upgrade. Perhaps more significantly, it announced a streamlined variant for entry-level smartphones targeting emerging markets, which are expected to generate most of the growth in the slowing smartphone market.
Android Go will work on handsets with as little as 512 Mbytes of memory. It includes applications and features optimized for low-cost handsets that may have intermittent data links and users with limited data budgets. Google also released guidelines for developing apps for the devices, which are not expected to start shipping until next year.
Some two billion Android smartphones and tablets are now active. They downloaded 82 billion apps last year.
Google’s Android Wear is now supported by 24 watch brands, although Apple still ships the bulk of smartwatches. Interestingly, Google claims Chromebooks make up 60 percent of K-12 laptops, a market Apple used to dominate and one that is likely moving to tablets and handsets.
On another front, CEO Pichai said third-party products will ship before the end of the year, certified to work with Google’s Assistant and Home products. Thanks to neural nets, the word error rate on Google’s voice recognition services is now down to 4.9 percent from 6.1 percent in December, he said.
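For readers unfamiliar with the metric Pichai cited: word error rate is conventionally computed as the word-level edit distance (insertions, deletions and substitutions) between a reference transcript and the recognizer’s output, divided by the number of reference words. A minimal sketch, not Google’s code, with hypothetical example phrases:

```python
# Illustrative word error rate (WER) calculation via word-level
# Levenshtein distance. Not Google's implementation.
def word_error_rate(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # Dynamic-programming table: d[i][j] is the edit distance between
    # the first i reference words and the first j hypothesis words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost) # substitution
    return d[len(ref)][len(hyp)] / len(ref)

# One substituted word in a five-word reference -> 20 percent WER.
print(word_error_rate("ok google play some music",
                      "ok google play sum music"))  # 0.2
```

By this measure, a 4.9 percent rate means roughly one word in twenty is misrecognized.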
Interestingly, the Google Assistant is now available on the iPhone, Pichai announced to cheers from an audience at the Shoreline Amphitheater, a concert venue near Google headquarters where the event was held. The venue is a short drive from Apple’s “spaceship” headquarters still under construction.
Google announced several new services coming soon for its Home product. They include proactive assistance based, for example, on events on a personal calendar and changing traffic or weather conditions. A hands-free calling service will be available in the U.S. and Canada, and the Home device will in the future be able to send Web data to supported Chromecast TVs and Android smartphones.
A new service called Google Lens showed the most impressive uses of machine learning.
Lens recognizes objects seen through a smartphone camera and responds with information about them, such as a related Web page. In one demo, a user took a picture of a menu in Japanese that Lens translated. In another, it removed an obstructing fence from a picture of a Little Leaguer at bat.
“We are clearly at an inflection point with vision,” said Pichai.
— Rick Merritt, Silicon Valley Bureau Chief, EE Times