MADISON, Wis. – In the not-so-distant past, Computex Taipei, an international computer expo, was all about PCs. Taiwan’s original design manufacturers (ODMs) grew up with Computex and put Taiwan on the map as a global PC hub. Among the companies that profited from the Taiwanese ODMs’ success were Intel and Microsoft, which played key roles in defining PC technology.
Fast forward to 2017.
This week, Nvidia is coming to Computex in hopes of replicating what Intel and Microsoft achieved a few decades ago in the PC market. Nvidia’s sole focus is dominating the new era of “accelerated computing.”
Nvidia defines accelerated computing as the increased use of a graphics processing unit together with a CPU to accelerate deep learning, analytics and engineering applications.
On Monday (May 29), Nvidia unveiled “a partnership program with the world’s leading ODMs -- Foxconn, Inventec, Quanta and Wistron -- to more rapidly meet the demands for AI cloud computing.”
Through a partner program built around Nvidia’s hyperscale GPU accelerator for AI and cloud computing, Nvidia hopes to provide each ODM with “early access to the Nvidia HGX reference architecture, Nvidia's GPU computing technologies and design guidelines,” according to the company.
The GPU giant’s partnership program with the Taiwanese ODMs isn’t about winning a server market. Taiwan already builds the bulk of the world’s servers, thanks to companies including Intel and Google, which have worked with Taiwanese ODMs for years.
Nvidia wants to push its GPU-compute chassis further into data centers and cloud computing servers that Taiwanese ODMs are already making.
GPU-accelerated computing offloads the compute-intensive parts of an application to the GPU, while the rest of the code runs on the CPU. Nvidia's goal is to enable accelerated computing “everywhere,” from labs to academia and small and medium businesses, explained Keith Morris, senior director of product management for Accelerated Computing at Nvidia. This will help “democratize” the use of such applications as deep learning, artificial intelligence and machine learning, which rely on rapid acceleration of parallel codes, he added.
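The offload model Morris describes can be sketched in a few lines of Python. This is a minimal illustration, not Nvidia code: it assumes a GPU array library such as CuPy may or may not be installed, offloads a heavy matrix multiply when one is available, and falls back to NumPy on the CPU otherwise. The function name `accelerated_matmul` is made up for this example.

```python
import numpy as np

# Try to use a GPU array library for the heavy work; otherwise stay on the CPU.
try:
    import cupy as xp  # compute-intensive kernels run on the GPU
    GPU_AVAILABLE = True
except ImportError:
    xp = np            # fallback: everything runs on the CPU
    GPU_AVAILABLE = False

def _to_host(arr):
    """Copy a device array back to host memory (no-op for NumPy arrays)."""
    return arr.get() if hasattr(arr, "get") else arr

def accelerated_matmul(a, b):
    """Offload the compute-intensive matrix multiply; keep the rest on the CPU."""
    a_dev = xp.asarray(a)      # host -> device copy (no-op on CPU)
    b_dev = xp.asarray(b)
    c_dev = a_dev @ b_dev      # the part worth accelerating
    return _to_host(c_dev)     # device -> host copy of the result

# Light setup and bookkeeping stay on the CPU.
a = np.random.rand(256, 128)
b = np.random.rand(128, 64)
c = accelerated_matmul(a, b)
print(c.shape)
```

The pattern is the same one CUDA applications follow at a lower level: copy inputs to the accelerator, run the parallel kernel there, and copy results back, while control flow and serial logic remain on the CPU.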
A number of companies have been wooing Taiwanese ODMs for years, said Paul Teich, principal analyst at Tirias Research. “Intel has been working with most of these server ODMs (Foxconn, Inventec, Quanta and Wistron) for years -- they are the leading cloud server ODMs because of their relationship with Intel,” he noted.