
Wave Computing aims to leapfrog Nvidia in AI
9/21/2016 08:00 AM EDT
Re: A question
spike_johan   9/22/2016 8:42:18 PM
Thanks Rick for your rapid reply.

Yes, ASICs are all about custom silicon. So are FPGAs, except FPGAs are rewritable. I guess I am still wondering if the inherent logic blocks of an FPGA (or unchangeable ASIC) are where the heavy lifting is done in processing the algorithm.

Re: A question
rick merritt   9/22/2016 7:22:57 PM
Hi spike_johan

Here's my 30,000-foot layman's view, but I welcome other readers to jump in and school us both:

The success researchers have had with new machine-learning algorithms such as convolutional neural nets has set data center giants like Google, Facebook and Microsoft on fire about the possibilities in analytics.

See http://www.eetimes.com/document.asp?doc_id=1327567&page_number=2

The algorithms are run against large data sets in a compute-intensive process called training to cull patterns. Some do this work in software on CPUs, some want the fastest GPUs for best performance (see http://www.eetimes.com/document.asp?doc_id=1328464) and others are building architectures tailored for it, such as Wave and Nervana mentioned here.
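[Editor's note: for readers wondering what "training" means computationally, here's a hedged toy sketch, not from the article, using a simple linear model in place of a real neural net. Gradient descent repeatedly nudges weights to fit the data; it's this inner loop, run at enormous scale, that CPUs, GPUs and custom accelerators all compete to speed up.]

```python
import numpy as np

# Toy "training": fit a tiny linear model y = w*x + b to data that
# secretly follows y = 2x + 1, by repeated gradient-descent steps.
# Real deep-learning training runs this same adjust-and-repeat loop
# over millions of parameters and examples, hence the hunger for
# GPUs and purpose-built chips.
rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, size=100)
y = 2.0 * x + 1.0                  # the "pattern" hidden in the data set

w, b, lr = 0.0, 0.0, 0.1           # initial weights and learning rate
for _ in range(500):               # the compute-intensive training loop
    pred = w * x + b
    err = pred - y
    w -= lr * np.mean(err * x)     # gradient step for the weight
    b -= lr * np.mean(err)         # gradient step for the bias

print(w, b)                        # converges close to 2.0 and 1.0
```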

Google developed an ASIC to accelerate its TensorFlow version of the algorithms:

See http://www.eetimes.com/document.asp?doc_id=1323500

Separately, Microsoft and Baidu have implemented FPGA cards in servers to handle specific jobs such as accelerating search, network processing and crypto.

See http://www.eetimes.com/document.asp?doc_id=1323500

This broader use of accelerators in data centers was one of the motivations for Intel to buy Altera. That move motivated all their rivals to gear up for accelerators, too.

See http://www.eetimes.com/document.asp?doc_id=1329734

In short, there are a lot of related and somewhat orthogonal things going on here.

Hope this helps

Rick

A question
spike_johan   9/22/2016 10:07:44 AM
Rick, thanks again for another informative article on the rapid transformation that is underway in the machine learning/AI space. This brings me to my question.

If I have been understanding what I have been reading these past few months, is all of this new custom silicon based [sort of] on FPGA/memory closely clustered with CPU(s) on an SoC, where specific machine learning algorithms are configured into the logic blocks of the FPGA? And are other tasks that can be offloaded from software being moved onto hardware accelerators to run faster?

Thanks if you can shed any further light on where my thinking on this subject presently stands. I have been out of the game for almost five years, and I find it fascinating - if I am right - how an early technology like the FPGA has been handed these new tasks.
