Yu-Hsin Chen of MIT explains his Eyeriss accelerator at an ISSCC demo session. Fabricated in a 65nm process, the chip delivered results comparable to those of a 28nm Nvidia Tegra while being ten times more power efficient, Chen claimed.
He demoed the chip running AlexNet, the network that popularized convolutional neural networks, which are now widely used in data centers for a variety of jobs including image recognition.
Chen’s chip used a dataflow approach, distributing control and storage across a 14 x 12 array of processing elements connected by an on-chip network that uses both point-to-point and multicast techniques. “Moving data is expensive so three types of data reuse are employed in the array,” Chen said.
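The article doesn't detail Eyeriss's specific dataflow, but the reuse Chen refers to is inherent to convolution: each input pixel participates in many overlapping filter windows, so keeping it in local storage avoids repeated fetches from expensive off-chip memory. A minimal sketch (illustrative only, not the Eyeriss scheme) that counts how often a naive convolution would re-read each input pixel:

```python
import numpy as np

def conv2d_with_reuse_count(image, kernel):
    """Naive 2D convolution that also counts how many times each
    input pixel is read -- a proxy for the data reuse an accelerator
    can exploit by caching values near the processing elements."""
    H, W = image.shape
    K, _ = kernel.shape
    out = np.zeros((H - K + 1, W - K + 1))
    reads = np.zeros_like(image, dtype=int)
    for i in range(H - K + 1):
        for j in range(W - K + 1):
            patch = image[i:i + K, j:j + K]
            out[i, j] = np.sum(patch * kernel)
            reads[i:i + K, j:j + K] += 1  # every pixel in the window is read
    return out, reads

image = np.arange(36, dtype=float).reshape(6, 6)  # toy 6x6 input
kernel = np.ones((3, 3))                          # toy 3x3 filter
out, reads = conv2d_with_reuse_count(image, kernel)
print(reads.max())  # interior pixels are read 9 times (once per 3x3 window)
```

Each interior pixel is fetched K x K times by a naive loop; an accelerator that keeps such values in local register files or scratchpads pays the off-chip cost only once, which is the kind of saving Chen's array targets.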