News & Analysis

# Google’s TPU Hit a Tight Sked

Short schedule "answers a lot of questions"
5/19/2016 08:25 AM EDT
User Rank: Rookie
Future of Chips: Google's TPU and Diamond Computer chips
5/26/2016 8:46:47 AM
When I heard that Google was fabricating a new custom chip for its machine learning systems, I was excited. One might ask, why? This chip is not just another chip that is faster than everything else out there; it is different in kind. Google has not revealed much information about the chip, but it is expected to be used in upcoming major projects involving artificial intelligence and the cloud.

User Rank: Author
Lower precision is deceptively important
5/23/2016 10:04:20 AM
"Jouppi ... [says] they are more efficient than standard parts because they use 'lower precision only using as many bits as needed.' "

There is a reflex reaction in computation-rich tech disciplines to assume that higher-precision calculation is always better. Some of us who date back to the slide rule era (barely, in my case) may view this issue a bit differently.

What we tend to do now in far too many calculation chains is either (a) effectively ignore uncertainty, or (b) calculate the location of, say, Detroit down to a pebble in a playground, and then use another equally detailed number to "back off" and say, "well, really, what I meant was somewhere within a much less precise ten-mile-radius circle around that pebble." The latter assertion carries the continuing irony of being very precise in how it describes the circle of uncertainty around that precisely located pebble.

There's a simpler path: make the bits reflect only what you know, and leave the ambiguity in. If you can do that well and efficiently at the hardware level -- and that is by no means a given -- then when you work through a long chain of calculations, you find that you get good results both faster and at potentially far lower energy cost. Why? Because you are not, at every one of potentially millions of steps, taking ten steps forward and then nine (or worse) back. Such continual zig-zagging in precision space is a superb way to chuck huge percentages of total computational effort and device energy right out the entropic window.

With their TensorFlow approach, Google and Jouppi are simply acknowledging this rather grotesque inefficiency and making a real effort to deal with it. That makes the work a lot more powerful than I think folks may realize at first glance.

And finally, many old-school slide rule users may find this theme surprisingly familiar. They know from hands-on experience that if you can arrange for a small number of low-precision calculations to be well matched to the inherent uncertainty of much of the physical world, you can quickly, and at very low computational cost, arrive at impressive, useful, and remarkably accurate results. The value of matching calculation effort to the level of real knowledge is even greater for emerging topics such as machine cognition, in which very few "facts" are known with 100% certainty.
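A toy sketch of the point about low precision (purely illustrative -- the vector values, the 8-bit width, and the scaling scheme are my own assumptions, not anything Google has described about the TPU): quantize two float vectors down to signed 8-bit integers, do the whole multiply-accumulate in integer arithmetic, and rescale once at the end. The answer lands close to the full-precision one, even though each operand carries a fraction of the bits.

```python
import random

def quantize(xs, bits=8):
    """Map floats in [-1, 1] to signed integers of the given bit width."""
    scale = (2 ** (bits - 1)) - 1          # 127 for 8 bits
    return [round(x * scale) for x in xs], scale

random.seed(0)
a = [random.uniform(-1, 1) for _ in range(1000)]
b = [random.uniform(-1, 1) for _ in range(1000)]

# Full-precision reference dot product.
exact = sum(x * y for x, y in zip(a, b))

# 8-bit version: integer multiply-accumulate, one rescale at the end.
qa, sa = quantize(a)
qb, sb = quantize(b)
approx = sum(x * y for x, y in zip(qa, qb)) / (sa * sb)

abs_err = abs(approx - exact)
print(f"float reference: {exact:.4f}  int8 result: {approx:.4f}  abs err: {abs_err:.4f}")
```

The per-term quantization errors are tiny and largely cancel over the sum, which is why inference workloads tolerate this so well -- and an 8-bit multiplier is far smaller and cheaper in silicon than a 32-bit floating-point unit.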

User Rank: Author
Re: We may need a new name for these chips
5/19/2016 7:43:52 PM
@rpcy Great observation. AI is really taking us into uncharted territory for processor designers. Fun!

User Rank: CEO
5/19/2016 7:05:36 PM
Want to see privacy concerns? Check out this new API: https://developers.google.com/awareness/overview#fences_and_snapshots

With something like that, every app will check on the user as much as it is allowed to. Twenty-five apps spying on the user and wasting resources won't be much fun. It's absurd to do it this way. An army of piranhas without a general -- what could go wrong?

User Rank: Author
We may need a new name for these chips
5/19/2016 1:18:12 PM
When Eckert, Mauchly and von Neumann worked out how a machine could perform a series of operations on data, based on fetching and executing a sequence of instructions, it was a breakthrough that changed the world. If we reserve the word "computing" to mean that sequence of instruction executions, then we need a new name when talking about machinery that operates in some other way.

I don't yet know how Norm's TPUs work, but if they are neuro-inspired, perhaps they don't fetch and execute instructions from a stored program. Finding ways to perform the equivalent of computations without using the stored-program paradigm was also the subject of Dan Hammerstrom's work at DARPA over the past four years.

Time for someone to coin a new term. Norm?

User Rank: Author
5/19/2016 11:21:26 AM
Now I wonder how Norm might feel about his chip being used to speed the inferences that build a deep contextual understanding of users.

Google CEO Sundar Pichai couched it as a great assistant, the helpful agent the industry has been pursuing for years. But as a good NPR piece pointed out this morning, it also raises concerns in Europe about an expanding monopoly. Privacy concerns, too.