PARIS — At a time when “Deep Learning” isn’t just hot but approaching the hype cycle’s boiling point, nobody should be surprised at the emergence of another deep-learning vision-processing startup.
This one is called ThinCI (pronounced “Think-Eye”), founded by Dinakar Munagala, an accomplished engineer/architect with an Intel pedigree.
What is surprising about ThinCI (El Dorado Hills, Calif.), however, is its well-heeled, big-name backers with credible technological expertise, and a unique “massively parallel architecture” that Munagala describes as an engine “purposely built for vision processing and deep learning.”
Munagala promises that his patent-pending chip architecture can bring “two orders of magnitude improvements in performance” compared to other deep learning/vision processing solutions.
ThinCI, after operating in a garage on a shoestring budget for six years, is emerging from stealth mode this week. It recently snagged two big automotive Tier Ones as institutional investors and secured a roster of who’s who in the tech industry as private investors.
The two tier ones signed up are DENSO International America, Inc., and Magna International Inc.
Private investors include: Dado Banatao, chairman of ThinCI’s board of directors and managing partner of Tallwood Venture Capital; Dadi Perlmutter, former executive vice president and general manager of Intel Corp.’s Architecture Group; Jürgen Hambrecht, chairman of the Supervisory Board of BASF SE and member of the Supervisory Board of Daimler AG; and several others of similar stature.
Perlmutter, asked why he invested in ThinCI, told EE Times, “Through all my career I significantly appreciated simplicity and flexibility. I always preferred approaches that went away from brute force, and looked at the bottlenecks of a new computing problem, and found ways to eliminate the bottlenecks by finding new approaches. ThinCI has done just that.”
While other solutions are limited to moving data in and out to feed a big, hungry computing engine, Perlmutter described ThinCI computing as “tailored to Deep Learning graph analysis.” He said it “eliminates by a huge factor unnecessary access to memory.”
The end result? “This results not only in speeding up the computation but reducing cost and power,” he added.
Munagala told EE Times that he quit Intel six years ago with ambitions to develop a new chip architecture that can meet the needs of next-generation technologies, such as deep learning.
ThinCI, however, has yet to disclose details of its processor architecture. The company describes it as “a revolutionary graph streaming processor.”
Munagala explained to EE Times that this is “a massively parallel architecture designed to process multiple compute nodes of a task graph at the same time.”
Deep learning, in essence, is based on a set of algorithms that try to model high-level abstractions in data by using a deep graph with many processing layers, composed of multiple linear and non-linear transformations.
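That stack of linear and non-linear transformations can be sketched in a few lines. The weights and layer shapes below are illustrative placeholders only, not anything ThinCI has disclosed:

```python
# A toy "deep graph": each layer applies a linear transformation
# (y = Wx + b) followed by a non-linear one (ReLU), composed in order.
# The weights are made up for illustration.

def linear(weights, bias, x):
    """Linear transformation: matrix-vector product plus bias."""
    return [sum(w * xi for w, xi in zip(row, x)) + b
            for row, b in zip(weights, bias)]

def relu(x):
    """A common non-linear transformation."""
    return [max(0.0, v) for v in x]

def deep_graph(x, layers):
    """Run input x through every (weights, bias) layer in sequence."""
    for weights, bias in layers:
        x = relu(linear(weights, bias, x))
    return x

layers = [
    ([[1.0, -1.0], [0.5, 0.5]], [0.0, 0.0]),   # layer 1
    ([[2.0, 0.0], [0.0, 2.0]], [-0.5, -0.5]),  # layer 2
]
out = deep_graph([1.0, 2.0], layers)  # a two-layer pass over one input
```

Real networks have many more layers and far wider ones, which is why how a chip moves data between them matters so much.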
What’s unique about ThinCI’s architecture appears to lie in how it handles a deep graph.
Instead of processing data sequentially through a deep graph, with multiple processing layers, “ThinCI’s architecture streams data through the entire graph using extreme parallelism,” explained Munagala.
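ThinCI has not published how its hardware does this, but the general difference between the two scheduling styles can be sketched. In the sequential style, each layer finishes over the whole input and writes a full intermediate buffer to memory; in the streaming style, each element flows through the entire graph, so intermediates never leave local state. The node functions and the write counter below are illustrative, not ThinCI’s:

```python
# Contrast of two scheduling styles over the same small task graph.
# "writes" counts values written to an intermediate memory buffer.

def layerwise(data, graph):
    """Sequential style: run each node over the whole input, storing the
    full intermediate result before the next node starts."""
    writes = 0
    for node in graph:
        data = [node(x) for x in data]
        writes += len(data)          # one full buffer per layer
    return data, writes

def streaming(data, graph):
    """Streaming style: push each element through the entire graph, so
    per-layer intermediates stay local and are never buffered."""
    out = []
    writes = 0
    for x in data:
        for node in graph:
            x = node(x)              # value stays in local state
        out.append(x)
        writes += 1                  # only the final result is stored
    return out, writes

graph = [lambda x: x * 2, lambda x: x + 1, lambda x: max(0, x)]
data = list(range(8))

a, layer_writes = layerwise(data, graph)   # 3 layers x 8 values buffered
b, stream_writes = streaming(data, graph)  # only 8 final values stored
```

Both styles compute identical results; the streaming one simply touches memory far less, which is the bottleneck Perlmutter pointed to.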
But that’s only half of the story.