MADISON, Wis. — ThinCI is among many startups that have sprung up in the past 18 months claiming breakthroughs in the realm of deep-learning processors.
The El Dorado Hills, Calif.-based company, with a sizable team in India, appears confident that it can stand head and shoulders above its competitors. It is in the midst of taping out its first processor, and its partnerships and business models are well advanced.
ThinCI (pronounced “Think-Eye”) will unveil details of its high-performance Graph Streaming Processor (GSP), billed as a “next-generation computing architecture,” next week at Hot Chips.
Meanwhile, Denso, a large Japanese tier one and a key investor in ThinCI, last week revealed that it has established a new subsidiary to design and develop semiconductor IP cores for key components necessary in automated driving. The architecture of a new chip is being jointly developed with ThinCI, Denso announced during a press conference in Japan. Denso calls it a Data Flow Processor (DFP), describing it as “very different from CPU or GPU.”
EE Times last week caught up with Dinakar Munagala, ThinCI’s CEO, to discuss his company’s latest developments and his view of the automotive and non-automotive markets. Here’s an excerpt of our conversation.
EE Times: It’s been almost a year since EE Times talked to you. Where do you stand today with your chip development?
Munagala: We’re in the middle of taping out. Our first processor will come out by the end of Q3. Our chip architecture has been benchmarked through multiple engagements by automotive and non-automotive companies, and it has been well received.
EE Times: Who’s your lead customer?
Munagala: It’s an automotive company, which we can’t name. We also have a few select customers in other market segments.
EE Times: That lead customer you are referring to… is that Denso? Denso said its new subsidiary, called NSITEXE Inc., will license semiconductor IP cores optimized for in-vehicle applications to SoC manufacturers.
Munagala: Denso is our investor and partner. We have a development effort going on with Denso. However, that’s different from what we are doing here at ThinCI. I was referring to our own customers.
EE Times: So, are you saying that ThinCI won’t be doing the IP business?
Munagala: That’s right, we’re not in the IP business. A lot of startups start with an IP business model because they can’t sell their own chips. But that’s not the case with us.
EE Times: So, what will you be selling?
Munagala: We’re initially focused on selling modules featuring our chips, such as accelerator cards. Just as Nvidia has succeeded in selling PCIe-based GPU cards into servers and data centers, we will launch PCIe accelerator boards. The module makes it easy for our select customers to see the efficiency of our processor.
EE Times: Does that mean you are going after the data-center market?
Munagala: We can deploy our processor to both the cloud and the edge, but our focus is on edge computing. Our processor can be used as a co-processor to speed up data processing in the cloud, but it is ideally employed much closer to where data is being generated — right next to an ISP (image signal processor), for example.
EE Times: So you are saying that both ThinCI’s technology and business models are scalable — from chips to modules, deep learning training to inference. And the processor’s applications are not limited to automotive…
Munagala: Yes. We think our processor can be used everywhere, from sensors, vision, and cameras to microphones (speech), smart factories, smart retail, and semantic data analysis, because it can speed up computation while reducing cost and power.
EE Times: Can you explain the architecture of your processor?