BRISTOL, England -- Graphcore Ltd., a startup based here developing a machine learning processor, is not ready to make details of its hardware architecture public, but CEO Nigel Toon and CTO Simon Knowles did discuss with EE Times Europe some of their thinking on a variety of technology and business issues.
Toon said Graphcore has 40 employees and the $30 million raised in the recently announced Series A would be used to complete the first design and for some limited expansion.
"We could have taken more but this is sufficient to get product out," said Toon. "We will keep the engineering based here in Bristol but there is scope for some customer support and business development roles in Silicon Valley, Seattle and China," he added.
Toon acknowledged that one other major technology company, besides Samsung and Robert Bosch, contributed to the Series A funding. He said that company has chosen not to disclose its investment publicly.
With regard to its Intelligent Processor Unit (IPU) Knowles commented: "We will release our technology in the second half of 2017. It is a brand new, from-scratch design."
Much of the team had previously worked with Knowles at Element 14, designing chips for wireline communications, and at Icera, designing for wireless. Now the team is doing the same for machine learning.
What Graphcore has said about the IPU – on its website – is that it will include massively parallel, low-precision floating-point compute and a much higher compute density than other solutions. The IPU will hold the complete machine learning model inside the processor and offer 100x the memory bandwidth of other solutions.
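Graphcore has not published its number formats, but "low-precision floating point" in machine learning typically means 16-bit (half-precision) arithmetic, which halves memory traffic per value at the cost of a shorter mantissa. A minimal NumPy sketch of that trade-off, purely illustrative and not based on the IPU's actual design:

```python
import numpy as np

# Illustrative only: Graphcore has not disclosed the IPU's number formats.
# fp16 has a 10-bit mantissa (machine epsilon ~0.000977), so small
# increments that fp32 can represent are rounded away at half precision.

x32 = np.float32(1.0) + np.float32(1e-4)  # increment survives in fp32
x16 = np.float16(1.0) + np.float16(1e-4)  # increment is lost in fp16

print(x32)                                # slightly greater than 1.0
print(x16)                                # exactly 1.0

# The pay-off: each value costs half the storage and memory bandwidth.
print(np.dtype(np.float16).itemsize, "bytes vs",
      np.dtype(np.float32).itemsize, "bytes")
```

Training algorithms tolerate this loss of precision surprisingly well, which is why low-precision compute lets a given silicon and memory budget deliver far more useful throughput.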
This will be backed up by an IPU-Appliance, intended to increase the performance of both training and inference by between 10x and 100x compared to contemporary systems, and an IPU-Accelerator, a PCIe card designed to plug into a conventional server to accelerate machine learning applications.
Knowles said: "It will be a very large chip. We have not taped out, and because it is a large chip we cannot really benefit from doing test circuits on shuttle runs. Fortunately, we [the team] have a long-standing relationship with TSMC and a very good track record of getting it right first time."
Knowles said that the design is being aimed at a 16nm FinFET process from TSMC. When asked whether that would be the 16FF+ or 16FFC (near-threshold voltage) process variants offered by TSMC, Knowles said: "TSMC offers several versions of 16nm FinFET," and indicated a final decision on which one has not been taken yet.