PORTLAND, Ore.—The first automated software-to-chip dream came out of the closet Monday (April 23), when Algotochip Corp. (Sunnyvale, Calif.) claimed to be able to produce a system-on-chip (SoC) design from a C-code specification in just eight to 16 weeks.
"We can move your designs from algorithms to chips in as little as eight weeks," said Satish Padmanabhan CTO and founder of Algotochip, whose EDA tool directly implements digital chips from C-algorithms. "Our solution provides the appropriate RTL generated from C-ocde for SoC.
Algotochip said its technology, announced at the Globalpress Electronics Summit 2012 in Santa Cruz, Calif., generates all aspects of a solution, including the software, firmware and hardware, from the designer's C-code and test stimulus vectors. Padmanabhan, former co-founder and chief architect at ZSP (acquired by Verisilicon in 2006), where he created the first superscalar DSP, recruited software experts from Apple and elsewhere two years ago, and the company today unveiled its proprietary engine, which accepts a C-code file as input and outputs a Graphic Database System II (GDSII) file suitable for creating an SoC.
Algotochip said it had proven its methodology at a half dozen customers so far, but disclosed only one: MimoOn GmbH (Duisburg, Germany), whose mimoOn mi!, a mobile PHY for LTE, has already been successfully fabricated by TSMC. Two "what if" designs were produced in 12 weeks for MimoOn, one for TSMC's 40-nanometer process and a second for its 90-nanometer process, Algotochip said. The latter was chosen for the final SoC due to its lower power consumption, the company said.
Algotochip starts with the designer's C-code (left), then generates an application-specific programmable microcontroller (top) and digital signal processor (DSP), along with a memory management unit (MMU) and input/output, to implement an SoC (click on image to enlarge).
Algotochip has achieved this—the Holy Grail of SoC designers—by virtue of a suite of software tools that interprets a customer's C-code without the customer needing any knowledge of Algotochip's proprietary technology and tools. The resultant GDSII design, from which an EDA system can produce the file that goes to TSMC, and all of its intellectual property (IP) are owned completely by the customer, with no licenses required from Algotochip. If a designer wants to use a licensed core from ARM or another popular vendor, Algotochip can accommodate that on demand.
Algotochip's design flow first analyzes the designer's C-code and offers optimization suggestions for the C-code design, from which Algotochip generates a set of system specifications with options. Once the designer answers a questionnaire about their choices, Algotochip designs the base system architecture and produces a complete SoC design, including firmware and software, which it then delivers to the designer in eight to 16 weeks—from the day the C-code is delivered to Algotochip to the day the GDSII is delivered back to the designer.
Algotochip also claims its patented power-aware architecture tightly controls leakage power for long battery life in mobile designs, and that it will also work with other implementation technologies besides SoCs, including DSPs, ASICs, ASSPs and FPGAs.
All you C-programmers out there, unite! Now all you need to do is describe your application with a C program, plus a set of test stimulus vectors, and this startup can give you a foundry-ready SoC design in just a few weeks. Sound too good to be true? Yes, it does. However, the principals have the credentials to back up the boast, and a customer list that includes a mobile LTE physical layer chip that is already on the market.
C code in and GDSII out? So we are to believe that in addition to a front end tool that analyzes the C code and does hardware-software partitioning and architecture optimization, they also have a synthesis engine, a place & route engine and a static timing analysis engine? One tool that replaces the entire tool chain for IC design?
Something said in the article sounds suspicious: "The resultant GDSII design, from which an EDA system can produce the file that goes to TSMC..."
What "EDA system" and why? If this new tool outputs GDSII, then you're done -- unless the GDSII has DRC violations, or the underlying design has timing violations, etc. If the GDSII that is output by this tool isn't 100% ready for TSMC, they why do they bother generating it? A netlist that is ready for P&R would be a lot more useful than a GDSII file that is not quite ready for manufacturing.
A few things.
First off, this is not being sold as a tool. You send your C-algorithm along with some specifications (see below for an excerpt from their website), and then you are sent back a GDSII targeted to a particular foundry and process, 8-16 weeks later. This is more of an accelerated consulting agreement. I agree with Frank that having an RTL netlist as well would be more useful.
Second, the figure in this article makes it clear that there is a particular architecture with a customized DSP, microcontroller, and peripherals. It isn't clear what the level of customization here is. This could range from very sophisticated program analysis mixed with behavioral synthesis and HW/SW partitioning to a template architecture which is hand-tweaked for the given application.
Finally, without more information it's unclear what the real approach is and how legitimate it is. All of this said, very interesting.
--- Quoting their website at: http://www.algotochip.com/about.html
The following information is required from the customer:
C-Code for the algorithm (fully supports ANSI C; no changes required to the customer's C-Code)
Test-Vectors to check the C-Code
Desired Fabrication House and Process
Desired Standard Library and Memory Compilers
Real-Time performance constraints
Target Area and Power
Testability Features (scan, BIST, etc.)
Thanks for your interpretation, Dr Trevorkian. I didn't check their website, just read the article, and my skepticism was raised by several comments, including "suite of software tools that interprets a customer's C-code without their having any knowledge of Algotochip's proprietary technology and tools."
So I interpreted it the same way you did, as a sort of consulting agreement, with the added twist of "we have some proprietary tools that we can't let customers have access to, but our guys know how to run them."
I should think a potential customer would be more comfortable if they just said they have a customizable VLIW or whatever it is, and proprietary tools to match, and that they also have in-house implementation experts and licenses for the usual IC implementation tools -- Synopsys, Cadence, Mentor, etc. -- because nobody in their right mind is going to sign off on a tapeout of some GDSII generated by a proprietary tool, with STA, DRC, LVS, power analysis & IR drop analysis, etc. also performed by a proprietary tool.
If they really have achieved what they claim—taking a C algo to GDSII in a few weeks, with power/area comparable to a traditional RTL design—then that's really awesome. However, many companies have made similar claims in the past and failed pathetically in the market, which makes me a bit skeptical.
This sounds like a quick & dirty "disposable" mask data substitute for a traditional design house for those who don't have a vested interest in mask data maintenance and re-use. Unless these magical super-secret tools are eventually released into the wild, I don't see this approach getting very popular. Although it sounds impressive in theory, I think the real world will get in the way of their success. The design process is fundamentally iterative in nature, and anything designed using a linear approach is almost guaranteed to come up short.
Having managed ASIC designs for several years I doubt whether this new design technology can replace what tens or hundreds of designers per project are currently doing...sounds too good to be true...maybe for simple designs with far from optimum implementation...but if I am wrong these guys will swallow Cadence, Synopsys and Mentor in one scoop ;-)...Kris
A lot of software companies have tried to lure us into believing that they have the secret to replacing the known flow from marketing to GDSII. Some started only at RTL-to-GDSII, like Monterey; some proposed solutions to go from schematic to layout with circuit optimization, like Barcelona Design… Were Barcelona's results DRC- and LVS-clean? Monterey disappeared; Barcelona became Sabio Labs and is now part of the Magma/Synopsys analog suite (hopefully)… Good ideas don't necessarily make good business models.
We have to read between the lines of all this marketing hype. They have to sell a product (or service), so they "promise" ALL the capabilities. Our job is to see how much of this technology is "sound", and whether we really have this specific type of design in our companies… Each of these new startups is trying to address a specific problem, and in some cases they do. The problem is that the marketing/sales people are advertising a solution that will solve everything and save the day, or a lot of money/resources.
DAC 2012 is coming—time to put such software (service) on the demo list. Let's see what the conclusions are in three months. Until then, there is no reason to argue; we don't really have any facts… One or two quotes from successful users (customers) could help; I have not seen any yet… If it is only a service, it could be mathematical models and technology, or just an army of low-paid people somewhere in the world… We need more info to make any judgment…
Looks like it would be useful for application-specific designs that need to move off FPGAs or existing microprocessors for volume cost reasons; however, it is likely to struggle if the clock speed is high. Consider generating a given waveform on a signal: if this is done by an RTL designer, his state machine can be coded in gates and run off a high-speed clock. For a microprocessor-based solution, it needs to be coded in instructions and tied to the instruction clock rate, which in turn is limited by the architecture of the microprocessor and all the other instructions that need to be implemented. There may be ways around this:
1) spot all such signals and build little RTL sections for each and then have them triggered by the microprocessor instructions
2) have a separate microprocessor for each set of signals and get the clock speed to match
But then it starts looking more custom than automated. Perhaps that explains the spread in times it takes to deliver a solution.
A long time ago I used a certain company's behavioural synthesis tool to try to implement a video scaler. The reps swore blind it was up to the task, but it then transpired that although it could quite happily model what I wanted to do, it could not reach the required performance, because the dominant delay was in the "next state" logic, and that was not part of what the tool could optimise for timing. In the current case, the next-state logic is exactly what the microprocessor is doing, it seems, so there are likely to be similar issues.
If what you want is just a faster, cheaper and lower power version of something you already do in C then that is ok, but I am sceptical it is an alternative for what people design chips in RTL for.
Interesting to read and learn about the process of making a SoC. Looks like 8 to 16 weeks is still a short time. I will keep an eye on the market to see if this is really a breakthrough. Though, I think like in most cases... it will work for some but not for others. We'll see.
They take C algorithms and test vectors as input. They did not say C program.
No doubt the C algorithms have to model the RTL design, so they take a design modeled in C as input to the EDA tool.
I am doing a similar thing with C# as a hobby project and it is not hard to do BUT the designer has to do the hardware design first.
The main advantage is that a SW IDE has much better debug/incremental compile capability than the HW EDA tools. I can step through the model and make changes at a breakpoint and continue with practically no delay as opposed to recompiling HDL and whatever else the tool needs to do.
So the hook is to let people assume the input is a C program, but in reality it is another HDL of sorts, used to generate HDL that EDA tools can use. The logic design is done first, by writing a restricted subset of C.
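A minimal sketch of what "C as another HDL of sorts" can look like (this is a hypothetical illustration, not Algotochip's actual modeling style): each hardware block becomes a struct of registers plus a clock function that computes the next state, mirroring an RTL always-block.

```c
#include <stdint.h>

/* Registers and inputs of a 4-bit counter, modeled as plain data.
 * The struct fields play the role of flip-flops and ports. */
typedef struct {
    uint8_t count;   /* register: value held in the low nibble */
    uint8_t enable;  /* input: count when nonzero              */
} counter_t;

/* One call == one rising clock edge. The body is the next-state
 * logic an HDL generator would translate into RTL. */
void counter_clock(counter_t *c) {
    if (c->enable)
        c->count = (c->count + 1) & 0xF;  /* wrap at 16, like 4-bit HW */
}
```

A model like this steps cleanly in an ordinary software debugger, breakpoints and all, which is exactly the incremental-compile advantage over HDL simulation described above.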
Unless their hardware architecture is heavily constrained beforehand, I can't see this being the case.... unless they've managed to solve an NP-hard problem with today's computer technology and the rest of humanity does not know it yet :-)
This has been debated now for 20 years. I don't see one of the primary tradeoffs discussed: per unit cost optimization (ie, die area) of this method versus current design flow. A bigger die *might* be acceptable for low cycletime, low volume products. And how many of those exist out there?
I recently saw a demo of a Xilinx tool called Vivado High-Level Synthesis which seems to work pretty much like Algotochip's tool, but only for FPGAs. You define your application in C-code, then pull down menu options for implementation strategies, choosing options like using a "FIFO", after which it creates RTL and gives you performance metrics. If the RTL does not meet spec, you pull down different options until it does. In the end, it sends a file over to the HDL tool that you can tweak to your heart's desire. This is probably the way Algotochip's internal tools work--only its engineers do the tweaking.