PORTLAND, Ore.—The first automated software-to-chip dream came true Monday (April 23), when Algotochip Corp. (Sunnyvale, Calif.) claimed it can produce a system-on-chip (SoC) design from a C-code specification in just eight to 16 weeks.
"We can move your designs from algorithms to chips in as little as eight weeks," said Satish Padmanabhan, CTO and founder of Algotochip, whose EDA tool directly implements digital chips from C-algorithms. "Our solution provides the appropriate RTL generated from C-code for the SoC."
Algotochip said its technology, announced at the Globalpress Electronics Summit 2012 in Santa Cruz, Calif., generates all aspects of a solution, including the software, firmware and hardware, from the designer's C-code and test stimulus vectors. Padmanabhan, former co-founder and chief architect at ZSP (acquired by VeriSilicon in 2006), where he created the first superscalar DSP, recruited software experts from Apple and elsewhere two years ago, and today the company unveiled its proprietary engine, which accepts a C-code file as input and outputs a Graphic Database System II (GDSII) file suitable for creating an SoC.
Algotochip said it has proven its methodology at a half-dozen customers so far, but disclosed only one: MimoOn GmbH (Duisburg, Germany), whose mimoOn mi! mobile PHY for LTE has already been successfully fabricated by TSMC. Two "what if" designs were produced in 12 weeks for MimoOn, one for TSMC's 40-nanometer process and a second for its 90-nanometer process, Algotochip said. The latter was chosen for the final SoC due to its lower power consumption, the company said.
Algotochip starts with the designer's C-code (left), then generates an application-specific programmable microcontroller (top) and digital signal processor (DSP), along with a memory management unit (MMU) and input/output blocks that together implement an SoC.
Algotochip has achieved this—the Holy Grail of SoC designers—by virtue of a suite of software tools that interprets a customer's C-code without the customer having any knowledge of Algotochip's proprietary technology and tools. The resultant GDSII design, from which an EDA system can produce the file that goes to TSMC, and all of its intellectual property (IP) are owned completely by the customer, with no licenses required from Algotochip. If a designer wants to use a licensed core from ARM or another popular vendor, Algotochip can also accommodate that on demand.
Algotochip's design flow first analyzes the designer's C-code and provides optimization suggestions for the C-code design, from which Algotochip generates a set of system specifications with options. Once the designer answers a questionnaire about their choices, Algotochip designs the base system architecture and produces a complete SoC design, including firmware and software, which it then delivers to the designer in eight to 16 weeks, counted from the day the C-code is delivered to Algotochip to the day the GDSII is delivered back to the designer.
Algotochip also claims its patented power-aware architecture tightly controls leakage power for long battery life in mobile designs, and that its flow also works with other implementation technologies besides SoCs, including DSPs, ASICs, ASSPs and FPGAs.
I recently saw a demo of a Xilinx tool called Vivado High-Level Synthesis, which seems to work pretty much like Algotochip's tool, but only for FPGAs. You define your application in C-code, then pull down menu options for implementation strategies, choosing options such as "use a FIFO," after which it creates RTL and gives you performance metrics. If the RTL does not meet spec, you pull down different options until it does. In the end, it sends a file over to the HDL tool that you can tweak to your heart's desire. This is probably the way Algotochip's internal tool works--only its engineers do the tweaking.
This has been debated now for 20 years. I don't see one of the primary tradeoffs discussed: per-unit cost optimization (i.e., die area) of this method versus the current design flow. A bigger die *might* be acceptable for low-cycle-time, low-volume products. And how many of those exist out there?
Unless their hardware architecture is heavily constrained beforehand, I can't see this being the case.... unless they've managed to solve an NP-hard problem with today's computer technology and the rest of humanity does not know it yet :-)
They take C algorithms and test vectors as input. They did not say C program.
No doubt the C algorithms have to model the RTL design, so they take a design modeled in C as input to the EDA tool.
I am doing a similar thing with C# as a hobby project and it is not hard to do BUT the designer has to do the hardware design first.
The main advantage is that a SW IDE has much better debug/incremental compile capability than the HW EDA tools. I can step through the model and make changes at a breakpoint and continue with practically no delay as opposed to recompiling HDL and whatever else the tool needs to do.
So the hook is to let people assume the input is a C program, but in reality it is another HDL of sorts that is used to generate HDL that EDA can use. The logic design is done first, by writing it in a restricted subset of C.
Interesting to read and learn about the process of making a SoC. Looks like 8 to 16 weeks is still a short time. I will keep an eye on the market to see if this is really a breakthrough. Though, I think like in most cases... it will work for some but not for others. We'll see.
Looks like it would be useful for application-specific designs that need to move off FPGAs or existing microprocessors for volume cost reasons; however, it is likely to struggle if the clock speed is high. Consider generating a given waveform on a signal: an RTL designer can code his state machine in gates and run it off a high-speed clock. A microprocessor-based solution must code it in instructions tied to the instruction clock rate, which in turn is limited by the architecture of the microprocessor and all the other instructions that need to be implemented. There may be ways around this:
1) spot all such signals and build little RTL sections for each and then have them triggered by the microprocessor instructions
2) have a separate microprocessor for each set of signals and get the clock speed to match
But then it starts looking more custom than automated. Perhaps that explains the spread in times it takes to deliver a solution.
A long time ago I used a certain company's behavioural synthesis tool to try to implement a video scaler. The reps swore blind it was up to the task, but it then transpired that although it could quite happily model what I wanted to do, it could not get the performance because the dominant delay was in the "next state" logic and that was not part of what the tool could optimise for timing. In this current case, the next state logic is exactly what the microprocessor is doing it seems, so there are likely to be similar issues.
If what you want is just a faster, cheaper and lower power version of something you already do in C then that is ok, but I am sceptical it is an alternative for what people design chips in RTL for.
A lot of software companies have tried to lure us into believing they had the secret to replace the known flow from marketing to GDSII. Some started only at RTL-to-GDSII, like Monterey; some proposed schematic-to-layout solutions with circuit optimization, like Barcelona Design… Were Barcelona's results DRC- and LVS-clean? Monterey disappeared; Barcelona became Sabio Labs and is now part of the Magma/Synopsys analog suite (hopefully)… Good ideas don't necessarily make good business models.
We have to read all this marketing hype between the lines. They have to sell a product (or service), so they "promise" ALL the capabilities. Our job is to see how much of this technology is "sound," and whether we really have this specific type of design in our companies… Each of these new startups is trying to address a specific problem, and in some cases they do. The problem is that the marketing/sales people advertise a solution that will solve everything and save the day, or a lot of money/resources.
DAC 2012 is coming; time to put such software (or service) on the demo list. Let's see what the conclusions will be in three months. Until then there is no reason to argue, since we don't really have any facts… One or two quotes from successful users (customers) would help, and I have not seen any yet… If it is only a service, it could be mathematical models and technology, or just an army of low-paid people somewhere in the world… We need more info to make any judgment…