SANTA CLARA, Calif.--Cultural complacency and "uncomplaining"
engineers are stunting EDA tool investment and preventing IC
companies from keeping up with quickening design complexity, a
senior engineering manager at chip vendor Nvidia Corp. said Tuesday (Jan. 29).
"Engineers don't complain enough," said Jonah Alben, Nivida's senior vice president of engineering, said during a keynote address at the DesignCon conference here.
"Engineers like myself tend to ultimately figure out a way to live
with whatever environment they're put into," he said. "They don't
speak up when everyone else is speaking up in the company about what
we want to see in the next-generation product."
That, combined with "cultural complacency"--struggling to navigate new
daily challenges and manage priorities--usually prompts companies to
tamp down investment in EDA tools while under-staffing their methodology teams.
(Photo: Nvidia engineering VP Jonah Alben after his DesignCon keynote.)
"Despite the value of EDA, in general companies tend to under-invest
in it, and I'll put my company into that bucket," Alben said.
This is happening at a time when there's a widening gap between the
growth rate of chip complexity and the ability of tools to, for
example, simulate designs efficiently, Alben said.
"The problem is we're in this multicore era of CPU. It's good for
throughput...but for logic simulators...it's not been great era for
them in terms of their intrinsic speed of simulation," he said.
Alben cited the example of a 2008 CPU design that today would be
four times as big and complex, while simulation runs would take
four times as long. In the past, an engineer might have run two
hours' worth of simulation tests to find bugs.
"On any given day, you can find a bug, fix a bug. Now they might be
looking at eight hours to get feedback," he said. "This is
definitely a significant problem we're looking at, and it's only
going to get worse as we move forward."
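To make Alben's throughput-versus-latency point concrete, consider a regression farm: extra cores let a team run more independent tests at once, but each single-threaded simulation run takes just as long as before, so the find-a-bug, fix-a-bug loop stays slow. A minimal sketch in Python, assuming a hypothetical "vsim" batch simulator and invented test names:

    import subprocess
    from concurrent.futures import ProcessPoolExecutor

    # Invented test names; "vsim" stands in for any batch-mode simulator.
    TESTS = ["fetch_smoke", "cache_stress", "pcie_link", "power_seq"]

    def run_test(name):
        # Each call is one single-threaded simulator process: more cores
        # do not make this individual run finish any sooner.
        proc = subprocess.run(["vsim", "-batch", f"tests/{name}.sv"])
        return name, proc.returncode

    if __name__ == "__main__":
        # Eight workers raise regression throughput; per-test latency
        # (Alben's two-hours-to-eight-hours problem) is unchanged.
        with ProcessPoolExecutor(max_workers=8) as pool:
            for name, rc in pool.map(run_test, TESTS):
                print(name, "PASS" if rc == 0 else "FAIL")

More workers shrink the regression's wall-clock time, but the eight-hour individual run Alben describes is untouched.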
IP is such a perishable item: the 17 years granted by the original patent system is far too long nowadays; I can't think of many 17-year-old technologies in the high-tech area that still stand on their own.
Today, the speed and excellence of execution matter more than proprietary IP, whether trade secret or patented.
For Verilog tinkerers, putting your design in the cloud is not an issue when the code is already hosted on GitHub. What matters is access to the underlying IT infrastructure and to simulation and verification tools at the lowest cost possible.
For IC companies that care a great deal about running locally, it is straightforward to deploy any SaaS such as Fortylines on a private cloud. We ship OpenStack-ready VMs in that case.
There is a trade-off between cost/ease-of-use/security that is different for everyone. Building EDA tools as cloud services enables the flexibility required to adapt to every situation.
I chuckled at the notion of "methodology staffing." In the IC development teams I have worked on, methodology improvements are usually tested, proven, and adopted during the course of product development -- and they are sometimes at odds with what the official corporate CAD empire-mandated methodology is supposed to be.
Rarely does one size fit all designs, but it is the nature of corporate CAD empires to roll out a one-size-fits-all methodology (actually two -- one for digital and one for AMS) for all designers to adopt.
A major issue in this love-hate relationship is that chip designers are competent EDA users but not necessarily EDA experts, while methodology teams are made of EDA experts who aren't (usually) chip designers.
Thus, whenever a new tool, script or methodology enhancement is introduced -- especially if it is mandated -- a designer's response is sometimes "tell me again how many tapeouts those guys have done? Yeah, that's what I thought!"
Verification and QA are crucial steps to make a good product. I am glad to hear it.
There is no doubt the complexity of multicore makes the verification task a lot harder and the runtimes much longer. Tools become critical to keeping the "production line" flowing. I'm very interested in learning what steps Synopsys is taking to help the effort.
It is possible for a couple of people to put together a web application with sign-up, payment processing, etc. in a weekend.
There has not been this kind of leverage and productivity gain coming out of the EDA and IC industries, but it is coming.
You just have to look at Upverter (http://upverter.com/) and (shameless self-promotion) Fortylines (http://fortylines.com).
He is right. SPICE and HDL simulators are still single-threaded. EDA companies tend to buy the main components and add things onto them; they do not tend to innovate at the basic tool level.
Another issue is interoperability: it would be natural to link SPICE and HDL simulators in a single multitasking run, but nobody will implement it.
Possibly the revenue model works against making the tools run faster, since the big players would sell fewer seats.
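The single-run linkage this comment asks for can at least be prototyped at the process level: run the analog and digital simulators side by side and exchange boundary values every timestep. A rough sketch, where "hdl_sim" and "spice_sim" are hypothetical command-line wrappers standing in for real tools (production integrations would go through VPI/DPI or vendor APIs):

    import subprocess

    # Hypothetical wrappers: each reads one line of boundary values on
    # stdin, advances one timestep, and prints the resulting values.
    hdl = subprocess.Popen(["hdl_sim", "--server"], stdin=subprocess.PIPE,
                           stdout=subprocess.PIPE, text=True)
    spice = subprocess.Popen(["spice_sim", "--server"], stdin=subprocess.PIPE,
                             stdout=subprocess.PIPE, text=True)

    def step(proc, values):
        # Lock-step exchange: send this side's inputs, read its outputs.
        proc.stdin.write(values + "\n")
        proc.stdin.flush()
        return proc.stdout.readline().strip()

    digital_drive = "0"
    for t in range(1000):
        analog_node = step(spice, digital_drive)  # SPICE solves one step
        digital_drive = step(hdl, analog_node)    # HDL sim reacts to it
    hdl.terminate()
    spice.terminate()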
1) Engineers complain plenty in my experience, but I guess company culture has an effect on that.
2) They also quite like to fix issues themselves using scripting and the like, because they get a fix sooner, and some EDA companies are quite slow to respond or just don't care unless big bucks are attached.
3) Simulation comes and goes as an issue. In the '90s it was a big bottleneck; then cycle-based sims, formal equivalence checking, and STA came in and made it less relevant. Now chips have grown enough to take up the slack again.
4) Is simulation the best way to answer the question the designer is actually asking? It was not the best way to check logic timing (STA is better), and it was not the best way to check that synthesis had worked (equivalence checking is better). So do some of these sims have a better alternative, e.g. protocol checking, assertion checking, or formal methods based on the intent of the design?
5) Simulation is tough to break across processor cores, although this is easier with shared memory than it was with separate chips. Often the solution here is to simulate at a much lower level, but that requires a finer granularity of models to check against, and that may require a change in the way designers and architects work. Nvidia used to (in the '90s) run sims against C models, but the C models were quite high level; perhaps they need to split these down a bit to speed stuff up? (A toy sketch of that idea follows below.)
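On point 5, splitting one high-level C model into per-block reference models is what lets the checking spread across cores: each block's comparison is independent of the others. A toy sketch of that decomposition, with invented block names and stand-in model functions:

    from concurrent.futures import ProcessPoolExecutor

    def ref_model(block, stimulus):
        # Stand-in for a per-block C-model equivalent; finer-grained
        # than one chip-level golden model.
        return (stimulus * 3) & 0xFFFF

    def rtl_result(block, stimulus):
        # Stand-in for retrieving the RTL simulation's output for a block.
        return (stimulus * 3) & 0xFFFF

    def check(job):
        block, stimulus = job
        return rtl_result(block, stimulus) == ref_model(block, stimulus)

    if __name__ == "__main__":
        # Per-block jobs are independent, so they spread across all cores.
        jobs = [(b, s) for b in ("fetch", "decode", "alu") for s in range(5000)]
        with ProcessPoolExecutor() as pool:
            ok = all(pool.map(check, jobs, chunksize=512))
        print("all per-block checks passed" if ok else "mismatch found")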