Sounds similar to what we have done with VHDL's OSVVM methodology (see http://osvvm.org/). OSVVM is an intelligent testbench methodology that focuses on functional coverage modeling. It also allows mixing coverage-driven randomization (randomization is built directly into the coverage model to do this), constrained random, directed, algorithmic, and file-based test methods, and it can be adopted incrementally without throwing out your current VHDL testbenches.
OSVVM consists of an open-source set of VHDL packages that use protected types (VHDL-2002) and runs on any current VHDL simulator that supports them (I have used it on both Mentor's and Aldec's simulators).
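The core loop is short. A rough sketch of the coverage-driven randomization idea, using OSVVM's CoveragePkg as I recall it (do_transfer is a made-up stand-in for a real transaction procedure, and the burst-length bins are invented for illustration):

    library osvvm;
    use osvvm.CoveragePkg.all;

    entity cov_demo is end entity;

    architecture tb of cov_demo is
      -- stand-in for a real bus-functional procedure driving the DUT
      procedure do_transfer(len : in integer) is
      begin
        report "transfer, burst length " & integer'image(len);
      end procedure;
    begin
      stim : process
        variable cov : CovPType;      -- the coverage model (a protected type)
        variable len : integer;
      begin
        cov.AddBins(GenBin(1, 8));    -- one bin per burst length, 1 to 8
        while not cov.IsCovered loop
          len := cov.RandCovPoint;    -- randomize across the remaining holes
          do_transfer(len);
          cov.ICover(len);            -- record what was actually exercised
        end loop;
        cov.WriteBin;                 -- print the coverage report
        wait;
      end process;
    end architecture;

Because the randomization draws from the coverage holes, the loop converges instead of re-hitting bins that are already full.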
Does Breker support VHDL?
"Constrained Random" (CR) is just a nice way of saying we are going to test the corner cases we can think of and see if we can get code coverage on the rest with some random stuff.
Most logic designs should be verifiable with formal methods (and assertions are essential to that); CR is a fallback. The benefits of CR would be more pronounced if the underlying DUT simulation actually included all the effects of manufacturing, like silicon variability and analog effects (thermal, power management, crosstalk, etc.). The SystemVerilog committees have studiously avoided adding support for that, so the benefits of CR are being somewhat oversold.
I still believe a great opportunity was missed with SystemVerilog. It's basically been a huge distraction; it's yet another slightly better procedural language when we already had two competent OO languages in C++ & Java.
What's needed is a radically different approach. We need to put our efforts into declarative requirements languages which are divorced (orthogonal, if you prefer) from the procedural implementation, and develop tool suites that parse the requirements to generate the procedural tests.
In a sense that's what PSL is about, however all the current formal languages suffer from a major, unresolved shortcoming - we don't currently have a way to know if we have enough 'rules' to fully describe a behavior. Put another way, we don't know if the boundary we've described for the operating envelope is continuous, or if it has holes.
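For example, take a couple of PSL properties, written here as comment directives over a VHDL design (tool-dependent; req, ack and clk are hypothetical signals):

    -- psl default clock is rising_edge(clk);
    -- psl assert always (req -> eventually! ack);  -- every request is acknowledged
    -- psl assert never (ack and not req);          -- no unsolicited acknowledges

Both properties can hold while the design still misbehaves on back-to-back requests: nothing above constrains that case, and no tool today can tell us the rule set has that hole.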
That's where we ought to have put our efforts instead of allowing ourselves to be distracted by small productivity gains when we really needed a quantum leap.
Assertion checking has had a problem with tool support and methodology maturity. Especially given how cryptic assertion syntax can be, good support for checking the assertions themselves is needed.
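As a point of reference, about the simplest useful assertion is a single-cycle concurrent assert in plain VHDL (full and wr_en being hypothetical FIFO signals); anything temporal, in SVA or PSL, gets far more cryptic than this, which is where checking the checkers really matters:

    check_no_overflow : assert not (full = '1' and wr_en = '1')
      report "write attempted while FIFO full"
      severity error;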
In general, more effort is being thrown into verification at RTL to make up for weaknesses earlier in the design flow.
you had me at:
"Enough of the sideshows – it’s time for some real advancement in functional verification!"
and sealed the deal with:
"It is ... totally bizarre when it is an inherently bad technology that gets adopted, along with requiring a new language, coverage model and methodology. But that is indeed what happened with constrained random test pattern generation."
I'm sure you'll have people chiming in on this, but my additional take is that the quality of the code we write is what's driving the need for increasingly sophisticated/elaborate tools (i.e. intelligent testbench tools don't make much sense when your design and testbench code is ill-equipped and/or garbage). "Trying harder" isn't going to help (I've received that advice before), so we need new ways to write code.
A technique we can start using is test-driven development: catch the bugs as they are written instead of creating a bunch of bugs and then finding/fixing them later. Write 10-20 lines of code at a time, test them to make sure they're right, then move on. For software folks TDD is old hat; in hardware, TDD would be a necessary innovation.
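As a sketch of what hardware TDD could look like in plain VHDL, here is a made-up gray_encode function alongside its checks; in true TDD the assertions are written first and the function only afterwards:

    library ieee;
    use ieee.std_logic_1164.all;

    entity gray_encode_tb is end entity;

    architecture tdd of gray_encode_tb is
      -- unit under test: an invented example; in TDD this gets written
      -- only after the checks below exist and fail
      function gray_encode(b : std_logic_vector) return std_logic_vector is
        variable v : std_logic_vector(b'length - 1 downto 0) := b;
      begin
        return v xor ('0' & v(v'high downto 1));  -- g = b xor (b srl 1)
      end function;
    begin
      process
      begin
        -- the tests: small, written first, rerun on every change
        assert gray_encode("000") = "000" report "gray(0) wrong" severity failure;
        assert gray_encode("001") = "001" report "gray(1) wrong" severity failure;
        assert gray_encode("010") = "011" report "gray(2) wrong" severity failure;
        assert gray_encode("100") = "110" report "gray(4) wrong" severity failure;
        report "gray_encode: all checks pass" severity note;
        wait;
      end process;
    end architecture;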