In order to draw conclusions about why a random test failed we must retest the
very same test on older revisions. This means rerunning the failing
test, using the same seed, on older revisions in order to identify when
the problem first arose. Only then can we compare the test results on
older revisions with the test results on the latest revision. Once you
have rerun the same test on an old revision, you can make the same
comparison as you would with a directed test. If the same test with the
same seed passes on an older revision, then you know a regression has
occurred. If the same test and seed have always failed, then you know
this is a new test. The new test may in turn either be catching a new
corner case, or it may be an illegal test. Either way, you can now
distinguish between new tests and regressions in quality.
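The classification described above can be sketched in a few lines. This is a minimal illustration, not PinDown's actual implementation: `revisions` is assumed to be ordered newest to oldest, and `run_test` is a hypothetical hook that checks out a revision and runs the named test with the given seed.

```python
# Sketch: classify a random-test failure as a regression or a new test by
# rerunning the same test, with the same seed, on progressively older
# revisions. All names here are illustrative assumptions.

def classify_failure(test_name, seed, revisions, run_test):
    """Return ("regression", culprit_revision) if the test passed on some
    older revision, or ("new test", None) if it has always failed."""
    last_failing = revisions[0]  # the latest revision, where the test failed
    for rev in revisions[1:]:
        if run_test(test_name, seed, rev):
            # The test passes here, so the failure was introduced by the
            # oldest revision we saw it fail on.
            return ("regression", last_failing)
        last_failing = rev
    # The test failed on every revision we have: it is new coverage.
    return ("new test", None)


# Example with a fake run_test: the seeded test passes only on r1 and r2,
# so the failure was introduced by r3.
revisions = ["r5", "r4", "r3", "r2", "r1"]
def run_test(name, seed, rev):
    return rev in {"r2", "r1"}

print(classify_failure("rand_smoke", 42, revisions, run_test))
# -> ("regression", "r3")
```

The walk is linear here for clarity; over a long revision history the same idea works with a binary search, as in `git bisect`.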
Figure 2 Random testing with PinDown
Retesting through older revisions used to be a manual process, consuming
expensive engineering time, but it has now been automated in PinDown,
the automatic debug tool. PinDown can automatically debug any test
failure, both random and directed, down to the exact revision that
caused the failure and send the developers who caused the failure a bug
report before the night's regression has even finished.
Figure 2 shows how PinDown operates on the flow of random test failures.
The stream of random failures is split into regressions and new tests.
The regressions are diagnosed down to the exact revision that caused the
problem, and a bug report is sent to the person who committed the error.
This allows regression errors to be fixed quickly and thus allows the
device and testbench to maintain high quality.
The other category is new tests, i.e. tests that have always failed and are
consequently covering a new test scenario. These tests are not failing due
to a sudden regression in quality, which might otherwise lead to panic and
to holding the release; they represent new test coverage, which is positive
news overall.
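The triage flow just described can be sketched as follows. The helpers are assumptions for illustration, not PinDown's API: `classify` returns the verdict for one seeded test, `committer_of` maps a culprit revision to its author, and `notify` delivers a bug report.

```python
# Sketch of the triage flow: split failures into regressions (reported to
# the committer of the culprit revision) and new tests (recorded as new
# coverage). All helper names are illustrative assumptions.

def triage(failures, classify, committer_of, notify):
    new_coverage = []
    for test_name, seed in failures:
        verdict, culprit = classify(test_name, seed)
        if verdict == "regression":
            # Regression: file a bug report against the culprit's author.
            notify(committer_of(culprit),
                   f"{test_name} (seed {seed}) broken by revision {culprit}")
        else:
            # Always failed: new coverage rather than a quality drop.
            new_coverage.append((test_name, seed))
    return new_coverage
```

A regression thus reaches the responsible developer directly, while a test that has always failed is kept out of the bug-report stream.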
This setup solves the problem of using random tests in regression testing.
It allows you to keep running random tests, with the upside of getting
good coverage and without the downside of not being able to identify
regressions.