There are more ways to improve productivity in the verification process than simply making the simulation run faster. One of them is to cut down on the time engineers spend working hands-on with the testbench itself, preparing it and coding Specman/e tests for it. Conventionally, the engineer's job is to stock the testbench with these tests and rerun regressions to measure their effect, grinding toward coverage closure. But are we bound to this slog, or is there a more productive way?
Coverage-Driven Distribution (CDD), an advancement developed by Infineon, goes beyond metric-driven verification (MDV) to solve this problem by “closing the loop.” It takes a process in which an engineer has to stop the flow repeatedly to meddle in the testbench and removes that step entirely, via an algorithm that automatically places Specman/e tests in the testbench and runs them. MDV is already a deterministic, scalable, repeatable, and manageable process; with the CDD add-on, it becomes even more automated.
Now, instead of running a batch of e tests with Specman/Xcelium and hoping for the best, an engineer spends their time analyzing the coverage report and determining what needs to be tweaked for the best overall coverage. This adds a bit of time at the start, but it removes far more time from the body of the verification process.
In short, this tool automates the analysis, construction, and execution parts of the verification process. An engineer defines the functional coverage model for a given testbench, and the testbench then fills itself with tests and runs them.
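As a rough illustration, that up-front coverage definition in e might look like the sketch below. The unit, field, and type names (packet_monitor, cur_pkt, packet_kind_t) are illustrative, not taken from the paper: the point is simply an event to sample on plus a cover group listing the items and crosses that CDD will later try to fill.

<'
-- Minimal sketch of a coverage model the engineer defines up front:
-- a collection event and a cover group over the items of interest.
extend packet_monitor {
    event pkt_done;  -- sampling (collection) event

    cover pkt_done is {
        item len  : uint (bits: 11) = cur_pkt.len;   -- packet length
        item kind : packet_kind_t   = cur_pkt.kind;  -- packet type
        cross len, kind;                             -- combined coverage
    };
};
'>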
The full paper describing this tool from Infineon—which won the “Best Paper Award” in the IP and SoC track at CDNLive—is available here.
Work Flow with CDD
The algorithm used by CDD works in four steps:
1. Read coverage information from previous runs. This helps the algorithm “learn” faster and saves engineer time between loop iterations.
2. Detect coverage holes using that information; this results in a re-ordering of events.
3. Build a database of sequences that set coverage items to values inside those holes and then trigger the collection events (see the sketch after this list).
4. Use the database to apply stimuli to the DUT.
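To make step 3 concrete, here is a hedged sketch of the kind of e sequence a database entry could correspond to. It assumes a sequence packet_sequence and an item struct packet have been declared elsewhere; the names and the concrete values stand in for whatever the hole analysis selects.

<'
-- Sketch: a generated sequence whose constraints steer the driven item into
-- a known coverage hole; the monitor's sampling event then collects the hit.
extend MAIN packet_sequence {
    !pkt : packet;

    body() @driver.clock is only {
        do pkt keeping {
            .len  == 70;     -- value chosen by CDD to land in an empty bucket
            .kind == SHORT;  -- constraint derived from the hole description
        };
    };
};
'>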
Figure A: The graphic below shows the work flow under the algorithm.
Results
As it turns out, this tool has proven effective: it has already been used successfully in real projects. That means that, assuming everything is configured correctly, a test can drive the correct sequence, using the correct constraints, by itself. It can then set the coverage items to the right values and trigger the collection events, increasing coverage.
All of that is automatic.
There are some costs. A verification engineer still has to feed inputs to the algorithm to train it, specifically information about when to trigger a collection event and how to set the value of each item in a given coverage group. The biggest draw is that the major downside of randomization, the uncertainty, is no longer an issue: coverage can be planned via the CDD algorithm without giving up the advantages of randomization.
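The paper does not spell out the input format, but conceptually the training information per cover group boils down to something like the following hypothetical e records. The struct and field names (cdd_item_rule, cdd_group_rule, and their fields) are invented for illustration and do not describe the tool's actual interface.

<'
-- Hypothetical sketch of the per-cover-group inputs an engineer supplies:
-- which event samples the group, and how each item's value can be set.
struct cdd_item_rule {
    item_name  : string;  -- coverage item to target, e.g. "len"
    value_expr : string;  -- how to constrain or derive its value
};

struct cdd_group_rule {
    group_name   : string;                -- e.g. "packet_monitor.pkt_done"
    sample_event : string;                -- when to trigger the collection event
    item_rules   : list of cdd_item_rule;
};
'>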
In total, in exchange for a bit more work in building the database and Specman/e sequences, you gain faster coverage closure, earlier identification of coverage groups not included in a given test generation, and a better understanding of the DUT itself.
Looking forward, this technology may expand to derive more of the database information, such as sequences, constraints, and timings, from previous runs rather than just coverage information, which would further reduce the time an engineer spends manually interacting with the testbench. Beyond that, an additional machine-learning algorithm that uses the coverage model and “previous experience” may be able to create and drive meaningful stimuli to patch the remaining coverage holes, shrinking hands-on engineer time even further.