Channel: Cadence Functional Verification

Improving Tests Efficiency Using Coverage Callback


When you go to the store, you walk until you get there, stop, get your groceries, and go back home. You do not circle the block for a few extra rounds. You do not say, “if I walk around the block really fast, I can save time”. Clearly, if you avoid circling the block in the first place, you will save even more time.

Why don’t we adopt the same rationale in the verification process? Instead of thinking only about how to run faster, let’s see if we can run less. If, for example, a test executes 10 million transactions in 10 hours, then instead of (or in addition to) executing these transactions faster, let’s try to understand whether we really need to run so many transactions. Perhaps the first 1 million got us to our goal.

You might say that with verification, unlike the shopping trip, there is no big “Grocery” sign telling you that you have arrived at your destination. True, there is no such sign, but there are ways to know that you reached your goal. What is the goal? With the Coverage Driven Verification methodology, we define the goal with coverage. Querying the coverage model during the run can reveal that “you reached your goal” sign.

In a recent blog, Specman Coverage Callback, we introduced the new Coverage Callback API, which allows querying the coverage model whenever a coverage group is sampled. That is, you have full visibility into where you stand with respect to your coverage goal, during the test. In this blog we give more details about the various ways you can use the runtime coverage information to improve test efficiency. The full code of the examples shown below is on GitHub, next to other e utilities and examples.

Analyze coverage progress throughout the test


The coverage report we view at the end of the run gives us the coverage group grades. It does not tell us how efficient our journey was – did we stop when we reached the goal, or did we keep circling around it?

Using the Coverage Callback API, you can get this information during the test. This means we know the relative contribution of different segments of the test to the coverage. A past blog, analyze-your-coverage-with-python-plot, showed how such information can be used to produce plots showing the coverage progress. The plot below was created in a test in which the coverage of data items (blue line) reached almost its maximum around the first third of the test. We can say that in terms of data coverage, the test was very efficient in the first 200 cycles, and not very efficient after that.

That blog used the old Coverage Scan API. Using the new Coverage Callback API, we can get the same reports with no performance penalty. Here is the code doing this:

struct cover_cb_send_to_plot like cover_sampling_callback {
  do_callback() is only {
    if is_currently_per_type() {
      var cur_grade : real = get_current_group_grade();
      // Pass the name and grade to the plotter
      sys.send_to_graph(get_current_cover_group().get_name(),
                        cur_grade);
    };
  };
};

Sending the data to the plot app adds a nice touch, but it is not a must. You can also write to a text file. Here are, for example, tables created during one test, showing the coverage progress of three coverage groups. Note that each grade is recorded when the relevant group is sampled, so not all samples are taken at the same time.

in_data_cov
  time:   5 |  111 |  123 |  450 |  838 | 1062 | 1069 | 7924 | 53735 | 55560 | 66194 | 1018680
  grade: 14 |   29 |   37 |   38 |   38 |   38 |   38 |   38 |    41 |    41 |    43 |      44

power_cov
  time:   0 |    0 |   89 | 30038 | 132124
  grade: 37 |   37 |   37 |    37 |     38

fsm
  time:   0 |    0 |    0 |    0 |    0 |    8 |   14 |   45 |  939 | 20479 | 31278 | 67907
  grade:  5 |    5 |    5 |    5 |    5 |    5 |   14 |   14 |   16 |    16 |    40 |    40
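Tables like the ones above can be produced by a small callback that writes a line to a text file on every sample. The following is a minimal sketch, not the utility itself: the query methods are the same ones used in the plot example, the file routines are e's predefined files package, and the struct name and file name are our own choices.

```e
struct cover_cb_log_to_file like cover_sampling_callback {
    // File handle; "!" marks it as non-generated
    !log_file : file;

    init() is also {
        // One log file per callback instance; the name is arbitrary
        log_file = files.open("cover_progress.txt", "w",
                              "coverage progress log");
    };

    do_callback() is only {
        if is_currently_per_type() {
            // Record the group name, sample time, and current grade
            files.write(log_file,
                append(get_current_cover_group().get_name(),
                       ": time ", sys.time,
                       " grade ", get_current_group_grade()));
        };
    };
};
```

Post-processing the resulting file into the time/grade tables shown above is then a matter of simple text manipulation.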

After analyzing the reports, based on what you learn from the behavior of past tests, you can decide how to continue. You might decide, for example, to improve tests that seem to run “full gas in neutral” by changing the constraints.

 

Make runtime decisions based on coverage progress


In the previous example, we talked about analyzing the coverage progress after the test (or the full regression) ends. But you can also make decisions at runtime. One such decision is to stop the test once you estimate that it no longer adds anything to the coverage: “if it didn’t reach any new area in the last two hours, most likely it will not get any better if it continues”.

The following code implements a cover_sampling_callback struct that compares the current coverage grade to the previous one. If the grade does not change for more than X samples, it emits an event. The user of this utility can decide what to do upon this event, for example, gracefully stop the run. Another decision might be to change something in the generation, hoping to exercise areas that were not exercised before.

The following is a snippet of the utility, which can be downloaded from cover_cb_report_progress.

 

struct cb_notify_no_progress like cover_sampling_callback {
  do_callback() is only {
    var cr_name    := get_current_cover_group().get_name();
    var group_info := items_of_interest.first(.group_name == cr_name);
    if group_info != NULL {       
      if group_info.last_grade == get_current_group_grade() {
        group_info.samples_with_no_change += 1;
      } else {
        group_info.last_grade = get_current_group_grade();
        group_info.samples_with_no_change = 0;
      };
      if group_info.samples_with_no_change >
         group_info.max_samples_wo_progress {
        message(LOW, "The cover group ", cr_name,
                " grade was not changed in the last ",
                group_info.samples_with_no_change, " samples " );
        // Notify of no progress
        emit no_cover_progress;
      };
    };
  };
};
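For completeness, here is a sketch of the supporting declarations the snippet relies on. The field and event names match those used in the snippet; the struct name and exact types are our assumption – the full definitions are in the downloadable utility linked above.

```e
// Per-group bookkeeping used by cb_notify_no_progress. The types
// here are a guess; see the downloadable utility for the real ones.
struct group_progress_info {
    group_name              : string;  // full cover group name
    !last_grade             : real;    // grade at the previous sample
    !samples_with_no_change : uint;    // consecutive samples with no change
    max_samples_wo_progress : uint;    // threshold for emitting the event
};

extend cb_notify_no_progress {
    // The groups we want to watch, and the event the user code waits on
    items_of_interest : list of group_progress_info;
    event no_cover_progress;
};
```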

// User code: when there is no progress, change the weight of
// illegal transactions.
// Note that the legal_weight field is static, so it can be written
// from one place, such as from the env unit. There is no need to
// access a specific instance of a transaction.
struct transaction {
    legal               : bool;
    // By default – most transactions are to be legal
    static legal_weight : uint = 90;
    keep soft legal == select {
        value(legal_weight)       : TRUE;
        100 - value(legal_weight) : FALSE;
    };
};

extend env {
  on cb.no_cover_progress {
    // Modify the static fields. From now on – all
    // transactions will have 50% probability to be illegal
    transaction::legal_weight = 50;
  };
};
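The other option mentioned above – gracefully stopping the test once a grade stalls – needs only a short handler. This is a minimal sketch: it relies on e's predefined stop_run() routine, and assumes (as in the example above) that a cb field holds the callback instance.

```e
extend env {
  on cb.no_cover_progress {
    message(LOW, "Coverage stopped progressing - ending the test");
    // stop_run() ends the run gracefully, so the normal
    // end-of-test flow (checks, coverage writing) still takes place
    stop_run();
  };
};
```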

 

We hope these examples spark your imagination. Stay tuned – the next blog will show more ideas for improving test efficiency using the Coverage Callback.

