
Randomizing Error Locations in a 2D Array

A design team at one of my customers was starting out with Specman for the first time, having previously dabbled in SystemVerilog. I can't reveal any details of their design, but suffice it to say they had a fun and not-so-simple challenge for me, the outcome of which I can share. Unlike some customers (and EDA vendors) who think it's a good test for a solver to do sudoku or the N-Queens puzzle (see this TeamSpecman blog post: http://www.cadence.com/Community/blogs/fv/archive/2011/08/18/if-only-gauss-had-intelligen-in-1850.aspx), this team wanted to know whether IntelliGen could solve a tough real-world problem...

The data handled by their DUT comes in as a 2D array of data bytes that has been processed by a front-end block. The data in the array can contain multiple errors, some of which will have been marked as "known errors" by the front-end. Other "unknown" errors may also be present, but provided the total number of errors is less than the number of FEC bytes, all the errors can and must be repaired by the DUT. If too many errors are present, it is not even possible to detect them, so the testbench must generate the errors carefully to avoid meaningless stimulus. It also needs to differentiate between marked and unmarked errors so that the DUT's corrections can be checked, and coverage collected on the number of each type of error.

This puzzle is rather more complex than N-Queens: multiple errors are permitted in any single row or column of the array, and each position has three possible states: no error, marked, or unmarked. There is also an arithmetic relationship between the error kinds: twice as many marked errors as unmarked ones can be corrected. Furthermore, unlike N-Queens, a test writer may wish to add further constraints, such as clustering all the errors into one row, fixing the exact number of errors, or allowing only one kind of error.

First we define an enumerated type to model the error kind:
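
A minimal sketch of such a type in e might look like this (the names here are illustrative, not the customer's):

    type error_kind_t: [NO_ERROR, KNOWN, UNMARKED];  -- KNOWN = marked by the front-end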

By modelling the 2D array twice, once as complete rows and once as complete columns, we can apply constraints to a row or column individually, as well as to the entire array. We only look at whether to inject an error, not what the erroneous data should be (this would be the second stage). I've only shown the row-based model here, but the column-based one is identical bar the naming.
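
A minimal sketch of the row model in e, assuming the field names described below (col, num_known, num_unmarked, effective_errors):

    struct row_s {
        -- one element per column position along this row
        col: list of error_kind_t;

        -- how many errors of each kind this row contains
        num_known:    uint;
        num_unmarked: uint;
        keep num_known    == col.count(it == KNOWN);
        keep num_unmarked == col.count(it == UNMARKED);

        -- a known error costs one FEC byte to repair, an unmarked one
        -- costs two, so twice as many known errors can be corrected
        effective_errors: uint;
        keep effective_errors == num_known + 2 * num_unmarked;
    };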

The row_s represents one row from the 2D array, with each element of "col" representing one column along that row. The constraints on num_known and num_unmarked limit how many errors will be present. These are later connected to the column-based model in the parent struct.

The effective_errors field and its constraints model the relationship between the known and unmarked errors, whereby twice as many known errors as unmarked errors can be corrected.

Next we define the parent struct which links the row and column models to form a complete description of the problem. Here "cols" and "rows" are the two sub-models, and the other fields provide the top-down constraint linkage.
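
Sketched under the same assumptions (col_s is the column-based twin of row_s, holding a "row" list instead of a "col" list), the parent struct might look like:

    struct error_map_s {
        -- basic dimensions, set by the base environment
        num_rows:      uint;
        num_cols:      uint;
        num_fec_bytes: uint;

        -- the two views of the same 2D array
        rows: list of row_s;
        cols: list of col_s;

        -- array-wide totals, used for test constraints and coverage
        total_known:     uint;
        total_unmarked:  uint;
        total_effective: uint;
    };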

The intent is that the basic dimensions are set within the base environment, and the remaining controls are used for test writing.

Next, we look at the constraints which connect the row and column models together. The first things to do are to set the dimensions of the arrays based on the packet dimensions, and to cross-link the row and column models; these are structural aspects that cannot be changed. The rest of the constraints tie together the number of errors in each row, in each column, and in the entire array. By using bi-directional constraints, we allow the test writer to constrain any of these aspects.
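
Continuing the sketch, the linking constraints could be written along these lines; the per-cell equality is what makes the two views describe the same array:

    extend error_map_s {
        -- structural: fix the dimensions of both views
        keep rows.size() == num_rows;
        keep cols.size() == num_cols;
        keep for each (r) in rows { r.col.size() == num_cols; };
        keep for each (c) in cols { c.row.size() == num_rows; };

        -- cross-link: position [ri][ci] must agree in both views
        keep for each (r) using index (ri) in rows {
            for each (cell) using index (ci) in r.col {
                cell == cols[ci].row[ri];
            };
        };

        -- bi-directional: tie row, column and array-wide error counts
        keep total_known     == rows.sum(it.num_known);
        keep total_unmarked  == rows.sum(it.num_unmarked);
        keep total_effective == rows.sum(it.effective_errors);
    };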

And that's it! With just that small amount of information IntelliGen can generate meaningful distributions of errors in a controlled way. Test writers can further refine the generated error maps with simple constraints that are actually quite readable:
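
For example, a test along these lines (my reconstruction, not the customer's code) clusters all the errors into a single row while keeping the packet close to, but within, the correction budget:

    extend error_map_s {
        -- all errors in at most one row
        keep rows.count(it.effective_errors > 0) <= 1;

        -- use most of the FEC budget, but stay correctable
        keep packet_mostly_correctable is all of {
            total_effective >= num_fec_bytes / 2;
            total_effective <= num_fec_bytes;
        };
    };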

 

Notice another little trick here: the use of a named constraint, "packet_mostly_correctable". This allows a test writer to later extend error_map_s and disable or replace this constraint by name, which is far easier than figuring out the "reset_soft()" semantics and a whole lot more readable.
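
For instance, a later test could replace it by name, again assuming the fields sketched above:

    extend error_map_s {
        -- override: generate packets exactly at the correction limit
        keep packet_mostly_correctable is only all of {
            total_effective == num_fec_bytes;
        };
    };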

Note that for best results, this problem should be run using Specman 13.10 or later due to various improvements in the IntelliGen solver.

Steve Hobbs

Cadence Design Systems 

