Channel: Cadence Functional Verification

New Specman Coverage Engine (Part II) - Using Instance-based Coverage Options for Coverage Parameterization


In the last coverage blog, we showed how extensions of covergroups under when subtypes can help us write reusable per-instance coverage.

We described a test case where a packet generator unit can create packets of different sizes. The packet generator unit has a field that describes the maximum size of any packet that can be generated by the packet_generator instance:

type packet_size_t: [SMALL, MEDIUM, LARGE, HUGE];

unit packet_generator{
    max_packet_size: packet_size_t;
    event packet_generated;
    cur_packet: packet;
    generate_packet() is{
        gen cur_packet keeping {it.size.as_a(int) <= max_packet_size.as_a(int)};
        emit packet_generated;
    };
};

 

We defined a covergroup that is collected per each instance of packet_generator, to ensure that each packet generator creates packets of all relevant sizes:

 

extend packet_generator{
     cover packet_generated using per_unit_instance is{
        item p_size: packet_size_t = cur_packet.size;
     };
};

 

Then we refined the group's instances according to their actual subtypes, so that irrelevant packet sizes are ignored. This solution included setting a different fixed ignore condition for each subtype:

 

extend packet_generator{
    when SMALL'max_packet_size packet_generator{
        cover packet_generated is also{
            item p_size using also ignore = p_size.as_a(int) >
                                            packet_size_t'SMALL.as_a(int);
        };
    };
    when MEDIUM'max_packet_size packet_generator{
        cover packet_generated is also{
            item p_size using also ignore = p_size.as_a(int) >
                                            packet_size_t'MEDIUM.as_a(int);
        };
    };
    // ... Same for other max_packet_size values
};

 

However, if we take a close look at the extensions under the subtypes, we can identify a uniform pattern for all the extensions:

 

item p_size using also ignore = p_size.as_a(int) >
                                <max_packet_size field value of this subtype>

 

This pattern indicates that defining ignored values in a parameterized manner (that is, ignore all size values that are bigger than the value of the max_packet_size field of the instance) is more suitable here.

And as of Specman version 12.2, we have the appropriate syntax for doing exactly that:

 

extend packet_generator{
   cover packet_generated is also{
       item p_size using also instance_ignore = p_size.as_a(int) >
                                                inst.max_packet_size.as_a(int);
   };
};

 

The above code illustrates two new concepts: first, the use of the instance_ignore item option instead of the ignore option; and second, the use of a special field named "inst" in the instance_ignore option.

Parameterized Instance-Based Coverage Options

In previous versions, Specman had four coverage options that defined what would be included in the coverage model:

- no_collect group option – could be used to exclude groups / covergroup instances from the model.

- no_collect item option – could be used to exclude items from the model.

- ignore / illegal item options – could be used to exclude specific bucket (bin) values from the model.

 

In Specman 12.2, we added instance-based versions of these four coverage options:

- instance_no_collect group option – for selectively refining which instances of the covergroup will be disabled.

- instance_no_collect item option – for selectively refining from which group instances the item will be excluded.

- instance_ignore / instance_illegal item options – for selectively refining which item buckets will be ignored or illegal under each coverage instance.

 

When using these instance-based options, you can use a special field named ‘inst' to reference the relevant unit instance of each coverage instance, and get the values of the configuration fields of that instance.

Specman assigns to the ‘inst' field the relevant unit instance, and then computes the expressions separately for each coverage instance.

As the above description indicates, the four instance-based options can be used to apply different behaviors to different instances of the same covergroup. But if there is a need to apply a common behavior for all instances of the covergroup, then the original “type based” options are more appropriate. For example, use the no_collect item option, not the instance_no_collect option, to remove base items of a cross item from the model.
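To illustrate the parameterized style with one of the other new options, here is a sketch (not code from the original post) of an instance_no_collect refinement for the packet_generator example, assuming the option accepts an inst-based boolean expression in the same way instance_ignore does:

```e
extend packet_generator{
    // Sketch: disable collection of this covergroup for generator
    // instances that can only create SMALL packets
    cover packet_generated using also
        instance_no_collect = (inst.max_packet_size == SMALL);
};
```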

Team Specman


That Cowbell Must be Registered – Introducing the UVM SystemVerilog Register Layer Basics Video Series


In May of 2012 we launched the initial cowbell YouTube video series on the basics of UVM for SystemVerilog IEEE 1800 and e IEEE 1647.

This was followed by a video series on debugging with SimVision.

Then, we struck a different kind of cowbell by releasing a MOOC course on Functional Verification on Udacity.

Now it is definitely time for more cowbells.

One aspect that was not covered in the UVM Basics series was the register layer. In this new video series we are giving an overview of the concepts, components and applications of the UVM register layer.


 

The new video series is broken up into twelve clips:

  1. Introduction
  2. Testbench Integration
  3. Adapter
  4. Predictor & Auto Predict
  5. Register Model & Generation
  6. IP-XACT
  7. Register Model Classes
  8. Register API & Sequences
  9. Access Policies
  10. Frontdoor & Backdoor
  11. Predefined Sequences
  12. Demonstration

Go ahead and register your cowbells!

Axel Scherer

Incisive Product Expert Team
Twitter, @axelscherer

New Specman Coverage Engine (Part III)—Use of Extension Under "when" vs. Using Instance-Based Options


In both previous coverage blog posts (Part I and Part II), we showed two solutions for refining instance-based coverage in a reusable way. In doing so, we demonstrated a case where using the instance_ignore option is more suitable than using the extension under when solution.

Now, let us modify the requirement a little, by adding a new item to the covergroup:

extend packet_generator{
  cover packet_generated is also{
     item p_length: uint(bits:4) = cur_packet.length;
  };
};

 

The length of the packet depends on the value of the size field according to the following constraints:

extend packet{
    length: uint(bits:4);
    keep size == SMALL  => length in [0..2];
    keep size == MEDIUM => length in [3..6];
    keep size == LARGE  => length in [7..10];
    keep size == HUGE   => length in [11..15];
};

 

So again, for each packet_generator, some of the higher length values might be irrelevant due to the max_packet_size constraint.

We can set the ignored values using either of the following techniques:

  • Using the instance_ignore option:

extend packet_generator{
   cover packet_generated is also{
      item p_length using also instance_ignore =
            (((inst.max_packet_size == SMALL) and (p_length > 2)) or
             ((inst.max_packet_size == MEDIUM) and (p_length > 6)) or
             ((inst.max_packet_size == LARGE) and (p_length > 10)));
   };
};

 

  • Or by extending the covergroup under subtypes:

when SMALL'max_packet_size packet_generator{
   cover packet_generated is also{
      item p_length using also ignore = (p_length > 2);
   };
};

when MEDIUM'max_packet_size packet_generator{
   cover packet_generated is also{
      item p_length using also ignore = (p_length > 6);
   };
};

when LARGE'max_packet_size packet_generator{
   cover packet_generated is also{
      item p_length using also ignore = (p_length > 10);
   };
};

 

Here we recommend using the extension under when subtype code (the second bulleted option above), since the ignore expressions that need to be evaluated with this code are much simpler than the instance_ignore expression.

In some cases, only one of the solutions can be used:

  • A different setting of one of the other coverage options (for example weight) for each instance can only be achieved by extending the covergroup under when.

For example, if we want to have a larger weight for packet generators that can generate any size of packet, we need to add the following code:

when HUGE'max_packet_size packet_generator{
   cover packet_generated using also weight=2;
};

 

  • On the other hand, when a covergroup is collected under instances of a unit that is not the definition type of the covergroup (using the per_unit_instance=<other_type> group option), extension under the when subtype cannot be applied. In these cases, only the instance-based options can be used.

For example, suppose that instead of defining the covergroup under the packet_generator unit, we would have defined it under the packet struct (but still collect it per instances of packet_generator):

extend packet{
     cover packet_generated using per_unit_instance=packet_generator is{
        item p_size: packet_size_t = cur_packet.size;
     };
};

 

Now the covergroup can only be extended under the packet type, but we'd like to control the ignored values of its items according to a configuration field of the packet_generator unit. So extension under when will not help us here.

But since instance-based options have a reference to the collection unit type (packet_generator) instance via the inst field, they can be used in the same manner that they are used when the covergroup is collected per instances of its declaration unit type. 
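For instance, continuing the example above, the ignored values could be refined from the packet side like this (a sketch based on the code shown earlier):

```e
extend packet{
    cover packet_generated is also{
        // 'inst' refers to the packet_generator instance under which this
        // coverage instance is collected, not to the packet itself
        item p_size using also instance_ignore =
            p_size.as_a(int) > inst.max_packet_size.as_a(int);
    };
};
```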

Erez Bashi 

Configurable Specman Messaging Webinar Archive Available Now


Configurable Specman Messaging for Improved Productivity

Webinar Archive Available Now!

Hello Specmaniacs:


Ever wondered how to switch on all messages, or how to switch all of them off? Or get confused by the output from the "show message" command?

You're not alone. Many users and even Cadence R&D engineers have struggled with this. The main reason for the confusion is that messages are controlled by loggers, and loggers could be anywhere (apart from the sys logger). In 12.2 we have introduced a new infrastructure to configure messages, which is not based on loggers but on the unit hierarchy of your testbench.

If you missed the Configurable Messages Webinar delivered in July, here's another opportunity for you to view the archived webinar. Hannes Froehlich, a Solution Architect in the Cadence Functional Verification R&D team, presents how you can now control your messages based on the location in the verification hierarchy (unit tree) from which the messages were emitted. This has many benefits over the existing logger-based message infrastructure. In the webinar we highlight how the new infrastructure can be used, and how it fixes the issues we had with loggers.

So don't delay and view the archived webinar to:

  • Get a basic introduction to the new message configuration system in Specman/e
  • Understand how messages can now be configured based on the component hierarchy
  • Find out about the new command switches and options to configure messages
  • Learn about the new procedural message configuration APIs in Specman/e

View Now: http://www.cadence.com/cadence/events/pages/archive.aspx 

Team Specman

e Macro Debugging

When creating a testbench using the MDV methodology, you want to write intelligent code whose behavior can be easily modified.

Using e macros can greatly improve your productivity by raising the level of abstraction at which these testbenches are created and used. With e macros, you can reduce the amount of code and simplify usage of code that needs to be used in several places in the testbench.

e macros are powerful code generators whose key benefit is their ability to extend the e language.

What is called a "macro" in some other languages might be mere text replacement, such as replacing all occurrences of some text "A" with the text "B".

Macros in e can do this too, but they are capable of far more sophisticated things. These usages might be more complicated to debug, so Specman allows us to debug the generated code, instead of the macro definition code itself.

In this document, we are going to explore ways to debug macros in various stages of the simulation.

Let's consider the following test case:

 

Here we have a macro (define as) that simply creates a client object and adds it to a list of clients. (Note: The parentheses and quotation marks that enclose <x'string> prevent the preprocessor from considering all the parameters that come after the <x'string> declaration in the program code as part of the string.)
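The macro itself appears only as a screenshot in the original post. A hypothetical reconstruction (the macro name, match pattern, and helper are all invented for illustration) of a define-as macro that creates a client and fills its fields might look roughly like:

```e
// Hypothetical sketch of a define-as macro; not the original code
define <new_client'action> "NEW_CLIENT <x'string> <x'num>" as {
    var c: client = new;
    c.name = ("<x'string>");  // parentheses and quotes keep the remaining
                              // arguments out of the string parameter
    c.num  = <x'num>;
    add_client(c);            // add_client() stands in for the second macro
};
```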

The addition of client to the list is done via another macro: 

 

And the macro call is made here:

Parsing time errors

The Specman parser gives clear error messages for syntax or parser issues at parsing time. For example, as seen below, we assign <x'name> (instead of <x'num>) to ‘it.num', but we do not have any such argument in the match expression.

This results in the following error:

The error here is pretty straightforward to fix. So let us correct the macro code (change line #14 to "it.num  == <x'num>") and re-run it.

Macro expansion errors (at load time)

There could be cases where the macro parses correctly but encounters issues after it was expanded at load time. In such cases, the code is still not loaded.

We can use the "trace reparse" command to debug such issues. Let us again look at our example. The macro is modified a bit, as shown below:

 

Note: The code "it.name==<3>" at lines #9 and #13 parses well, but fails to load with an error that doesn't give much information (the message merely says that Specman expects a string for "i"). So let us use "trace reparse" (before the load phase). This re-parses the code and gives us the following helpful message, from which we can understand the root cause of the error.

 

 

Run-time errors

Run-time errors occur in the expanded code that was created by the macro. In some cases, you might not get errors per se, but might see unintended or incorrect functionality. This is just like any other bug in your testbench, except that the actual code is hidden under the macro definition.

Let's look at how this shows up in our example. Once we are done loading and start to run the test, we get the following output:

For some reason, we generate NULL clients all the time. Is that expected behavior? Ummm...it doesn't seem so. So let's check what went wrong.

Like any other e code, we will use the source debugger to find the root cause:

1. Put a breakpoint on the macro call and then step into the macro call itself, so that it breaks when the macro is called.

  

As seen above, a breakpoint is applied at line #37.

 

2. Run the simulation again after adding the breakpoint. It automatically opens the source browser at the breakpoint. If you step into the macro, the debugger will take you to the macro definition code:

 

3. Click on the macro debug mode button to select expansion mode. This expands the macro code into the real code Specman is running. Now you can keep clicking the ‘step into' button to see the flow of execution.

 

 

We can set a ‘watch' on x and x1 to see how they take their values. After setting the ‘watch', run for a few more steps, and the Watch window should show the following values.

 

 

This shows that the client ‘x' (NOT x1) was generated. Since x1 stays empty, the code keeps adding empty items to the list. This explains why we were getting NULL clients in the list.

Problem solved!

To summarize, macros in e are a very powerful tool. You need to know how to use them, and especially how to debug them. Having the correct tools makes this task much easier and intuitive, and prevents the frustration of debugging code you cannot even see.

Happy Debugging!

Mahesh Soni

Avi Farjoun

Generic Dynamic Run-Time Operations with e Reflection, Part 1


Untyped Values and Value Holders

The reflection API in e not only allows you to perform static queries about your code, but it also allows you to perform dynamic operations on your environment at run time. For instance, you can use reflection to examine or modify the value of a field, or even invoke a method, in a generic way. This means that if the specific field or method name is unknown a priori, but you have the reflection representation of the field or the method at hand, the reflection API provides you with the capability to perform the needed operation.

While this is a very powerful and helpful capability, it should be used with care to avoid unexpected results or even crashes. In this series of blogs, I will describe how to use some of these capabilities, as well as some tricky points that require caution.

In this first blog of the series, let's look at two important concepts with which you should be familiar: untyped values and value holders.

Untyped is a predefined pseudo-type in e, serving as a placeholder for a value of any type: a scalar, a struct, a list, or any other valid e type. To assign a value to a variable of type untyped, you use the predefined pseudo-method unsafe(). For example (assuming my_packet is a struct field of type packet):

var a1: untyped = 5.unsafe();
var a2: untyped = my_packet.unsafe();

In this example, we assigned the numeric value 5 into untyped variable a1, and the struct value into untyped variable a2. We also use unsafe() to assign an untyped value back to a variable or a field of the original type, for example:

my_packet = a2.unsafe();

However, it is important to remember that the untyped variable itself does not know the actual type of the value assigned to it via unsafe(). Therefore, when you convert a value to untyped, it is your responsibility to later convert it to the correct original type. Thus, you need to avoid mistakes like this:

my_packet = a1.unsafe();  // This is bad code!

Here we take the value of the untyped variable a1 and try to assign it to my_packet. However, the value assigned previously to a1 was a scalar, not an instance of packet. So, this operation is illegal. The code would compile fine, but at run time it would most likely crash.

In simple cases, to avoid such mistakes you just need to be careful. In more complex cases, you can use a value holder. A value holder is a special object in e, of the pre-defined type rf_value_holder, and it allows you to keep a value of any given type along with its type information. So, as opposed to untyped, here the original type of the value is known. There are several reflection methods that operate on value holders. To create a value holder, we use the create_holder() method of rf_type, for example:

var vh1: rf_value_holder = rf_manager.get_type_by_name("int").create_holder(5.unsafe());

Here we created a value holder that keeps the value 5 of type int. Note that since the create_holder() method itself can accept a value of any type, it treats it as untyped; that's why we had to use unsafe() here. But as long as you call it on the correct rf_type (in this case, the one that represents the int type), it is fine.

Later we can query the type of the value kept in the holder, using the get_type() method:

print vh1.get_type();

or the actual value, using the get_value() method:

var x: int;
if vh1.get_type() == rf_manager.get_type_by_name("int") then {
     x = vh1.get_value().unsafe();
};

An important tip:

In general, conversions from any type to untyped and vice versa must be done only with unsafe(). It is a common mistake to use the explicit casting operator as_a() instead. Doing so leads to unexpected results (and consequently to confusion) and must be avoided. For example, the following causes an unexpected result:

var x: int(bits: 64) = 5;
var a: untyped = x.unsafe();
x = a.as_a(int(bits: 64));  // wrong! use x = a.unsafe() instead

In upcoming Specman releases (starting with 13.2), using as_a() with untyped will be completely disallowed through a deprecation process.

In the next blog of this series we will look at some actual dynamic usages of the reflection API, based on untyped values and value holders, as well as some helpful tips.

 

Yuri Tsoglin

e Language team, Specman R&D 

Coverage Unreachability UNR App - Rapid Adoption Kit


The Cadence Incisive Enterprise Verifier (IEV) team recently developed a self-help training kit - a Rapid Adoption Kit (RAK) - to help users gain practical experience applying IEV's Coverage Unreachability (UNR) App. The RAK also helps users see the benefits of different approaches: the UNR flow with and without initialization. The "Coverage Unreachability UNR App" RAK is now available on Cadence Online Support.

 

 

Given an existing simulation environment, assertions are automatically generated from the code coverage holes and formal analysis is used to detect any unreachables. Unreachable code coverage is detected with each approach and results are compared between runs using IMC to locate and view the unreachables. You will also learn how to set up the simulation to collect code coverage and dump a minimal reset waveform for initializing the UNR proof.

 

The key objective is to familiarize the user with the flow, by running:

1. Simulation to generate the coverage database (and optional waveform for formal analysis initialization)

2. Formal analysis on the simulation code coverage to detect the unreachables and generate an unreachables database with two setups: basic uninitialized and initialized

3. IMC to merge the generated unreachables database into the original simulation database and load the merged database to view and accept the unreachables

 

 


http://support.cadence.com/raks -> SOC and IP level Functional Verification

Rapid Adoption Kit Name           | Overview | Application Note(s) | RAK Database
Coverage Unreachability (UNR) App | View     | Lab Instructions    | Download (2.6 MB)

We are also covering the following technologies through our RAKs at this moment:

Synthesis, Test and Verification flow
Encounter Digital Implementation (EDI) System and Sign-off Flow
Virtuoso Custom IC and Sign-off Flow
Silicon-Package-Board Design
Verification IP
SOC and IP level Functional Verification
System level verification and validation with Palladium XP

Please keep visiting http://support.cadence.com/raks to download your copy of the RAK.

We will continue to provide self-help content on Cadence Online Support, your 24/7 partner for getting help in resolving issues related to Cadence software or learning Cadence tools and technologies. If you are signed up for e-mail notifications, you've likely noticed new solutions, Application Notes (Technical Papers), Videos, Manuals, etc.

Note: To access the above documents, click a link and use your Cadence credentials to log on to the Cadence Online Support website, http://support.cadence.com.

 

Happy Learning!

Sumeet Aggarwal


Covering Edges (Part I) – Cool Automation


With random generation, most fields end up quite well covered. If a field is of a type with a wide space (e.g., a 32-bit address), then most likely not every one of its 2^32 possible values will be generated. As verification engineers, we know that bugs tend to hide at the edges. That is: what happens if a transfer is sent to the last address, 0xffffffff? The challenge for the verification environment is guaranteeing that these edge cases will be covered.

Making sure that edge cases are generated is easily achieved with the "select edges". For example:

extend transfer {
    // For ~half of the transfers, the address will be
    // 0 or 0xffffffff
    keep soft address == select {
        50 : edges;
        50 : others;
    };
};

 

This "select edges" is an old feature. What I want to show here is a small utility that answers the question "should I now go and define this select edges constraint on all fields?" That seems to be a very exhausting task...

For this, I suggest using the e reflection to locate fields of interest. For example, all fields whose range is larger than 0xffffff.

This piece of code searches for fields, defined in a given package, whose range is larger than the given parameter, num_of_vals:

var fields: list of rf_field;
var t: rf_type;

for each rf_struct in rf_manager.get_user_types() {
    for each (f) in it.get_declared_fields() {
        // Do not add constraints to fields that
        //    - are not generate-able
        //    - were defined in a package other than what was requested
        if f.is_ungenerated() or
           f.get_declaration_module().get_package().get_name() != package_name {
            continue;
        };

        t = f.get_type();
        if t is a rf_numeric (nu) {
            if ipow(2, nu.get_size_in_bits()) > num_of_vals {
                fields.add(f);
            };
        };
    }; // for each field
}; // for each struct

             

Once you have the list of fields of interest, you can do many things with it. For example, write into a file code similar to the"select edge" code shown above:

write_code(s: rf_struct, fs: list of rf_field) is {
    var my_file := files.open("cover_edges.e", "rw", "big fields");
    files.write(my_file, append("extend ", s.get_name(), " {"));

    for each in fs {
        files.write(my_file,
                    append("    // Field defined in ",
                           it.get_declaration_module().get_name(),
                           " @line ",
                           it.get_declaration_source_line_num()));
        files.write(my_file,
                    append("    // Field type is ",
                           it.get_type().get_name()));
        files.write(my_file,
                    append("    keep soft ", it.get_name(),
                           " == select {"));
        files.write(my_file, "        50 : edges;");
        files.write(my_file, "        50 : others;");
        files.write(my_file, "    };");
    };
    files.write(my_file, "};");
    files.close(my_file);
};

 

You could copy and modify the code above, using the reflection to find fields by many more criteria, e.g. all fields that have "address" in their names, all fields of specific types, anything your imagination might come up with...

If you have any questions, or, even better, any suggestions for cool extensions of this example, please do share.

 

Efrat Shneydor 


Test Your Units Before Your Units Test You — Testing Your Testbench


Bugs are a part of life in any complex software development project. This is no different in the testbench development world.

Most bugs get discovered eventually. The question is: At which stage of the game are they discovered, and at what price?

Let's explore the option of testing parts of your testbench early on: at the lowest level, you can leverage unit testing. This is an approach that has been successfully adopted in the general software development world. It consists of isolated, autonomous tests that target a very small piece of code in order to test a specific behavior. Often these tests are applied just to methods.

The next question is: What does it take to adapt unit testing to the testbench development effort? 

Fortunately we are in luck. You can learn about unit testing for testbench development in two upcoming venues.

  • On December 12, 2013, in our webinar "Testing the Testbench" (register for this webinar)

  • Doug Gibson of Hewlett-Packard will present an industrial application of this approach in session 9.3.

 

Happy (unit) testing,

Axel Scherer
Incisive Product Expert Team
Twitter, @axelscherer

Practical Guide to the UVM for $15 - Virginia, There is a Santa!


Wondering what to get the verification engineer on your list?  You know, the one with the zealous love of SystemVerilog and UVM? It's the Practical Guide to Adopting the UVM, Second Edition for only $15!

The Practical Guide to the UVM is the most popular source of knowledge for the UVM. The second edition, available since the beginning of 2013, has sold over 3500 copies. Authored by Kathleen Meade and Sharon Rosenberg, the book provides novice-to-expert knowledge on testbench methodology and how to apply the UVM to solve verification problems.

To get your deeply discounted version, visit our self-publishing company, LuLu.com.  You can search for the book there or follow this direct link.

Once you get it, be sure to get the examples.  Kathleen posted them on the UVMWorld forums at Accellera.org.  The downloads are free!

So grab your copy while it's at this new low price.  Come mid-January, the price will pop back to $60.

Wow, this sounds sooooo much like a late-night commercial. :-)

 

Happy  Holidays,

Your Cadence UVM team 

Generic Dynamic Run-Time Operations with e Reflection, Part 2


Field access and method invocations

In the previous blog, we explained what untyped variables and value holders are in e, and how to assign values to them and retrieve values from them. In this blog and the next ones, we will see how they can be used in conjunction with the reflection API to perform operations at run time.

Normally, when you declare fields in your e structs and units, you then procedurally assign values to those fields at some points and retrieve their values at others. When you declare a method, you call it with certain parameters and retrieve its return value for later use. All of this is fine when you deal with a specific field or method, and that is what you need most of the time.

But what if you want to perform some generic operation? For example, you may want, given any e object (of any struct or unit type, unknown up front), to go over all its numeric fields and print their values. Or you may want to traverse the whole unit tree and, on every unit whose type has a specific method (given by name), call that method and print its result.

The reflection API allows us to perform tasks like these fairly easily. Here are some reflection methods that are helpful for such tasks. Given an instance object, the following two methods allow you to get the reflection representation of the struct or unit type of the object.

  • rf_manager.get_struct_of_instance(instance: base_struct): rf_struct

This method returns the struct type of the object, disregarding when subtypes.

  • rf_manager.get_exact_subtype_of_instance(instance: base_struct): rf_struct

This method returns the most specific type, including when subtypes, of the object.

For example, for a red packet instance, get_struct_of_instance() will return the reflection representation of type packet, and get_exact_subtype_of_instance() will return the representation of type red packet.
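As a small sketch of the difference (assuming packet has a when determinant field, here called color, with a red value):

```e
extend sys {
    run() is also {
        var p: packet;
        gen p keeping { it.color == red };
        // prints the declared struct type, disregarding when subtypes
        print rf_manager.get_struct_of_instance(p).get_name();
        // prints the most specific when subtype, e.g. red packet
        print rf_manager.get_exact_subtype_of_instance(p).get_name();
    };
};
```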

The following methods of rf_field allow you, given an instance object of some struct, to set or get the value of a specific field of that object.

  • rf_field.set_value(instance: base_struct, value: rf_value_holder);
  • rf_field.set_value_unsafe(instance: base_struct, value: unsafe);
  • rf_field.get_value(instance: base_struct): rf_value_holder;
  • rf_field.get_value_unsafe(instance: base_struct): unsafe;

The set_value methods take the value passed as a parameter and assign it to the given field of the specified object. The get_value methods retrieve the value of the given field of the specified object and return it. There is a safe and an unsafe version of each method. The safe version uses a value holder, which already contains the type information for the value (as explained in the previous blog), performs additional checks, and throws a run-time error in case of an inconsistency (for example, if the field does not belong to the struct type of the given instance). The unsafe version (the one with the _unsafe suffix) does not use a value holder and does not perform such checks; in case of an inconsistency, its behavior is undefined and might even cause a crash. Thus, you need to use it with care. However, the unsafe version is more efficient, and I recommend using it when possible.
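For illustration, here is a minimal sketch of direct field access via reflection. It assumes a packet struct with an int field named size, and it also assumes that rf_struct provides a get_field() lookup by name, analogous to the get_method() lookup shown later in this post:

```e
extend sys {
    poke_size(p: packet) is {
        // Reflection representation of the object's struct type
        var s: rf_struct = rf_manager.get_struct_of_instance(p);
        // Look up the field by name (get_field() by name is assumed here,
        // analogous to get_method())
        var f: rf_field = s.get_field("size");
        // Unsafe write: no checks are performed, so "size" really must be
        // an int field of packet
        f.set_value_unsafe(p, 12);
        // Unsafe read back into an untyped variable
        var v: untyped = f.get_value_unsafe(p);
    };
};
```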

Similar to the above rf_field methods, the following methods of rf_method, given an instance object of some struct, allow you to invoke a specific method of that object or to start a TCM.

  • rf_method.invoke(instance: base_struct, params: list of rf_value_holder): rf_value_holder;
  • rf_method.invoke_unsafe(instance: base_struct, params: list of unsafe): unsafe;
  • rf_method.start_tcm(instance: base_struct, params: list of rf_value_holder);
  • rf_method.start_tcm_unsafe(instance: base_struct, params: list of unsafe);

The invoke methods call the given method on the specified object and return the value returned from that method. If the given method has parameters, they should be passed as a list in the second parameter; the list size must exactly match the number of parameters the method expects to get. Similarly, the start_tcm methods start the given TCM on the specified object. As with the rf_field methods above, the difference between the safe and unsafe versions of these methods is that the safe one uses value holders and performs additional run-time checks, while the unsafe version is more efficient.

The following short example demonstrates the usage of the above methods. The method gets an object of an unknown type (declared as any_struct) and a method name. It goes over all fields of the object whose type is int and calls the method by the given name, passing each field value as a parameter. For simplicity, we assume that a method by the given name indeed exists and has one parameter of type int.

extend sys {
    print_int_fields(obj: any_struct, meth_name: string) is {
        // Keep the reflection representation of the int type itself
        var int_type: rf_type = rf_manager.get_type_by_name("int");
        // Keep the struct type of the object
        var s: rf_struct = rf_manager.get_exact_subtype_of_instance(obj);
        // Keep the method which is to be called
        var m: rf_method = s.get_method(meth_name);
        // Go over fields of the struct
        foreach (f) in s.get_fields() do {
            // Is this field of type 'int'?
            if f.get_type() == int_type then {
                // Retrieve the field value ...
                var value: untyped = f.get_value_unsafe(obj);
                // ... and pass it to the method
                compute m.invoke_unsafe(obj, {value});
            };
        };
    };
};
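To see print_int_fields() in action, one could define a small struct and a matching callback method. The struct and method names below are illustrative, not from the original post:

```e
struct config_s {
    width: int;
    depth: int;
    name: string;
    // Candidate callback: one parameter of type int, as print_int_fields() expects
    show(val: int) is {
        outf("int field value: %d\n", val);
    };
};

extend sys {
    run() is also {
        var c: config_s = new;
        c.width = 8;
        c.depth = 16;
        // Prints the values of 'width' and 'depth'; 'name' is skipped (not an int)
        print_int_fields(c, "show");
    };
};
```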

 

In the next blog in the series, we will discuss some additional relevant reflection methods, give several tips, and look at some more interesting examples.

 

Yuri Tsoglin

e Language team, Specman R&D 

ADI Success Verifying SoC Reset Using X-Propagation Technology - Video

Analog Devices Inc. succeeded in both speeding up simulation and improving debug productivity for verifying SoC reset.  In November 2013 at CDNLive India they presented a paper detailing the new technology they applied to reset verification and eight bugs they found during the project.  We were able to catch up with Sri Ranganayakulu just after his presentation and captured this video explaining the key points in his paper.

Sri had an established process for verifying reset on his SoC.  The challenge he faced is one faced by many teams -- reset verification executed at gate level.  Why gate level?  It goes back to the IEEE 1364 Verilog Language Reference Manual (LRM).  At reset, the logic values in a design can be either 0 or 1, so a special state "X" was defined to capture this uncertainty.  The LRM defined how the logic gates in Verilog could resolve these X values to known values of 0 and 1 as they occur in the hardware.  Unfortunately, the LRM defined a different resolution of X values for RTL.  As a result, companies like ADI simulated at gate level to match the hardware definition. But with larger SoCs, the execution of those simulations became too long.  In addition, SoCs now have power-aware circuits that mimic reset functionality when they come out of power shutdown, increasing the number of reset simulations that have to occur.  A change was needed.

Incisive Enterprise Simulator provides the ability to override the RTL behavior to mimic the gate behavior, resulting in up to 5X faster reset simulation. That's the attraction of "X-prop" simulation. But that is not verification.  Verification requires the ability to plan and measure the reset sequences and to debug when issues are found.  Sri focused on the debug aspects of X-prop verification with debug tools in SimVision to distinguish X values that are real reset errors from those X values that were artificially propagated in RTL. As a result, Sri found eight bugs in two projects in a shorter time than with his previous approach.

In the Incisive 13.2 release, Cadence further improved this technology. The new release extends the language support for X-propagation and adds the ability to separate X values coming from power-down domains from the other two types in the previous paragraph.  In addition, the Superlinting Verification App in Incisive Enterprise Verifier now generates assertions that monitor for X values in simulation.  Since assertions also automatically create coverage, you now have an automated path to connect your reset verification to metric-driven verification (MDV) and your verification plan.

X-propagation in simulation is necessary to achieve performance for reset simulation. However, to get productivity for your reset verification, you need the automation from debug, verification apps, and enterprise planning and management.

Regards,

Adam Sherer 

Covering Edges (part II)—“Inverse Normal” Distribution

In the previous example, we used the "select edge" feature to generate edge values for fields. But in many cases, what you really want to generate is not the exact edge, but values near the edges. For example, for a field of type uint (bits: 24), generate many items whose values are in 0..4, and many in 0xfffff0..0xffffff. To achieve this, you can use what we might call an "inverse normal distribution" and give more weight to the edges.

Selecting "inverse normal" values can be done by selecting a normal distribution centered around each edge:

extend transfer {

    keep soft address == select {

        10 : normal(2, 4);

        10 : normal(0xFFFFFD, 4);

        90 : others;

    };

};
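The same pattern covers the uint (bits: 24) example from the beginning of this post. As a sketch, assuming the transfer struct does not already define a 24-bit data field (the field name is illustrative), a test could pull a share of the generated values toward both edges:

```e
extend transfer {
    data: uint (bits: 24);
    keep soft data == select {
        10 : normal(2, 4);         // cluster near the low edge, values around 0..4
        10 : normal(0xFFFFFD, 4);  // cluster near the high edge, around 0xfffff0..0xffffff
        80 : others;               // everything else
    };
};
```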

 

 

Efrat Shneydor 

Cadence and AMD Add New UVM Multi-Language Features

The UVM Multi-Language Open Architecture open-source library was recently updated with new features.  The hallmarks of this solution continue to be the ability to integrate verification components of multiple languages and methodologies at the testbench level, expanding beyond simple connectivity at the more limited data level, and the multi-vendor support.

Interestingly, multi-language is a bit of a misnomer – the critical part of the name is Open Architecture.  For sure, this industry has verification IP written in multiple standard languages – SystemVerilog, SystemC, and e – but that isn’t the whole story.  If language defined the verification component, then AVM, VMM, OVM, and UVM verification components would all interoperate without any modification or glue code because each one is written in the same language – SystemVerilog.  However, the code needed to be organized into libraries with generally accepted methodologies to create verification components that could be easily reused.  As a result, companies have created many well-verified components that need a lot of additional code to integrate into a coherent verification environment.  By coherent we mean an environment with organized phases, configuration, and control despite the different libraries.  When we add components from other languages, it's easy to see that simple data connections between the languages are quite necessary, but insufficient, to enable verification reuse.

The new UVM ML-OA 1.3 builds on the foundation established in June with the initial download posted on UVMWorld.  The important new feature is multi-language configuration.  With this new feature, users can configure integers, strings, and object values using the hierarchical paths established when the environment is constructed.  Wildcards are permitted but the interpretation is the responsibility of each integrated framework.   The release includes three new demos to help you become familiar with the new capability.  In addition, there are several ease-of-use enhancements aimed at making it easier to set up a multi-language environment and support for g++ 4.1 and 4.4.  The release notes and documentation in the 1.3 tarball have more details on the new features and how to use them.

UVM ML-OA goes beyond inter-language communication to provide the integration that allows verification components to work together in a coherent testbench.  The download is open source and known to run on all major simulators.

Cadence is also working with its partners to develop a portable UVM-SC adapter that will enable running SystemC verification environments with UVM-ML-OA using the SystemC support built into the simulator.  Cadence will test the adapter with the Incisive platform, and its partners will test it with the Mentor and Synopsys simulators.

So if you haven’t yet, come join the 2500 others who have downloaded UVM ML throughout its history and your verification reuse will be more productive.

 

=Adam Sherer, Incisive Product Manager

Incisive Verification: Top 10 Things I Learned While Browsing Cadence Online Support Recently
There is always a demand, in most corners of the world today, for learning and troubleshooting something simply and quickly. Most users of any product or tool want access to a self-service knowledge base so that they can go and troubleshoot the issue on their own. They do not really want to sit through a long training class and also pay money; rather, they are of the type who have the knack to figure things out on their own by taking a deep dive, head first.

In this quarterly blog, I will share what the teams across the Cadence Incisive verification platform have developed and shared on Cadence Online Support, http://support.cadence.com, in the last month of 2013 and the first month of 2014 to help verification and design engineers become comfortable and well versed with Cadence verification tools, technologies, and solutions.

Rapid Adoption Kits (RAKs) from Cadence help engineers learn foundational aspects of Cadence tools and design and verification methodologies using a "do-it-yourself" approach. Application notes (app notes), tutorials, and videos also aid in developing a deep understanding of the subject at hand.

Download your copies from http://support.cadence.com now and check them out for yourself. Please note that you will need Cadence customer credentials to log on to the Cadence Online Support http://support.cadence.com website.

1.     Reuse UVC for Acceleration - RAK

There are thousands of legacy UVCs, stable and reliable, developed over the last 15 years. It is ideal to reuse these environments when starting acceleration verification, rather than creating the whole verification environment from scratch.

This RAK provides a short overview of the process required for taking a UVC implemented in e and using it for verifying a DUT running on an acceleration machine, e.g., Palladium. It describes the steps that have to be taken to adapt the UVC to achieve the desired goal of acceleration verification - executing tests significantly faster than running with RTL.

Rapid Adoption Kit: UVM e : Reuse UVC for Acceleration (Overview: View | Application Note: View | RAK Database: Download, 0.4 MB)

 

2.     Acceleration Performance Boost - RAK

When employing acceleration verification, speed is a crucial aspect. The verification engineers strive to get supreme performance, while maintaining verification capabilities.

This RAK provides suggestions for advanced techniques for maximizing the performance of verification acceleration. It discusses the various interfaces between the simulator and the acceleration machine, and their effect on performance.

Rapid Adoption Kit: UVM e : Acceleration Performance Boost (Overview: View | Application Note: View | RAK Database: Download, 0.4 MB)

 

3.     Introduction to CPF Low-Power Simulation - RAK 

This RAK illustrates Incisive Enterprise Simulator support for the CPF power intent language. The RAK provides instructions on invoking a CPF simulation in Incisive Enterprise Simulator, and also provides an overview of SimVision debug capabilities and Tcl debug extensions. It also comes with a hands-on lab to examine CPF behavior in simulation.  

 

Rapid Adoption Kit: Introduction to CPF Low-Power Simulation (Overview: View | RAK Database: Download, 1.7 MB)

 

4.     Introduction to IEEE-1801 / UPF Low-Power Simulation  - RAK

This RAK illustrates Incisive Enterprise Simulator support for the IEEE 1801 / UPF power-intent language. In addition to an overview of Incisive Enterprise Simulator features, SimVision and Tcl debug features, a lab is provided to give you an opportunity to try these out.

 

Rapid Adoption Kit: Introduction to IEEE-1801 / UPF Low-Power Simulation (Overview: View | RAK Database: Download, 2.3 MB)

 

5.     Specman Simulator Interface Synchronization Debug Cookbook - App Note

This Specman Simulator Interface Synchronization Debug Cookbook is a guiding document for every engineer who wants to learn about Specman-simulator interface synchronization. It is a comprehensive document that includes a flowchart that can be used to map the problem and take the correct steps to resolve it. It also includes a detailed section for every possible problem and its solution. This cookbook is also very useful for power users who want to debug these kinds of issues independently.

6.     Loading Commands at Runtime for Verilog Tests - App Note

This app note on Loading Commands at Runtime for Verilog Tests illustrates how to convert directed Verilog tests into command files to enable a single compile flow, and shows the ability to use the save and restore feature of Incisive Enterprise Simulator.

The flow described in this note focuses on support for Verilog [IEEE 1800]. This app note shows you different approaches to optimize the execution and runtime of Verilog directed tests. It illustrates how to remove redundancy and how to run only portions of a test that are of interest. The suggestions in this app note can be adapted to your particular setup. An example testcase is included. 

7.     Incisive Enterprise Specman Elite Testbench Tutorial - Tutorial

The Incisive Enterprise Specman Elite Testbench Tutorial is also available online as a self-help resource.

The goal of the Specman tutorial is to give you first-hand experience in how the Specman system effectively addresses functional verification challenges. The tutorial uses the Specman system to create a verification environment for a simple CPU design.

8.      How to Detect Glitches in Simulation Using IES  - Video

The video "How to Detect Glitches in Simulation Using IES" discusses common causes of glitches in gate-level simulation. It also discusses techniques to detect and analyze glitches during simulation with Incisive Enterprise Simulator.

9.      Delay Modes Selection, and Their Impact in Netlist Simulation - Video

The video "Delay Modes Selection, and Their Impact in Netlist Simulation" discusses different delay modes in which netlist simulation can be done. It demonstrates different methods to select a delay mode and the impact of a selected delay mode on timings in simulation. 

10.  What's New in 13.2 Debug Analyzer and SimVision - Videos

Short demo videos are now available on the latest/greatest features of our 13.2 debug solutions.  You may want to review them yourself just as a refresher on the latest features of both SimVision and Incisive Debug Analyzer.

Both of these videos will be linked to in the "What's New in Debug" screen that is launched at SimVision/Debug Analyzer startup or accessible through the help menus.  

We will continue to provide self-help content on Cadence Online Support, your 24/7 partner for getting help in resolving issues related to Cadence software or learning Cadence tools and technologies. If you are signed up for e-mail notifications, you've likely noticed new solutions, app notes (technical papers), videos, manuals, etc.

Happy Learning!

Sumeet Aggarwal


e Language Editing with Emacs

Specman and e have been around for a while, and some clever people have developed a nice syntax highlighting package for Emacs. What does this package do? Well, have a look yourself:

 

Editing in Emacs with the Specman mode 

And

 

Editing in Emacs without the Specman mode 

As you can see, the Specman mode gives you syntax highlighting and automatic indentation; it detects comments and shows them in a different font or color if you like, adds end-comments (for example, after "};" you get a comment that tells you which struct/unit was edited), inserts a newline after a semicolon, and more...

The Specman mode for Emacs used to be available here (www.specman-mode.com), but unfortunately this site is no longer actively maintained. If you do need a more recent version (e.g., if you want to run with Emacs 24.x or later), please download it from the related Cadence forum post.

Once you've downloaded and unzipped it, you need to set up Emacs or Xemacs to load the mode when you start the editor. The mechanics to achieve that are slightly different in Emacs and Xemacs. For Emacs, edit the file <HOME>/.emacs and add the following:


;; indicate where the package is stored
(add-to-list 'load-path "~/xemacs/")
;; load the package
(load "specman-mode")
;; setup files ending in .e or .ecom to open in specman-mode
(add-to-list 'auto-mode-alist '("\\.e\\'" . specman-mode))
(add-to-list 'auto-mode-alist '("\\.ecom\\'" . specman-mode))

 Happy coding,

-Hannes Froehlich

Incisive vManager at DVCon - Come See It!

Have you heard the news?  There is a new version of vManager announced this week, just in time for DVCon.   vManager has been completely re-architected into a database-driven environment, scaling to multiple users and supporting gigascale designs.  And, with ever-growing verification requirements, there is now a need for highly coordinated verification teams.  With 100x more scalability and 2x greater verification productivity, the time is now to learn about the best verification management solution in the industry, vManager - the best just got better! 

The Incisive vManager solution is showcased on cadence.com, and there is a dedicated launch page you can visit for datasheets, whitepapers, videos, and more.  The direct link to that page is here - http://www.cadence.com/cadence/newsroom/features/Pages/vmanager.aspx?CMP=vManager_bb

And, for those of you going to DVCon this year (March 3 to March 6), you can see a live demonstration and speak to the experts about your verification challenges during the Exhibition hours. The DVCon Expo Hours are listed below:

     - Monday:  5:00 to 7:00PM
     - Tuesday:  2:30 to 6:00PM
     - Wednesday:  2:30 to 6:00PM

You can also sign up for a Metric Driven Verification (MDV) Tutorial on Thursday, which runs from 8:30 to Noon.  The abstract for the tutorial is located at the DVCon website (direct link here - http://dvcon.org/content/event-details?id=163-6-T ).  To get into the tutorial, you will need to register on the DVCon website.  A direct link to the DVCon registration options is here - http://dvcon.org/content/rates

The MDV Team at Cadence hopes to see you at DVCon 2014!

John Brennan
MDV Product Management Director

 

 

Resetting Your UVM SystemVerilog Environment in the Middle of a Test — Introducing the UVM Reset Package
In general, reset will be applied at different times within a test.

 

1.   Reset at the beginning of a test

In a typical UVM test you might start out by applying a reset, and then go on to configure your device, and subsequently, start traffic. The associated UVM environment, in particular its components, do not have to do anything special to support this type of test - Life is Good!

 

2.   Reset in the middle of a test

Now, let's change things and apply reset again, later on in the test, in order to determine that the device can transition in and out of the reset condition properly. In this case, your verification environment needs to contain additional infrastructure to support this type of test. Otherwise, for example, your test might produce invalid errors.

 

Reset-Aware Components

UVM components such as scoreboards, sequencers, drivers, monitors, and collectors need to handle an arbitrary occurrence of reset in a robust manner.  This means that you need to implement ways to gracefully terminate ongoing activity once reset is asserted, and restart activity properly after reset drops. In other words, you need a reset-aware UVM component implementation.

 

Reset Package

Cadence provides an approach for this that works with the standard UVM library and leverages the UVM run_phase. In the testbench you add a reset monitor that notifies a reset handler, which in turn calls the reset-aware components so that they terminate and restart activity when needed (as shown below). The key part of the package is the utility library used to implement the reset handler.

 

 

The UVM reset package includes examples and documentation that show how this works in detail and how to use it. Cadence has contributed the reset package to Accellera's UVM world community so you can go ahead and check it out, and use it.

 

http://forums.accellera.org/files/file/111-cadence-reset-example-and-package/

 

Real-World Usage

Courtney Schmitt of Analog Devices has adopted this package and will present her experience at DVCon 2014 in San Jose at the poster session (and the associated paper) on Tuesday, March 4, 2014.

1P.7    Resetting Anytime with the Cadence UVM Reset Package

 

Reset away!

 

Axel Scherer

 

 

New Incisive Verification App and Papers at DVCon by Marvell and TI

If you're an avid reader of Cadence press releases (and what self-respecting verification engineer isn't?), you will have noticed in our Incisive 13.2 platform announcement  back on January 13th that Incisive Formal technology, with our new Trident cooperating multi-core engine, took top billing. But you would have needed to be very diligent to have followed the link in the press release to the Top 10 Ways to Automate Verification document that explained some other aspects of the Incisive 13.2 Platform.  There, weighing in at number 6, was a short description of our latest verification app, for register map validation. Verification apps apply combinations of formal, simulation and metric-driven technologies to mainstream verification problems. This approach puts the focus on the verification problem to be solved, rather than the attributes of the technology used to solve it. The Incisive verification apps approach is defined by the following principles:

  • Supplement a well-documented methodology with dedicated tool capabilities focused on a high-value solution to a specific verification problem
  • Use the appropriate combination of formal, simulation, and metric-driven technologies, aimed at solving the given problem with the highest efficiency
  • Provide significant automation for creating the properties necessary to solve the given problem, reducing the need for deep formal expertise
  • Provide customized debug capabilities specific to the given problem, saving considerable time and effort


Verification App for Register Map Validation
The new Register Map Validation app generates properties automatically from an IP-XACT register specification. You can exhaustively check a multitude of common register use cases like value after reset, register access policies (RW, RO, WO), and write-read sequences with front-door and back-door access. All these sequences are shown in clear, easy-to-use debug views. Correct register map access and absence of corruption is difficult and time-consuming to check sufficiently in simulation.

The result is a reduction of verification set-up times and, combined with the Trident engine we mentioned before, a huge reduction in execution times, cutting register map validation from weeks to days or even hours. But don't take my word for it - come to DVCon next week and hear Abdul Elaydi of Marvell, who will be presenting "Leveraging Formal to Verify SoC Register Map", and Rajesh Kedia of TI, who will be presenting "Accelerated, High-Quality SoC Memory Map Verification Using Formal Techniques", both on Wednesday, March 5.

Pete Hardee

Randomizing Error Locations in a 2D Array

A design team at a customer of mine started out with Specman for the first time, having dabbled with a bit of SystemVerilog. I can't reveal any details of their design, but suffice to say they had a fun and not-so-simple challenge for me, the outcome of which I can share. Unlike some customers (and EDA vendors) who think it's a good test for a solver to do sudoku or the N-Queens puzzle (see this TeamSpecman blog post http://www.cadence.com/Community/blogs/fv/archive/2011/08/18/if-only-gauss-had-intelligen-in-1850.aspx), this team wanted to know whether IntelliGen could solve a tough real-world problem...

The data handled by their DUT comes in as a 2D array of data bytes, which has been processed by a front-end block. The data in the array can contain multiple errors, some of which will have been marked as "known errors" by the front-end. Other "unknown" errors may also be present, but provided that the total number of errors is less than the number of FEC bytes, all the errors can and must be repaired by the DUT. If too many errors are present, it is not even possible to detect the errors, so the testbench must generate the errors carefully to avoid meaningless stimulus. It also needs to differentiate between marked and unmarked errors so that the DUT's corrections can be tested and coverage performed based on the number of each type of error.

This puzzle is rather more complex than the N-Queens one: we have multiple errors permitted on any single column or row in the array, and there are three possible states for each error: none, marked, and unmarked. There is an arithmetic relationship between the error kinds: twice as many marked errors as unmarked ones can be corrected. Furthermore, unlike the N-Queens puzzle, a test writer may wish to add further constraints, such as clustering all the errors into one row, fixing the exact number of errors, or having only one kind of error.

First we define an enumerated type to model the error kind:
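(The code in the original post was shown as images; a minimal sketch consistent with the description, covering the three states of none, marked, and unmarked, might look like this:)

```e
type error_kind_t: [NONE, MARKED, UNMARKED];
```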

By modelling the 2D array twice, once as complete rows and once as complete columns, we can apply constraints to a row or column individually, as well as to the entire array. We only look at whether to inject an error, not what the erroneous data should be (this would be the second stage). I've only shown the row-based model here, but the column-based one is identical bar the naming.

The row_s represents one row from the 2D array, with each element of "col" representing one column along that row. The constraints on num_known and num_unmarked limit how many errors will be present. These are later connected to the column-based model in the parent struct.

The effective_errors field and its constraints model the relationship between the known and unmarked errors, whereby twice as many known errors as unmarked errors can be corrected.
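(The original row model was shown as an image; a hedged sketch of such a row, using the field names mentioned in the text plus an illustrative list name "col" and an assumed error_kind_t enum, might look like this:)

```e
struct row_s {
    // One element per column position along this row
    col: list of error_kind_t;
    num_known: uint;
    keep num_known == col.count(it == MARKED);
    num_unmarked: uint;
    keep num_unmarked == col.count(it == UNMARKED);
    // An unmarked error consumes twice the correction capacity of a known one
    effective_errors: uint;
    keep effective_errors == num_known + 2 * num_unmarked;
};
```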

Next we define the parent struct which links the row and column models to form a complete description of the problem. Here "cols" and "rows" are the two sub-models, and the other fields provide the top-down constraint linkage.

The intent is that the basic dimensions are set within the base environment, and the remaining controls are used for test writing.

Next, we look at the constraints which connect the row and column models together. The first things to do are to set the dimensions of the arrays based on the packet dimensions, and to cross-link the row and column models. These are structural aspects that cannot be changed. The rest of the constraints tie together the number of errors in each row, column, and the entire array. By using bi-directional constraints, we are allowing the test writer to put a constraint on any aspect. 
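(The original parent struct was also shown as an image; as a hedged sketch under assumed field names, the structural and totals linkage could look roughly like this. The full cell-by-cell cross-link of the row and column views is omitted for brevity:)

```e
struct error_map_s {
    num_rows: uint;
    num_cols: uint;
    rows: list of row_s;
    keep rows.size() == num_rows;
    cols: list of row_s;    // column view, same shape as the row model
    keep cols.size() == num_cols;
    // Structural dimensions: each row spans all columns and vice versa
    keep for each (r) in rows {
        r.col.size() == num_cols;
    };
    keep for each (c) in cols {
        c.col.size() == num_rows;
    };
    // Tie the totals of both views together, bi-directionally
    total_known: uint;
    keep total_known == rows.sum(it.num_known);
    keep total_known == cols.sum(it.num_known);
    total_unmarked: uint;
    keep total_unmarked == rows.sum(it.num_unmarked);
    keep total_unmarked == cols.sum(it.num_unmarked);
};
```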

And that's it! With just that small amount of information IntelliGen can generate meaningful distributions of errors in a controlled way. Test writers can further refine the generated error maps with simple constraints that are actually quite readable:
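(The test-writer code was shown as an image; a hedged sketch of what such a refinement might look like, with illustrative field names total_known, total_unmarked, and num_fec_bytes:)

```e
extend error_map_s {
    // Named constraint: keep most packets within the correction capacity
    keep packet_mostly_correctable is
        total_known + 2 * total_unmarked <= num_fec_bytes;
};
```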

 

Notice another little trick here: the use of a named constraint: "packet_mostly_correctable". This allows a test writer to later extend the error_map_s and disable or replace this constraint by name; far easier than figuring out the "reset_soft()" semantics and a whole lot more readable.

Note that for best results, this problem should be run using Specman 13.10 or later due to various improvements in the IntelliGen solver.

Steve Hobbs

Cadence Design Systems 
