Channel: Cadence Functional Verification

Fine Tuning of Coverage Model Definition


Functional Coverage is one of the main means to measure the quality and progress of the verification project. We define coverage models, run semi-random tests, and every once in a while analyze the coverage report to decide “are we there yet?”.

When talking about “coverage closure”, people tend to talk about generation, how we create the data that will fill the coverage. In this blog, we discuss the definition of the coverage model – how we define a model that will give us the exact information that we need.

To misquote Lewis Carroll – “if you don’t know where you are going, how will you know that you got there?”. If the model is not well defined, the collected data will not give you the information that you need, and you will not have the required confidence in the verification status.

A coverage model can be described as being built of three layers.

The top layer is the coverage group. In this layer you define the foundations of the coverage model:

  • Where – where is it best to collect the coverage? In which structs and units?
  • When – when to sample the values?
  • How – are there values that can be ignored? Conditions in which the whole cover group should not be sampled? 

Under the group, there are the coverage items. With the items you specify the “what”:

  • What – what are the items that have to be covered?
    • Which of the fields?
    • Are there important crosses?
    • Should the transition be covered?

In this blog, we focus on the third layer – the coverage buckets. With the buckets we perform the fine tuning of the coverage.

  • What values are legal?
  • What values do not add to the coverage hence should be ignored?
  • Are there values of more importance?
  • What granularity is required? 

To answer the last question, about granularity, we “bucketize” – group the values into buckets.

For example, let’s assume a data item with a 32-bit field named address. Surely we cannot collect or analyze a bucket for each possible value (0, 1, … 0xffffffff), so we define buckets. The naïve, immediate approach to defining the address item’s buckets is to define sub-ranges, all of identical size, using the ranges option, like this:

 

extend monitor {
  cover ended using per_unit_instance is {
    item address :  uint(bits : 32) = cur_packet.address using
      radix=HEX,
      ranges = {
        range([0..0xffffffff], "", 0xf00000, 10);
      };
  };
};

 

A small side note – in the above code example, we use the coverage option per_unit_instance. This option is recommended when there are multiple instances of the monitor unit, as it allows viewing the coverage data for each instance separately. For the sake of reusability, it is a good habit to always use this option, even if in the current project there is only one instance of this unit.

This definition of the item ranges works, but it does not give enough information about how close we are to our verification goals.
To get the information that we need, the buckets should capture the coverage goals as closely as possible. Typically, the verification plan describes which addresses are more important to cover. For example – the first and last addresses in the memory space are meaningful values that must be exercised multiple times. In addition, from the verification plan we can derive which values are not meaningful and can be ignored altogether.

The following code defines a coverage model that captures these verification goals:

  • The addresses that are meaningful in our project are in the range 0x1488..0x4488
  • Packets addressed to the first (0x1488) or last (0x4488) address are of special importance

By creating separate buckets for the edge values – 0x1488 and 0x4488 – we define a coverage model that is tuned to the goals. We also define all the packets with addresses outside the range as ignorable – having such packets is not part of the verification goal, and viewing them in the coverage report would just waste our time.

extend monitor {
  cover ended using per_unit_instance is {
    item address :  uint(bits : 32) = cur_packet.address using
      ignore = address < 0x1488 or address > 0x4488,
      radix=HEX,
      ranges = {
        range([0x1488], "first address", UNDEF, 100) ;                                       
        range([0x1489..0x4487], "", 0x200, 10);
        range([0x4488], "last address", UNDEF,  100);
    };
  };
};

The code example shown above is fine tuned to a very specific verification project; this model will help us know exactly when we have reached the goals of this project. What will happen when we reuse this code in a new project? Let’s assume that one of the differences between the two projects is that in the new project the address space starts at address 0xff00000 rather than 0x1488. Should we rewrite the coverage definition? Extend and override?

This rewrite effort can be avoided if we make use of non-constants in the coverage model (available starting with Specman 15.2). That is – use configurable fields rather than constants.

To make the above code more reusable, we replace “0x1488” and “0x4488” with references to fields. We define fields in a unit or a struct that are accessible from sys, and refer to these fields from the coverage model. For example:

struct cover_config {
  min_address : uint (bits : 32);
  max_address : uint (bits : 32);
  every_count : uint;    

  keep soft min_address == 0x1488;
  keep soft max_address == 0x4488; 
  keep soft every_count == 0x200;

};

extend sys {
  cover_config : cover_config;
};         

extend monitor {
  cover ended using per_unit_instance is {
    item address :  uint(bits : 32) = cur_packet.address using
      ignore = address < sys.cover_config.min_address
            or address > sys.cover_config.max_address,
      radix=HEX,
      ranges = {

        range([sys.cover_config.min_address],
                          "first address", UNDEF, 100);

        range([sys.cover_config.min_address+1..
               sys.cover_config.max_address-1], "",
                       sys.cover_config.every_count, 10);

        range([sys.cover_config.max_address],
                                "last address", UNDEF,  100);
    };
  };
};

Using configurable fields, the coverage model is now much more reusable – it supports fine tuning for multiple configurations. 

And here comes the next challenge – what if we want to reuse the same coverage model in multiple instances in one project? For one instance of the sub-system we want fine-tuned coverage of the space 0x1488..0x4488, and in another instance the area that needs focus is 0xff00000..0xff0b000.

This is the next step beyond the per-instance coverage discussed above. With per_unit_instance we can view each unit instance independently; what we ask for now is not only a separate view, but also a different model definition for each instance.

To meet this requirement, we use instance_ranges. With this option, instead of using a single configuration struct shared by all instances, we can define different configurations for different instances. To do so, the configuration fields are placed not under sys, but rather under the unit that we cover. These fields can be accessed with inst, which is a reference to the current instance.

For example:

extend monitor {

  config : cover_config;

  cover ended using per_unit_instance is {
    item address :  uint(bits : 32) = cur_packet.address using
      instance_ignore =  address > inst.config.max_address
                     or  address < inst.config.min_address,
      radix=HEX,

      instance_ranges = {

         range([inst.config.min_address],
                  "first address", UNDEF, 100);           

         range([inst.config.min_address+1..
                inst.config.max_address-1], "",
                           inst.config.every_count, 10);
         range([inst.config.max_address],
                               "last address", UNDEF,  100);
     };
  };
};
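The per-instance configuration itself can then be set from the enclosing environment with ordinary constraints. Here is a minimal sketch, assuming a hypothetical env unit that instantiates two monitors (the unit name, instance names, and values are illustrative only):

extend my_env {
  monitor_a : monitor is instance;
  monitor_b : monitor is instance;

  // one instance focuses on the 0x1488..0x4488 address space
  keep monitor_a.config.min_address == 0x1488;
  keep monitor_a.config.max_address == 0x4488;
  keep monitor_a.config.every_count == 0x200;

  // the other instance focuses on the 0xff00000..0xff0b000 space
  keep monitor_b.config.min_address == 0xff00000;
  keep monitor_b.config.max_address == 0xff0b000;
  keep monitor_b.config.every_count == 0x800;
};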

 

With this code, we defined the coverage model once but created two different models, one configured per instance of the agents. The screenshots below, taken from one run, each show the coverage of one of the monitor instances.

Defining the coverage model is one of the most important tasks in functional verification. The more accurate it is, and the better it captures your exact goals, the greater the chances that you will get there.

We encourage you to try out the new features – non-constants and instance_ranges – so that you get more from your coverage models and improve the overall efficiency of your verification project.

  

Rodion Melnikov, Efrat Shneydor

Team Specman

 


Coverage Maximization


Searching for “automatic coverage maximization” results in ~16 million hits. Alas, this does not reflect 16 million solutions; rather, it is an indication of the great interest in this topic and the many attempts to automate the process of “getting full coverage, faster”. There are several tools available, performing smart coverage analysis and directing the tests to the less covered areas.

For example, with the Incisive Metrics Center you can rank coverage and identify which tests contribute most to the coverage and which tests are redundant.

If you use vManager as your runner, you can further enhance this process, automatically rerunning the most effective tests based on accumulated coverage.

Another approach to automating coverage maximization is the Specman constraint solver’s Coverage Driven Distribution (CDD). This feature affects the probability of generating a value, based on the coverage model definition. Assume, for example, this coverage definition:

cover packet_ended using per_unit_instance is {
  item size : uint = cur_packet.size using
    ignore = size > 2048, 
    ranges = {
     range([0..1023],     "illegal too short", UNDEF, 1);
     range([1024],        "legal shortest", UNDEF, 1);
     range([1025..2047],  "legal ", UNDEF, 1);
     range([2048],        "legal longest", UNDEF, 1);  
  };
};

By defining these ranges, we express the coverage goal – each of the buckets (the two buckets containing ranges, and the two buckets each containing just one value) is of the same importance, and all of the buckets should be filled. Activating CDD on the code containing this coverage model will result in packets that are no longer than 2048 (thanks to the ignore option); and, even more important, the edge values of 1024 and 2048 will have the same probability of being generated as any value in the range [1025..2047].

We realize that developing the perfect solution for coverage maximization will take some time. While waiting for the ideal solution that will “solve it all”, you can create your own utilities, targeted at specific use models. If you implement your verification environment in e, you can use the coverage API, a set of methods that let you query the current state of the coverage model. The following code is a small example demonstrating the possibilities. We implement a method named wait_for_full. This method samples the coverage database every scan_every cycles and stops when it gets an indication that the required coverage goal has been reached. The information about the current status of the coverage model is gathered by calling scan_cover(), then extending the end_item() method and reading the item_grade field.

struct my_cover_struct like user_cover_struct {

  !reached_goal  : bool;
  !goal_grade    : int;
  !req_instance_name : string;

  wait_for_full( instance_name  : string,  struct_to_cover : string,
                 group_to_cover : string,  item_to_cover : string,
                 goal_grade     : int,     scan_every : uint) @sys.any is {

        me.goal_grade = goal_grade;
        me.req_instance_name = instance_name;
        var res : int;
        while not reached_goal {
            wait [scan_every] * cycle;
            // scan_cover accepts a string, concatenation of the names of containing struct, group and item
            res = scan_cover(append(struct_to_cover, ".",
                                    group_to_cover, ".", item_to_cover));
        };
    };

    end_item() is {
        if instance_name == append("e_path==", req_instance_name) {
            if item_grade/1000000  >= goal_grade {
                reached_goal = TRUE;
            };
        };
    };
};

And this is an example of using this utility. We want to run until the cross cover item cross__sma_state__smb_state (created by crossing the items sma_state and smb_state), in the coverage group named system_cover, in the unit system_monitor, gets the grade 80:

extend my_env {
  run() is also {
    start run_until();
  };

  run_until() @sys.any is {
    var cover_sm_cross : my_cover_struct = new;
    raise_objection(TEST_DONE);
    cover_sm_cross.wait_for_full(me.system_monitor.e_path(),
                                 "dummy_dut", "system_cover",
                                 "cross__sma_state__smb_state", 80, 100);
    drop_objection(TEST_DONE);
  };
};

The coverage API is simple to use and can serve as a basic building block for implementing a sophisticated coverage maximization algorithm. You can enhance it, for example, to print a report of coverage progress throughout the test, or even to stop the run if there was no significant change in the coverage status in the last X cycles, as sketched below.
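For example, a minimal sketch of such an enhancement (a hypothetical extension, reusing the fields already defined in my_cover_struct above) that prints the grade of the tracked item each time the coverage database is scanned:

extend my_cover_struct {
    end_item() is also {
        if instance_name == append("e_path==", req_instance_name) {
            out("coverage progress for ", req_instance_name,
                ": current grade is ", item_grade / 1000000,
                ", goal is ", goal_grade);
        };
    };
};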

And while some of you will enjoy creating your own smart utilities, we, in Specman, will continue enhancing the language and the engines, to get us closer and closer to the full Automatic Coverage Maximization we all look for.

Efrat Shneydor,

Team Specman

What is ISO 26262 and Why Should I Care?


ISO 26262 is a functional safety standard applied to the development of electrical and/or electronic (E/E) systems in automobiles. It is aimed at reducing risks of physical injury or of damage to the health of people in the event of an unplanned or unexpected hazard. The standard requires that every tool used within the design and verification flow, that can either insert errors into the final product or prevent errors from being detected, be analyzed within the context of its use in the flow, and qualified if necessary. As our current and prospective customers see the rapid growth of E/E systems in automobiles, they also see the need to comply with the ISO 26262 standard and are looking to IP and EDA vendors for help. Failing to provide customers with acceptable responses to their functional safety needs puts Cadence at risk of being excluded from respective business opportunities.

In order to justify freedom from unreasonable risk, our customers need to develop a safety argument in which the safety requirements are shown to be complete and satisfied by the evidence generated from the ISO 26262 work products. One customer alone states that the annual spend for developing safety cases for its automotive products is in the millions of dollars. In order to pass a functional safety audit, they need to provide valid safety arguments for the tool chain and development processes used, supported by appropriate evidence that shows that no single failure in any tool could leave an undetected critical flaw in the system. Providing the safety argument and documenting the requisite evidence is challenging for our customers to do without direct help from Cadence.

With respect to EDA tools, in an effort to reduce the complexity, time, and cost associated with tool qualification, Cadence will provide Functional Safety Documentation to our customers, comprised of the following:

  • Safety Manual
  • Tool Classification Analysis
  • Technical Report issued by an independent Functional Safety Auditor

The Safety Manual presents a typical tool-chain sub-flow that could be used for design and/or verification of safety-related products. It describes a base use case; examines input, execution, and output of the software tools; and specifies fault mitigation and/or error detection methods, which drive the specified tool-chain to Tool Confidence Level (TCL) 1.  TCL1 reflects the highest confidence that tool malfunctions will not cause violations of safety requirements, and subsequently, no further qualification of the tool-chain would be necessary. Therefore, a tool-chain that evaluates to TCL1 will reduce the complexity, cost, and time required of our customers to certify their work products.

For each of the products described in a Safety Manual, a Tool Classification Analysis (TCA) document is created, which describes an assessment, including typical use cases, and an analysis of possible faults with corresponding potential impact on functional safety goals. Additionally, the TCA document can describe methods to increase error detection of possible faults and provide a rationalization for a predetermined tool confidence level. Since a product can be used in more than one tool-chain sub-flow, the TCA document should describe the relevant use cases within each sub-flow such that a single TCA document can be used for multiple Safety Manuals. In the long run, the goal is to have the majority of Cadence products covered in one or more sub-flows.

The combination of Safety Manual and TCA documents forms the Functional Safety Documentation. In order to reassure our customers that this documentation set is adequate and suitable for use in their safety audit, it is to be reviewed by an independent Functional Safety Expert/Auditor. The results of this independent review are then documented in a Technical Report, which serves as validation of the fitness for use of the documents in a safety case, and of the suitable uses of the tool-chain for developing safety-related products. Cadence has selected TÜV SÜD to provide this validation. They established a functional safety team more than 30 years ago and have accumulated a strong track record. They participated in the establishment of the ISO 26262 standard and are an internationally accredited ISO 26262 testing body for development tools, development processes, and safety-relevant products or systems.

Automotive development presents a unique challenge in terms of safety, security, and reliability of embedded systems. End-to-end testing for automotive applications is too expensive and too complex. However, the cost of failure should serve as motivation for finding a way to mitigate risks. The Functional Safety Documentation set is one of the ways in which Cadence is helping customers comply with the ISO 26262 standard. For more details about how Cadence supports ISO 26262 qualification, read my white paper, Enabling ISO 26262 Qualification By Using Cadence Tools.


Randal Childers

Creating Code from Tables


Some things are best described with tables—each column shows the values for one category, and each row encapsulates one given set of values for all categories.

This blog describes how to use tables in your verification environment code to make the code more readable and more easily maintained. The tables can be written in the code or they can be pulled in from an external file. For example, you can pull a Microsoft Excel file defining device configuration into your verification environment, eliminating the need to manually convert the data in the Excel file into code. More than that, as the last example in this blog illustrates, you can create any code you want based on data read from Excel.

Our first example is a basic one, illustrating how to create tables in the code itself by using the in_table construct. Assume a device whose configuration is defined by bus type, bus width, and speed. Constraining the configuration unit using a table format is simple to write and, more importantly, simple to read and maintain:

type bus_type : [ISA, EISA, VL_BUS, PCI];

unit config {
    b_type  : bus_type;
    b_width : byte;
    b_speed : byte;

    keep (b_type,   b_width,   b_speed) in_table {
           ISA,     [16, 32],    8;
           EISA,     32,         8;
           PCI,     [32, 64],    33;
           ISA,      64,       [66, 133];
    };
};

The information about the legal configuration modes is required not only during generation. Some checks, for example, might depend on the device configuration. You can use tables anywhere in the code:

  if  (b_type, b_width, b_speed) in_table {
         ISA,    64,    [66, 133];
        } {
          ///… write checks that are relevant to this mode
        };

  if  (b_type,  b_width,  b_speed) in_table {
         ISA,    [16, 32],  8;
         EISA,     32,      8;
         PCI,    [32, 64], 33;
     } {
        ///… actions that are relevant to this mode
     };

This code is valid, but what about reuse? Copying and pasting these lines is error prone and challenging to maintain. Instead of copying the tables, let’s define a table once and use it multiple times in the code. To define a table, we define one or more sets of rows using table_row macros:

define <slow'table_row> "SLOW_CONFIGURATIONS" as {
    ISA,  [16, 32], 8;
    EISA, 32, 8;
    PCI,  [32, 64], 33;
};

define <normal'table_row> "FAST_CONFIGURATIONS" as {
    ISA,  64, [66, 133];
};

define <all'table_row> "ALL_CONFIGURATIONS" as {
    SLOW_CONFIGURATIONS;
    FAST_CONFIGURATIONS;
};

After these tables are defined, they can be used in constraints, in checks, just about anywhere:

    keep (b_type, b_width, b_speed) in_table {
       ALL_CONFIGURATIONS;
  };

  check() is also {
      if  (b_type, b_width, b_speed) in_table {
          FAST_CONFIGURATIONS;
         } {
          ///… write checks that are relevant to this mode
      };

      if  (b_type, b_width, b_speed) in_table {
            SLOW_CONFIGURATIONS;
         } {
            ///… actions that are relevant to this mode
      };       
  };

Now let’s take this one step further. Not only can you define tables in e, you can also read tables from another file – for example, from a CSV or an Excel file. With this capability, the person who writes the configuration file needs to know nothing about how the verification environment is structured and implemented, and the person who writes the verification environment doesn’t have to be told of every change in the configuration.

For reading the configuration table from a CSV file, you would use the csv_to_table() operator. For reading the configuration table from an Excel file, you would use the excel_to_table() operator.

For example, assume the CSV file "config.csv" contains these two tables:

SLOW_CONFIGURATIONS
#type, #width, #speed
ISA,  "[16,32]",  8
EISA, 32,         8
PCI,  "[32, 64]", 33

FAST_CONFIGURATIONS
#type,  #width,  #speed
ISA,     64,   "[66, 133]"

You can read these tables from your e code as follows:

    keep (b_type, b_width, b_speed) in_table {
       table from csv_to_table("config.csv", "SLOW_CONFIGURATIONS") with {
            <#type>, <#width>, <#speed>;
       };
  };

And why stop there? Tables can be a good source for creating code that follows any kind of regular pattern.  Each row in the table holds the values for one instantiation of the pattern.

For example, if the verification environment defines a core unit that contains several fields, the properties for specific cores can be provided by an external file. The following code reads a table and, for each row in the table, instantiates a unit of the type core and constrains its fields according to the row’s data:

table from csv_to_table("config.csv", "Cores Info") with {
    <#id> : core is instance;
    keep <#id>.kind == <#kind>;
    keep <#id>.address == <#address>;
};

As you can see – the code within the “with {}” block is simple e code, and each <#> is replaced with the value in the relevant column. 

The input file could look something like this:

Cores Info

#id,   #kind,  #address, #enabled, #retainable
CORE0,  A34,   0x1000,   TRUE,      TRUE     
CORE1,  BL,    0x8000,   FALSE,     TRUE     
CORE2,  B5,    0xaa00,   TRUE,      FALSE
//...
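For illustration, the first row of this table would expand to roughly the following e code (a sketch; it assumes the core unit has kind and address fields, with A34 being a legal value of the kind type):

CORE0 : core is instance;
keep CORE0.kind == A34;
keep CORE0.address == 0x1000;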

These constructs – in_table, table_row, and table from ... with – are all part of Specman. Use them to make your verification environment more user friendly, more readable, and more reusable.

 

Enjoy Verification, Enjoy e,

Team Specman 

 

IEEE Std 1647™ 2016 - e Language - New Standard Publication


Congratulations to the IEEE-1647 e Functional Verification Language Working Group (eWG)

At the beginning of 2017, the IEEE-1647 eWG issued for publication IEEE Std 1647™ 2016, IEEE Standard for the Functional Verification Language e. This version of the standard, issued under the chairmanship of Darren Galpin from Infineon, with input from other members of the EDA industry, contains enhancements of the e language made since 2011, and is a great resource for anyone who wants to become familiar with the exact syntax of the e language.

Many enhancements were made in the last years. We cannot list them all here, but some of the major highlights are summarized below.

The constraint solving approach was improved, providing better distribution and hence superior coverage.

As part of the focus on improving the debugging process, Structured Debug Messages (SDM) were added to the language. While the old message actions handle only strings, with SDM you can print a message and record a transaction with a single action.
For example, the following messages print information about the collected data items (a burst and a frame) and also keep the information connected to these items:

    msg_started(MY_PROTOCOL, LOW, "Monitoring frame",
                                  cur_frame);       
    cur_burst.frames.add(cur_frame);
    msg_transformed(MY_PROTOCOL,MEDIUM, 
                   "Converting frames to burst",
                   cur_frame, cur_burst);

 

For a non-text destination, such as a waveform or a database, the message is sent and handled according to the nature of the destination. Behavior is implementation-dependent, so various tools might handle it differently. For example, a waveform can display matching pairs of msg_started and msg_ended messages as a transaction.

The following screen shot illustrates how debugging tools can use the new standard. Taken from Cadence Debug Analyzer, it shows all the messages that have to do with a specific burst instance; and thanks to the message action msg_transformed(), we also see the frames that are related to the burst.

  

 

The functionality of e language temporals was extended with the capability to add conditions to a temporal expression, either declaratively or procedurally.

For example, using the procedural condition do_abort_on_expect(), we can abort a check in case of an interrupt event, hence preventing false DUT errors:

   expect data_flow is @addr_started =>
           {[2..5]; @valid; [1..10] * @data; @ended} @clk;
   on interrupt {
       do_abort_on_expect("data_flow", TRUE);
   }; 

The conditions can also be defined declaratively, as part of the temporal expression definition:

expect full_flow is @addr_started => {[2..5];
                @valid; [1..10] * @data; @ended} @clk 
                                    using abort @interrupt;

Another important addition to the updated standard is the support of TLM 2.0, based on IEEE Std 1666™-2011. The TLM sockets facilitate the transfer of transactions between verification components, taking advantage of the standardized, high-level TLM 2.0 communication mechanism.

In addition to documenting these and other enhancements to the e Language, the IEEE 1647 eWG technical editors tidied up the documentation of many chapters, making them much more coherent.

You can purchase your very own copy of this brand new standard on the IEEE website here.

 

Efrat Shneydor,
Team Specman

Specman in Xcelium


Just recently, Cadence announced its superb new simulator, Xcelium. Just as Specman was part of the previous simulator, IES, it is now part of Xcelium.

As always, we keep enhancing and developing Specman, and the new Specman release, now part of Xcelium, contains great new capabilities. The focus in the last year was on tools that enable you, the verification experts, to create powerful, easy-to-use verification environments.

Here are some of the highlights of Specman 16.11 (aka “First Xcelium”).

An interface type was added to the e language. An e interface is similar to a Java interface: an abstract type used to specify behavior that a component is supposed to implement. Unlike other languages, an e unit implementing an interface does not have to implement all the interface methods. They can be implemented in a later extension, or not implemented at all (an error is issued only if an unimplemented method is called).

The basic usage of the interface is to standardize the verification environment, as it defines not only what behavior is required, but also the API of the implementation.

For example, one can define the following interface named monitoring. Each unit that declares itself as implementing this interface is expected to implement its three methods.

<'
// defining the interface
//

interface monitoring {
    write_log_file_header();
    dump_and_check_data();
    write_end_of_test_status();
};
'>

<'
// implementing the interface
//
unit abc_monitor like project_abc_base_unit
                       implementing monitoring {
    my_items : list of abc_items;
    write_log_file_header() is {
       out("Monitor ABC, protocol version ", ABC_VERSION);
       // etc …
    };
    dump_and_check_data() is {
        for each in my_items {
          // …
        };
    };
    // etc...
};
'>

In the above code, note how, using an interface, we achieve something similar to multiple inheritance: the abc_monitor inherits from a company or project base type (project_abc_base_unit in this example) and “inherits” the monitoring interface behavior.

With interfaces, you can implement polymorphism. You can instantiate a list of an interface type - containing all the units that implement this interface - and thus activate all the implementations in a loop. The list can be built in many ways; in this example, each unit that implements the interface adds itself to the list.

<'
// the top component contains units implementing the 
// ‘monitoring’ interface

extend my_env {
    // a list of all units that implement the 
    // ‘monitoring’ interface.

    !all_monitors : list of monitoring;

    // add implementer to the list
    add_monitor(monitor_implementer : monitoring) is {
        all_monitors.add(monitor_implementer);
    };

    event power_down;

    // we activate the monitoring interface of all the units,
    // regardless of the implementing unit type
    on power_down {
        for each in all_monitors {
            it.dump_and_check_data();
        };
    };
};

// extend the unit my_checker to also implement the 
// ‘monitoring’ interface

extend my_checker implementing monitoring {
    dump_and_check_data() is {...};

    // if contained within my_env – add myself to the list
    post_generate() is also {
    var my_env_container := get_enclosing_unit(my_env);
        if my_env_container != NULL {
            my_env_container.add_monitor(me);
        };
    };
};
'>

Another great construct that was added to Specman is the table. This is one of my favorites, so I already wrote about it in creating-code-from-tables.

To make a long story short – you can provide input to the verification environment in table format. The table can be written in e files, CSV, and even Excel, meaning that there is no need to “code” the configuration; one can use in the testbench the same Excel files written by the manager or the architect. The most basic use model is for defining configuration, but you can use tables to create code that follows any kind of regular pattern. This is a real eye-opener; do take a look at creating-code-from-tables.

One of the areas of Specman that keeps evolving is its interface to other tools, languages, and platforms.

To improve the interface with SystemC models, we added two types to the e language - numeric and fixed point.

numeric is an interface template containing all the methods needed in order to define a numeric type: numeric_add(), numeric_to_int(), real_to_numeric(), numeric_ipow(), and many more.

When we got a request to implement fixed point in e, we defined this new type using the numeric template. The fixed point type is implemented in UVM e, as open source, so you can view it and see how to define any numeric type.

For UVM-SV and UVM-SC users, we added multi-language End of Test synchronization; when running in a multi-language environment, Specman synchronizes with the UVM-SV objection mechanism. The test stops only when all components, in all languages, have dropped their objections.

As you can see, we enjoy extending e and the Specman toolkit, keeping it the most powerful (not to mention coolest) verification language.

 

Efrat Shneydor,
And the rest of Team Specman

Static Members in e


How do you define elegant or clean code? Usually, you know it when you see it; defining it is harder. It is usually simple, clear, and well-structured code. OO programming languages (like Java, C++, SystemVerilog, and also e) provide you with aids to write elegant code. This blog is about static members in e. Can you live without them? Usually, you can. What can they do for you? Help you write elegant and clean code. What if you don’t care about your code being elegant? Well, first, you should care. But even if you don’t, there are problems you just cannot solve without static members.

 

Since Incisive version 15.2, the static keyword is part of e. This means that you can declare structs’ members as static (fields, methods or events) and get members that belong to the struct type, rather than to an instance of the struct. This also means that no matter how many instances of the struct are created (even none), there is only one copy of each static member which is shared by all instances of the struct type. This is actually the same idea as in the other OO programming languages mentioned above.

 

Let’s look at a simple example. In the code below, the field max_address is declared static. This means that only a single max_address field will be created, initialized to 0x1000 before any instance of packet_s is created.

extend packet_s {
    static max_address : uint = 0x1000;
};

 

 

You can access this field directly from every instance of packet_s (as if it is a member of it). From other places it can be accessed using the struct type name followed by “::” as in the example below.

extend packet_s {
    static max_address : uint = 0x1000;
    keep addr <= value(max_address);
};

extend monitor_s {
    on reconfiguration_ended {
        packet_s::max_address = 0x10000;
    };
};

 

 

Let’s look at some typical scenarios where using static fields can be useful.

Scenario 1: Configuration

Let’s say that we want to limit the address of the packets, while the limit itself can be dynamic and change during the run (as we have just seen in the example). It makes sense that it will be within the packet_s declaration, but since it is the same value for all instances, it also makes sense to hold it in a static field member. We could have put it under sys or ‘global’, but is this the right thing to do? If logically this is part of packet_s, then it should be defined as part of packet_s.

Now let’s say that we want to track how many times this field is being set in our environment. We could do this by making sure that max_address can only be set through a set method (set_max_address), where we can count the number of times it is called. To do that, we need a static method and an additional static field. We would also define max_address as private so that the only way to set it would be through the method.

extend packet_s {
    private static max_address : uint = 0x1000;
    keep addr <= value(max_address);

    static num_set_max_address_call : uint = 0;

    static set_max_address(new_val : uint) is {
        max_address = new_val;
        num_set_max_address_call += 1;
    };
};

What if you still need a central configuration for the entire environment? Some users define a configuration struct type and instantiate it under sys. A more elegant way is to have this configuration object (instantiated in the most appropriate place) with static methods that return the configuration items.

Prior to version 15.2, this is the old style you would use:

var config_s := get_enclosing_unit(env).config;

if config…..

 

Now, with the support of static members, you can define a config_s struct which holds the configuration and have static methods that return the configuration details. These methods can be accessed from anywhere without knowing where the instance actually resides.

if (config_s::bus_mode() == …) then
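For illustration, here is a minimal sketch of what such a config_s could look like (the bus_mode_t type and the specific configuration items shown here are hypothetical):

type bus_mode_t : [SLOW, FAST];

struct config_s {
    // the actual configuration items, held by the single instance
    bus_mode_kind : bus_mode_t;
    max_burst     : uint;

    // a static reference to that instance, set when it is created
    static the_config : config_s;
    post_generate() is also {
        the_config = me;
    };

    // static accessors, callable from anywhere as config_s::bus_mode()
    static bus_mode() : bus_mode_t is {
        result = the_config.bus_mode_kind;
    };
    static max_burst_len() : uint is {
        result = the_config.max_burst;
    };
};

The instance itself can then be placed wherever it makes the most sense, while every reader simply calls config_s::bus_mode() without knowing where that instance lives.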

  

What did we get? Less code and no need to know where the configuration object is instantiated – in short, more robust and elegant code.

Scenario 2: Have a unique ID for each instance

Let’s take a look at another scenario. Let’s say that we want to make sure that each packet has a unique id (pkt_unique_id). An easy way to do it is by adding a single counter for all packet_s instances (packets_counter).

extend packet_s {
    // not generated - assigned procedurally from the shared counter
    !pkt_unique_id : uint;

    static packets_counter : uint = 0;

    init() is also {
        // get the current unique id from the counter value
        pkt_unique_id = packets_counter;

        // increment the counter
        packets_counter += 1;
    };
};

Scenario 3: Collect information for coverage

Let’s go back to the previous example where the maximum possible address was defined. You might want (for coverage purposes) to also know the maximum address that was actually generated. Using a static field (maximum_address) is a convenient way to do this:

extend packet_s {
    static maximum_address : uint = 0;

    post_generate() is also {
        if (addr > maximum_address) then {
            maximum_address = addr;
        };
    };
};

Scenario 4: Template struct field

 I started by saying that using static is more elegant than adding fields to global. But you could say “I don’t much care for elegance. Defining a field in global works, and that’s good enough for me!” So what about this – consider the template of stack below. How would you handle adding a singleton max_size to a stack? Naturally you would want a different max_size for each type instantiated (as you want one max_size value for “stack of packets” and a different max_size value for “stack of instructions”).

template struct stack of <type> {

     };

 

How would you do it without a static member? Well, you can’t (at least not without lots of macros and tricks…).

How do you do it with static member? Very simple, you just define a static field.

template struct stack of <type> {

     static max_size: uint; 

};  

 

If you think of other things you can do with static members that you cannot easily do without them, we will be happy to hear about them.

These were only simple examples to give you a taste; of course, there is more information in cdnshelp. We suggest you leave a place in your toolbox for static members. We also encourage you to check what you currently have under sys and global: can you find constructs there that logically belong to a specific context? Does it make sense to define them as static members in a more appropriate place?

What is your opinion?

Let’s say you are part of the Specman development team. Would you enable users to override a static method in a when subtype? Take several minutes to think about it.

Well, apparently, this is not a trivial question. A static method means that the method doesn't need an instance to operate on, while a when subtype is all about a concrete instance. So, no, this is not allowed (just as C++ and Java do not allow it). If you think that it makes sense to enable it, we are happy to hear your voice!

Orit Kirshenberg, Specman team

A Brief Introduction to Xcelium


Welcome to the XTeam blog! We are a team of bloggers dedicated to showcasing the newest in parallel simulation technology with the Xcelium Parallel Simulator. In this blog, we will bring you informational and technical articles regarding Xcelium’s features, such as X-propagation technology, save and restore, and incremental elaboration, in addition to information about innovations and general improvements over existing simulators. We will also discuss the improvements brought on by the move to multi-core technology in simulations, Xcelium in low power environments, and digital mixed-signal use cases.

Xcelium is the EDA industry’s first production-ready third generation simulator. Based on innovative multi-core technology, Xcelium allows SoCs to get from design to market in record time. With Xcelium, one can expect up to 5X improved multi-core performance, and up to 2X speed-up for single-core use cases. Backed by early adopters’ success stories from a wide variety of markets, Xcelium is already proving to be the leading simulator in the industry.

For now, though, you can find out more about Xcelium's features and capabilities in the Xcelium Parallel Logic Simulation datasheet.


X-Propagation: Xcelium Simulator’s X-prop Technology Ensures Deterministic Reset


All chips need to cold reset on every power-up. Warm resets, however, are a bit more complicated. Take a smartphone screen, for example. The screen may power down while the phone is idle. However, the user will want it to return to their pre-set brightness level on power-up. Chips have to be tested for multiple warm-reset scenarios, and each of these tests takes a very long time.

Enter Xcelium Simulator, and X-propagation. Also known as X-Prop, this idea represents how X states in gate-level logic can propagate and get stuck in a system during cold or warm resets. Unresolved X states spreading through a system can cause a non-deterministic reset, which makes a chip run inconsistently at best or fail to reset at worst.

Thanks to Xcelium Simulator and X-prop technology, we can debug X issues faster—10X faster than we could if the debug was completed during GLS. Right now, GLS happens towards the end of product development, which can lead to costly fixes when bugs are found so late. GLS needs to occur no matter what, but if X is propagated through RTL simulations, then this process can be completed far earlier, allowing bugs to be caught and dealt with efficiently.

X-prop analysis can be executed in either Compute As Ternary (CAT) mode, where X is propagated exactly as it would be in hardware, or Forward Only X (FOX) mode, where X is propagated forward regardless of the other inputs. This is required because the propagation of Xs in RTL is not properly modeled to behave like hardware in either Verilog or VHDL. In addition, it’s much easier to debug in RTL—doing the propagation analysis at a higher level of design abstraction—and it has a smaller memory footprint and shorter run time than GLS.

Figure 1: FOX and CAT mode

Many projects fail to run these diagnostics in application-realistic ways, such as reset validation and power-down/power-up sequences, due to the time required to run these long gate level simulations. If the diagnostic is run at RTL strictly in accordance with the language standards, Xs can be optimistically resolved instead of being propagated as they would be in real hardware; this is called X-optimism.

What does this mean? X-propagation through RTL enables a more complete set of reset tests to be run instead of only the essential ones. If the X-propagation tests are left to be done during the GLS stage, then it is not time-feasible to run them all. Completing all of those tests earlier adds a level of security in knowing that all logic gates have been tested and 100% of the chip works, instead of simply enough to ensure standard functionality. It’s easy to use—no complicated setup required. The sequential nature of the testing lets smaller chips be used, as RTL works with non-resettable flops. Finally—and most notably—RTL is faster, and more chips verified means more chips sold.

Nowadays, X-prop technology is built into Xcelium Simulator. Xcelium X-prop technology supports both SystemVerilog and VHDL, and doesn’t require any changes to existing HDL designs. Xcelium uses the aforementioned FOX mode and CAT mode to test for X-propagation, and both of these modes show the non-LRM compliant behavior needed to run your reset verification at RTL and improve your overall chip quality.

For more information, see the RAK at Cadence Online Support.

Cadence @ DAC: What to Expect and What to See


Cadence returns to DAC 2017 this year, showcasing our full verification suite. Here are some of the things you can look forward to from us in the upcoming week.

Once again, Cadence has the Expert Bar on Monday, Tuesday, and Wednesday. The Expert Bar is where engineers can visit our booth and have conversations with our technical experts. Cadence will be running many sessions, and those topics are listed below.

Topics List | Scheduled Time | Featured Products

  • Automotive: Functional Safety Focus | Tues 1:00-2:30, Wed 4:00-6:00 | Xcelium Safety, DSG full-flow
  • Simplify SoC Verification with VIP | Mon 2:30-4:00, Tues 11:30-1:00 | Cadence VIP
  • Performance Analysis and Traffic Optimization for ARM-Based SoCs | Mon 4:00-6:00, Wed 2:30-4:00 | Interconnect Workbench, Palladium Z1, Xcelium simulator, vManager
  • Formal Verification Featuring the JasperGold Platform | Tues 2:30-4:00, Wed 2:30-4:00 | JasperGold Apps
  • System Verification and HW/SW Co-Verification with the Palladium Z1 Platform | Tues 1:00-2:30, Wed 10:00-11:30 | Palladium Z1
  • Software Development with Protium S1 FPGA-Based Prototyping Platform | Tues 4:00-6:00, Wed 11:30-1:00 | Protium S1
  • Verification Fabric: Portable Stimulus Generation Featuring Perspec System Verifier | Tues 10:00-11:30, Wed 1:00-2:30 | Perspec System Verifier
  • High-Performance Simulation with Xcelium Parallel Simulation | Tues 11:30-1:00, Wed 1:00-2:30 | Xcelium Simulator
  • Verification Fabric: Plan, Coverage, and Debug with vManager and Indago Solutions | Tues 1:00-2:30, Wed 4:00-6:00 | vManager, Indago
  • The Future of Verification with the Cadence Verification Suite | Mon 2:30-4:00 | Cadence Verification Suite
  • Cadence Verification Implementation Solutions for ARM-Based Designs | Mon 10:00-11:30, Tues 1:00-2:30

Cadence will also be offering Tech Sessions—hour-long presentations about a singular topic. These will be held throughout DAC and cover the breadth of verification as listed below:

Topics List | Scheduled Time | Featured Products

  • Finding More Bugs Earlier in IP Verification by Integrating Formal Verification with UVM | Mon 3:30-4:30 | JasperGold Apps, Verification IP, Xcelium Single-Core Simulator
  • High-Speed SoC Verification Leveraging Portable Stimulus with Multi-Core Simulation and Hardware Acceleration | Tues 2:30-3:30 | Perspec System Verifier, Xcelium Multi-Core Simulator, Palladium Z1
  • Optimally Balancing FPGA-Based Prototyping and Emulation for Verification, Regressions, and Software Development | Wed 10:30-11:30 | Palladium Z1, Protium S1, Palladium Hybrid
  • Automotive Functional Safety Verification | Tues 3:30-4:30 | Xcelium Safety
  • RTL Designer Signoff with JasperGold Superlint and CDC Apps | Tues 12:30-1:30, Wed 11:30-12:30 | JasperGold Apps
  • Cadence Verification Suite: Core Engines, Fabric Technologies, and Solutions | Wed 2:30-3:30 | Cadence Verification Suite

In addition to these presentations, Cadence will be hosting a verification luncheon that offers a panel of experts from a variety of different companies to answer verification-related questions. In Monday’s luncheon, Cadence will share a table with Vista Ventures LLC, Hewlett Packard Enterprise, and Intel to discuss “Towards Smarter Verification”—a panel asserting that the next big change in verification technology is not necessarily a new engine, but improved communication and compatibility between existing engines that may be optimized for different tasks. This panel will talk about how verification is changing in today’s application-specific world, as well as utilizing machine learning technology to assist in data analytics, among other topics.

Cadence technology experts will also be holding other events during DAC. Of chief importance is “Tutorial 8: An Introduction to the Accellera Portable Stimulus Standard,” presented by Sharon Rosenberg in room 18CD on Monday from 1:30pm to 3:00pm. Another important event is the Designer/IP Track Poster Session on “Automating Generation of System Use Cases Using Model-Based Portable Stimulus Approach,” presented by Frederik Kautz, Christian Sauer, and Joerg Simon from 5:00pm to 6:00pm on the Exhibit Floor.

We have many exciting things in store for those who attend, and we hope to see you all at DAC this week!

Single Core vs. Multi Core: Simulation in Stereo


Latency simulations are the sworn enemy of the verification schedule. A handful of tests add days to weeks for each regression cycle; and when you add in the fact that they can’t be parallelized like the shorter bandwidth simulations, it gets hard to manage an engineer’s time efficiently. But…

What if there was a better way?

Since the olden days of simulation, all of that was true. Bandwidth sims were—and still are—mostly needed in the middle of the project time, but when the latency simulations come around at the end of the project they begin to dominate the regression cycle. This meant that the project would bottleneck at the end, and engineers would be left twiddling their thumbs, waiting days or weeks for a handful of tests to complete.

All of this was due to the fact that latency sims couldn’t be effectively shortened. The tests were simply too large and complex for the simulator to automatically break into convenient parts so they could be run on multiple machines. It simply couldn’t be done.

But now, it can.

Xcelium Simulator brings a new simulation technology to the table: multi-core. Patented software allows Xcelium to find the parts of a long latency simulation that can be effectively parallelized, and it distributes the overall simulation across multiple cores, representing a testing speed-up of anywhere between 3X and 10X, depending on the system. Before Xcelium, when all tests were run on a single core, no amount of distributing the bandwidth-hungry tests over many single-core machines could save your overall project time. The latency simulation was just so much longer than the bandwidth sims that the extra resource consumed for bandwidth simulation was essentially wasted. There was no real reason to use all of the processing power at your disposal if it wasn’t actually going to make your regression any faster overall. Xcelium Simulator opens the bottleneck, and makes it advantageous to strategically match your total bandwidth tests to your new, shortened latency tests, thereby making the most efficient use of your resources.

Figure A: Simulation needs change as the project progresses. In the middle, bandwidth simulation is the primary use of resources, but as the project reaches the end, latency simulations dominate. At that point, the only pragmatic way to address regression time is to apply multi-core simulation. Bandwidth simulation regression time can be lowered by the use of additional machines, but only a new engine can reduce the latency simulation times.

It boils down to this: originally, engineers had one knob with single-core processing power: the number of machines. It was like a radio with only a volume knob: they could throw more and more machines at a project until it finished—but there was a hard limit in place with the latency tests. The number of machines engineers have access to is a finite resource, as well—that volume knob doesn’t go to eleven. Now, with Xcelium Simulator, engineers have access to a second knob: multi-core. Engineers no longer have just a volume knob—they have bass and treble adjusters. As any audiophile knows, control over the system is paramount—and that’s exactly what Xcelium Simulator gives them: control. Parallelizing the latency simulations drastically reduces overall regression time because engineers can tune their single-core machine use to match the reduced run time for the latency tests.

Xcelium Simulator is the next step in simulation technology—a true third-generation engine. With multi-core technology, Xcelium allows engineers to have unprecedented control over their tests which in turn allows them to further tailor their test sequencing to their specific hardware needs.

Save & Restore with More: Preserve Your Entire SoC


The concept of Save and Restore is simple: instead of re-initializing your simulation every time you want to run a test, only initialize it once. Then you can save the simulation as a “snapshot” and re-run it from that point to avoid hours of initialization time. It used to be inconvenient, though. Using this feature could bring massive productivity gains, but not all users made the most of it due to a couple of hassles regarding the way snapshots saved state. The result is billions of wasted compute cycles on simulation farms worldwide.

Under Incisive, there was no procedural way from within your HDL code to execute a save—you had to do the save from Tcl at a “clean” point in the simulation. This created awkward situations where you couldn’t use Save and Restore exactly when you wanted, but only at certain times in between delta cycles, and you had to write some roundabout code that was generally hard to read and often created more issues down the road than it solved. Beyond that, if you were using C/C++ code that you wrote yourself, you had to manage all state data used by that code on your own as well. PLI, VPI, and VHPI have mechanisms to deal with saving data, but it is a significant effort that many C/C++ applications ignore.

Xcelium Simulator brings an improved approach to the Save and Restore feature by not taking a “snapshot” of the system, but instead saving the entire memory image. The main goal of Xcelium’s new Save and Restore feature is to get the Save and Restore methodology to a point where “it just works.” There won’t be any manual fiddling required to accurately save and restore the model. You will be able to save, and restart, with a few commands and no hassle.

As it stands, Xcelium’s Save and Restore functions greatly improve the overall usability of saving and restoring over Incisive. Under the old mechanism, if a test opened a file while it ran, the file handle / pointer would not be saved. Xcelium’s improvements save all file pointers in the image so that this is no longer an issue – open files are restored to their saved state, so a restart resumes at the same point. The new Save and Restore also fixes saved-memory issues with custom-built C code, so you will no longer have to manually handle state information stored in memory when saving—it will be saved for you, automatically.

Over time, the new Save and Restore feature will be updated to do even more out of the box. The saved file is larger than the snapshot, but saving a memory image streamlines and eases the use of the Save and Restore feature significantly. The file size is mitigated somewhat with the -zlib option, a compression tool integrated into Xcelium Simulator that automatically compresses the image—in the future, this compression will be improved, creating an even smaller saved image. Save and Restore functionality for sockets and for thread-handling in multi-threaded applications are on the table for a future update as well.

Right now, not everyone is using Save and Restore. Those not using it are wasting energy and time in their simulation farms. With Incisive, they were saddled with manually saving all of the external data used in their tests, coupled with the inconvenient and awkward saving restrictions—meaning that those engineers were stuck with the wasted compute cycles. The new Save and Restore upgrades in the Xcelium Simulator fix those major issues, which means that there are no more excuses to avoid this time-saving technology. Whether you are setting up a regression environment, doing test development, creating relatively small block-level tests, or simply caring about saving the Earth from global warming, Save and Restore cuts your test initialization time drastically and reduces the compute resources you need, with no hassle.

Enum compatibility error in Specman


 

One of my favorite quotes about SW programming is the following by Edsger Dijkstra: "If debugging is the process of removing software bugs, then programming must be the process of putting them in." Yes, we all insert bugs while we write code. And a significant part of the verification development time is spent on debugging. You might be tempted to think that most debugging time is spent on detecting errors like memory corruption, race condition, etc. However, most of our debugging time is usually spent on the most "stupid" mistakes. This blog is about helping you detect one kind of these mistakes just after you write your code.

Let's take the following example: You have a design in SystemVerilog that includes the following:

 

typedef enum {NOP, READ, WRITE, IDLE, DONE } read_write_t;

read_write_t op;

 

Your e test drives op and compares the result, so you declare the following in your e test:

 

type read_write_t : [NOP,WRITE,READ, IDLE, DONE];

unit top {

   op_p : inout simple_port of read_write_t is instance;

   keep bind(op_p, external);

   keep op_p.hdl_path() == "~/top/op";

….

};

 

Now let's see how sharp-eyed you are. Have you noticed that the enumeration is defined differently? The positions of READ and WRITE are switched in the e code.

 

SystemVerilog code:

typedef enum {NOP, READ, WRITE, IDLE, DONE } read_write_t;

 

e code:

type read_write_t : [NOP,WRITE,READ, IDLE, DONE];

 

Yes, that's the way the cookie crumbles…. If you are lucky, things will not work properly the first time you try to run the test, and if you are a good debugger you will most likely find out why and fix it immediately. In the worst case, everything will work fine for now, and something will break only two weeks (or months…) from now when someone changes something in the test or in the design. Then, this error might cost you a few hours of debugging….

 

For these kinds of errors, a new command was introduced in Specman of Xcelium 17.04: set e2hdl checks. This command checks the compatibility of enumeration types between e and the HDL. The check is done for each simple port of an e enumeration type. You just need to remember to issue the command before elaboration.

 

So in our case, after we set this command and run the test, we get the following error (it is an Error by default; you can change it to a Warning):

 

 *** Error - The external port 'top.op' of enum element type 'read_write_t' is bound to hdl object 'top.op' of type name 'read_write_t'. The 2th item 'WRITE' does not match literal 'READ'.
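
For the record, the fix is simply to reorder the e declaration so that it matches the SystemVerilog one:

type read_write_t : [NOP, READ, WRITE, IDLE, DONE];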

 

By default, the command is quite strict, and it expects the name of the type in e to be identical to the name of the type in the HDL. If you need more flexibility, you have a few options to allow different names for a specific enum, or even to ignore specific enums altogether. You can read about the various options in cdnshelp. In general, it is recommended to have this command issued by default, to easily detect future errors.

 

One point worth mentioning: to ensure that the types in your test are identical to the types in the HDL, you can always use the mltypemap tool, which can take an HDL file and generate its matching file in e. You can read more about it in cdnshelp.

 

In Specman of Xcelium 17.04, we provided you with the ability to detect this error. In a future version, we plan to totally eliminate the possibility of having this error: we plan to support the ability to point from your e code to the enumeration type in the HDL, so that you do not even need to define it in your e test. Check the What's New section in the next Xcelium versions…

 

Orit Kirshenberg

Specman team

 

 

 

Xperiences with Xcelium: Hewlett-Packard Enterprise Makes the Switch


At Hewlett-Packard Enterprise (HPE), the team working to create IP for “The Machine,” HPE’s vision for the future of computing, recently decided to make the switch to the Xcelium Simulator from their old mix of simulators. To gauge how much Xcelium would help improve their productivity, engineers at HPE devised a series of trials. Each trial applied a specific set of circumstances to check how Xcelium would perform in their environments—and Xcelium performed.

The first trial was a direct upgrade from Incisive 15.20 to Xcelium on a simple build flow. Since Xcelium maps irun to xrun, all the engineers had to do was point their tests at the new Xcelium instead of the old Incisive. Even something as quick and easy as that saw a 15% system performance increase—and that’s just with single-core! In the future, HPE plans to migrate some of their gate models to test Xcelium’s multi-core simulation capabilities.

Another trial was to take tests that ran on a non-Cadence simulator, and run them on Xcelium. These were UVM-SV tests, with third-party-created build scripts, and they ran in both block-level and SoC environments. Even with only initial tuning and training, the team still saw a 25% increase in performance for block-level tests.

The final trial was, again, moving the tests from another non-Cadence simulator to Xcelium; but this time, the tests ran on a C++-based testbench. Those tests ran in block-level and chip environments, and the block-level tests in that trial ran 20% faster than before.

Going forward, HPE plans to continue to train their engineers in using Xcelium, and they’ll be working with Cadence to do additional tuning to their unique simulation needs.

To watch the presentation given by HPE on their experiences with Xcelium at DAC 2017, click here.

ROHM CO., Ltd Adopts Our Functional Safety Verification Solution


On July 17, 2017, Cadence announced that the Cadence® Functional Safety Verification Solution had been adopted by ROHM CO., Ltd as part of its design flow for ISO 26262-compliant ICs and LSIs for the automotive market. Cadence fault simulation technology can quickly and easily deal with the complexities around many types of faults, including single event transient (SET), stuck-at-0 or stuck-at-1, dual-point faults, and more. It also outperforms existing DFT flows for safety-related fault effect analysis.

This new technology comes with quoted approval: “We’ve obtained a reliable, robust solution that we can depend upon for our automotive designs,” said Akira Nakamura, LSI Product Development Headquarters, ROHM CO., Ltd.

Cadence’s solution uses automation to make the tedious process of ensuring that all components meet functional safety requirements fast and easy, and it supports Cadence’s System Design Enablement plan, which assists system and semiconductor companies like ROHM Co., Ltd in making complete, differentiated end products with unprecedented efficiency.

To read the full press release, click here.


X-Propagation: Xcelium Simulator’s X-prop Technology Ensures Deterministic Reset


All chips need to cold reset on every power-up. Warm resets, however, are a bit more complicated. Take a smartphone screen, for example. The screen may power down while the phone is idle, but the user will want it to return to their pre-set brightness level on power-up. Chips have to be tested for multiple warm-reset scenarios, and each of these tests takes a very long time.

Enter Xcelium Simulator, and X-propagation. Also known as X-Prop, this idea represents how X states in gate-level logic can propagate and get stuck in a system during cold or warm resets. Unresolved X states spreading through a system can cause a non-deterministic reset, which makes a chip run inconsistently at best or fail to reset at worst.

Thanks to Xcelium Simulator and X-prop technology, we can debug X issues faster—10X faster than we could if the debug was completed during GLS. Right now, GLS happens towards the end of product development, which can lead to costly fixes when bugs are found so late. GLS needs to occur no matter what, but if X is propagated through RTL simulations, then this process can be completed far earlier, allowing bugs to be caught and dealt with efficiently.

X-prop analysis can be executed in either Compute As Ternary (CAT) mode, where X is propagated exactly as it would be in hardware, or Forward Only X (FOX) mode, where X is propagated regardless of inputs. This is required because the propagation of Xs in RTL Verilog and VHDL is not modeled to behave like hardware. In addition, it’s much easier to debug in RTL—doing the propagation analysis at a higher level of design abstraction—and it has a smaller memory footprint and a shorter run time than GLS.

Figure 1: FOX and CAT mode

Many projects fail to run these diagnostics in application-realistic ways, such as reset validation and power-down/power-up sequences, because of the time required to run such long gate-level simulations. If the diagnostic is run at RTL with standard, LRM-compliant semantics, Xs may not be propagated forward as often as they would be in real hardware; this is called X-optimism.

What does this mean? X-propagation through RTL enables a more complete set of reset tests to be run instead of only the essential ones. If the X-propagation tests are left until the GLS stage, it is not time-feasible to run them all. Completing all of those tests earlier adds a level of security in knowing that all logic gates have been tested and that 100% of the chip works, instead of just enough to ensure standard functionality. It’s easy to use—no complicated setup required. The sequential nature of the testing lets smaller chips be used, as RTL works with non-resettable flops. Finally—and most notably—RTL is faster, and more chips verified means more chips sold.

Nowadays, X-prop technology is built into Xcelium Simulator. Xcelium X-prop technology supports both SystemVerilog and VHDL, and doesn’t require any changes to existing HDL designs. Xcelium uses the aforementioned FOX mode and CAT mode to test for X-propagation, and both of these modes show the non-LRM compliant behavior needed to run your reset verification at RTL and improve your overall chip quality.

For more information, see the RAK at Cadence Online Support.

Moving to Xcelium Simulation? I’m Glad You Asked


Ready to take the next step in simulation technology with a true third-generation, multi-core engine? The Cadence® Xcelium™ Simulator gives you unprecedented control over your tests, including the ability to further tailor test sequencing to your specific hardware needs.

Get started immediately with the new release Xcelium 17.04 by using the central page on https://support.cadence.com to learn everything you need to know about installation, licensing, and easily migrating projects from Incisive to Xcelium.

Visit the page: https://support.cadence.com/xcelium. It lists important links to Xcelium simulator documents. This “one-stop shop” page gives you everything you need to install and use the release.

The Xcelium Simulator Introduction introduces the Xcelium simulator, details the changes in the Xcelium single-core engine, and describes the recommended steps to take when upgrading from Incisive to Xcelium.

If you are looking for a migration document to help you upgrade from Incisive to single-core Xcelium, see Migrating from Incisive to Single Core Xcelium.

The new Xcelium software installation is focused on the core simulation engines, but Xcelium is only the foundational part of an overall digital simulation methodology. To learn what is included in the core simulator download, what the optional Xcelium components are, and which other key products are available for the Cadence simulation flow, read this article: What technologies are installed as part of the Xcelium release.

And if you are wondering how to determine which licenses will be required before running a simulation, your questions are answered here: How to list the licenses requested or being consumed by the Xcelium tools.

Xceligen is the next generation random-constraint solver released as part of Xcelium Simulator. It contains new components as well as major enhancements. This document Xceligen - Next Generation SV Constraint Solver describes how to take advantage of the new technology using constraint solver switches and environment variables.

We have also built a Cadence support matrix to list the Xcelium release versions cross-indexed with the other verification engine and Verification IP (VIP) release versions available. The Xcelium Flow Support Integration Matrices includes recommended version combinations, version combinations under investigation at Cadence, and version combinations not recommended.

Also, there are many troubleshooting articles that generally provide helpful hints and solutions to address design problems while using Xcelium simulator in the flow. We have collected important articles, categorizing them by methodology or flow topic on this page under “Methodology and Flow Topics”.


And finally, under Knowledge Resources, you can find our Application Notes, Videos, and Rapid Adoption Kits related to the Xcelium simulator and technology.

Visit the page - https://support.cadence.com/xcelium for more.

Contact us for any questions. Leave a comment in this blog post or use the Feedback / Like mechanism on https://support.cadence.com.

Happy New Learning!

Sumeet Aggarwal

Infineon’s Coverage-Driven Distribution: Shortcutting the MDV Loop


There are more ways to improve productivity in the verification process than simply making the simulation run faster. One of these is to cut down on the amount of time engineers spend working hands-on with the testbench itself, preparing it and coding Specman/e tests for it. It is common knowledge that the engineer’s task is to stock the testbench with these tests and rerun regressions to measure their effect as they grind their way toward coverage closure. But—are we bound to this slogging process, or is there a more productive way?

Coverage-Driven Distribution (CDD), an advancement developed by Infineon, goes beyond metric-driven verification (MDV) to solve this problem by “closing the loop.” It takes a process where an engineer needs to stop the flow repeatedly to meddle in the testbench and removes that part from the equation entirely, via an algorithm that automatically puts Specman/e tests in the testbench to run. This is a deterministic, scalable, repeatable, and manageable process—and with the CDD add-on, it becomes even more automated.

Now, instead of running a bunch of e tests with Specman/Xcelium and crossing their fingers, an engineer spends their time analyzing the coverage report and determines from there what needs to be tweaked for the best overall coverage. This adds a bit of time at the start, but the amount of time subtracted from the body of the verification process is significantly greater.

In short, this tool automates the analysis, construction, and execution parts of the verification process. An engineer defines what functional coverage would be for a given testbench, and then that testbench fills itself with tests and runs them.
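
To make this concrete, here is a rough, self-contained sketch in e of what such a coverage definition might look like. The unit, fields, and names below are hypothetical illustrations, not code from Infineon's paper:

type cdd_kind_t : [READ, WRITE, NOP];

unit cdd_monitor {
   event pkt_done;                      -- the collection event
   cur_kind : cdd_kind_t;
   cur_len  : uint(bits : 8);

   -- CDD analyzes the holes in this coverage model and builds
   -- sequences that target the empty buckets
   cover pkt_done is {
      item kind : cdd_kind_t = cur_kind;
      item len : uint(bits : 8) = cur_len using
         ranges = {
            range([0..15], "short");
            range([16..255], "long");
         };
      cross kind, len;
   };
};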

The full paper describing this tool from Infineon—which won the “Best Paper Award” in the IP and SoC track at CDNLive—is available here.

Work Flow with CDD

The algorithm used by CDD works in four steps:

1. Read coverage information from previous runs. This helps the algorithm “learn” faster and saves engineer time between loop iterations.

2. Detect coverage holes using that information; this results in a re-ordering of events.

3. Build a database of sequences that set coverage items to the locations of the holes and then trigger the collection events (a minimal sketch of this step follows the list).

4. Apply stimuli from the database to the DUT.
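
As a minimal illustration of step 3 (again hypothetical, extending the sketch above rather than showing Infineon's actual CDD database code), a hole-targeting action in e could look like this:

extend cdd_monitor {
   -- Set the coverage fields to a value that lands in an uncovered
   -- bucket, then emit the collection event so the hole gets filled.
   cover_hole(target_kind : cdd_kind_t, target_len : uint(bits : 8)) is {
      cur_kind = target_kind;
      cur_len  = target_len;
      emit pkt_done;
   };
};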

Figure A: The work flow under the CDD algorithm.

 

Results

As it turns out, this tool has proven effective: it has already been used successfully in real projects. That means that—assuming everything is configured correctly—a test can drive the correct sequence, using the correct constraints, by itself. It can then set the coverage items to the right value and trigger collection events, increasing coverage.

All of that is automatic.

There are some costs. A verification engineer still has to feed inputs to the algorithm to train it, specifically information regarding when to trigger a collection event, and how to set values for each item in a given coverage group. The biggest pull here is that the major downside of randomization—the uncertainty—is no longer an issue; it can be planned via the CDD algorithm. It also does not take away the advantages of randomization.

In total, in exchange for a bit more work in building the database and Specman/e sequences, you gain faster coverage closure, earlier identification of coverage groups not included in a given test generation, and a better understanding of the DUT itself.

Looking forward, this technology may expand to be able to derive information in the database—like sequences, constraints and timings—from previous runs, instead of just coverage information, which would further reduce the time an engineer spends manually interacting with the testbench. Beyond that, an additional machine-learning algorithm that uses the coverage model and “previous experience” may be able to create and drive meaningful stimuli to patch the remaining coverage holes, even further reducing engineer meddle-time.

Put On Your Perspectacles: How Perspec Can Speed Up Your Testbench


It’s no secret that the bulk of time running simulations is spent on the testbench side. You can throw as many cores as you want at the DUT when you’re running RTL tests, but let's face it, it’s the testbench itself that’s bottlenecking you. Let’s say it takes thirty minutes to run the testbench side, and thirty minutes to run the tests themselves. You can use multi-core parallelism to massively shrink the DUT time, but at the end of the day, those RTL tests will still take at least thirty minutes per iteration.

Without changing how the testbench itself runs, simulation time can only ever asymptotically approach the testbench time. This is true for the GLS stage too. With a thirty-minute testbench time, and, say, five hours and thirty minutes of tests, you’re looking at a total of six hours per iteration—six times longer than our RTL example. In that RTL example, using multi-core parallelism over an arbitrarily high number of cores puts you close to a 2X speedup. For GLS, an arbitrary amount of multicore parallelism could, in this simplified example, bring you down to roughly a thirty-minute runtime, which is equal to about a 12X speedup. Sounds great, right?

While that appears to be a huge amount, you’ve got to factor in that the testbench side doesn’t care how many cores you’re running the tests on—you can’t parallelize those processes, and that’s going to take thirty minutes to run per iteration regardless of how many cores you have. You still can’t reduce the run time to less than thirty minutes.

Once you get to the DFT ATPG side of things, the actual tests being run are enormous. Fault analysis takes ages—in this case, around three weeks, which is functionally eons in dev-time terms. Running those tests on multi-core shrinks that down a lot—but the DFT ATPG step isn’t why you’re sweating about what put your schedule behind the deadlines. The real issue lies with the GLS section and that obnoxious little half-hour of testbench time that has to be re-run over and over again.

With modern SoCs, the issue becomes even more complicated. Chips nowadays have to be tested with full-chip, comprehensive real-world use cases. Consider, for instance, the use case of viewing a video while uploading it. On the surface, it seems fairly simple, but when you move down to the system level, lots of things are happening: the video buffer has to be converted to MPEG4 format with a certain resolution using whatever graphics processor is available, then that has to be transmitted through the modem via a communications processor—and while that’s happening, it has to be decoded using another available graphics processor so the video can be displayed on the screen.

It takes a very heavy testbench to push all those events—so wouldn’t it be great if that block of testbench time could get shrunken down, too?

Perspec Can Help

Luckily, the Cadence Verification Suite has a tool for that. Enter Perspec: the tool for creating use-case C tests. Most designs nowadays come with integrated cores, sourced from third parties. You already know those cores work, since they’re sourced from third-party vendors—so tests aren’t really written for them. Perspec leverages those internal cores to drive the test activity from within the design—and this activity can be accelerated as part of the DUT, while at the same time off-loading the compiled-code simulator by making the non-acceleratable (NACC) testbench portion much smaller. This allows those embedded cores to do the work of pushing the tests instead of the testbench, which takes a huge amount of data off the buses between the testbench and the design. By utilizing the CPU to exercise the design, Perspec effectively turns your real-world testbench with embedded cores into a lightweight testbench—which means you can actually cut that testbench time down to size, and maybe panic a little bit less when you start into the fault detection phase.

Before, when using multi-core simulation, you were stuck running the testbench on a single core, because only a small amount of the testbench code can be parallelized. This gave rise to the “two knobs” notion mentioned in an earlier blog—where one “knob” was the number of single core machines dedicated to improving the efficiency of that side, and the second “knob” was the number of cores a given DUT was parallelized over. Engineers could only adjust that first knob so much since machines are so expensive. Thanks to Perspec, though, the use-case tests can reduce the execution time without simply throwing more machines at the problem—which leaves more resources to twist up the “cores” knob.

To look at the Chalk Talk for Perspec, check here.

To see how others are using Perspec, look here.

A Brief Introduction to Xcelium


Welcome to the XTeam blog! We are a team of bloggers dedicated to showcasing the newest in parallel simulation technology with the Xcelium Parallel Simulator. In this blog, we will bring you informational and technical articles regarding Xcelium’s features, such as X-propagation technology, save and restore, and incremental elaboration, in addition to information about innovations and general improvements over existing simulators. We will also discuss the improvements brought on by the move to multi-core technology in simulations, Xcelium in low power environments, and digital mixed-signal use cases.

Xcelium is the EDA industry’s first production-ready third generation simulator. Based on innovative multi-core technology, Xcelium allows SoCs to get from design to market in record time. With Xcelium, one can expect up to 5X improved multi-core performance, and up to 2X speed-up for single-core use cases. Backed by early adopters’ success stories from a wide variety of markets, Xcelium is already proving to be the leading simulator in the industry.

For now, though, you can find out more about Xcelium's features and capabilities in the Xcelium Parallel Logic Simulation datasheet.
