Channel: Cadence Functional Verification

Xcelium's New Save and Restart Saves You Time


You may have heard about the overhaul to the old save/restart mechanism that was in Incisive—but are you aware of what the new Xcelium Simulator version can do?

While the old version is still supported, there are a lot of benefits to shifting to the new one. Now called “checkpointing,” this new system uses a process-based implementation and is overviewed in the app note, Using Save/Restart Checkpointing with Xcelium.

If you’re still using that old save/restart functionality, you’re probably already aware of some of its issues and shortcomings. You could only save or restart at a “clean” point, which limited when you could actually save. It also didn’t save any state from your external code—so your C/C++/SystemC code had to be re-run, and you had to manually scrub your output files. There were also certain basic system tasks that weren’t supported.

Xcelium’s checkpointing system solves these issues and others, creating a smoother, better-integrated solution that’s a good fit for any environment.

To enable the new checkpointing system, just use the -checkpoint_enable run-time switch. Once you’ve done that, there are a couple of ways to invoke it.

From Tcl, just use the save command as before. You can also put the $save system task in your Verilog code. If that task is present at elaboration time, checkpointing will be used to save the snapshot wherever $save is called.

With checkpointing, the memory state of the entire model is saved—no more worrying about your external code! This includes the state of any files being read or written. That means the read/write pointers of any associated files are saved. In a warm restart, your output will pick up right where it left off at the save action. In a cold restart, you can choose to start with a fresh log file by specifying a different log file name from the one used at the time of save, or opt for a continuous log file spanning from before the save through post-restart by using the same one.
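Conceptually, saving a file's read/write pointer is like recording its position at the checkpoint and seeking back to it on a warm restart. Here is a minimal Python sketch of that idea (an analogy only, not how Xcelium implements checkpointing):

```python
import os
import tempfile

# A small "stimulus" file the testbench is reading.
fd, path = tempfile.mkstemp()
os.close(fd)
with open(path, "w") as f:
    f.write("line1\nline2\nline3\n")

f = open(path, "rb")
assert f.readline() == b"line1\n"

saved_pos = f.tell()                 # "checkpoint": remember the read pointer

assert f.readline() == b"line2\n"    # simulation continues past the save
f.seek(saved_pos)                    # "warm restart": restore the pointer
assert f.readline() == b"line2\n"    # reading resumes where the save happened

f.close()
os.remove(path)
```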

There are two issues still in the works with Xcelium’s new checkpointing. One is that the new images created through this checkpointing system are much larger than the old ones. That said, you can use the -zlib option with xrun to compress the image. The images are still very large even when compressed, though, so be sure to watch your disk space usage if you’re saving multiple times. You can read more about -zlib in the Xcelium User Guide.

If you need multi-threading, stay tuned—it will be available in the next few weeks.

If you want to read more about Xcelium’s new save/restart functionality, check out the app note here.

 


Leading the Charge: Cadence Announces New Verification IP for UFS 3.0, CoaXPress, and HyperRAM


Today, Cadence announced three new VIPs, two of which are industry firsts! Cadence revealed the first available VIP for CoaXPress high-speed imaging and the first available VIP for HyperRAM high-speed memory. We also unveiled a new VIP for JEDEC Universal Flash Storage (UFS) 3.0. Together, these three new VIPs let early adopters leap off the starting line in the race to create new, incredible SoCs and IPs incorporating these revolutionary technologies.

CoaXPress is great for imaging technology needs. With the rise of automated driving technology, lightning-fast transfer and analysis of videos and images is more important than ever—and with speeds of up to 6.25 Gbit/s, CoaXPress is uniquely positioned to fill that need. Combine that with Cadence’s TripleCheck technology, and you’ve got a VIP worth integrating into your next project.

HyperRAM is a HyperBus-based memory with read speeds of up to 333 MB/s. Automotive, industrial, and consumer applications that need a small footprint are a great fit for the HyperRAM VIP.

The UFS 3.0 spec doubles the throughput bandwidth from 1.3 GB/s (in the old spec) to 2.6 GB/s. This will be a big help for the rapidly growing bandwidth needs of low-power SoCs—for both automotive and mobile designs. And don’t worry—this VIP still utilizes the Cadence TripleCheck technology you know and love.

To read the full press release for all three of these new VIPs, check here.

Empowering Generation - Range Generated Fields (RGF)


The Specman constraint solver process consists of a series of reductions and assignments. It reduces the range of a field’s value based on the constraints, and then assigns it a random value from the reduced range. After assigning one field, the ranges of all connected fields are reduced accordingly, and the process continues until all fields are assigned.

A new feature added in Specman 18.03 – Range Generated Fields (RGF) – allows a distinction between the reductions and the assignments. With RGF, the constraint solver performs the reduction as usual, but we have the option to do the assignment ourselves.

Let’s look at a simple example of the reduction and assignment process. The example defines three fields and some constraints.

  val_x  : uint;
  val_y  : uint;
  val_z  : uint;

  keep val_x in [5000..10000];
  keep val_z in [500..8000];
  keep val_y >= val_x;
  keep val_y <= val_z;

 The steps of generating one item are shown below (this information can be easily retrieved using the “trace gen” command):

reducing: val_x -> [5000..10000]       // applying constraint #1
reducing: val_z -> [500..8000]         // applying constraint #2
reducing: val_y -> [5000..8000]        // applying constraints #3 & #4
reducing: val_x -> [5000..8000]        // applying constraints #3 & #4
reducing: val_z -> [5000..8000]        // applying constraints #3 & #4
assigning val_z: [5000..8000] -> 6419  // picking randomly from range
reducing: val_y -> [5000..6419]        // applying constraint #4
assigning val_y: [5000..6419] -> 5693  // picking randomly from range
reducing: val_x -> [5000..5693]        // applying constraint #3
assigning val_x: [5000..5693] -> 5471  // picking randomly from range

 

As seen in the example, we usually let the constraint solver do all the work for us, including reducing all ranges and assigning values to all fields. However, in some cases we want to make the final decision ourselves by choosing which value from the reduced range to assign to the field.
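The reduce-then-assign flow traced above can be sketched in plain Python (a conceptual toy model, not Specman internals; the interval arithmetic and ordering are assumptions for illustration):

```python
import random

# Ranges as inclusive (lo, hi) tuples, matching the constraints above.
val_x = (5000, 10000)   # keep val_x in [5000..10000]
val_z = (500, 8000)     # keep val_z in [500..8000]

def intersect(a, b):
    """Reduce: keep only the values allowed by both ranges."""
    return (max(a[0], b[0]), min(a[1], b[1]))

# Reduction: val_x <= val_y <= val_z squeezes all three into [5000..8000].
val_y = intersect(val_x, val_z)
val_x = intersect(val_x, val_y)
val_z = intersect(val_z, val_y)

# Assignment: pick val_z, re-reduce, pick val_y, re-reduce, pick val_x.
z = random.randint(*val_z)
y = random.randint(val_y[0], z)   # keep val_y <= val_z
x = random.randint(val_x[0], y)   # keep val_y >= val_x

assert 5000 <= x <= y <= z <= 8000
```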

In this blog post, we will look at some example flows in which Range Generated Fields are helpful. In these examples, we make the final decision of assigning a value from the reduced range to the field ourselves.

Rapid Creation of Altered Input

When testbench performance is crucial, for example in acceleration verification, we look for ways to speed up input generation. On one hand, we want to use the constraint solver, ensuring that only legal data is created. On the other hand, using the constraint solver has a performance cost.

Let’s change the example shown above a bit and mark val_x as an RGF field by writing ~ before it:

  ~val_x  : uint;
   val_y  : uint;
   val_z  : uint;

Now the constraint solver will not only generate the struct, it will also maintain the information of the reduced range of val_x ([5000..5693], for example). After marking the field as RGF, we can generate the struct once and create many copies of it, in each of them picking one value from the reduced range and assigning it to val_x. This way we create many instances that are almost identical, differing only in one essential field. We know all instances are legal, because we pick values only from the reduced range as calculated by the constraint solver.

To get the reduced range, we use a new method of any_struct – rgf_get_range(). This method returns the reduced range of the field. Note that this method is valid only for fields marked as RGF. To assign the field, we use the new assignment operator rgf= rather than a regular assignment (=). With rgf=, Specman checks that the assigned value is in the reduced range as calculated by the constraint solver.

   main_tcm() @clk is {
      var trans : trans;
      gen trans;
      // get the reduced range of val_x
      var reduced_range :=  rgf_get_range(trans.val_x);  

      var copied_trans : trans;
      var one_val_x : uint;
      
      // create many copies, in each of them give a different
      // value picked from the reduced range
      for i from 0 to 10000 {
         copied_trans = trans.copy();
         gen one_val_x keeping {
            it in reduced_range;
         };
         copied_trans.val_x rgf= one_val_x;
         send(copied_trans);
      };
   };  

 “Optional Assignment” - Does the user really want this value?

As described above, the constraint solver assigns values to all fields. If there are no constraints on a field, the constraint solver picks a random value from the type’s initial range. In some cases, you want to know whether the field got its value as a result of user constraints; if no constraints were applied to it, the testbench should apply some predefined policy. Until now, to implement this methodology you would add a designated field, e.g. “use_address”, and instruct the test writers to constrain this field to TRUE, indicating “yes, I wrote constraints on the address field”. If use_address is FALSE, it means the value was assigned fully randomly by the constraint solver, and in this case the testbench would apply some predefined policy.

Using RGF, we do not need such an auxiliary field. We mark the field as RGF and then we query whether the field range was reduced.

In the following example we use two new methods of any_struct. The first is rgf_was_generated(), indicating whether the field was evaluated by the constraint solver (and not created using ‘new’). After establishing that it was indeed generated, we call rgf_was_reduced(), which returns TRUE if the field’s final range differs from the original range (indicating that constraints were applied to it).

   struct trans {
      ~address : address_t;
      // other fields …
   };

   unit agent {
      send(t : trans) is {
        if rgf_was_generated(t.address) {
          if rgf_was_reduced(t.address) {
             // reduced, meaning there were user constraints
             // on it, so send as is
          } else {
             // no user constraints were applied on the address field
             // so execute my default addressing policy
             t.address rgf= me.config.get_prefered_address();
          };
        };
       //// continue with sending this trans ….
     };
   };

Combining Power of Constraints with Other Algorithms

The last use model we will describe here is one in which we want to combine the power of the constraint solver with some complex algorithm. Some algorithms can be expressed with constraints, but this is not always the case. Today, you have to decide whether a field is to be constrained or to be assigned procedurally by some smart algorithm. Using RGF, we can mix both: let the constraint solver reduce the range of the field, and then call some smart algorithm (even one implemented in another language) to pick one value from the reduced range.

In the following example, the mem_block kind field is marked as RGF. After the mem_block is generated, we allocate it in memory using a C routine named alloc_memory(). We pick random values from kind’s reduced range until the memory allocator succeeds in allocating a block of the requested kind. (To keep the code example short, we skip the code required to ensure we do not get into an endless loop.)

   routine alloc_memory(kind : mem_kind, block : mem_block): bool is C routine alloc_mem;

   struct mem_block {
      ~kind : mem_kind;
       size : uint;
      // additional fields and constraints …
   };

   extend agent {
      add_mem_block(block : mem_block) is {
         if rgf_was_generated(block.kind) {
            var a_kind : mem_kind;
            var assigned : bool = FALSE;
            while not assigned {
              gen a_kind keeping {it in rgf_get_range(block.kind)};
              if alloc_memory (a_kind, block) {
                assigned = TRUE;
              };
            };
            block.kind rgf= a_kind;
         };
      };
   };

As the above examples show, Range Generated Fields (RGF) add extra power to the generation process. When analyzing your environment and methodology to decide which fields should be marked as RGF, keep in mind a few considerations:

  • Firstly, there cannot be more than one RGF field in a constraint.
  • Secondly, if there is an unconditional == constraint between an RGF field and a generative field (i.e. constraints such as "keep sz == len/3", when 'sz' is an RGF), then RGF's range will always be a single value. Therefore, in these cases there is no advantage in marking such a field as RGF.

We are sure you will find great ways to employ this powerful new RGF capability and continue to enjoy verification using Specman. 😊

 

App Note Spotlight: Streamline Your SystemVerilog Code, Part II - SystemVerilog Semantics


Welcome back to a special multi-part edition of the App Note Spotlight, where we’ll continue highlighting an interesting app note that you may have overlooked—Simulation Performance Coding Guidelines for SystemVerilog. This app note overviews all sorts of coding guidelines and helpful tips to help optimize your SystemVerilog code’s performance. These strategies aren’t specific to just the Xcelium Parallel Simulator—in fact, they’ll help you no matter what simulator you’re using.

In this section, we’ll talk semantics—but these aren’t throwaway details, they’re SystemVerilog semantics. These are some small things that can show up here and there that can speed up your code without a whole lot of extra effort.

1)      Be Explicit with the Storage of Logic Types

SystemVerilog has a special data type called logic. When a logic type is being used as a wire, you want to be explicit about it; otherwise, SystemVerilog semantics give you a variable. You don’t technically have to explicitly declare whether you’re using wire storage or variable storage, as the storage can be determined by context, but it’s much faster to choose one at the declaration. The biggest slowdowns caused by implicitly declared logic types occur when a logic type is used as a wire without actually being declared as such. There’s a pretty simple solution here: declare it as a wire!

As a special note: when you’re profiling your design, (that’s with -profile, or -xmprof if you’re using Xcelium), places where this issue occurs will be flagged as Anonymous Continuous Assignments (ACAs).

2)      Avoid Bit-Blasting Vectors

Here’s another simple one: run your operations on the full vector instead of operating on individual bits whenever possible. While SystemVerilog has constructs, such as generate blocks, that make it easy to operate on individual bits of a vector, it’s often just as easy to run your operation on the full vector instead, and it’s faster, too. The compiler generally won’t give you trouble if your use case is simple.
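The cost difference is easy to feel in any language; here is a Python analogy (not SystemVerilog, just an illustration) of one whole-vector XOR versus the same XOR rebuilt bit by bit:

```python
import random

WIDTH = 1024
a = random.getrandbits(WIDTH)
b = random.getrandbits(WIDTH)

# Full-vector operation: a single XOR over the whole 1024-bit value.
whole = a ^ b

# "Bit-blasted" version: the same XOR, rebuilt one bit at a time.
blasted = 0
for i in range(WIDTH):
    bit = ((a >> i) & 1) ^ ((b >> i) & 1)
    blasted |= bit << i

assert whole == blasted   # identical result, ~1024x the bookkeeping
```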

3)      Minimize String Formatting

SystemVerilog does provide string objects, but formatting these strings is a very expensive operation. String objects have a lot of useful features for writing messages in your verification environment—so using them occasionally isn’t the end of the world—but keep them to a minimum.

As an example: make sure your code does formatting only in the event that the string object is actually used. And when you are running regressions, where farm throughput is critical, make sure you are not wasting time formatting messages instead of doing real work.
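The “format only when used” idea can be sketched with Python’s logging module (an analogy for the SystemVerilog situation; the expensive_dump helper is hypothetical):

```python
import logging

logging.basicConfig(level=logging.WARNING)  # DEBUG messages are filtered out
log = logging.getLogger("tb")

calls = 0
def expensive_dump():
    """Stand-in for costly message formatting."""
    global calls
    calls += 1
    return "<long transaction dump>"

# Wasteful: the argument is formatted even though DEBUG is filtered.
log.debug("state = %s" % expensive_dump())

# Better: format only if the message would actually be emitted.
if log.isEnabledFor(logging.DEBUG):
    log.debug("state = %s", expensive_dump())

assert calls == 1   # only the unguarded call paid the formatting cost
```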

4)      Pass Array Objects by Reference

SystemVerilog doesn’t have “pointers” as they’re understood in other languages. That means you can’t create a reference to an object through pointer syntax, but you can declare a function or task argument as a reference. When the object is simple and small, like an integer or a class handle, it’s faster to pass it by value; for larger objects such as arrays, passing by reference is the less expensive operation.

That’s all we have for today—check back soon for a couple more helpful tips!

RAK Attack: Verifying Power Intent for Low Power Mixed Signal SoCs


The wait is finally over—the Rapid Adoption Kit (RAK) for verifying the power intent of low-power mixed-signal SoCs is here! The RAK is a tutorial designed to show clearly, through example, how to verify one of the most critical technology convergences in IoT devices. It covers the differences in verifying a processor-based design powered internally versus externally, how to verify SPICE, SystemVerilog real number models, AMS, and Verilog models in the same environment, and the new UPF 2.0 features for power intent specification, and it highlights issues you might face while verifying low-power SoCs.

Of course, all of this IP was developed by Cadence, and the testbench was created using UVM.  The source code is Apache-licensed to make it easier for you to use.

In the example provided by the RAK, the top level of the SoC is a processor-based design where an off-chip voltage regulator drives the power supplies on the chip. This design is referred to as the CORE.

                Figure 1: Block Diagram and Low Power Architecture

Off-chip, the voltage regulator provides 5V power to CORE. Inside CORE are two blocks: ANALOG_TOP and DIGITAL_TOP. ANALOG_TOP has the on-chip power supplies, while DIGITAL_TOP contains the processor (PROC) and its subsystem.

                Figure 2: Block Diagram of ANALOG_TOP

How is the power reduced? There are two main ways: through Power Shutoff (PSO), where the parts of a design that aren’t in use are shut down to conserve power, and through Multi-Supply Voltage (MSV), which lowers the supply voltage when the timing constraints allow it without sacrificing performance.

The different power needs for different parts of the chip are called the “power intent”, which is specified using the Unified Power Format (UPF) 2.0.

Usually, in mixed-signal SoCs, the performance of on-chip power supplies isn’t all that important to the digital IP. In low-power simulations, however, those on-chip power supplies become a lot more relevant, as they’re being toggled on and off to save power regularly. This means that low-power mixed signal simulations can take advantage of the variable power intent modeled by UPF.

So—how does UPF model power intent? Through supply nets, which are modeled as SystemVerilog structs containing a state field and an integer “voltage” field that holds the value of the voltage in microvolts.

There are four different states that can be mapped to the “state” field of the supply net: OFF, UNDETERMINED, PARTIAL_ON, and FULL_ON. In this screenshot of the RAK, you can see the state of the VDD_5V net transition from OFF to PARTIAL_ON, among other state transitions described in the “Power Up” power state.

                

                Figure 3: The “Power Up” power state
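A supply net of this shape can be modeled in a few lines of Python (a conceptual sketch of the struct described above, not the actual UPF/SystemVerilog definition):

```python
from dataclasses import dataclass
from enum import Enum

class SupplyState(Enum):
    OFF = 0
    UNDETERMINED = 1
    PARTIAL_ON = 2
    FULL_ON = 3

@dataclass
class SupplyNet:
    state: SupplyState = SupplyState.OFF
    voltage: int = 0                    # in microvolts, as in the UPF struct

# A power-up ramp like the VDD_5V transitions shown in Figure 3:
vdd_5v = SupplyNet()
vdd_5v.state = SupplyState.PARTIAL_ON
vdd_5v.state = SupplyState.FULL_ON
vdd_5v.voltage = 5_000_000              # 5 V expressed in microvolts

assert vdd_5v.state is SupplyState.FULL_ON and vdd_5v.voltage == 5_000_000
```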

This RAK’s example SoC passes through six power states—Power Up, Load SRAM, Power Up PROC, Low Power Mode Power Down, Low Power Mode Power Up, and LDO Shutdown. Check out the RAK manual for pictures and additional information regarding each power state.

If you have questions about verifying power intent for low-power mixed-signal SoCs, this RAK is exactly what you’re looking for. Check it out here.

Come Join Us for "Deep Dive into the UVM Register Layer" - A Webinar From Doulos


Join us on September 14th for a free one-hour webinar on the finer aspects of the UVM register layer. We’ll be focusing on key aspects of the UVM Register Layer that can help you with your UVM modeling in ways you may not be aware of.

We’ll be covering the following topics:

  • How to use user-defined front doors and back doors to expand what the register layer can do
  • Understanding the role played by the predictor, and how to use it with the aforementioned user-defined front doors
  • Using register callbacks to help model quirky register behaviors, alongside the side-effects of register read/writes
  • What changes you can or can’t make to UVM code while preserving random stimulus generation

Combined, the information covered in these topics can make you a better user of the UVM register layer. Code examples shown during the webinar can all be run with our Xcelium Parallel Simulator.

Come join in!

For more information on this webinar, and for available times on September 14th, check out the link here.

App Note Spotlight: Streamline Your SystemVerilog Code, Part IV - Dynamic Objects


Welcome back to the fourth installment of a special multi-part edition of the App Note Spotlight, where we’ll continue highlighting an interesting app note that you may have overlooked—Simulation Performance Coding Guidelines for SystemVerilog. This app note overviews all sorts of coding guidelines and helpful tips to help optimize your SystemVerilog code’s performance. These strategies aren’t specific to just the Xcelium Parallel Simulator—in fact, they’ll help you no matter what simulator you’re using.

Today, we’ll be highlighting dynamic objects.

1)      Don’t do so much in object constructors.

SystemVerilog, like other object-oriented programming languages, allows you to use constructors to initialize dynamic objects when they’re instantiated. This is a lot more convenient than manually setting up your objects each time. Technically, you can put whatever you want inside a constructor—however, this is not advised. Any code put into a constructor runs for every object of that type, so only put code in the constructor that every object of that type requires for initialization.
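The principle looks the same in any object-oriented language. Here is a Python sketch (the Packet class is hypothetical) that keeps the constructor minimal and defers the expensive part until first use:

```python
class Packet:
    def __init__(self, size):
        # Keep the constructor lean: every Packet pays for this code.
        self.size = size
        self._payload = None        # defer the expensive allocation

    @property
    def payload(self):
        # Built lazily, only by objects that actually need it.
        if self._payload is None:
            self._payload = [0] * self.size
        return self._payload

p = Packet(1024)
assert p._payload is None           # nothing expensive has happened yet
assert len(p.payload) == 1024       # allocated on first use only
```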

2)      Make copying the consumer’s responsibility

Making deep copies is a large-overhead operation involving heap management, garbage collection, and data initialization. Generally, objects are passed around by reference in SystemVerilog. It’s typical for the producer to make a deep copy of an object before passing it along, and it’s also typical for consumers to make another deep copy if they need to keep a safe one around. That means a lot of expensive operations happen that don’t accomplish anything particularly important. Instead, leave the copying of objects to the consumers that require the full object, and have consumers minimize the data they copy to what they really need. Then the producer can reuse objects when applicable, and consumers, which usually don’t need all of an object, just take what they need.

3)      Make fewer objects through object pools or singleton objects.

If you use a dynamic object a lot, the overhead from regenerating it constantly can add up. Sometimes, you can use a singleton object or an object pool instead of a uniquely created dynamic object, and you can shrink your overhead this way. Keep an eye out for opportunities to do this.
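An object pool is straightforward to sketch. Here is a Python toy version (the transaction dictionary is a stand-in for a real class object):

```python
class TransactionPool:
    """Hand out recycled objects instead of constructing new ones."""
    def __init__(self):
        self._free = []
        self.created = 0

    def get(self):
        if self._free:
            return self._free.pop()
        self.created += 1
        return {"addr": 0, "data": 0}   # stand-in for a transaction object

    def put(self, obj):
        self._free.append(obj)          # return the object for reuse

pool = TransactionPool()
for _ in range(10_000):
    t = pool.get()
    t["addr"] += 1                      # ... use the transaction ...
    pool.put(t)

assert pool.created == 1                # one allocation served 10,000 uses
```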

4)      Use structs for basic data used in classes

Since classes are heap objects, they have a lot of overhead compared with simple structures. If you’re passing data elements around and operating on them a lot, consider using structs instead, as they don’t require garbage collection. Inside classes, it is often convenient to use structs (packed or unpacked) for metadata used by the class.

5)      Work in interfaces instead of class tasks

Interfaces are common in verification: they allow class objects to attach to a structural part of the design. Often, verification components that directly interact with signals, like drivers and monitors, contain state machines and similar code that applies to interface signals. Keeping such state machines in your class tasks has three main drawbacks: it reduces code reuse, since all class objects that do similar work on the same interface types have to replicate the work; it’s a little slower, because classes are dynamic objects whereas interfaces are static objects; and classes aren’t synthesizable, so you can’t put them in hardware.

That’s all we have for today—check back soon for the next installment!

Specman 18.09: Avoiding the Small Annoying Mistakes



In almost every industry, one can make a small mistake that costs hours or days to find. The following interesting article takes small mistakes to the extreme and mentions a few cases of small mistakes that had a huge effect: Messing up big time: 10 tiny mistakes that have caused HUGE problems.

How is this relevant to Specman? As a verification engineer, you want to avoid silly mistakes (mistakes that may occur regardless of how smart you are). True, these mistakes usually don’t end in a catastrophe like those in the article, but they might cost you hours of debugging and loads of frustration that ends with saying “I can’t believe I spent the whole day looking for this careless mistake” or “Who is the guy that messed up the order of these values? Ha… it was me...”.

This blog is about avoiding errors around equivalent enums in the HDL and the testbench. It is a very common requirement to create an enumeration in e to drive (through a port) an equivalent HDL enum. For example, assume you have a SystemVerilog enum of type:

typedef enum  {M_IDLE = 2'b00 , M_DATA = 2'b10 , M_ADDR = 2'b01 , M_END = 2'b11} rcvstate;

Then you need to define in e the following type:

type rcvstate : [M_IDLE = 0b00, M_DATA = 0b10, M_ADDR = 0b01, M_END = 0b11];

In Specman 18.03, we added the set e2hdl checks command. This command compares two existing enum definitions (one in the HDL and one in the testbench) to ensure that they are compatible. The purpose of this command is to ensure that if someone changes one of these enums (assuming you run this command on a regular basis), you will know about it right away, saving the time you would otherwise have spent debugging to understand what went wrong. You can read more about this in the blog: Enum compatibility error in Specman.
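What such a check boils down to can be sketched in Python (a toy analogue of the idea; the real comparison is done by Specman’s set e2hdl checks command):

```python
# Name -> value maps for the HDL enum and its e testbench equivalent.
hdl_enum = {"M_IDLE": 0b00, "M_DATA": 0b10, "M_ADDR": 0b01, "M_END": 0b11}
tb_enum  = {"M_IDLE": 0b00, "M_DATA": 0b10, "M_ADDR": 0b01, "M_END": 0b11}

def check_compatible(a, b):
    """Report every name or value mismatch between two enum definitions."""
    problems = []
    for name in sorted(a.keys() | b.keys()):
        if name not in a or name not in b:
            problems.append(f"{name}: missing on one side")
        elif a[name] != b[name]:
            problems.append(f"{name}: {a[name]} != {b[name]}")
    return problems

assert check_compatible(hdl_enum, tb_enum) == []     # compatible today

tb_enum["M_DATA"] = 0b01    # someone silently reorders the values...
assert check_compatible(hdl_enum, tb_enum) == ["M_DATA: 2 != 1"]
```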

In Specman 18.09, we took this functionality one step further by adding a new command, write type_map. This command saves you the time of defining the enums in the testbench in the first place. It extracts the HDL data and creates the equivalent e enum for each enum in the design. The write type_map command takes the snapshot as an input and creates an e package file with all the e enumerations. Naturally, this means you can use this command only after elaboration.

Let’s take an example. Assume we have a DUT with an SV module called types that includes the following enums:

typedef enum integer {IDLE=0, GNT0=1, GNT1=2} state;

typedef enum int {ADD = 1,SUBTRACT = 3,MULTIPLY = 7} cmd ;

typedef enum bit [3:0] {bronze='h1, silver, gold='h5} newMedal;

When we elaborate the design and run the commands:

xrun -elaborate top.sv 

specman -c "write type_map worklib.top:sv"

An e file called top.e is created with the following content:

<'

package types;

type state : [IDLE =0, GNT0 =1, GNT1 =2](bits:32);

type cmd : [ADD =1, SUBTRACT =3, MULTIPLY =7](bits:32);

type newMedal : [bronze =1, silver =2, gold =5](bits:4);

'>

This way we eliminate the chance of making an error while defining the enums (most likely in the order of the enum values).

This command has a few options you can use. You can read about them in cdnshelp.

We strongly recommend that you have a look at all the new features of Specman 18.09 in “What’s New in Xcelium 18.09”.

Orit Kirshenberg

Specman Team

 


Improving Your Testbench Flexibility with Enhanced Specman Templates


Cadence® Specman® Elite delivers faster, higher quality verification at the block, chip, and system levels. The tool is cloud ready, supports industry-standard verification languages, and is compatible with the Open Verification Methodology (OVM), the Universal Verification Methodology (UVM), and the e Reuse Methodology (eRM), so you can quickly and easily integrate it with established verification flows. Attend our FREE webinar, Improving Testbench Flexibility with Enhanced Specman Templates, with Specman Elite expert Daniel Bayer on Tuesday, October 30, 2018 at 8:00am PDT, and learn how to:

  • Create new and enhanced sequence infrastructure using templates
  • Manage rapid UVC development using templates

A Q&A session will follow. For any issues with registration, please contact training_enroll@cadence.com. Registration deadline is October 28. For a selection of Specman training options, please visit www.cadence.com/training.

REGISTER NOW

Speed Up SystemVerilog UVM Debug Regression Time with Dynamic Test Load


Microsemi has been evaluating a unique Xcelium feature, SystemVerilog UVM Dynamic Test Load, for some time now, and they shared their thoughts on it in a paper presented at CDNLive San Jose in April 2018.

One of the fields Microsemi operates in is the realm of optical networking. In optics, a given sensor’s input can vary wildly, but the way the input is processed is largely the same. Thus, a wide variety of tests are required to fully test a design for an optical sensor’s chip, and that implies the need for a lot of simulation.

Testing each input, and recompiling between tests, takes a very long time. To reach maximum coverage, though, designs still need to be verified thoroughly. How did Microsemi solve this issue?

In comes SystemVerilog / UVM Dynamic Test Load!

Before this update, SystemVerilog was very slow to debug: it required you to recompile after each test, and it only had simple peek, poke, and force options. SystemVerilog still has these limitations, but now Xcelium doesn’t! Dynamic Test Load is a solution within Xcelium that addresses those issues, making SystemVerilog UVM easier to use.

What was added that makes this new update so cool, then?

SystemVerilog UVM Dynamic Test Load allows you to load new UVM sequences via SystemVerilog packages into a saved snapshot (saved simulation state), and call testbench functions when that snapshot is reloaded.

There are two types of dynamic snapshots: a dynamic base snapshot (DBS), which is the snapshot saved at time zero, and a dynamic test snapshot, which contains whatever new SystemVerilog packages you want to use. Use the $save("snapshot_name") system task in the testbench to create a snapshot, or use the Tcl save command during runtime (xrun> save snapshot_name).

Dynamic Test Load is still pretty new, so new features are being added all the time. Right now, we’re working on support for saving your snapshot into a different path than the default—that said, Dynamic Test Load does support saving into a different library at this time. Be wary of increases in the size of the CDS library caused by multiple jobs writing to the same library—in the future, there will be a bundled tool that can copy this library, minimizing the risk of corrupting the original library.

How do you use SystemVerilog UVM Dynamic Test Load? First, compile the DUT and the stable parts of your testbench with multi-snapshot incremental elaboration (MSIE). Save a dynamic base snapshot at time zero. Make sure each test case, and all sequences that may change, are in their own packages. Now you’re ready to save and load snapshots as you need.

Doing this the old way, where everything was re-elaborated every time, would take around five minutes per test. With Dynamic Test Load, you can drop that time down to just one minute.

This can speed up your RTL debug turnaround time by around 90%, with an average time to failure in simulation of six hours. Overall, it can halve your development time!

If you’re ready to start cutting your development time in half, check out the presentation given at CDNLive by Microsemi here.

DMS 2.0 - What's Cool and What's New


Are you aware of all the cool new features in Digital Mixed Signal 2.0 (DMS 2.0)? Provided with the Xcelium Parallel Simulator versions 17.10 and beyond, DMS 2.0 brings you all kinds of new and wonderful features to help you use Xcelium to verify your mixed-signal designs.

The level of interaction between analog structures and digital logic is a lot more complex than it used to be. Divide-and-conquer verification isn’t good enough anymore; analog and digital components are often too intertwined to properly test each individually. Thus, Cadence has put substantial time and effort into solving this problem—and DMS 2.0 is the fruit of our labor.

Cadence is leading the industry with the most comprehensive set of mixed-signal verification capabilities across logic, real number modeling, and electrical abstractions. You may have already used DMS 1.0 in earlier versions of our simulators—but with 2.0, we’re bringing so much more to the table.

DMS 2.0 is an extension of 1.0, so everything you know and love is still there. So, what’s new?

We’ve added advanced real-number modeling features like coercion, support for multiple drivers and resolutions, wreal arrays, support for `wrealXstate and `wrealZstate, real assertions (in both PSL and SVA), multilanguage connections, and amsd control block support for blocks without analog content. There are also new SystemVerilog real-number modeling features, like SystemVerilog real coverage, SystemVerilog compliant nettypes and interconnects, support for multiple drivers and resolutions, support for user-defined data types (UDTs), and user-defined resolution functions (UDRs) in both real and non-real cases, and scalar nets with UDTs and UDRs.

We’ve expanded port-binding capabilities, too.

With DMS 2.0, Xcelium now has the best bi-directional modeling tech for SystemVerilog real-number modeling, the smartest implementation of power intent and distribution (brought to you by UPF/1801 or CPF), as well as SystemVerilog and mixed-signal features like the first SystemVerilog AMS compiler for connect modules and automatic insertion.

And that’s not even covering the new reuse features like advanced test bench and IP reuse.

DMS 2.0 brings a lot of new tech to the table, and if you’re not using it already—in this blogger’s opinion—you’re missing out.

Check out a video overview of what DMS 2.0 brings to the table here.

Is it Time to Verify Your Chips in the Cloud? Part 1 of 3


Welcome to the first installment of a three-part blog series examining the issues and opportunities for performing verification in the cloud.

For a while now, there’s been a growing interest in cloud-based EDA solutions, but the time wasn’t ripe for production deployment. People had concerns about security and the sustainability of the data centers. But nowadays, public clouds are highly secure, typically more secure than on-premise datacenters, which has reenergized thinking about moving design and verification to the cloud.

New chip designs are more complicated than ever, and cloud-based verification solutions can take some of the weight off local machines, allowing smaller companies to have more competitive times-to-market. Time-to-market is, and always will be, critical for a successful SoC development project. Designing and verifying a chip can cost upwards of a billion dollars, and if that chip isn’t ready when the market demands it, any dream of recuperating that cost goes right out the window.

In a survey conducted in 2016, it was found that, on average, 55% of ASIC development time is spent on verification. Past surveys show that this figure is consistent across previous years—but here’s the catch: the flow of fresh engineers into verification is three times that of design. So—even with three times as many engineers entering the field, the share of time spent on verification has stayed flat. Clearly, increasing the engineer headcount can only do so much—a breakthrough is needed, and that breakthrough is coming in the cloud.

But why is the cloud the answer? Can’t companies just have more on-site computers to deal with the increasing machine cost of verification?

Well, no. Not every company can afford that—start-ups can’t, for one, and even for larger companies, replacing or expanding their existing farms can cost huge amounts of money. Even for the largest companies, for whom this sort of spending isn’t that much of an issue, constantly expanding and upgrading your computer resources is inconvenient and cumbersome.

Consider the figure below:

 

Here, we see a picture of a company that upgrades their computer resources before they become out of date. They never have an issue where their computing power isn’t enough; however, they’re spending huge amounts of money on state-of-the-art equipment, and replacing it just as often as everyone else, since the time-cost of installing new computers all the time factors in too.

Now, consider this:

Here, we see what is likely a smaller company, who can’t afford to replace their computers all the time. They get the most out of what they have—but they have to completely stop everything when the demands become too great.

What happens if the company doesn’t have to keep track of their own computer resources, and lets a cloud-based service do it, instead?

In this situation, a company can simply increase the amount of resources they use as they need it. They still have their own resources, of course, but whenever needs become too great, they can use the cloud to keep things running smoothly, as some extra power. This way there’s no great waste in running state-of-the-art equipment all the time, and no downtime when equipment needs to be replaced mid-project.

Now that we’ve set the stage, check back next time to see what Cadence has in store for the future of cloud-based verification!

App Note Spotlight: Streamline Your SystemVerilog Code, Part III - SystemVerilog Data Structures


Welcome back to the third installment of a special multi-part edition of the App Note Spotlight, where we’ll continue highlighting an interesting app note that you may have overlooked—Simulation Performance Coding Guidelines for SystemVerilog. This app note overviews all sorts of coding guidelines and helpful tips to help optimize your SystemVerilog code’s performance. These strategies aren’t specific to just the Xcelium Parallel Simulator—in fact, they’ll help you no matter what simulator you’re using.

Today, we’ll talk about data structures, and how to make sure you’re using the best-optimized method for any use case.

1)      Use static arrays instead of dynamic arrays when the size is relatively constant.

SystemVerilog has a couple of dynamic data structures, each with a static counterpart. The dynamic structures are heap-managed objects, which carries allocation and bookkeeping overhead. Static data structures don’t suffer from this issue, so where the size is reasonably constant, static arrays will perform better than their dynamic counterparts.
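For instance (the sizes and names below are illustrative):

```systemverilog
// Size known and stable: a static array involves no heap management.
logic [7:0] buf_static [0:255];

// Size genuinely varies at runtime: a dynamic array fits, but every
// new[] call allocates on the heap and adds overhead.
logic [7:0] buf_dynamic [];
initial buf_dynamic = new[256];
```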

2)      Use associative arrays when you need to do lookup or random insertion/deletion

Associative arrays have more efficient lookup than other data structures. Often implemented using a tree, they have a complexity of O(log n). This is much, much faster than a queue or array, which has a linear lookup complexity, O(n).

It’s not all fun and games for associative arrays, though. Since they’re implemented as trees, adding or removing an element from the front or back isn’t as simple a process; while queues and arrays can access those areas in constant time, O(1), associative arrays are O(log n) to add or remove an element at any point.

So—if your use case requires a lot of random lookup, and you won’t be inserting or deleting things from the front or back specifically all that often, consider an associative array.
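A sparse memory model keyed by address is a classic case (the addresses and values here are illustrative):

```systemverilog
// Sparse lookup table keyed by address: search, insert, and delete
// at an arbitrary key are all O(log n).
int unsigned mem_model [int unsigned];

initial begin
  mem_model[32'h1000_0040] = 42;          // insertion at an arbitrary key
  if (mem_model.exists(32'h1000_0040))    // efficient random lookup
    $display("read %0d", mem_model[32'h1000_0040]);
  mem_model.delete(32'h1000_0040);        // deletion at an arbitrary key
end
```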

3)      Choose queues when insertions and deletions are mostly at the front or back

Like the above tip suggests, if you only care about adding to the front or back of your structure, a queue is for you. Not only does it access the front or back of the queue in constant time—it can also access an element at any index in constant time. This makes it the absolute fastest option for that use case.
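A minimal sketch of those constant-time operations:

```systemverilog
int q [$];  // a queue of ints

initial begin
  q.push_back(2);         // O(1) insert at the back
  q.push_front(1);        // O(1) insert at the front
  $display("%0d", q[1]);  // O(1) indexed access anywhere in the queue
  void'(q.pop_front());   // O(1) delete at the front
end
```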

4)      Use built-in array functions when needed

This is a fairly simple one—don’t reinvent the wheel on your data structure’s functions. If you’re using a SystemVerilog standard data structure, there’s a good chance whatever operation you’re trying to perform can be done through a built-in function. This extends to querying functions, locator methods, ordering methods, and reduction methods.
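For example, each of those method families is available directly on a queue (variable names are illustrative):

```systemverilog
int values [$] = '{3, 1, 4, 1, 5};

initial begin
  int big [$];
  values.sort();                      // ordering method
  big = values.find with (item > 2);  // locator method
  $display("size=%0d sum=%0d",
           values.size(),             // querying function
           values.sum());             // reduction method
end
```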

5)      Don’t use the wildcard (*) index type for associative arrays

You’re allowed to use the wildcard (*) to make your key a generic integral type; however, this isn’t a good idea if you’re looking for maximum efficiency. Since the wildcard allows you to use any key type, whenever an item is stored in the associative array, the simulator uses a dynamic key type to account for the uncertainty. This has quite a bit of overhead versus a statically sized key. Likewise, the simulator must dereference the dynamic keys to check against the input when you’re doing a lookup or search, and this adds overhead as well.
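In other words (both declarations below are just illustrative):

```systemverilog
// Avoid: the wildcard index forces the simulator to manage dynamic keys.
int lut_slow [*];

// Prefer: a statically sized index type the simulator can compare directly.
int lut_fast [int unsigned];
```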

That’s all we have for today—check back soon for the next installment!

Is It Time to Verify Your Chips in the Cloud? Part 2 of 3


Welcome back to our series on cloud verification solutions. This is part two of a three-part blog—you can read part one here.

The high-performance computing (HPC) market continues to grow. Analysts say that the HPC market will reach almost $11 billion by 2020—that’s an annual growth rate of almost 20%. Cloud providers and other related companies are putting more and more resources into building solutions for this rapidly growing HPC market, which is of great interest to the EDA community.

All of this talk is leading to action—cloud providers are increasing their attention to HPC, which in turn helps create an EDA-appropriate environment for users in the cloud. It’s also a nice confidence boost to the verification and semiconductor design team—it shows them that their needs are being addressed, and that there may be better options for them to do their future work in the cloud.

Now, if you’re in verification—as this is the functional verification blog—how can the cloud help you? There’s a couple of important things to keep in mind as you do your research.

Make sure you’re fully aware of your long-term needs as well as your immediate needs before you partner with an EDA vendor. Just because something works for you now doesn’t mean it’ll work for you in the future. Companies and goals evolve; make sure you’re not getting stuck with subpar tools because you had simple needs when you wrote up your contracts.

Another thing to be aware of is a given provider’s expertise with variable computing requirements. You don’t want to be a new company’s guinea pig with something this vital to your workflow. Keep an eye out for:

1.       Awareness of the best practices associated with creating a secure cloud environment

2.       Products that are validated as cloud-ready

3.       Experience selecting compute and storage instances in the cloud using EDA workload data

4.       The ability to facilitate cloud orchestration

Beyond those things, make sure that your prospective EDA partner has plans to improve their cloud solutions, and to create new cloud solutions, in the future. You want a partner who will take this as seriously as you take it—don’t settle for anything less.

Now, you may be wondering where this is all going.

Tune in next time for our thrilling finale!

Learn How Valens Uses Specman Macros to Automate Configuration of Verification Environments at DVCon EMEA Next Week


Valens has achieved success by applying Specman to their verification projects. At DVCon EMEA (Oct 24-25), you can learn how they use Specman macros to automate configuration of the verification environment for their design. This saves them effort and lowers the learning curve for engineers who jump from project to project. In collaboration with Veriest Verification Ltd, a Cadence Connections Verification partner, they have created a verification environment approach that enables them to fully exercise their device through broader team contributions.

Check out their session 7.1 on Thursday, Oct 25th at 1:15pm in the Forum 6 room.


Cadence Announces Full Cadence Verification Suite Compatibility for Arm-Based High Performance Computing Servers


On October 16, 2018, Cadence Design Systems, Inc. announced that, through a wide-reaching system design enablement collaboration, the Cadence Verification Suite is ready for use on Arm®-based high-performance computing (HPC) server environments. Now, all of the Cadence verification software tools you know and love—including the Xcelium Simulator—can be run on the Hewlett Packard Enterprise (HPE) Apollo 70 system, which uses a processor based on the Armv8-A architecture. There’s a significant cost saving associated with this—nineteen percent—which helps enable new flexibility in user license allocation, unlicensed task execution, and time savings.

If you’re using an Arm server, you now have access to not just Xcelium, but the whole of the Cadence Verification Suite’s prowess—that includes the JasperGold Formal Verification Platform, the vManager Metric-Driven Signoff Platform, the Indago Debug Platform, and the Verification IP catalog.

“HPE Apollo 70 Systems with the Marvell ThunderX2 Arm-based processor provide a new multi-core and high job-throughput hardware choice for the EDA market. We look forward to collaborating with Cadence to offer our joint EDA customers a leading HPC Arm-based solution, enabling them to run the Cadence Verification Suite either on-premise or off-premise,” says Bill Mannel, vice president and general manager overseeing HPC and AI at Hewlett Packard Enterprise.

For more information—and for more glowing endorsements—check out the full press release here.

You can also see the landing page for the Cadence Verification Suite on Arm-based HPC Datacenters here.

UVM-ML- Managers’ Freedom of Choice


Freedom of choice is a term we hear a lot, especially in the last 10 years. It is defined on Wikipedia as “an individual's opportunity and autonomy to perform an action selected from at least two available options…”.

Is having many choices always a good thing? Well, usually it is: who would not want to live in a world where they have options and can make choices? However, there are also downsides to having multiple options. In his TED talk, The paradox of choice, Barry Schwartz discusses the negative aspects of having too many choices.

For example, in the healthcare world, your physician might in some cases present you with a few options and let you make the right medical choice for yourself. Is that necessarily a good thing? Do you always feel you have the tools to make this choice?

Freedom of choice is discussed in the context of many fields, such as law, economics, and well-being. But what does freedom of choice mean in the context of verification? Among other things, it is the freedom to choose the right verification language for your project.

The world today is much more complex than it was a few years ago. Situations like acquisitions, mergers, remote sites, and purchased third-party VIPs can create a few challenging situations, especially for the managers who need to make choices.

Let’s talk about two common scenarios, integrating existing UVCs and choosing a language for new projects.

Integrating Existing UVCs

This is a very common scenario as a result of acquisitions. What would you do if you need an existing UVM-SV UVC to interact with an existing UVM-e UVC? Would you rewrite one of them in the other language? Naturally, rewriting a UVC is a huge effort.

Fortunately, with UVM-ML you can reuse these existing UVCs and have them co-exist and communicate with the right topology (discussed in the following sections). As a manager, it saves you the effort of rewriting a UVC, and if you want each team to continue working with the language it is used to, you can do this with UVM-ML.

Choosing a Language for New Projects  

You might find yourself in a similar situation (as described above) even for a new project. For example, you might have two teams, each used to working with a different language. Here, you can either select the best language for all the teams or let each team continue working with the language it is used to. In principle, the former option is preferred. Firstly, regardless of the power of UVM-ML, it is always easier to have everything written in the same language. Secondly, you would prefer having all your people working with the language that provides the best quality and productivity.

So, what is the best verification language?

Anyone who has worked with both e and SV knows from experience that e is much superior for several reasons, but that is a subject for a different blog. It is true that Xcelium bypasses many limitations of the SystemVerilog LRM; still, using e as the verification language is easier and more effective.

So, the best option would be to have everyone work with e; however, that is not always possible. In reality, there might be other factors and considerations. For example, you might have a team pressuring to continue with the language it is used to. In such a situation, as a manager you might decide to let each team select its preferred language. This is the flexibility you get with UVM-ML: you have a choice, and you can have each team make its own choice.

So what is UVM-ML exactly?

The UVM-ML library enables you to connect different UVCs written in UVM-e, UVM-SV, and UVM-SC. While we, as the R&D team, are aware of its importance as the glue in several leading companies, it is always exciting to hear customers appreciate its capabilities. At the 2018 US DVCon, HP Enterprise won best poster with a poster titled “Is e still relevant?”. In addition to the fact that the answer in the poster is “yes”, in this poster (and its related paper), HP Enterprise describes UVM-ML as the enabler both for reusing existing projects written in different languages and for selecting the right language for each project (frankly speaking, we could not say it better…).

Figure 1: HP Enterprise poster that won best poster at the 2018 US DVCon

This poster and the paper include case studies and learnings from projects within the Silicon Design Lab (SDL) of HP Enterprise. They write: “UVM-ML has a proven track record within SDL. It has been used for several years, spanning numerous projects… Through these projects, SDL has been able to take advantage of many of the advanced testing features available in Specman/e while utilizing a variety of UVMSV content from internally developed VCs to externally purchased Verification IPs.”

UVM-ML was developed with AMD as an open-source library provided on the Accellera site. Since Incisive 15.2, it has also been provided within Incisive and Xcelium. We encourage our customers to use the version provided within Xcelium, since it has an enhanced integration with Xcelium; however, there are some cases in which customers choose to use the open-source version (you can always consult with the UVM-ML support team: support_uvm_ml@cadence.com).

UVM-ML supports multiple topologies according to the user environment. In a side-by-side hierarchy (parallel trees), the environment contains multiple tops, each top containing the components of a single language. In a unified hierarchy (single tree), each component is instantiated in its logical location in the hierarchy. For simplicity, the two examples contain two languages, but they can be extended to three.

Figure 2: Side by side example


Figure 3: Unified hierarchy example

How does the magic work?

UVM-ML contains an inner backplane, services, etc., but the main part relevant from the user’s point of view is the adapter, which has an API for connecting your UVC to the UVM-ML library. The API of each adapter is provided in the native language of the framework being connected, meaning there is a UVM-e adapter, a UVM-SV adapter, and a UVM-SC adapter. This means that when you connect your UVC to the library, you do it with the language you are familiar with.

 

Figure 4: UVM-ML inner blocks

There is a lot of material out there about UVM-ML. You can read more about UVM-ML in the reference and user manuals in cdnshelp. If you want to get started quickly, the introductory blogs are a good place to begin.

To wrap up, enjoy our multi-choice world of verification!

Orit Kirshenberg

Specman team
