
Single Core vs. Multi Core: Simulation in Stereo


Latency simulations are the sworn enemy of the verification schedule. A handful of tests can add days or even weeks to each regression cycle, and because they can't be parallelized like the shorter bandwidth simulations, it becomes hard to manage an engineer's time efficiently. But…

What if there was a better way?

Since the olden days of simulation, all of that has been true. Bandwidth sims were—and still are—mostly needed in the middle of a project, but when the latency simulations come around at the end, they begin to dominate the regression cycle. The project would bottleneck at the end, and engineers would be left twiddling their thumbs, waiting days or weeks for a handful of tests to complete.

All of this was due to the fact that latency sims couldn’t be effectively shortened. The tests were simply too large and complex for the simulator to automatically break into convenient parts so they could be run on multiple machines. It simply couldn’t be done.

But now, it can.

Xcelium Simulator brings a new simulation technology to the table: multi-core. Patented software allows Xcelium to find the parts of a long latency simulation that can be effectively parallelized, and it distributes the overall simulation across multiple cores, delivering a testing speed-up of anywhere between 3X and 10X, depending on the system. Before Xcelium, when all tests were run single-core, no amount of distributing the bandwidth-hungry tests over many single-core machines could save your overall project time. The latency simulations were just so much longer than the bandwidth sims that the extra resources consumed for bandwidth simulation were essentially wasted. There was no real reason to use all of the processing power at your disposal if it wasn't actually going to make your regression any faster overall. Xcelium Simulator opens the bottleneck and makes it advantageous to strategically match your total bandwidth tests to your new, shortened latency tests, thereby making the most efficient use of your resources.

Figure A: A project's simulation needs change as it progresses. In the middle, bandwidth simulation is the primary use of resources, but as the project reaches the end, latency simulations dominate. At that point, the only pragmatic way to address regression time is to apply multi-core simulation. Bandwidth simulation regression time can be lowered by using additional machines, but only a new engine can reduce latency simulation times.

It boils down to this: originally, engineers had one knob with single-core processing power: the number of machines. It was like a radio with only a volume knob: they could throw more and more machines at a project until it finished—but there was a hard limit in place with the latency tests. The number of machines engineers have access to is a finite resource, as well—that volume knob doesn't go to eleven. Now, with Xcelium Simulator, engineers have access to a second knob: multi-core. Engineers no longer have just a volume knob—they have bass and treble adjusters. As any audiophile knows, control over the system is paramount—and that's exactly what Xcelium Simulator gives them: control. Parallelizing the latency simulations drastically reduces overall regression time because engineers can tune their single-core machine use to match the reduced run time for the latency tests.

Xcelium Simulator is the next step in simulation technology—a true third-generation engine. With multi-core technology, Xcelium gives engineers unprecedented control over their tests, which in turn allows them to further tailor their test sequencing to their specific hardware needs.


Teradyne "Formally" Adopts JasperGold FPV


CDNLive Boston 2017: Teradyne revealed their success with JasperGold in their presentation, Success using Formal Verification—and now they join the ever-growing fold of JasperGold FPV (Formal Property Verifier) App users.

Teradyne had used some of JasperGold's functionality before with the JasperGold Connectivity App, but this marked a notable change in their usage, embracing more of the features the JasperGold suite offers. Previously, Teradyne was using Cadence Incisive Formal Verifier for their large mixed-signal SoCs.

Teradyne has had considerable success with formal verification for a long time. But why would they care so much about it? Formal verification is more exhaustive than other types. It works at the block level, testing all possible stimuli one cycle at a time. The trade-off here is, obviously, that it takes longer than other verification types. This can be alleviated somewhat by having well-constructed formal constraints, but there is still a notable time increase.

Formal verification functions by creating properties of two types: assert properties and cover properties. Assertions report when they’re violated, and covers report when they’re hit. Assertions want the design to perform a certain action in a certain way, and they report when that doesn’t happen. Covers want certain areas of the design to be reached, and report when those areas are successfully accessed. Formal verification may explore all reachable states in a design, but it also might keep exploring until timeout.

The challenges with formal are mostly related to the size of the design. While it is superior in terms of coverage to other methods, it’s significantly slower, so 10-100k registers is a healthy range—however, good bug hunting methodologies can significantly extend that sweet spot. Formal also struggles when there’s a lot of sequential depth. In an over-constrained environment, formal doesn’t do all that much, so one should be careful not to use too many constraints if formal is on the table.

Teradyne realized that the comprehensive, exhaustive coverage offered by formal verification handily outweighed the challenges of switching.

Teradyne used the CONNECTION and CONDITION constraint keywords in an Excel-based Jasper Connectivity template with FPV. Their first experience was on a big mixed-signal SoC, run post-RTL freeze. Right off the bat, Teradyne found an incredible three bugs in a complex controller block—two in FIFO control and one in error detection! This sort of thing can't be caught in simulation. Teradyne found those bugs much earlier than usual in their process with the FPV app, with only a few hours of setup! In their second experience, Teradyne found two simple and two complex bugs within forty-eight hours of running. Engineers working with JasperGold FPV gave feedback that it was more user-friendly and easier to debug than Incisive Formal Verifier.

In the future, Teradyne plans to use JasperGold FPV on most future projects, and they want to develop more guidelines for assertions and usage of FPV’s tools.

For the full presentation on Teradyne’s experiences with JasperGold FPV, check here.

Teradyne Standardizes on Xcelium Simulator


Today, Cadence announced that Teradyne has adopted the Xcelium™ Parallel Simulator for use in ASIC development. They’ve reached a 2x speedup with Xcelium when compared to their old simulation solution.

Xcelium has quickly become a key part of Teradyne's verification environment, providing an easy-to-use, yet very powerful, tool that runs fast and ensures high-quality designs. Beyond Xcelium, Teradyne is also using the Cadence JasperGold® Formal Verification Platform for their formal verification needs, and the Cadence vManager™ Metric-Driven Signoff Platform to comprehensively integrate all of their Cadence verification tools.

“Rapid development and verification of our automation test equipment solutions is critical to our success,” said Andre Hendarman, Director of Mixed Signal ASIC Development at Teradyne, Inc. “The Xcelium Parallel Logic Simulator has provided us with the fastest simulation performance by far, which is helping us speed up the delivery of our test products, while also ensuring our designs are of the highest quality.”

The Xcelium Simulator supports Cadence's System Design Enablement strategy, which enables system and semiconductor companies to create comprehensive, clearly differentiated products more quickly and efficiently.

To read the full press release, click here.

Munich October 18—Come See SystemC Evolution Day!


Sorry, you missed Oktoberfest (which is mostly in September anyway). But come to Munich in October for SystemC Evolution Day—a workshop on the evolution of SystemC standards held in Munich, Germany on October 18th. It's a full-day workshop, and the second iteration after a successful first run in May 2016. SystemC Evolution Day will feature several in-depth sessions about current and future standardization topics involving SystemC, with the intention of accelerating their progress for inclusion in the Accellera and IEEE standards.

Here’s a copy of the agenda:

SystemC Evolution Day will feature—as shown on the agenda—four technical sessions. These sessions discuss new ideas and suggestions for the SystemC community. Here’s a quick run-down of each:

#1: Checkpointing and SystemC – How Can We Make Them Meet?

Checkpointing technology has been around since the mid-90s. It's seen wide use in transferring the state of a system between different simulators, saving time in workflows by avoiding time-costly re-dos like rebooting a system, and serving as a collaboration tool, among other uses. It's on the list of features being considered by the SystemC Configuration, Control and Inspection Working Group (CCIWG), but there is more to do. It's been tough to implement checkpointing in SystemC, mainly due to how it affects model writing. In order for checkpointing to work properly, one would need to be able to save and restore into an entirely different implementation of the same model—something that is currently still in development. This session aims to discuss the challenges around checkpointing in SystemC, and how it can be implemented under current standards.

#2: Standardization Around Registers – What’s Needed?

This session aims to talk about what users expect from register libraries, what needs standardizing, and what advantages standardization brings. Organizations have different register modeling libraries, and users’ expectations will vary to match. The current proposals for standardization are outdated—and they lack the user’s perspective which is needed in order for them to be successful. Here, the session will show some of the proposals offered for register library standardization and gather more of that user perspective from the session attendees.

#3: SystemC Datatypes – a Community Discussion

Since the early 2000s, SystemC users and EDA companies have had different uses for the standard datatypes—customizing the proof-of-concept library, leaving it alone, or re-implementing it completely to meet their simulation needs.

Now, the SystemC Datatypes Sub-Working Group, created by Accellera, seeks to create and define an advanced set of SystemC datatypes, compatible with all user needs. This session aims to bring the user community in and discuss the definition of this set with them. Accellera members are encouraged to join the SystemC Datatypes Sub-Working Group, as well.

#4: Throughput Accurate Modeling and Synthesis of Abstract Interfaces

The current SystemC standard doesn't deal with the modeling and synthesis of abstract interfaces. It addresses signals and ports, but says nothing about scheduling rules for synthesizing cycle-accurate protocols.

A protocol can be encapsulated in a C++ class with methods that perform transaction-level operations. This is a key way to raise the abstraction of an interface. The encapsulation is great, but it doesn't solve a notable issue: it doesn't model the interaction of different ports being accessed from a single process as concurrent. One can write behavior in a thread that handles multiple ports, but it reduces the freedom to schedule that port for other uses.

This session aims to alleviate these concerns by discussing the modeling of abstract interfaces and how they relate to high-level synthesis.

For the full run-down on all these events, and for additional event details, check out the SystemC Evolution Day page on the Accellera website here.

Cadence and Arm Announce Early Access to Xcelium Parallel Logic Simulators on Arm-Based Servers


On October 24, Cadence and Arm announced early access to the Xcelium Parallel Logic Simulator on Arm-based servers. It was demonstrated running on Cavium ThunderX2 and Qualcomm Centriq servers at the Arm Techcon event on October 25 and 26. This represents a new development in low-power, yet high-performance simulation solutions for the EDA industry.

Ensuring that designs function correctly is a huge challenge for the electronics industry—it can account for 70% of the entire industry’s computing workload. Growth in this area, and a reduction in the amount of computing power required for verification, is paramount for the continued improvement of the next generation of chips.

Here’s what the partnership between Arm and Cadence adds to this area: The Xcelium simulator runs natively on Arm-based servers! This allows for huge power and capacity benefits when executing both high-throughput and long latency workloads.  Both Cavium and Qualcomm introduced Xcelium in their Arm Techcon presentations.

Xcelium Simulator—part of the larger Cadence Verification Suite—speeds up single-core tasks and boosts multi-core tasks 3-10X. Utilizing Xcelium lets companies run workloads on the best core configurations for their verification tasks. Beyond that, Xcelium also automates the compilation and elaboration of design and verification testbench code to keep execution speedy on multi-core servers.

“Collaborating with Cadence on the Xcelium simulator is a key milestone in accelerating the electronic design ecosystem for Arm-based servers,” said Drew Henry, senior vice president and general manager for Arm’s Infrastructure Business Unit. “The flexibility of the Arm architecture will create new opportunities for more compute core density for EDA workloads, enabling high-performance parallel simulation while reducing the power and floor space required for implementing and validating silicon designs.”

For the formal announcement, check here.

For more information about other Arm-based Cadence solutions, check here.

Adding Annotations in Your e Code


If you have had a chance to work with languages like Java or C#, you might have come across Annotations. Since the Specman 17.10 version, annotations have become part of the e language! (See Java annotation and Basic Introduction to Data Annotation in .Net Framework.)

What are annotations? Annotations are a form of metadata that provides information about an entity in your code. However, they have no direct effect on the operation of the entity they annotate. For example, using the development_status annotation below, we provide some information on the development status of the struct packet_s.

@development_status(developer="David", req_ID="REQ-2.3", comment="Integrating with Lisa")
struct packet_s {
};

We will get back to this example. But before we do, let’s answer some questions you might have in mind.

Hmm… isn't that what comments are for? Well, yes, annotations are similar to comments. However, they are structured, and more importantly, you can write code that easily "reads" and processes the information in these comments.

Hmm…isn’t this why we have members for units/structs (we can even define them as static)? Yes, this is true. However, when you add annotations to some entity (some struct for example):

  • You do not impact the run time of the entity
  • You can use the same annotation for different types of entities
  • You set them at design time (when you write your code) and you cannot change them at run time

So to sum it all up, you can see annotations as structured comments that you can later on process, that do not affect run-time, and that can be placed almost anywhere.

Hmm… when do I use annotations? First, I must admit that you can spend your entire life working successfully with Specman without using annotations. However, for an even more productive verification system, it is recommended you continue reading this blog and consider how you can enhance your development environment using this advanced feature.

Usually, you use annotations when you want to perform some off-line actions on your code, without having any effect on run-time. These kinds of actions generally include: unit-tests, linting tools, dynamic documentation, etc.

With e, you can annotate different language constructs, such as type, struct, struct member, etc. Let’s look at an example of a situation in which annotations can be very helpful.

Hopefully, you are already running a linting tool (either a custom one, or the Cadence tool HAL) on a regular basis (otherwise, it is highly recommended you read about HAL in cdnshelp). Let's assume that you are using HAL and you have your own set of custom rules corresponding to your methodology. Now with annotations, you can enrich HAL by annotating some entities with information HAL can read and use to implement more specific checks.

For example, let’s say you want to make sure that Transaction level units/structs do not have TCMs. How do you do that?

Define the annotation

First, you define the annotation in some central location. In our case, you define a Transaction level annotation:

annotation @transaction_level {};

Note that annotations can have fields, but in our case, we just want a very simple annotation.

Annotate

After you have defined it in some central location, it is available to all the engineers. Now, every engineer is asked to annotate each entity (struct/unit) that is a Transaction level entity by adding the transaction_level annotation before the declaration:

@transaction_level
unit checker_u {
    ….
};

Read the annotations

After the engineers have annotated their code with this annotation, you want to actually read it. In your linting tool, you can find all the entities that have this annotation and produce some message in case they have a TCM. Let’s look at the following code:

for each rf_struct (s) in rf_manager.get_user_types() do {
    if (transaction_level::get_attached_annotation(s.get_declaration()) != NULL) {
        for each rf_method (m) in s.get_methods() {
            if (m.is_tcm()) {
                out(s.get_name(), " is annotated as a transaction level entity while it has the following TCM: ", m.get_name());
            }; // tcm
        }; // for each method
    }; // struct with annotation
} // for each user struct

In this code, we use a predefined static method available for each annotation you define, called: get_attached_annotation. This method returns an annotation object representing the particular annotation that is attached to a given entity. If this entity does not have this specific annotation, it returns NULL. In this piece of code, we:

  1. Iterate over each struct the user has defined
  2. Then if it has a transaction_level annotation, we iterate over its methods
  3. If a method is actually a TCM, we print a message

In order to see it in action, I put the following code, which violates the methodology, into my environment:

@transaction_level
unit checker_u {
    my_tcm() @sys.any is {
        out("I am a TCM in a Transaction Level unit.");
    };
};

Then, I added the code mentioned at the beginning of this section (which prints a message when a Transaction level entity has a TCM) to my custom rules in HAL. (I will not get into details about HAL since it is out of this scope, but it is very easy to add custom rules to HAL.) When I ran HAL on my code, I got the following message:

   

What is cool about this example is the fact that I have told my linting tool something important by putting this information in the right place, next to the relevant entity, where it is visible and natural. Any other solution would require hardcoding some information about my entities in the linting tool, which is clearly not what we want to do.

There are multiple other examples of what you could do with annotations. Let's go back to the first example of an annotation at the beginning of this blog: @development_status. Sometimes, you check in your code before the requirement/feature is completed. You might do it for multiple reasons. First, it is never healthy to have code on a private view for too long. Second, if you want to integrate with another team and you know your code is "dead" for the time being and there is no risk, it may be much easier to just check it in to the repository. However, you might want to control these cases and monitor them. In this case, you should first define the annotation, including some fields that provide information:

annotation @development_status {
    developer : string;
    req_ID    : string;
    comment   : string;
};

And then you can annotate it where relevant:

@development_status(developer="David", req_ID="REQ-2.3", comment="Integrating with Lisa")
struct packet_s {
};

 

Now, you can, for example, produce reports about these cases from some tool/script and send them to the relevant people (the developer, the integrator or the manager) to closely monitor these cases.
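As a rough illustration, here is a minimal Specman/e sketch of such a report, reusing the get_attached_annotation() API shown earlier. The method name report_development_status() is hypothetical, and the sketch assumes the annotation's fields (developer, req_ID, comment) can be read directly from the returned annotation instance:

extend sys {
    report_development_status() is {
        for each rf_struct (s) in rf_manager.get_user_types() do {
            // Fetch the @development_status annotation attached to this
            // struct's declaration; NULL means the struct is not annotated
            var ds : development_status =
                development_status::get_attached_annotation(s.get_declaration());
            if ds != NULL {
                out(s.get_name(), ": owned by ", ds.developer,
                    ", requirement ", ds.req_ID, " (", ds.comment, ")");
            };
        };
    };
};

A nightly script can then collect this output and mail it to the developer or integrator named in each annotation.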

Any suggestion on other cool uses? Would you like to share your ideas with us? We’d be glad to hear from you…

Orit Kirshenberg, Specman team

Slaying the Gate-Level Simulation (GLS) Dragon: Your Knight Is Here!


Even today, gate-level simulation is still a major signoff step for most semiconductor projects. However, those simulations can take days or weeks to run. A bug that causes a rerun of a gate regression can push a tapeout for weeks—but help is on the way!

The app note for gate-level simulation (GLS) methodology was released on November 11, 2017. It aims to showcase new methods and simulator-use models that make GLS more productive, with emphasis on a few strategies to make GLS more effective. The thought of spending weeks on GLS simulation may loom large over your design time-frame, but have no fear: this app note is the knight sent to help you slay the GLS dragon.

So, why care about reducing GLS runtime and memory? Well, as process technologies improve, allowing for smaller, more gate-dense chips—from 65 nm to 40 nm, and soon to 28 nm—teams are finding that more GLS cycles are required to reach signoff. Below 40 nm, new timing rules exist, and this creates a need for more GLS cycles as well. Despite the growing need, GLS simulations require huge servers with massive memory and runtime, which places a serious strain on closure cycles.

The contents of this app note are as follows:

Table of Contents

Abstract - 4

Gate-Level Simulation Flow Overview - 5

Why Gate-Level Simulation Is Required - 6

Techniques to Improve Gate-Level Performance - 6

Improving Gate-Level Simulation Performance with Xcelium Simulator - 6

  1. Applying More Zero-Delay Simulation - 6
  2. Improving the Performance of Gate-Level Simulation with Timing - 15
  3. Improving Performance in Debugging Mode - 20
  4. Other Useful Xcelium Simulator Gate-Level Simulation Features - 21
  5. Improving Elaboration Performance Using Multi-Snapshot Incremental Elaboration (MSIE) - 23
  6. Accelerating Gate-Level Simulations - 24

A Methodology for Improving Gate-Level Simulation - 27

  1. Effectively Use Static Tools Before Starting Gate-Level Simulation - 27
  2. Controlling or Handling Timing Checks Based on STA Reports - 37
  3. Focusing on Limitations of Static Timing Analysis and Logical Equivalence Tools - 39
  4. Using DFT Verification - 39
  5. Catching Gate-Level Simulation X Mismatches at RTL Using Xcelium Simulator X-Propagation Solution - 46
  6. Blackboxing Modules Based on the Test Activity - 47
  7. Saving and Restarting Simulations - 47
  8. Hybrid Mode - 48

Library Modeling Recommendation - 49

Summary  - 51

As an example, one of the techniques used to improve GLS performance is to run more zero-delay simulations. Simulations in zero-delay mode run a lot faster than they would normally, so running more zero-delay simulations while the design is still in the timing closure process can make sure that the design functions correctly. In Xcelium Simulator, you can control delay values through command-line options and compiler directives, so it’s easy to customize.

The app note also offers tips for improving GLS results that are simulator-independent. It offers advice on using static tools—like linting and static timing analysis (STA)—to reduce gate-level verification time. STA tools can do both hierarchical timing analysis and SDF generation. Based on the reports from the STA tools, one can handle timing checks differently.

Beyond that, gate-level DFT verification can be used to check test structures put in place by specific DFT tools, like Modus™ Test. The app note runs through a list of tips and tricks for ensuring the thorough, fast, and painless running and debugging of DFT simulations, including simulation with netlists, functional equivalency checks, tips for ensuring that timing requirements have been met, and more.

To begin your quest to slay the GLS dragon, check here.

X-Propagation: Xcelium Simulator’s X-prop Technology Ensures Deterministic Reset


All chips need to cold reset on every power-up. Warm resets, however, are a bit more complicated. Take a smartphone screen, for example. The screen may power down while the phone is idle. However, the user will want it to return to their pre-set brightness level on power-up. Chips have to be tested for multiple warm-reset scenarios, and each of these tests takes a very long time.

Enter Xcelium Simulator, and X-propagation. Also known as X-Prop, this idea represents how X states in gate-level logic can propagate and get stuck in a system during cold or warm resets. Unresolved X states spreading through a system can cause a non-deterministic reset, which makes a chip run inconsistently at best or fail to reset at worst.

Thanks to Xcelium Simulator and X-prop technology, we can debug X issues faster—10X faster than we could if the debug was completed during GLS. Right now, GLS happens towards the end of product development, which can lead to costly fixes when bugs are found so late. GLS needs to occur no matter what, but if X is propagated through RTL simulations, then this process can be completed far earlier, allowing bugs to be caught and dealt with efficiently.

X-prop analysis can be executed in either Compute As Ternary (CAT) mode, where X is propagated exactly as it would be in hardware, or Forward Only X (FOX) mode, where X is propagated regardless of the other inputs. This is required because the propagation of Xs in Verilog and VHDL RTL is not modeled to behave like hardware. In addition, it's much easier to debug in RTL—doing the propagation analysis at a higher level of design abstraction—and it has a smaller memory footprint and shorter run time than GLS.

Figure 1: FOX and CAT mode

Many projects fail to run these diagnostics in application-realistic ways, such as reset validation and power-down/power-up sequences, due to the time required to run these long gate-level simulations. If the diagnostic is run at RTL with standard LRM semantics, Xs can be masked and propagate less often than they would in actual hardware; this is called X-optimism.

What does this mean? X-propagation through RTL enables a more complete set of reset tests to be run instead of only the essential ones. If the X-propagation tests are left to be done during the GLS stage, then it is not time-feasible to run them all. Completing all of those tests earlier adds a level of security in knowing that all logic gates have been tested and 100% of the chip works, instead of simply enough to ensure standard functionality. It's easy to use—no complicated setup required. The sequential nature of the testing lets smaller chips be used, as RTL works with non-resettable flops. Finally—and most notably—RTL is faster, and more chips verified means more chips sold.

Nowadays, X-prop technology is built into Xcelium Simulator. Xcelium X-prop technology supports both SystemVerilog and VHDL, and doesn’t require any changes to existing HDL designs. Xcelium uses the aforementioned FOX mode and CAT mode to test for X-propagation, and both of these modes show the non-LRM compliant behavior needed to run your reset verification at RTL and improve your overall chip quality.

For more information, see the RAK at Cadence Online Support.



26262 4U: Infineon and the Incisive Functional Safety Simulator


Infineon and Cadence have a bit of a history: they’ve been working together on functional safety mechanisms for around two and a half years now, and Infineon has been using the entire Cadence verification suite since the nineties. Functional safety is a serious hurdle for the automotive industry, and with the rise of ADAS systems, the issues that face Cadence and Infineon are about to get a lot more complicated.

Right now, the hot topic in functional safety is fault injection. It plays a huge role in functional safety, and while it resembles the fault injection you do in the usual DFT sims, a standard DFT fault simulator is typically not rigorous enough for functional safety. Chips inside cars receive so many stressors over the course of their useful lifecycle that both physical breakdown and radiation-based breakdown can knock them off their usual function. Therefore, in testing, faults are purposely added to the design to ensure that the chip can work properly even if things go very, very wrong. You can't just run a conventional functional verification sim and get the data you need.

Fault models are used to decide where faults are going to be injected. There are a few types of faults that could be tested—one is a soft error, which is caused by charged particles hitting the chip in places where they shouldn't be. This can cause a single event upset (SEU) or a single event transient (SET), both of which are fairly unpleasant. A SET is a glitch that exists for a finite period of time, while an SEU flips stored state and persists in the running system until it is corrected. Both types can trigger a whole host of problems that can be difficult to diagnose if they are not caught near the source of the error.

There are also physical faults, which come in two main flavors: stuck-at, where the faulty node is permanently 1 or 0; and bridging, where two signals connect where they shouldn't.

In addition to that, not all faults are strictly harmful. Some are “safe faults”—which means they’re in a non-safety-dependent part of the design. Safe faults are important for DFT, but aren’t a factor for safety systems since they don’t factor into any safety-related failure modes in the system—but it’s worth noting that “acceptable faults” exist.

Fault injection ties into both qualitative and quantitative analyses. By injecting faults, you can see how a design fails, rather than just where—and that knowledge lets you design better systems in the future. You can also ensure that the safety mechanisms can detect the fault by injecting it in a known location. Quantitatively speaking, fault injection gives you various statistics, a list of faults, and information regarding types of workloads that make failure more likely, as well as other data points.

Historically, Infineon used Cadence's Verifault software to do fault injection, which was pretty inefficient. Verifault could only do DFT, so you had to run it twice to check the safety mechanism each time you wanted to do so.

Now, Infineon uses IFSS, or the Incisive Functional Safety Simulator and the vManager-based fs_runner environment.

Figure 1: The fs_runner environment.

This allows them to simulate to a time point, pause it, inject a fault, and let it propagate as the simulation continues. vManager allows them to run a “fault campaign”, which compiles a series of different faults that are all injected at various points. It also generates a report from the fault simulation for easy review.

For more information on this topic, check out Infineon’s presentation on this topic here.

Check Again: Cadence Announces Release of the First PCIe 5.0 VIP—With TripleCheck!


On November 28, 2017, Cadence announced the release of the first available PCIe® 5.0 Verification IP. This new VIP gives designers access to Cadence's TripleCheck technology—a comprehensive verification plan that uses measurable objectives related to spec features, along with a test suite containing thousands of tests. These combine to greatly improve the speed and quality of functional verification runs for server and SoC designs using the PCIe 5.0 specification. It also gives designers access to the Indago Protocol Debug App.

The ferocious appetite for higher performance demanded by Big Data, IoT, and mobile computing is being addressed with the fast-tracking of the PCI Express 5.0 specification by PCI-SIG. Early adopters have already been working on PHYs beyond PCIe Gen 4 16GT/s speeds and will be able to quickly move forward with Gen 5 support as the spec solidifies at this accelerated pace, with rev 0.7 anticipated for Q2 of 2018.

“Our team has successfully utilized the Cadence VIP for previous versions of the PCIe specification, which enabled us to deliver world-leading interconnect solutions for compute and storage infrastructures,” said Shlomit Weiss, the senior vice president of silicon engineering at Mellanox Technologies.

The Cadence VIP with TripleCheck is part of the Cadence Verification Suite, and is optimized for the Xcelium Parallel Logic Simulator.

To read the full press release, check here, and to read more about TripleCheck for PCIe 5.0, check here.

 

ROHM CO., Ltd Adopts Our Functional Safety Verification Solution


On July 17, 2017, Cadence announced that the Cadence® Functional Safety Verification Solution had been adopted by ROHM CO., Ltd as part of its design flow for ISO 26262-compliant ICs and LSIs for the automotive market. Cadence fault simulation tech can quickly and easily deal with the complexities around many types of faults, including single event transient (SET), stuck-at-0 or 1, dual-point faults, and more. It also outperforms existing DFT flows for safety-related fault effect analysis.

This new technology comes with quoted approval: “We’ve obtained a reliable, robust solution that we can depend upon for our automotive designs,” said Akira Nakamura, LSI Product Development Headquarters, ROHM CO., Ltd.

Cadence's solution makes the tedious process of ensuring that all components meet functional safety requirements fast and easy via automation, and it supports Cadence's System Design Enablement strategy, which assists system and semiconductor companies like ROHM Co., Ltd in making complete, differentiated end products with unprecedented efficiency.

To read the full press release, click here.

Moving to Xcelium Simulation? I’m Glad You Asked


Ready to take the next step in simulation technology with a true third-generation, multi-core engine? Cadence® Xcelium™ Simulator gives you unprecedented control over your tests, including the ability to further tailor test sequencing to your specific hardware needs.

Get started immediately with the new release Xcelium 17.04 by using the central page on https://support.cadence.com to learn everything you need to know about installation, licensing, and easily migrating projects from Incisive to Xcelium.

Visit the page: https://support.cadence.com/xcelium. It lists important links to Xcelium simulator documents—a "one-stop shop" page with all you need to install and use the release.

The Xcelium Simulator Introduction introduces the Xcelium simulator, details changes in the Xcelium single-core engine, and describes recommended steps to take when upgrading to Xcelium from Incisive.

If you are looking for a migration document to help you upgrade from Incisive to Single Core Xcelium, see Migrating from Incisive to Single Core Xcelium.

The new Xcelium software installation is focused on the core simulation engines. But Xcelium is only the foundational part of an overall digital simulation methodology. To know what is included in the core simulator download and the optional Xcelium components, as well as other key products available for the Cadence simulation flow, read the article What technologies are installed as part of the Xcelium release.

And if you are wondering how to determine which licenses will be required before running a simulation, your questions are answered in How to list the licenses requested or being consumed by the Xcelium tools.

Xceligen is the next generation random-constraint solver released as part of Xcelium Simulator. It contains new components as well as major enhancements. This document Xceligen - Next Generation SV Constraint Solver describes how to take advantage of the new technology using constraint solver switches and environment variables.

We have also built a Cadence support matrix to list the Xcelium release versions cross-indexed with the other verification engine and Verification IP (VIP) release versions available. The Xcelium Flow Support Integration Matrices includes recommended version combinations, version combinations under investigation at Cadence, and version combinations not recommended.

Also, there are many troubleshooting articles that generally provide helpful hints and solutions to address design problems while using Xcelium simulator in the flow. We have collected important articles, categorizing them by methodology or flow topic on this page under “Methodology and Flow Topics”.


And finally under Knowledge Resources, you can find access to our Application Notes, Videos, Rapid Adoption Kits related to Xcelium simulator and technology. 

Visit the page - https://support.cadence.com/xcelium for more.

Contact us for any questions. Leave a comment in this blog post or use the Feedback / Like mechanism on https://support.cadence.com.

Happy New Learning!

Sumeet Aggarwal


Infineon’s Coverage-Driven Distribution: Shortcutting the MDV Loop


There are more ways to improve productivity in the verification process than simply making the simulation run faster. One of these is to cut down on the amount of time engineers spend working hands-on with the testbench itself, preparing it and coding Specman/e tests for it. It is common knowledge that the engineer’s task is to stock the testbench with these tests and rerun regressions to measure their effect as they grind their way toward coverage closure. But—are we bound to this slogging process, or is there a more productive way?

Coverage-Driven Distribution (CDD), an advancement developed by Infineon, goes beyond metric-driven verification (MDV) to solve this problem by "closing the loop." It takes a process where an engineer needs to stop the flow repeatedly to meddle in the testbench and removes that part from the equation entirely, via an algorithm that automatically puts Specman/e tests in the testbench to run. This is a deterministic, scalable, repeatable, and manageable process—and with the CDD add-on, it becomes even more automated.

Now, instead of running a bunch of e tests with Specman/Xcelium and crossing their fingers, an engineer spends their time analyzing the coverage report and determining from there what needs to be tweaked for the best overall coverage. This adds a bit of time at the start, but the amount of time subtracted from the body of the verification process is significantly greater.

In short, this tool automates the analysis, construction, and execution parts of the verification process. An engineer defines what functional coverage would be for a given testbench, and then that testbench fills itself with tests and runs them.

The full paper describing this tool from Infineon—which won the “Best Paper Award” in the IP and SoC track at CDNLive—is available here.

Work Flow with CDD

The algorithm used by CDD works in four steps:

1. Read coverage information from previous runs. This helps the algorithm "learn" faster and saves engineer time between loop iterations.

2. Detect coverage holes using that information; this results in a re-ordering of events.

3. Build a database of sequences that set coverage items to the location of the holes; collection events are then triggered.

4. Use the database to apply stimuli to the DUT.

Figure A: The graphic below shows the work flow under the algorithm.

 

Results

As it turns out, this tool has proven effective: it has already been used in real projects to successful ends. That means that—assuming everything is configured correctly—a test can drive the correct sequence, using the correct constraints, by itself. It can then set the coverage items to the right value and trigger collection events, increasing coverage.

All of that is automatic.

There are some costs. A verification engineer still has to feed inputs to the algorithm to train it, specifically information regarding when to trigger a collection event and how to set values for each item in a given coverage group. The biggest draw here is that the major downside of randomization—the uncertainty—is no longer an issue; it can be planned via the CDD algorithm. It also does not take away the advantages of randomization.
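To make this concrete, here is a small, hypothetical Specman/e coverage definition of the kind CDD consumes—the types, fields, and event names below are invented purely for illustration; only the shape of the cover group matters:

type pkt_kind_t : [DATA, CTRL, MGMT];

struct my_pkt_s {
    kind : pkt_kind_t;
    len  : uint (bits: 8);
};

unit my_monitor_u {
    pkt : my_pkt_s;   // last collected packet
    event pkt_done;   // collection event the CDD algorithm learns to trigger

    cover pkt_done is {
        item kind : pkt_kind_t = pkt.kind;
        item len  : uint (bits: 8) = pkt.len using
            ranges = {range([0..63]); range([64..255])};
        cross kind, len;
    };
};

Given a definition like this, the engineer's training input to CDD amounts to describing how to set pkt's fields and when pkt_done fires; the algorithm then builds the sequences that reach the remaining holes.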

In total, in exchange for a bit more work in building the database and Specman/e sequences, you gain faster coverage closure, earlier identification of coverage groups not included in a given test generation, and a better understanding of the DUT itself.

Looking forward, this technology may expand to be able to derive information in the database—like sequences, constraints and timings—from previous runs, instead of just coverage information, which would further reduce the time an engineer spends manually interacting with the testbench. Beyond that, an additional machine-learning algorithm that uses the coverage model and “previous experience” may be able to create and drive meaningful stimuli to patch the remaining coverage holes, even further reducing engineer meddle-time.


Save & Restore with More: Preserve Your Entire SoC


The concept of Save and Restore is simple: instead of re-initializing your simulation every time you want to run a test, you only initialize it once. Then you can save the simulation as a "snapshot" and re-run it from that point to avoid hours of initialization time. It used to be inconvenient, though. Using this feature in simulators could bring massive productivity gains, but not all users made the most of it due to a couple of hassles regarding the way snapshots saved state. The result is billions of wasted compute cycles on simulation farms worldwide.

Under Incisive, there was no procedural way from within your HDL code to execute a save—you had to do the save from Tcl at a "clean" point in the simulation. This created awkward situations where you couldn't use Save and Restore exactly when you wanted, but only at certain times in between delta cycles, and you had to write some roundabout code that was generally hard to read and often created more issues down the road than it solved. Beyond that, if you were using C/C++ code that you wrote yourself, you had to manage all state data used by that code on your own as well. PLI, VPI, and VHPI have mechanisms to deal with saving data, but it is a significant effort that many C/C++ applications ignore.

Xcelium Simulator brings an improved approach to the Save and Restore feature by not taking a "snapshot" of the system, but instead saving the entire memory image. The main goal of Xcelium's new Save and Restore feature is to get the Save and Restore methodology to a point where "it just works." There won't be any manual fiddling required to accurately save and restore the model. You will be able to save, and restart, with a few commands and no hassle.

As it stands, Xcelium's Save and Restore functions greatly improve the overall usability of saving and restoring over Incisive. Under the old mechanism, if a test opened a file while it ran, the file handle/pointer would not be saved. Xcelium's improvements save all file pointers in the image so that this is no longer an issue—open files are restored to their saved state, so a restart resumes at the same point. The new Save and Restore also fixes saved-memory issues with custom-built C code, so you will no longer have to manually handle state information stored in memory when saving—it will be saved for you, automatically.

Over time, the new Save and Restore feature will be updated to do even more out of the box. The saved file is larger than the snapshot, but saving a memory image streamlines and eases the use of the Save and Restore feature significantly. The file size is mitigated somewhat with the -zlib option, a compression tool integrated into Xcelium Simulator that automatically compresses the image—in the future, this compression will be improved, creating an even smaller saved image. Save and Restore functionality for sockets and for thread-handling in multi-threaded applications are on the table for a future update as well.

Right now, not everyone is using Save and Restore. Those not using it are wasting energy and time in their simulation farms. With Incisive, they were saddled with manually saving all of the external data used in their tests, coupled with the inconvenient and awkward saving restrictions—meaning that those engineers were stuck with the wasted compute cycles. The new Save and Restore upgrades in the Xcelium Simulator fix those major issues, which means there are no more excuses to avoid this time-saving technology. Whether you are setting up a regression environment, doing test development, creating relatively small block-level tests, or simply want to save the Earth from global warming, Save and Restore cuts your test initialization time drastically and reduces the compute resources you need, with no hassle.

If you want to take a look at the app note, check it out here.

Register for the UVM Register Layer Webinar on January 12!


On Friday, January 12, Doulos is hosting a UVM Register Layer webinar, with the aim of helping users use the UVM register layer in certain less-intuitive ways. This webinar will cover the usage of user-defined front and back doors to extend register-layer capabilities past simple call-and-response transactions, understanding the role the predictor plays in updating the register model, and how to use register callbacks to model unusual register behaviors. It will also discuss what changes you can and can't make to UVM code without disrupting the random stimulus generation.

Code examples running in Xcelium Parallel Simulator will be shown.

Register now!

App Note Spotlight - Introduction to Connect Modules


Welcome to the App Note Spotlight—a bi-weekly series where the XTeam highlights an app note that contains valuable information you may not be aware of. Today, we're going to look at connect modules—what they are, and why you should care about them.

A Quick Run-Down on Connect Modules

To understand connect modules, we'll need to step back a bit. First, understand that mixed-signal simulation has to happen in both analog and digital contexts. Analog signals are continuous, which means they can't be described by simply using 0 or 1—they could be anywhere in between, so they're described as "continuous." Moreover, analog signals in a design may have a completely different range than the voltages used to represent the digital values. Digital signals functionally have two discrete states in most circuits—on or off—so they're referred to as just that, "discrete." Since the real world is analog, digital signals also carry states such as X, Z, and strengths, but those are all attributes of the 0 and 1.

Now, mixed-signal simulation has to traverse these incompatible domains—so there must be some method of translating analog signals to digital ones and vice versa. This is what connect modules do.

Connect modules are Verilog-AMS or SV-RNM (SystemVerilog Real Number Model) modules designed to translate signals between discrete and continuous domains. They can be varyingly complex based on whatever the need is, but have the capabilities to accommodate any required adjustments, like input threshold delays, output impedance, power supply sensitivity, and more. They are often classified by their points of view: supply, modeling accuracy, or electrical property.

Depending on the mixed-signal simulation at hand, connect modules may have different requirements. They need to be able to get the supply signal in a static or dynamic way, handle either potential or flow, and need to reflect or inherit port impedance.

Basic Terms

Here’s some basic terminology you might need to understand connect modules:

Discipline: A Verilog-AMS language declaration used to define whether a domain is continuous or discrete. Disciplines define nodes, ports, and branches.

Nature: This describes individual signal types. Disciplines then pair those signals so they can more easily be used in the declaration of nodes, ports, or branches. These are defined in the disciplines.vams file.

Connect Module Placement                         

Now, you can't just stick a connect module anywhere you want. A connect module is inserted automatically just after the Discipline Resolution (DR) process. However, you need to define two things before that automatic insertion can run and have the connect module function properly: connect modules and connect rules. The Verilog-AMS LRM defines these items as part of the language standard. Connect modules will never be inserted in the middle of a wire or net; only on a module or cell view boundary port.

A couple of factors can affect where a connect module is placed. They have to be between analog nets and digital nets; but certain things can mess with the specific location of the connect module. These include the DR algorithm used in simulation, the disciplines used to declare the nets explicitly, hierarchical IE and DR optimization, and the value of connect_mode’s attributes used in connect statements.

For More

There’s a lot more to see in regards to connect modules. If you want to learn more, check out the app note titled: "Introduction to Connect Modules."

If you're thinking to yourself "how can I do that verification thing," send me an email at tyler@cadence.com or post a comment here describing that "verification thing." I'll work with our engineers to see if we have an app note already, or we can create one. Otherwise, check back in two weeks for another app note spotlight!

User Extensions to DUT Error


A question was raised on Stack Overflow about how one can extend dut_error() to print more information. The capability to provide test runners and debuggers more information upon an error can be a great enhancement to the quality and usability of the verification environment, so I decided to expand here a bit on the answer given on Stack Overflow.

The request was worded: Can we extend the dut_error() method to print additional information such as the name of the package from which the error is reported.

Actually, although it looks like a method, dut_error() is not a method, so it cannot be extended. What you can do instead is use the dut_error_struct. This struct contains two methods that you can extend, pre_error() and write(), and multiple methods that you can call.

  • The pre_error() is a hook method called when dut_error() is called.
  • The write() method is the method that writes the error message, and you can extend it to add information and/or modify the message format.

In your code you can use the dut_error_struct API for getting information such as which struct issued the error.  

For example, the following code increases a counter whenever there is an error during post_generate(), and reduces the check effect to IGNORE:

extend dut_error_struct {
    pre_error() is also {
        if source_method_name() == "post_generate" {
            out("\nProblem in generation, ", source_location());
            sys.gen_errors_counter += 1;
            set_check_effect(IGNORE);
        };
    };
};

To get the package name, as was requested in the original request, we have to know which struct reported this error, and then query it using the reflection API:

extend dut_error_struct {
    write() is first {
        // Special output for errors coming from the ahb package:
        var reporter_rf : rf_struct =
            rf_manager.get_struct_of_instance(source_struct());
        if reporter_rf.get_package().get_name() == "ahb" {
            out(append("xxxxxx another bug in AHB package, ",
                       "\nreported ", source_location()));
        };
    };
};

These are just two examples of ways to extend the dut_error_struct, for implementing advanced verification utilities.

Additional examples can be downloaded from github-spmn-dut_error. Please try it out, and see how it can help you adjust and improve your verification methodology and requirements.

Efrat Shneydor,

Specman team

CRAFTing Your Aero/Defense UVM Testbench the Easy Way


So you want to build an automated testbench for your aero/defense project, eh? Luckily, there's a solution for you. A project called CRAFT (which stands for Circuit Realization At Faster Timescales) seeks to speed the development of SoCs by providing the tools necessary to make automated testbenches faster and easier to create than ever before.

Funded by DARPA, CRAFT utilizes components from a number of different companies and institutions: UC Berkeley provided the HDL and CHISEL (Constructing Hardware In Scala Embedded Language), Cadence provided the verification workbench and VIP, and Northrop Grumman designed the Fast Fourier Transform block and other IP blocks used in the design and assembly of verification environments. This culminated in Phase 1 of CRAFT: a web-based verification workbench tool called VWB.

Now, how does one build a verification workbench with VWB? First, you need to start the VWB server app, which generates a URL that you can plug into the browser of your choice to reach the tool. Once you're there, go to the tab for a UVM testbench. Select "+" to create a new UVM testbench. Then, add your DUT to it. You can then add some Verification UVCs to the testbench—the GUI allows for UVC config and port sizing. Specify your testbench's name, then select a structure to fill it in.

Once you've gotten this far, you can add some clocks to your design. VWB lets you choose monitor or driven clocks, and all you have to do to add them is drag and drop them from the clocks panel to your testbench panel—and as soon as the icon next to the clock turns green, you're good to go.

VWB also generates a run script for you right out of the box—and it's all compatible with SimVision, so you can easily get to your existing debug tools. VWB supports all sorts of VIPs, including AMBA, MMCARD, CSI1, I2C, SPI, UART, and more.

Using the VWB results in a massive reduction in time when developing your DUT's first environment: you can put it together in around an hour instead of in a week. Thanks to IPXACT metadata, VIP config and instantiation is much easier, and you can also do full-register tests based on that metadata. For the best results, be sure to use testbenches that run one vector at a time; any more than that, and UVM doesn't work as well. As of this moment, testbenches created with VWB are not emulatable in hardware such as Palladium Z1—they're just regular UVM testbenches—but functionality on that end is planned for future updates.

To see Northrop Grumman's presentation on this topic, check here.
