Cadence Functional Verification

The Cowbell Rings On – We Have Completed the “UVM SystemVerilog Basics” Videos in Chinese


In July we released the first 12 videos of the UVM SystemVerilog Basics series with Chinese audio. Now we are completing the set by releasing the remaining 13 videos:

 

  1. Interface UVC Environment
  2. Virtual Sequencer - Sequence
  3. Module UVC
  4. Scoreboard
  5. DUT Functional Coverage
  6. Testbench
  7. Test
  8. Configuration
  9. Factory
  10. Phases
  11. Objections
  12. Virtual Interface
  13. Class Library Overview

Once more I would like to thank my colleague Yih-Shiun Lin for his great job translating the audio. It is his voice you hear on these videos.


Besides releasing the videos on YouTube, we are also publishing them on YouKu, the Chinese equivalent of YouTube.


Link to YouTube Playlist (Chinese)

Link to YouKu Playlist (Chinese)


Link to YouTube Playlist (English)

Link to YouKu Playlist (English)



Axel Scherer
Incisive Product Expert Team
Twitter, @axelscherer


What Does it Take to Migrate from e to UVM e?


So you are developing your verification environment in e, and like everyone else, you've been hearing a lot of buzz surrounding the Universal Verification Methodology (UVM). Maybe you would also like to give it a try. The first question that pops into your mind is, "What would it take to migrate from e to UVM e?"

Well, this is a bit of a trick question. The short answer is that if you've adopted eRM in the past, migration to UVM e will only take a few minutes. If your environment is not eRM-compliant, it will take you longer.

And now to the details. What exactly is UVM e, in comparison to native e (IEEE 1647), and to eRM? What is in UVM? And what's all the fuss about?

Let's start with a high-level view of the methodology. UVM describes the creation of a reusable universal verification component (UVC). Each UVC defines a reusable verification environment for one protocol (AXI, PCIe, etc.) or a system (an interconnect, a bridge, etc.). The UVCs are built of agents, sequence drivers (sequencers), monitors, and so on. Sound familiar? Of course. This is eRM, and UVM is based on eRM.

So, the concept and methodology are the same. No "migration" required here.

Let's take a look at the technical details. Other than documentation, what utilities and infrastructure does the UVM package contain? UVM provides a messaging mechanism, synchronization between components, and infrastructure for defining and driving sequences. Again, this should be no big news to any e user. These are things you learned in basic e training. It might seem like some terminology is not aligned, but let's look more closely. In UVM documentation or discussions you might have heard the terms "UVC," "sequencer," and "report." These are simply the UVM SV names for "eVC," "sequence driver," and "message" -- all terms you should be familiar with.

And this is why "What would it take to migrate from e to UVM e?" is a trick question. If you took Specman basic training and adopted its guidelines, you are already using UVM e. The "bread and butter" of UVM e has been part of e LRM (and Specman) for several years now.

To paraphrase the French playwright Molière, "Good heavens! For 10 years you have been using UVM without knowing it!"

So if you were simply concerned about trying out UVM, well, you already are using it!

But Cadence has more to offer. UVM e is being extended with features and methodology examples that target system verification challenges. These new capabilities are part of the Cadence UVM open source, which contains the following:

  • Base Types - The UVM base types define each component's role in the environment and can be used by other tools (including tools developed by users), for example, to perform some action on all units deriving from uvm_monitor. This is the only element in UVM e that requires modification of existing code, and even it is not mandatory for the environment to be UVM-compliant. (See the sketch after this list.)
  • Testflow run-time phases - Break run time into distinct phases, allow defining domains (with or without synchronization), and support rerunning to a specific phase as well as skipping to a future phase. (Note: There is ongoing work in the standards committee to implement this in SystemVerilog, but it is ready today within UVM e.)
  • UVM Scoreboard - Infrastructure for data check, with searching and data transformation algorithms.
  • UVM Low Power - A UVC, created automatically based on a CPF file, verifying and stimulating the power aspect of the device.
  • UVM Acceleration - User-friendly interface to Palladium, implementing SCE-MI communication between the acceleration machine and the UVC running on the host.
  • vr_ad - This register and memory package is now part of the Cadence contribution to UVM e. It became open source, and is being enhanced with additional capabilities (and performance improvements, which should be great news to all of you vr_ad users).
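
To make the Base Types item concrete, here is a minimal sketch. The unit name is hypothetical; only the "like uvm_monitor" clause is the point:

unit my_bus_monitor like uvm_monitor {
    // The existing eRM monitor code stays as is. Deriving from the
    // uvm_monitor base type only declares this unit's role, so generic
    // tools can locate and operate on all monitors in the environment.
};

This one-line change per unit is the only code modification mentioned above, and even that is optional.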

To get a real feeling for what a "UVM UVC" is -- and how it differs from what you already know about verification components in e -- take a simple step: Run the Incisive Verification Builder, and create a UVM e UVC. Next, you can try to convince your colleagues and management that they should also start using UVM e.

Efrat Shneydor

UVM Testflow Phases, Reset and Sequences


In this post, we will discuss the interesting challenge of reset during simulation.

Specman has a very robust implementation of reset during a test, which imitates a return to cycle 0: all threads are terminated, the run() method is called again, and evaluation of temporal expressions is restarted. UVM Testflow adds the option to go back to any phase, not just cycle 0, by calling rerun_phase(target phase). When issuing rerun_phase, the extreme decision to "just kill all threads" is generally a bad idea. For example, some monitoring threads should run continuously throughout the test, uninterrupted, recording all activity on the lines.

The UVM Testflow contains an API that gives verification engineers and test writers fine-tuned control of component behavior during rerun_phase.
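
For orientation, here is a minimal, hypothetical sketch of issuing such a rerun. The reset_asserted event is invented for illustration; rerun_phase() and the tf_get_domain_mgr() accessor are the ones used in the examples below:

extend my_bfm {
    run() is also {
        start watch_reset();
    };
    // On a reset request, rerun the domain from the INIT_DUT phase
    // rather than killing every thread in the environment.
    watch_reset() @tf_phase_clock is {
        wait @reset_asserted; // hypothetical reset indication
        driver.tf_get_domain_mgr().rerun_phase(INIT_DUT);
    };
};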

Let us look, for example, at sequences. Three entities are part of the sequence mechanism:

  1. Sequence(s)
  2. BFM
  3. Sequence driver (seq-er)

Sequence

Some sequences are phase independent. These sequences should continue running completely unaffected by the rerun_phase().

On the other hand, some sequences define a scenario that is phase dependent. For example, a series of initialization transactions is phase dependent: if the INIT_DUT phase is terminated with a rerun_phase, the initialization sequence should stop.

If a sequence is phase dependent, you should register it to the appropriate phase, so as to achieve the following:

  1. The sequence is terminated if the phase is terminated.
  2. If you registered the sequence with blocking == TRUE (that is, the sequence blocks the phase), the domain will not proceed to the next phase before the sequence has finished.

You register sequences to a phase using register_sequence. For example:

extend MAIN MAIN_TEST my_seq {
    !sub_seq1 : SEND_DATA my_seq;
    body() @driver.clock is only {
        gen sub_seq1 keeping {.driver == me.driver; .ctr == 1};
        sub_seq1.start_sequence();
        // register to the FINISH_TEST phase, as blocking
        driver.tf_get_domain_mgr().register_sequence(sub_seq1,
                                                     FINISH_TEST,
                                                     TRUE);
    };
};

 

Bus Functional Model (BFM)

If a BFM serves one domain, it can be seen as belonging to that domain and be rerun whenever the domain undergoes reset. On the other hand, if a BFM serves sequences from various domains, it should not be affected by rerun_phase and should run continuously throughout the test.

A BFM is registered to a phase using the register_thread_by_name API. For example:

extend my_bfm {
    tf_main_test() @tf_phase_clock is also {
        // start the main TCM
        start getting_items();
        driver.tf_get_domain_mgr().register_thread_by_name(me,
                                                           "getting_items",
                                                           POST_TEST,
                                                           FALSE);
    };
};

 

Sequence Driver (seq-er)

The seq-er maintains a list of 'do' requests coming from the running sequences. When rerun_phase is issued, the question is what to do with the items in the queue.

One option is for the seq-er to clean the queue, that is, to remove all items and start fresh.

However, if the seq-er handles items coming from higher levels that are unaware of the reset, it should not clean the queue. Instead, once the reset is finished and the BFM is up and running again, the seq-er should continue passing to the BFM the items that have been waiting in the queue since before the reset. In this case, the reset of the low level and its BFM is said to be transparent to the high-level sequences.

Defining the seq-er behavior upon rerun_phase is done using the seq-er Testflow API. For example:

extend my_driver {
    tf_to_clean_previous_bfm_call(next_phase : tf_phase_t) : bool is {
        result = TRUE;
    };
    tf_to_clean_do_queue(next_phase : tf_phase_t) : bool is {
        result = TRUE;
    };
};

 

Read more about Testflow, rerun_phase, and registration of objects in the UVM e User Guide and UVM e Reference manual.

Efrat Shneydor,

UVM e

Lessons for EDA When Low Power vs. Heat Dissipation Isn’t a Fair Fight: A Case Study With the GoPro Hero2 Camera


Right up there with functional verification, the challenges of low power design and verification present an existential threat to our customers' products, and ultimately their businesses.  Clearly both sides of the low power coin -- reducing generated heat and/or increasing efficiency to make the most of every available joule -- are of primary concern.  But what happens when external, environmental factors conspire to betray even the best low power electrical design?  In the case of the GoPro "Hero2" camera, ironically the waterproof housing that has helped propel this amazing system to some incredible heights can sometimes undermine its operational performance in certain corner cases.

First, let me describe the system setup.  As shown in the image below, the GoPro Hero2 camera is the little cube on the left.  In the middle is the (empty) polycarbonate protective housing that's gasketed to keep out water and dirt.  On the right the camera is sealed inside the housing.

As per this video I shot exclusively with the GoPro Hero2 this summer, I can personally attest that the camera and housing make for a very solid and effective mechanical system.  (If the embedded video doesn't play, click here.)



Overall the camera performed brilliantly.  However, one issue emerged: whenever the camera got too hot, it would automatically shut down to avoid damaging itself.  For example, when the system is in direct sunlight, the housing seems to act like a greenhouse and/or insulates the camera such that it heats up significantly faster than when it's out of the housing (especially on an already hot summer day).  So while I praise the Hero2's designers for building in this automatic fail-safe, at the same time it's frustrating to wait for the camera to cool down.  (FYI, keeping the system submerged in the pool, or putting it in a cooler with the drinks and snacks, expedites the turnaround process.  But I digress ...)

What are the lessons for EDA here?  I see three:

* Modeling "out of band" behaviors in the digital design domain - i.e., considering expected environmental factors in addition to the device's specified logic and firmware performance - will need to become more prevalent as our customers develop more devices destined for mobile use.

* Electronic Design Automation (EDA) will need to get even closer to Mechanical Design Automation (MDA).  I know my colleagues in PCB design and IC packaging are well down this road, but this GoPro case study suggests that SystemC/RTL design and verification must consider macro-level physical factors as well.  Beyond today's UVM Low Power flows, should there be a "UVM Thermal Behavior" verification flow?

* Last but not least, the general moral of this story is that as far as we've come, collectively the EDA industry has a ways to go before our innovations make low power one of our customers' lower priorities.

Joe Hupcey III

On Twitter: http://twitter.com/jhupcey, @jhupcey

P.S.  For fellow parents:
Chances are you have heard of GoPro cameras before but dismissed them as being only for young adventurers.  Certainly GoPro supports such customers and their extreme sports very well.  However, I quickly realized that this camera and its ruggedized shell are as effective for capturing my constantly on-the-go daughter's activities as they are for motor sports and outdoor adventures.  Since I'm constantly asked about the Hero2 system by other parents, allow me to anticipate the question "Does the GoPro 'work' for capturing family-style activities?" with a hearty "Yes".  In a nutshell, it's a perfect second camera for the many times you would rather not subject your "regular" camera (be it a cell phone camera, point & shoot, DSLR, or camcorder) to the rigors and risks of family-oriented activities like swimming, bicycling, rainy hikes, and winter sports.

Speed of “Light” – My First iPhone 5 Impression


So what’s the big deal with the iPhone 5?

Some folks have commented: "It is just a bit faster, taller, lighter – no big deal."

Let me tell you one thing: seeing -- no, handling and touching -- is believing. Like so many devices in the past, you have to try it yourself to appreciate it. The experience is similar to that of the original iPhone and iPad.

The minor weight difference actually makes a huge difference in holding and handling the device. It feels unbelievably light and still very sturdy.

The other major takeaway is speed, particularly noticeable in web browsing. (See the video below.) The engineering effort put into the A6 SoC has paid off. It is very impressive!

As expected, the build quality is superb -- a very noticeable improvement over the iPhone 4S. I love great, high-quality tools and hardware. The iPhone 5 is smoking fast and takes the crown.

Go check it out for yourself.

Axel Scherer
Incisive Product Expert Team
Twitter, @axelscherer

Shameless Promotion: Free Club Formal San Jose (with Lunch) on Wednesday 10/17


Please join Team Verify and other design and verification engineers at the next "Club Formal" on the Cadence San Jose campus on Wednesday, October 17 at 11:30am. This free, half-day event (including lunch) is a great opportunity to learn more about general advances in formal analysis and assertion-based verification, and to network with others in your field.  Based on attendee feedback from previous events, we will deep-dive on the following topics:

* How customers are using the new Coverage Unreachability formal app to save time, power, and die area

* A presentation of the HVC-2011 paper, "Liveness vs Safety - a practical viewpoint," by B. A. Krishna of Chelsio Communications Inc.

* The award-winning DAC User Track paper on bypass verification with formal techniques, reviewed by Vigyan Singhal of Oski Technology

* Updates and product roadmaps for Incisive Formal Verifier (IFV), Incisive Enterprise Verifier (IEV), and Assertion-Based Verification IP, presented by Chris Komar of Cadence R&D  (You might remember Chris from our DVCon tutorial on formal apps this past spring.)

Again, this free event will run from 11:30am to 4:30pm on the Cadence San Jose campus, Building 10, in the Kirra Point conference room. (Building 10 is the high-rise at the intersection of Montague Expressway and Trimble - street address 2655 Seely Avenue, San Jose, CA 95134.)

Sign-in and lunch start promptly at 11:30am.

Register today!  http://www.secure-register.net/cadence/SILR_Club_Formal_4Q12

We look forward to meeting with you soon,

Team Verify

On Twitter: http://twitter.com/teamverify, @teamverify

And on Facebook:
http://www.facebook.com/pages/Team-Verify/298008410248534

 

P.S.  What's a free "Club Formal" user group event like?  Here are highlights from prior events we've held here in San Jose and around the world: http://goo.gl/3xOK8


Snapshot from the last Club Formal San Jose (with 50 attendees!)


P.P.S. We also support private Club Formal events, held at a customer's site so attendees can feel free to talk about their proprietary challenges.  Let Team Verify know (and/or let your friendly local AE or salesperson know) if an onsite Club Formal would be of interest.  Alternatively, to get the latest data ASAP, we can also host live webinars with an agenda focused on your company's top formal and ABV concerns.  (Of course, the downside of this approach vs. going to a public event is that you don't get to network with engineers outside your company to glean new insights, tips, and tricks from their experiences.)

 

Using pli_access for Stubless Indexed Ports


Indexed ports are used to access composite HDL objects in SystemVerilog (SV). Their most frequent use is accessing SV multi-dimensional arrays: you define a simple indexed port and access the array elements through the port indexes.

Ports in general, and indexed ports specifically, are static objects that need to be known at environment build-up. Indexed ports were implemented in such a way that each port needs SV stub code, meaning you have to load your e code and write its stub file using the 'write stub <agent>' command. Otherwise, you get a checksum error during simulation.

There are several disadvantages of using a stub file:

  1. In general, generating a new stub file requires HDL recompilation and re-elaboration. This can take a very long time, since it actually rebuilds the entire DUT.
  2. Some additional port attributes are required to describe the indexed port characteristics. These attributes exist for stub purposes only, and they create a need for extra coding.
  3. Non-IES (Incisive Enterprise Simulator) users can't use indexed ports to access Verilog memories; they can only access SV arrays. The reason for this limitation is that the code we generate in the stub file uses constructs that are supported only in SV (DPI-C functions).

Having said all this, wouldn't it be nice if we could make indexed ports work without a stub file?

This is where the new port attribute pli_access() comes to the rescue. pli_access() allows you to access the HDL object, in this case the SV or Verilog array, through the PLI -- the C interface of the simulator -- rather than through stub access.

Let's look at an example which demonstrates how pli_access() can add flexibility to your code:

//top.sv
module top();
   reg [9:0] arr [10:0][10:0];
endmodule // top

The top module has a two-dimensional array named arr.

Typically, you would define an indexed port to access this array as follows:

<' //top.e
method_type foo(a : int, b : int);
unit u {
    p : simple_port (foo) of uint(bits:10) is instance;
    keep p.hdl_expression() == "<p>[<a>][<b>]";
    keep p.hdl_path() == "arr";
};
'>

But what if, instead of 10, my memory size is determined by a parameter? Or the size can be bigger than 32 bits, and I don't know it up front? With the regular usage, I have to define the type of the port element prior to run time, so that it can go into the stub.

Using the pli_access() attribute, however, no entry is created in the stubs, so the size does not need to be known before the run. We can then declare the port type as a list of bit, which is dynamically sized according to what we assign:

<' //top.e
method_type foo(a : int, b : int);
unit u {
    p : simple_port (foo) of list of bit is instance;
    keep p.pli_access() == TRUE;
    //keep p.hdl_expression() == "<p>[<a>][<b>]"; //not needed
    keep p.hdl_path() == "arr";
};
'>

As you can see above, we applied the following changes:

  1. We set the pli_access() port attribute to TRUE, indicating that this port should not affect the stub.
  2. We no longer need to set the hdl_expression() port attribute; it used to define the indexed port access in the stub.
  3. We changed the type of the port element to list of bit instead of a fixed type (this would not have worked without pli_access() in place -- Specman would have told us it needs a static type with a known size). In general, if you use pli_access(), you also don't need the declared_size() indexed port attribute.

Please note that pli_access() can be used to access only static arrays that are not defined inside classes.

pli_access() also eliminates the need to create a stub file when using a part select on a signal's hdl_path() with the VCS simulator.

Please refer to documentation for more information about pli_access().

Nir Hadaya and Avi Farjoun

Recorded Webinar: Using Metric-Driven Verification and Formal Together For Higher Productivity


[Preface: the upcoming "Club Formal" on October 17 here at the Cadence San Jose campus will also touch on this topic - please join us!]

While it's now common knowledge that there are many benefits to using simulation technology within a metric-driven verification (MDV) flow, it turns out there are just as many benefits to using formal analysis technology in such a flow.  Even better, users can combine the resulting metrics from simulation and formal to take advantage of the best each technology has to offer.  However, combining metrics of different types from completely different types of engines is not trivial without common semantics, methodologies, and technologies to harmonize heterogeneous data into something meaningful to a metric-driven functional verification flow.

Recently Team Verify's Chris Komar (a Product Expert you may remember meeting at DVCon 2012) and our colleague John Brennan (an expert in coverage and metric-driven methodologies and tools) gave a webinar covering all these issues and solutions, entitled "Combining the Best of Both in an MDV Flow - Simulation and Formal".  A recording of this free webinar is available at http://www.cadence.com/cadence/events/Pages/event.aspx?eventid=684 (registration is required).

What you will learn from this free presentation is detailed operational and technical information on how to combine verification metrics from both simulation and formal analysis, allowing you to save substantially on the overall verification effort. A new metrics methodology -- "enriched metrics" -- managed by Cadence® Incisive® Enterprise Verifier (IEV) enables the cooperation of engines and, combined with higher-level management tools, better visualization and a more refined verification flow.  Consider the following example of enriched metrics in action:

On the left-hand side of the diagram are the results from simulation and dynamic assertions; on the right are formal cover and proof results.  In the above example, the results from the formal analysis are a mathematical proof that it is impossible to write a test to hit this cover point.  Hence, you should halt your simulations or formal analysis and begin debugging why this is the case.  The next diagram shows the happier case, where the formal results on the right prove with mathematical certainty that this case can never fail.

It's important to note that while the simulation and dynamic assertion results on the left-hand side of this diagram are positive, those results hold only for the relatively narrow cases the user encoded in the given test(s).  In contrast, the positive formal result on the right is proven true for all inputs, for all time -- truly a relief to know!  Plus, you can safely stop developing new tests and/or testbenches to hit this coverpoint, often saving substantial amounts of time.  (Many customers run formal on an IP block first.  If verification of the block can be completed with formal alone, they use this results display to show management it's safe to move on and/or skip block-level simulation.)

Putting all this in perspective: there was a time when formal was merely a point tool - albeit a very powerful one - used in isolation.  So-called "hybrid" flows that mixed formal and simulation were certainly an improvement, but in most cases mapping the joint simulation+formal analysis results into the overall project database was quite painful, or was only done verbally.  (Engineer: "Hey Boss, we found X number of bugs, but have a few more 'Explores' left to run down."  Boss: "That's great -- I think.  When do you think you are going to be done, again?")

Fortunately, in the webinar Chris shows how the results can be fed back into the main project verification plan and results database in a useful way, to wit:

* Checks and coverage are visibly linked to the human and machine readable verification plan

* The user can easily implement appropriate checks as assertions manually, or have the tool generate them automatically given certain specs, and/or leverage Assertion-Based Verification IP

* Assertions can be run on all available formal and simulation engines

* All contributions from all engines are shown in a unified view

All of these activities output data in a format that makes sense to simulation-centric management -- and thus, all of a sudden, the isolation of formal and multi-engine flows ends, and these tools and related solutions gain mainstream acceptance.

Again, the webinar recording is free (registration is required):

http://www.cadence.com/cadence/events/Pages/event.aspx?eventid=684

and, clocking in at well under an hour, it's perfect for informative lunchtime viewing.

Enjoy!

Team Verify

On Twitter: http://twitter.com/teamverify, @teamverify

And now you can "Like" us on Facebook too:
http://www.facebook.com/pages/Team-Verify/298008410248534

 


UVM SystemVerilog in a Multi-Language SoC World: UVM-ML Webinar


Every SoC project uses multiple languages. Even if the design itself is purely Verilog RTL, it's likely that you have some PLI-based stimulus. In many cases there are multiple languages in use due to multiple suppliers, globalized teams, multiple abstractions, and more. Integrating e, SystemVerilog, SystemC, and C/C++ into one simulation is basic but insufficient for SoC verification.  The question asked by SoC verification teams is, "How can these work together in a cohesive environment?"

Cadence saw this need in the years leading up to UVM and was the first to contribute a multi-language solution. That work was first contributed to the now-offline OVMWorld in 2009, then updated to align with the Accellera Systems Initiative UVM standard and contributed to UVMWorld in 2010.  Since then, the solution has been updated several times to remain synchronized with UVM and to add new functionality. With more than 1,500 downloads, it remains the first and leading open-source solution for UVM multi-language applications.

On Thursday, October 25 at 9:00 am PDT, we'll review the solution and discuss the latest new features.  This technical discussion will be led by Gabi Leshem, Solutions Architect, and Guy Mosenson, Senior Solutions Architect, using the Incisive Verification Kit delivered with the Incisive Enterprise Simulator.  The Incisive Verification Kit is a superset of the Cadence UVM reference flow (with 4,000+ downloads covering v1.0 and v1.1) available on UVMWorld.  During the discussion you will learn about the following topics:

  • Requirements for modeling multi-language UVM-based environments
  • How to implement and integrate a UVM-ML verification environment
  • Multi-language communication and synchronization features
  • Advanced debug techniques key to analyzing multi-language environments and resolving multi-language issues 

So if you are a verification engineer, designer, or manager interested in leveraging existing VIP and improving reuse, this webinar is for you.  You can register for the webinar here:  http://www.cadence.com/cadence/events/Pages/eventseries.aspx?series=Functional%20Verification%20Webinar%20Series%202012&CMP=Home.

Regards,

 Adam "ML" Sherilog, Incisive Product Marketing Director 

Do you MOOC? Expanding Access to e (IEEE 1647) Verification Training Globally


Two of the key factors for successful and productive simulation-based hardware verification are an efficient verification language and an associated methodology. As the global design and verification ecosystem is scattered and evolving rapidly, it is hard to keep all engineers trained.

We are looking at MOOCs, Massive Open Online Courses, as popularized by organizations like Khan Academy, to address the need for more accessible and globalized training. To determine how MOOCs can be applied to verification training, we are partnering with Udacity, one of the premier players in this field.

MOOCs require a very different approach: repurposing traditional, in-person training is not sufficient. The challenge is to keep students engaged while keeping the material at a high standard. This is not trivial, as online users are easily distracted and the online dropout rate is even higher than in the real world.

Our class on Udacity is called Functional Hardware Verification: How to verify chips and eliminate bugs. More information on this class can be found here:
http://www.udacity.com/overview/Course/cs348/CourseRev/1

Udacity announced our class, along with classes from other industry partners such as Google, Microsoft, Autodesk, and Wolfram, in their press release today.

See the promotional video below.


Get your MOOCs on!

Axel Scherer
Incisive Product Expert Team
Twitter, @axelscherer

Event Report: Club Formal San Jose – Features and Techniques for Experts, Verification Apps for All


Last week over 35 power users from over a dozen companies came together for the latest installment of "Club Formal" -- a user group meeting exclusively focused on topics in formal analysis and Assertion-Based Verification (ABV).  This instance of Club Formal featured several papers from Silicon Valley power users on expert-level techniques, as well as highlights of new "verification apps" that are highly automated such that any engineer can run them.  In addition to networking with industry peers, Cadence R&D and field specialists were on hand to share our product roadmap and discuss new requirements from the attendees to better align our R&D development with their needs.


Here are some specific highlights of the event:

Expert Presentation: Liveness vs. Safety
B. A. Krishna of Chelsio Communications Inc. treated the attendees to an encore presentation of his HVC-2011 paper, "Liveness vs. Safety - a practical viewpoint" (full citation below).  The DUT at the heart of the paper was a Deficit Weighted Round Robin (DWRR) arbiter, and the critical verification task was to check that a port is eventually given a grant, regardless of the weight distribution across ports. One verification option is to write this as a "liveness" property.  As Krishna explained, on the plus side this is easy and intuitive to write. However, for verification purposes it required considerable effort to identify abstractions that could produce conclusive results for the property.

The other option is to write a "safety" property.  Unfortunately, this requires a lot of effort to find the upper bound for forward progress. This was a painstaking process, but once they had written the property, verification did not require any abstractions -- the approach is practically DUT independent.  For the given project they had the opportunity to apply both methodologies and compare the two, and thus the conclusion of the paper was an insightful review of their results and of which approach makes more sense in a particular scenario.   Given the amount of questions and discussion this paper prompted, it was clear the merits of both approaches were of keen interest to the audience.


Expert Presentation: Bypass Logic Verification
Bypass logic verification is a common and difficult challenge in modern VLSI design, arising in the verification of CPU, GPU, and networking ASICs.   If you miss a bug in the bypass logic, the whole system can simply freeze.  In this presentation, Club Formal alumnus and favorite speaker Vigyan Singhal of Oski Technology gave an encore of the 2012 DAC User Track Best Presentation award-winning paper on this challenging topic, "Deploying Model Checking for Bypass Verification," by engineers from Cisco and Oski Technology (full citation below).

For starters, the DUT was a bear, featuring a tough-to-verify, 25-deep bypass logic scheme.  In a nutshell, their technique was to use the DUT itself as a reference model, based on the fundamental principle of bypass logic: whether the bypass is active or not, the results should be the same. In this case, the input commands to the reference model (the first DUT instance) were separated by 25 cycles, so its bypass logic is inactive. The challenging twist is that input commands to the second DUT instance are randomly separated by anywhere from 1 to 24 cycles.  Another key factor in their success was using "memory random" as a simple abstraction of the design depth, which allowed the tool to concentrate on the key elements of the DUT's state space.

Bottom-line: they achieved phenomenal results, with 10 bugs found in this already heavily simulated IP.   Indeed, many corner cases reached with formal would have been practically impossible to reach with a constrained-random, simulation-based testbench alone, given the permutations of command combinations, the number of cycles each command pair was spaced apart, and so forth.

Roadmaps and New Product Previews
Chris Komar of Cadence R&D - specifically, a leader of the Product Expert Team - took the stage to give sneak previews of a new verification app coming out in just a few weeks, as well as the 18 month roadmap for our whole verification apps portfolio, expert level flows, and enhancements to the Incisive Formal Verifier (IFV) and Incisive Enterprise Verifier (IEV) platforms.

Allow me to again thank the attendees for their warm reception of our product roadmap, and for being generous with comments about where you'd like to see more attention.  This feedback is invaluable to R&D, and as you saw, we were all taking careful notes.

Lunch, Snacks and Networking!
Last but not least, the intermissions and social segments were of high value as well.   Whether it was the casual lunchtime discussions or the informal Q&A during the breaks, these venues -- comfortable, community settings where users get to swap stories with other users, brainstorm solutions, and share tips & tricks -- were some of the best parts of the day.  I know my Cadence colleagues appreciated your feedback and ideas!

Bottom-line:
An engaging, informative time was had by all, and I believe I speak for everyone in looking forward to the next Club Formal!

Until the next time, happy verifying!

Joe Hupcey III
for Team Verify

On Twitter: http://twitter.com/teamverify, @teamverify

And on Facebook:
http://www.facebook.com/pages/Team-Verify/298008410248534

Reference Info: Paper Citations
Haifa Verification Conference 2011: "Liveness vs Safety - a practical viewpoint"
B. A. Krishna, Chelsio Communications Inc, San Jose, CA
Jonathan Michelson, Cisco Systems, San Jose, CA
Vigyan Singhal, Oski Technology, San Jose, CA
Alok Jain, Cadence Design Systems, Noida, India

DAC 2012 User Track: 8U.2 - Deploying Model Checking for Bypass Verification
Prashant Aggarwal - Oski Technology, Inc., Gurgaon, India
Michelle Liu - Cisco Systems, Inc., San Jose, CA
Wanli Wu - Cisco Systems, Inc., San Jose, CA
Vigyan Singhal - Oski Technology, Inc., Mountain View, CA


P.S. Team Verify is working on the 2013 event calendar now, so this is the perfect time to let us know if you would like to see a Club Formal in your area!  Simply jump to the Team Verify home page and "send Team Verify a private message".

P.P.S.  In case you are unable to attend a Club Formal near you, be sure to check out our calendar and archived recordings of free technical webinars:
http://www.cadence.com/cadence/events/pages/default.aspx

Function Level C Interface – New C Interface for Specman

$
0
0

Working with the conventional Specman C language interface has two major disadvantages:

1. There is a tight dependency between the e code and the C code. The user must include the Specman header file, which is generated based on the e code. Every minor change in the e code requires regeneration of the header file.

2. The C interface doesn't support calling e TCMs (Time Consuming Methods) from C code.

Let's take a look at the following C interface example:

<' //top.e

struct packet { ... };

unit u {
    send(p : packet) is dynamic C routine libtest.so:;

    update() @sys.any is {
        wait [10];
        message(LOW, "[e] update after waiting 10 cycles");
        //do something
    };

    main_thread() @sys.any is {
        var p : packet;
        gen p; //a packet
        send(p);
        stop_run();
    };
};

C export u.update();

'>

The above example shows the send() method, which is implemented in C and therefore defined as a dynamic C routine. update() is exported so you can call it from C code.

The C code then implements send() and calls update() with SN_DISPATCH:

/* test.c */
#include "sn_top.h"
#include <stdio.h>

void send(SN_TYPE(u) me, SN_TYPE(packet) p) {
  /* Do something */
  SN_DISPATCH(update, me, u, (me));
}

This simple example illustrates the two disadvantages we mentioned earlier:

1. Tight dependency - sn_top.h, which is included in the C code, must be regenerated each time the e code is changed, no matter what the change was. And of course, regenerating the header forces recompilation of the C code...

2. Calling TCMs - update() is a TCM, and "The run-time behavior of a TCM called from C is undefined" (quoted from the Specman docs). Trying to run the example above gives an OS11 error. Undefined indeed...

The new Function Level C interface addresses these issues. Let's take a look at the following modified example:

unit u {
    //send(p:packet) is dynamic C routine libtest.so:; -- old declaration
    send(p : packet) @sys.any is import C libtest.so:;
};

//C export u.update(); -- old declaration
export C u.update();

As you can see, the e code remains the same, but we changed the way we export and import the functions. Please note that send() was changed to be a TCM; this way we can call the update() TCM from the C code.

And the new C code looks like:

#include "sn_fli_top.h"
#include "e_func_intf_user.h"
#include <stdio.h>

void send(eStructHandle me, eStructHandle p) {

  /* Do something */
  update(me);

}

You might want to pay attention to the following:

  • We use the generated header file sn_fli_top.h, but in this case we need to regenerate it only if one of the export or import prototypes changes.
  • Struct and list parameters are passed as handles (eStructHandle or eListHandle).
  • e_func_intf_user.h contains auxiliary routines for working with the handles -- for example, to get or set an item of a list. This file resides in the Specman installation directory.
  • We call update() directly; SN_DISPATCH and the other C interface macros are no longer needed. Note that the first argument is the instance on which the call is dispatched.

How do you generate the new header file? Which auxiliary routines does e_func_intf_user.h define? What about the save-restore flow? You can find the answers to these questions, and more details, in the Specman documentation.

Enjoy,

Nir Hadaya

Specman R&D 

Need e/Specman Expertise ASAP? Free Training and Verification Alliance Partners Are Available Now


Recently an EDA industry observer relayed some Specmaniacs' concerns about satisfying the increasing demand for e/Specman-trained verification engineers in Europe and other geographies.   Team Specman is seeing this growth too, and here is what we at Cadence are doing to help:

* First, we have a network of expert e/Specman service providers located around the world (many of whom can support projects in geographies other than their headquarters location).  Our list of "Verification Alliance" partners is here: http://www.cadence.com/Alliances/verificationalliance/pages/default.aspx

I welcome specific queries to me offline so I can help introduce Specmaniacs to these firms -- just use the "Contact" button at the upper right hand side of this page.

* Current Cadence customers and partners should be aware that they now have access to free online e/Specman training, whether for refreshing their skills or for training someone from scratch.  To apply, send an email to training_enroll at cadence dot com with "SPECMAN" in the subject line and the following information in the message body - your name, company, address, phone, and the name of your Cadence salesperson or partner program contact - and our Educational Services team will follow up with you.

* Finally, in partnership with leading massive open online course (MOOC) provider Udacity, in early 2013 we are offering a course on functional verification with the e language (IEEE 1647).  The course is free and open to all - pre-register on the Udacity web site now: http://www.udacity.com/overview/Course/cs348/CourseRev/1

Hannes Froehlich
Team Specman Solutions Architect

UVM e vr_ad -- Specman Read/Write Register Enhancements


If you are a Specman vr_ad user, you probably know that register access is implemented using the read_reg / write_reg actions. To read or write a register, you have to:

  1. Extend a vr_ad_sequence
  2. Add a field of the type of the register you want to access
  3. In the body(), call read_reg / write_reg

For example:

extend MAIN vr_ad_sequence {
    !tx_data_reg : VR_AD_TX_DATA vr_ad_reg;
    !tx_mode_reg : VR_AD_TX_MODE vr_ad_reg;
    body() @driver.clock is only {
        read_reg {.driver == reg_driver} tx_data_reg;
        read_reg {.driver == reg_driver} tx_mode_reg;
    };
};

 

To simplify test writing, starting with Specman 12.1 the read_reg/write_reg actions can:

  • Be called from any TCM, not only from within a sequence
  • Access not only a local field, but also a variable or a reference to a field within the e register model

One of the nice capabilities these enhancements enable is embedding register accesses among other activities -- for example, in a system-level sequence, as in this code example:

extend SEND_AND_CHECK system_seq {
    !serial_frame : LEGAL frame;
    body() @driver.clock is {
        // Perform activity on the serial interface, via the i/f driver
        do serial_frame on driver.serial_driver;

        // Local variable of a register; read it and check the value
        var status_reg : STS vr_ad_reg;
        read_reg {.driver == driver.reg_driver} status_reg;
        check that status_reg.get_cur_value() == 0x37;

        // Access registers without defining fields or variables
        write_reg {.driver == driver.reg_driver} driver.reg_model.reg0 val 0x12;
        write_reg {.driver == driver.reg_driver} driver.reg_model.reg1 val 0xFA;
        write_reg {.driver == driver.reg_driver} driver.reg_model.reg2 val 0x3;
        write_reg {.driver == driver.reg_driver} driver.reg_model.reg3 val 0x0;
    };
};

Note that register accesses are handled by the vr_ad_driver. So, when accessing a register from a TCM that is not part of a vr_ad sequence, you have to specify the vr_ad_driver that handles the operation.

We encourage you to check out the latest UVM e reference manual and user guide for details (both are part of Cadence Help).

Enjoy verification,

Efrat Shneydor & Reuven Naveh,

UVM e

New Product: ARM ACE Assertion-Based Verification IP (ABVIP) Available Now


As anyone who has worked with ARM's AMBA 4 AXI™ Coherency Extensions -- a/k/a the "ACE™" protocol -- knows, there are a ton of different configuration options and operational scenarios available to the designer.  Of course, this flexibility and power presents a significant verification challenge.  Hence, building on the success of our ACE Universal Verification Component (UVC) Verification IP product, we are excited to announce the immediate availability of the complementary Assertion-Based Verification IP (ABVIP) for ACE.  Written in standard IEEE SystemVerilog Assertions (SVA), this new ACE ABVIP simultaneously supports simulation-centric ABV, pure formal analysis, and mixed formal-and-simulation verification flows.

In this 3-minute video, R&D Product Expert Joerg Muller outlines the main capabilities of this new product and how it offers specific configuration, run-time performance, and context-sensitive workflow advantages in the SimVision debug environment vs. competitive offerings:

If the video doesn't play, click here.


In a nutshell, this new product marries all the next-generation ABVIP capabilities we introduced earlier this year with Cadence's deep knowledge of the ACE protocol and its many configuration options.

This product is available immediately - please contact your Cadence representative for more details, or ask us more about it via the "Contact" button at the upper RHS of this page.


Team Verify

On Twitter: http://twitter.com/teamverify, @teamverify

And now you can "Like" us on Facebook too, where we post more frequent updates on formal and ABV technology and methodology developments:
http://www.facebook.com/pages/Team-Verify/298008410248534

 

Reference Links

CDNLive Silicon Valley 2012: Mirit Fromovich on automating ARM "ACE" verification



If the video fails to play, click here.

Cadence ACE VIP Accelerates Development of Multi-Processor Mobile Devices 

How to Verify ARM ACE Coherent Interconnects with UVM verification IP

Richard Goering's Industry Insights: ARM ACE Verification IP: Verifying Hardware Cache Coherency

July 2012 Product Update: New Assertion-Based Verification IP (ABVIP) Available Now 

Cadence's Verification IP Catalog

 

 


Techniques to Boost Incisive Simulation Performance


Functional verification is the biggest challenge in delivering more complex electronic devices on increasingly aggressive schedules. Every technique for functional verification relies on a fast simulation engine for execution, so performance is of prime importance to all users.

Simulation performance can't be captured by a single number or a single optimization, because each environment is unique in terms of the methodology deployed, the languages involved, the size of the design, and the verification environment.

Hence, the Cadence Incisive performance team has developed a handbook covering the major aspects of simulator performance. It leads users through a series of steps to better understand, and then address, their performance needs.

In many situations, localizing and resolving a performance bottleneck takes a considerable amount of time. The application notes in the handbook help speed this process by giving you the information needed to improve performance.  In cases where you need more performance than the notes provide, they guide you in creating a well-articulated, well-defined performance requirement, making it easier for Cadence to optimize Incisive for speed.

Here is the series of application notes, each focusing on a technique or technology that helps you improve performance with the Incisive Simulator.

  • Incisive Performance Analysis Checklist: A flow-based checklist for analyzing performance with the Incisive Simulator.
  • Top Focus Areas to Maximize Your Simulation Performance: Detailed analysis of the top causes of performance bottlenecks.
  • Maximize Incisive Performance with Assertions: Assertion-related guidelines and commands to help in Incisive performance analysis.
  • Maximize Incisive Performance with Coverage: Coverage-related guidelines and commands to help in Incisive performance analysis.
  • Analyzing Incisive Profiler for Performance: Understanding profiler entries for better action-oriented performance analysis.
  • Maximize Incisive Performance with GLS: Describes command options, delays, and timing checks that can affect gate-level simulation performance.
  • Incisive Debug Memory Consumption: Command options and utilities/steps to debug system memory consumption.
  • Maximizing Productivity with Multi-Snapshot Incremental Elaboration (MSIE) Example: Describes a new technology that allows a large, invariant portion of the environment to be pre-elaborated and then shared across many varying tests.
  • Analyze UVM Environment Performance Using Iprof: Describes the use model of the Incisive Advanced profiler (Iprof) and how to use the profiler call-graph reports to debug performance bottlenecks in a UVM-based design verification environment.
  • Maximize Incisive Performance with SystemVerilog Randomization: Understanding testbench structure, Tcl commands, and profiler analysis for Incisive performance with SystemVerilog.
  • Specman Performance Handbook: Performance-aware coding for e testbenches; advanced command options and performance tips for working with Specman.


NOTE - To access the documents listed above, click a link and use your Cadence credentials to log on to the Cadence Online Support website: http://support.cadence.com/

The Cadence Online Support website, http://support.cadence.com, is your 24/7 partner for getting help and resolving issues related to Cadence software. If you are signed up for e-mail notifications, you have likely noticed new solutions, application notes (technical papers), videos, manuals, and more.

To help us improve, send us your feedback by adding a comment below or by using the feedback window at the top of each document view on http://support.cadence.com. Let us know whether these documents helped you improve the performance of your environment -- and if so, it would be good to know by how much.

Sumeet Aggarwal

Avoid Overly Long Expressions in Specman e Code


When you write your e code, a good practice is to avoid expressions that are "overly long," even though such expressions are completely legal. While there is no hard definition of what constitutes an overly long expression, very long expressions can lead to human errors and parser processing problems.

Very long expressions are hard to read and understand. This also makes them error-prone, as an accidental syntax error in the middle of such an expression is hard to notice.

Furthermore, such an expression can lead to undesirable behavior in the Specman parser. It can take a long time to parse, and in some cases (especially on a 32-bit platform) the parser can eventually run out of memory and crash. This is all the more likely to happen if the expression contains a real syntax error (which in a shorter expression would simply produce a syntax error message). Thus, by avoiding expressions that are too long, you benefit twice:

1.  You are less likely to introduce accidental syntax errors in the first place.

2.  You help the Specman parser detect such errors faster when they do occur.

On top of that, it is a good habit to use parentheses in long expressions where appropriate. This not only makes the code more readable; in certain cases it can actually make parsing faster.

One last recommendation is to break a long expression into several smaller ones. To illustrate these recommendations, let's take a look at the following code:

print x == 0 or x == 1 or x == 2 or x == 3 or x == 4 or x == 5 or x == 6 or x == 7 or x == 8 or x == 9 or x == 10 or x == 11 or x == 12 or x == 13 or x == 14 or x == 15 or x == 16 or x == 17 or x == 18 or x == 19 or;

This expression contains a syntax error -- it has an extra "or" at the end. Because the expression is very long, the parser starts a very long computation instead of immediately reporting the error. (The long computation is avoided once the error is fixed.) The long computation might also eventually lead to a crash.

If you add parentheses as follows, the expression becomes more readable, and you are less likely to make the above syntax error in the first place:

print (x == 0) or (x == 1) or (x == 2) or (x == 3) or (x == 4) or (x == 5) or (x == 6) or (x == 7) or (x == 8) or (x == 9) or (x == 10) or (x == 11) or (x == 12) or (x == 13) or (x == 14) or (x == 15) or (x == 16) or (x == 17) or (x == 18) or (x == 19);

You can also break the expression into two smaller ones:

var tmp1 := (x == 0) or (x == 1) or (x == 2) or (x == 3) or (x == 4) or (x == 5) or (x == 6) or (x == 7) or (x == 8) or (x == 9);

var tmp2 := (x == 10) or (x == 11) or (x == 12) or (x == 13) or (x == 14) or (x == 15) or (x == 16) or (x == 17) or (x == 18) or (x == 19);

print tmp1 or tmp2;
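
As an aside, for this particular membership test, e's range-list operator gives a form that is both shorter and immune to the problem. A minimal equivalent of the check above:

print x in [0..19];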

To improve usability and ease the pain of this limitation, Specman automatically issues a warning, where possible, when parsing an expression starts taking too long. The warning does not point to an exact syntax error, even if there is one; it does, however, refer to the source line containing the problematic expression. You can then stop parsing by pressing Ctrl-C, examine your code, and correct it if needed. This warning is being added in the upcoming hotfix releases of Specman, starting from 10.2.

Yuri Tsoglin

e Language team, Specman R&D

Specman: Determining a Good Value for optimal_process_size

$
0
0

Specman's Automatic GC Settings mechanism aims to eliminate the need for users to control the parameters that determine each garbage collection's behavior.

Setting config mem -automatic_gc_settings=STANDARD tells Specman to calculate all the parameters itself, ensuring that Specman's memory management system works in an optimal way.

The only parameter left for the user to play with is -optimal_process_size (aka OPS). This parameter is important because many of the other automatically calculated parameters are derived from it. To set it optimally, you should ask the following question:

WHAT SIZE MEMORY IMAGE DO I WANT MY PROCESS TO HAVE?

Let's say, for instance, that you have 20 GB free on your machine and two simulations running in parallel. The optimal process size for each simulation would then be 10 GB, so you just assign OPS a value of 10 GB.

Now, what if you don't know? Or don't care?

Specman then sets this value itself, based on the amount of RAM available on the machine the Specman process is to run on. Note: this can be quite a big number. It might make Specman run fast (hardly performing any GCs), but it consumes lots of memory. The reasoning is that if the user does not care about process size, Specman will use the maximum available memory in order to create a smooth run.

Realistically, though, most users do care about their process size and want to limit it on one hand, while giving it enough headroom to avoid memory issues on the other. So, is there a good way to calculate an efficient OPS -- one that ensures the machine uses only as many resources as needed?

Let's start by saying that it is not possible to know exactly how memory settings will affect a run unless they have already been measured on EXACTLY the same run. Nevertheless, knowledge from similar runs can give us hints. We can collect run information and analyze it in a way that helps us understand how efficiently the environment runs, and whether we need to take action to achieve fewer OOM (out-of-memory) failures, better performance, more effective memory utilization, and so on. However, when dealing with batch tests, choosing one specific test "as representative" for measurement can be as difficult as choosing the memory settings themselves, so the information should be collected on a representative group of runs, or on all the batch runs.

So what exactly do we need to look for in the log file in order to calculate the optimal size for our environment? Let's first introduce three concepts:

1)  "Static" (Live) Specman heap - this is the basic size of Specman dynamic memory that mostly belongs to persistent objects and generally remains stable during the simulation

2) "Static" non-Specman heap - Same as 1) but for all the rest of the players in the process

3) Peak memory requirement during copy GC - while a copy GC is operating, we are most likely to hit the peak of memory consumption, since Specman might double its memory to perform the GC. This peak can be estimated as:

 (2 X Maximum SN live heap) + (Garbage) + (Maximum non-SN heap)

Collecting the Relevant Data

Now in order to sample these values, we will need to collect the relevant memory and garbage collection related information. To collect this information, we need to set the following configuration parameters in our pilot simulation:

  • print_process_size=TRUE - Prints the entire process size in three stages for every GC: Before, At peak time, and After
  • show_mem_raw=TRUE - Prints Specman memory consumption, including top consumers
  • print_debug_msgs=TRUE - Prints messages, including the exact phases of GC
  • setenv SPECMAN_MEMORY_ACCOUNTING - Gives us information about Specman's Dynamic allocation.
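Putting the parameters above together, a pilot run could be prepared roughly as follows (a sketch; the exact option spelling may vary by release). In the shell, before starting the run:

setenv SPECMAN_MEMORY_ACCOUNTING

Then, at the Specman prompt:

config mem -print_process_size=TRUE

config mem -show_mem_raw=TRUE

config mem -print_debug_msgs=TRUE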

Finding a value for Specman Live heap

Analyzing the result log, let's estimate the static Specman heap first. There are three printouts that can help us determine this value:

1. Last line in the Copy (or disk-based) GC printout, which is the new process size "after GC":

"Done - new size is nnn bytes"

For example:

MEMORY_DEBUG: process size after GC:

MEMORY_DEBUG:   VSIZE = 1990940, RSS = 1792688

Done - new size is 1804478256 bytes.

2. The "Total size of reachable data" line in the show mem "Process sizes" table

For example:

Total allocated size of numerics:       12176 +

Total allocated size of structs:       34568K

Total size of reachable data:           1719M +

Total size in free blocks:               343K +

Total size of unreachable data:          375M

Heap size:                              2096M

 

3. Last line in the OTF GC printout: "Done - total size of reachable data is nnn..."

For example:             

MEMORY_DEBUG: process size at the peak memory usage:

MEMORY_DEBUG:   VSIZE = 3653716, RSS = 3514240

Done - total size of reachable data is 1,096,707,344 bytes (plus 2,417,146,640 free).

There will be several instances of these printouts (one for each GC that occurred during the simulation), and we need to choose the highest value that was printed. Printout no. 1 above is the most accurate, and we should take the value from it, but the others should also be considered (if show mem is used and an OTF GC is encountered).
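Since a long run may perform many GCs, a quick way to collect all the candidate values is to grep the log and take the highest number printed (the log file name here is hypothetical):

grep "Done - new size is" specman.log

grep "Done - total size of reachable data" specman.log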

Finding a value for Static non-SN heap

To estimate the static non-SN heap, take the VSIZE reported after a copy (or disk-based) GC and subtract from it the "Done - new size is nnn bytes" value from the line below it (values obtained from OTF GC prints are not suitable for this).

MEMORY_DEBUG: process size after GC:

MEMORY_DEBUG:   VSIZE = 1990940, RSS = 1792688

Done - new size is 1804478256 bytes.

In this case: 1990940K - 1804478256 bytes (note the mixed units: VSIZE is reported in KB, while the new size is in bytes).

So the maximum VSIZE after a copy GC, which is supposed to be "static SN heap" + "static non-SN heap", is an estimate of the minimum memory requirement for the environment.

Finding a value for Dynamic allocation (Garbage)

There is one more thing to estimate -- the amount of memory used for dynamic allocations that are collected during GC. It depends on the environment and how fast it allocates "transient" objects. If allocation happens fast, you need a large buffer so that GC is not triggered too often; if there are few dynamic allocations, the buffer can be relatively small. In most cases, it is of the same order of magnitude as the static SN heap.

Example Calculation

Let's look at an example of how you could come up with a number for OPS on a typical simulation run.

As per the above example:

"process size after GC" : VSIZE = 1990940 (~1944M)

"Done - new size is"        :  1804478256 bytes (~1721M)

Static non-SN heap         = 1944 - 1721 = 223M

Live SN heap                      = 1721M

 

Recommended Optimal Process Size = (Static non-SN heap) + 2 X (Live SN heap) + Dynamic allocation

Or

*OPS= (Static non-SN heap) + ~3X (Live SN heap)

                OPS= (223) + (3 * 1721) = 5386M

*Since we estimated the dynamic allocation to be of the same magnitude as the Live SN heap, we used 3 times the value of the Live SN heap. In most cases the dynamic allocation value will be lower, so we can round down the result. For instance, in the above example:

OPS = 5386M =~ 5G.

Notes:

  • The recommended OPS will not be effective if we notice disk-based GC occurrences in the pilot simulation. In that case we should set a value higher than the calculated OPS in order to avoid disk-based GC, and then, if we succeed in avoiding it, re-calculate the OPS in the same manner.
  • If you know your environment does not use a lot of dynamic allocations (in such cases you will see that the difference between the values of the Live SN heap before and after GC is small), you can change the above formula to something closer to

OPS= (Static non-SN heap) + 2X (Live SN heap)

And round it up. This way you won't end up with a simulation that consumes a lot of memory but performs no GCs.

  • If you try setting the OPS to a very low value, Specman will automatically adjust it. Specman will notify you when it sets the optimal_process_size to a value other than what you specified, if you set the -notify_gc_settings option to TRUE. In that case, you will see a message like the following:

auto_gc_settings: Application too big, setting optimal_process_size to

760803328 which is sn_uintptr2ep(760803328) 
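To see such notifications, the option can be enabled up front (a sketch, assuming it is set like the other memory options in this post):

config mem -notify_gc_settings=TRUE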

Summary:

Calculating the optimal OPS is a bit tricky. You want it to limit Specman's memory usage while giving it enough space to avoid memory issues. The above example calculation gives you no more than a recommendation, based on a previous run. To get a better, more realistic value, run several pilot simulations and perform the above analysis on the average of those runs.

* From SPMN 12.10s4 onward, this calculation is done automatically when you apply config memory -print_process_size.

Avi Farjoun

Muffadal Laila 

2013 CES: Top 4 Trends Benefiting EDA

While a variety of EDA customer segments are growing, consumer electronics continues to drive the lion's share of EDA industry revenues.  Hence, many events at last week's annual Consumer Electronics Show (CES) in Las Vegas can be read as leading indicators for the EDA business.  While I couldn't personally attend CES this year, like last year my two trusted agents (specifically, Unified Communications (UC) expert David Danto of Dimension Data, and Joseph Hupcey Jr., video & communications systems architect and father of yours truly) were on the ground to field-check the myriad reports streaming in from legacy and new media.  Thus, allow me to highlight the following trends from CES 2013 that I suggest will have a big impact on EDA this year.

1 - TV's ongoing evolution: clearly the most visible product category at CES was the new crop of "UltraHD" a/k/a "4K" resolution TVs.  That's 3840 X 2160 pixels, or twice the horizontal and vertical resolution of the 1080p HDTV format, with four times as many pixels overall.  Concurrently, a lot of very pretty, very large screen OLEDs were given center stage in many booths, suggesting that after over 10 years of CES previews this vibrant, richly color-saturated display technology is finally ready for prime time.  My agents report that 4K screens are noticeably better than today's HD - it's not quite the same dramatic leap as from SD to HD - but the difference is visible enough to tempt people to upgrade if the price is right.  And thus the key questions revolve around volume production availability and pricing, i.e. when will the cameras, DVRs, streaming support boxes and services, and the TV sets themselves be available at the price points consumers expect?

As it turns out, many professional and even some prosumer cameras already support 4K today.  Quite a few productions are shot in 4K and, after the final edit, are downsampled to standard HD (don't ask me why, but somehow 4K video downsampled to 1080p looks richer than natively shot HD).  There are also a handful of professional-grade theater projectors that support 4K.  However, the good news from the perspective of EDA and our customers is that this is pretty much where the equipment support ends.  The entire video data flow after the editor is up for grabs - DVRs, routers, and any other apps you can think of for TV need to be re-created to support consumer UltraHD.  Given the bandwidth required to shuffle 4K frames around, hardware-assisted verification products will clearly continue to enjoy robust demand.  With the ongoing growth of apps on the TV platform, I further assert that hardware/software design and verification solutions will also see ongoing growth.  Last but not least, low power design and verification requirements -- whether from regulatory bodies or end customers themselves -- will continue to be a factor in this new generation of equipment.

Bottom-line: I agree that UltraHD will inspire demand for new TVs and supporting equipment, which means many more SoCs and peripheral ICs will need to be designed and shipped.

2 - "Born Mobile": this tag line was the theme of Qualcomm's opening keynote presentation, and indeed it could be applied to over half of CES, where smart, mobile devices of all forms -- and a plethora of supporting accessories -- took up a large chunk of the exhibit hall acreage.  I see EDA being well positioned to benefit in several major categories: low power (self-explanatory), advanced-node tool chain support, and design and verification IP.

At the risk of stating the obvious, the demand for increasing performance and functionality is clearly unabated, and hence the investments being made in 14nm and lower are money well spent by our industry.  Another trend, expertly observed in this EETimes interview of Broadcom's co-founder and CTO Henry Samueli, is that almost everything on the show floor had embedded WiFi connectivity.  Beyond the opportunities for network infrastructure equipment growth, I believe this significant step toward the "Internet of Things" heralds opportunities in design and verification IP -- not just for WiFi and other radio IP, but for IP that enables the rapid smartening-up of previously unconnected, dumb devices like refrigerators.

3 - Born Mobile, Automotive Style: CES 2013 devoted a massive area to in-car entertainment and supporting accessories.  Such was its scale that my agents were barely able to scratch the surface of this pavilion, but they came away impressed at how this category has visibly grown year over year in size and scope.  It used to be all about glitzy car stereos, speakers of all shapes and sizes, and amusing arrays of blinking lights to decorate the audio installation.  Today, the offerings are all about outfitting the passenger cabin like a home entertainment center, where you can customize the standard platform with apps like any other self-respecting modern device.  The obvious point: in addition to the growth in electronics used under the hood, the demand for multiple mobile entertainment centers in the driveway is good news for semiconductor growth.

4 - Standards-Based IP Enabling Clever Innovation:  Perhaps a better case in point for anticipating growth in standards-based IP and low power design & verification is the eminently practical StickNFind Bluetooth Sticker.  Simply affix one of these special stickers to something you often lose (car keys, TV remote control, phone, luggage), and when it goes missing you can hunt it down using the companion iOS or Android smartphone app, which presents a radar-like display for sweeping for the lost item.  Clearly, products like this are enabled by the availability of high-quality, standards-based design and verification IP; and in turn we can expect clever new applications like this to drive growth.

If you went to CES this year -- or not -- please share your observations in the comments below, or offline.

Until next CES, may your throughput be high and your power consumption be low.

Joe Hupcey III

On Twitter: @jhupcey

P.S. Speaking of trade shows, in the verification space the annual DVCon's clear focus on functional verification technology and methodology has made it a growing, high-value technical and trade forum.  Hence, my colleagues and fellow bloggers will be there in force February 25-28 at the DoubleTree Hotel in San Jose, CA!  In particular, I welcome you to join me at the Wednesday lunch panel, "Expert Panel: Best Practices in Verification Planning," and the Thursday tutorial entitled "Fast Track Your UVM Debug Productivity with Simulation and Acceleration" (includes coffee & lunch).  Register today!

 

Reference Links and/or Other Interesting CES 2013 reports

David Danto of Dimension Data's report on CES 2013: A View From The Road Volume 7, Number 1 -2013: International CES

SemiWiki: Battling SoCs: QCOM vs NVIDIA vs Samsung

EETimes DesignNews: CES Slideshow: The Next Big (or Little) Things

 

 

Specman: An Assumed Generation Issue and its Real Root Cause

Random generation is always a complex task, and differences in results are usually very hard to debug. Besides, generation misbehavior always rings many bells in R&D :-)

A customer reported a random stability issue, explaining that the generator (IntelliGen) generated different values with the same seed. One simulation was started from vManager, the other in a Unix shell, and they ran in different run modes (compiled vs. interpreted).

Looking into the (quite complex) environment, it turned out that the beginning of the simulation was identical, but as time advanced the results started to differ. I assume some of you have experienced similar behavior in the past.

A first look revealed that complex list manipulations were performed in many levels of nested method calls. Each list -- a list of units -- was manipulated by several list methods (sort, add, unique, etc.). The results were printed out to the screen where, after a while, the lists started to differ.

So an idea came to mind: The problem is probably not a generation issue, as the static generation was identical in all cases; rather it is a runtime issue, most likely caused by list manipulation. But how could the way the simulation was launched or the run-mode be responsible for the differences?

With no alternative, we proceeded to debug through the source code step by step, examining the list after every manipulation. The complexity and deep nesting of the code (there were even recursive methods that touched those lists) resulted in about two days of painstaking analysis without finding a difference. Then we hit pay dirt -- we came across the construct where the lists began to differ. Below is the sample code:
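(The original snippet was posted as an image; the following is a hedged reconstruction of the construct in e, with hypothetical names.)

// agents is a list of units (names are hypothetical)

var sorted_agents: list of agent_u = agents.sort(it);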

So where is the problem? The code identified the list of units to be sorted, but did not identify a specific field to sort by -- the argument (it) referred to the unit itself.

Looking up the sort() method in the Incisive/Specman documentation, we found a Note in its description that suggested a possible clue: when the sort expression evaluates to a struct or unit rather than a scalar field, the sort is effectively based on the item's physical memory address.

A scan of the log files showed that the vManager run started a garbage collection at some point before the sorting action, and that the plain Specman simulation logs did not show this garbage collection. This difference in behavior was the result of different memory settings between different simulation runs.

The bottom line: Garbage collections can change the physical memory address of the units in the list, which can affect the sorting of these addresses before and after such a memory operation.

Summary:

  •  The root cause of the problem was that the sorting statement contained a bad argument (the result of a copy-and-paste error) -- the construct was taken from code where it worked perfectly fine with a list of strings.
  •  The problem was not generation-related, even though it looked that way at first.
  •  More importantly, we uncovered usage of a problematic construct: the user based the sort on the physical address. This should be avoided even if garbage collection is not performed (and all the more so when garbage collection is performed). A corrected sketch follows the summary below.
  •  Uncovering such an issue is a perfect task for the Specman Linter. Cadence intends to enhance the linter's capabilities in that direction.
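For completeness, here is a corrected sketch of the construct above, sorting by a stable scalar field instead of by the unit itself (the field name is hypothetical):

var sorted_agents: list of agent_u = agents.sort(it.id); // deterministic across GCs

Sorting by a scalar field of the unit keeps the order deterministic regardless of when garbage collections occur.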

Hans Zander
