Channel: Cadence Functional Verification

Photo Essay and Comments on DAC 2012 in San Francisco, CA


In addition to the annotated image gallery (click here or on the image), below are some long-form comments on particular aspects of this year's Design Automation Conference (DAC 2012).


Verification momentum
- I grant that I might be influenced by some amount of selection bias, but I could swear that this year there was way more interest and vendor presence in the functional verification space than at recent DACs.  Our group was certainly out in force: in addition to the popularity of our demo suites and demo pod, we were plenty busy supporting customer meetings, booth theater presentations, User Track papers and posters, and standards organization activities like the UCIS 1.0 launch luncheon.  And across the show floor it seemed like every third booth was touting a functional verification offering of some form.  With the 20nm node on the horizon -- and thus gigagate chips with hundreds of IPs becoming mainstream vs. the province of only our largest customers -- all this verification-related energy comes as no surprise.  (Recall there was similar momentum in this space at DVCon and at CDNLive San Jose earlier this year.)

Formal & ABV momentum - Similar to last year's experience, I sensed increased visibility and interest in formal and assertion-based verification (ABV) related technologies.  For starters, we received many such queries at the booth demo pod about our Coverage Unreachability app (some fortuitously inspired by our formal-driven Lego Rubik's Cube solving robot).  There were also novel, formal-based initiatives like the "Oski Challenge", where services house Oski Technology took a sight-unseen IP block from NVIDIA and found 4 serious issues in the 72-hour time frame of DAC (full disclosure: Oski used Incisive Enterprise Verifier for this ambitious project).  Finally, the DAC User Track best paper was about a bypass memory verification project that used Incisive Enterprise Verifier.  (Coincidence?)

The low profile of cloud computing - In sharp contrast to last year's DAC (recall Richard Goering's report on the 2011 DAC panel, "DAC Panel Says 'Yes' to EDA in the Cloud -- But Differs on When", which captures the prevailing sentiment of that time well), there was scant evidence of cloud-oriented solutions anywhere on the floor.  This is not to suggest that EDA-centric cloud solutions are dead (for example, I know Cadence's Hosted Solutions continues to build on its loyal customer base).  Instead, it's apparent that the majority of EDA customers are either (a) reluctant to embrace cloud solutions in general for whatever reasons, or (b) reluctant to have EDA vendors handling this aspect of their operations.  Granted, there are a number of applications and data sets that are either too inefficient or just too sensitive to move in and out of the cloud.  But in the greater world outside EDA, with every passing day this list appears to be shrinking ...

DAC itself - Despite the marginal increase in attendance vs. last year, the show was dramatically smaller than the last time it was in San Francisco.  It seems that many customers are too busy to come to a general forum like DAC.  Conversely, they seem to make time for smaller, topic-focused events like DVCon, ESC/Design West, or even specific technology tracks at vendor events like CDNLive.  Additionally, in 2012 there are numerous channels where customers can get information and support when and where they need it.  The bottom line: while customers continue to vote with their feet year after year, at least in 2012 it was clear to me that the declining DAC attendance figures do not reflect the health of the EDA industry, or of the electronics and semiconductor industries that we serve.

To conclude on a positive note, the Cadence Denali Party was just as well attended and fun as ever -- this set has a handful of images and a brief video for you to see for yourself.

Until next DAC, may your throughput be high and your power consumption be low.

Joe Hupcey III

 


DAC 2012 Best User Track Paper Review: Deploying Model Checking for Bypass Verification


Bypass logic verification is a common and difficult challenge for modern VLSI design that arises in the verification of CPU, GPU, and networking ASICs.  Get it wrong and/or miss a bug in the bypass logic, and the whole system can simply freeze.

Fortunately, the 2012 DAC User Track Best Presentation award-winning paper titled "Deploying Model Checking for Bypass Verification" by engineers from Cisco and Oski Technology (full citation below) describes an easily replicated, nearly push-button flow that does not require users to put in a lot of effort to write complex input constraints.  And full disclosure: they used my favorite combined simulation+formal tool, Incisive Enterprise Verifier (IEV)!

The paper was presented by Vigyan Singhal, Oski Technology CEO (right). Here are my highlights of this groundbreaking work:

* Again, it bears repeating that the flow they created is nearly push-button, since it does not require users to put in much effort to write complex input constraints.  Their creativity is particularly impressive since the DUT is a bear, with a tough-to-verify, 25-deep bypass logic schema.

* In a nutshell, their technique was to use the DUT itself as a reference model, based on the fundamental principle of bypass logic: whether the bypass is active or not, the results should be the same. In this case, the input commands to the reference model (1st DUT instance) are separated by 25 cycles, so the bypass logic is inactive. However, the challenging twist is that input commands to the 2nd DUT instance are randomly separated by anywhere from 1 to 24 cycles.

* Another key factor in their success was using "memory random" as a simple abstraction of the design depth.  This allowed the tool to concentrate on the key elements of the DUT/state space.

* Bottom line: they achieved phenomenal results, with 10 bugs found in this already heavily simulated IP.  Indeed, many corner cases they reached with formal would have been practically impossible to reach with only a constrained-random, simulation-based testbench, given the permutations of command combinations, the number of cycles each command pair was spaced apart, etc.

* Although they didn't go into this in the paper, speaking with the authors afterward I learned that IEV was also used to generate "formal environment coverage" to give them confidence that the design was well covered given the verification depth.
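The core of the technique in the highlights above -- drive one copy of the DUT with commands spaced far enough apart that the bypass never activates, drive a second copy with randomly packed commands, and insist that both produce identical results -- can be sketched as a toy Python model. To be clear, everything below (the 2-deep pipeline, the op format, the ToyDut class) is invented for illustration; the actual project used a 25-deep design and formal model checking with IEV, not Python simulation.

```python
import random

DEPTH = 2  # toy pipeline depth; the real design's bypass was 25 deep

class ToyDut:
    """Toy design with a DEPTH-cycle write pipeline and bypass forwarding.
    A write commits to the register file DEPTH cycles later; a read in
    that window must be served by the bypass network."""
    def __init__(self):
        self.regs = {}   # architectural register file
        self.pipe = []   # in-flight writes: (addr, value, age)

    def tick(self, op=None):
        # age the in-flight writes and commit those that are DEPTH old
        aged = []
        for addr, val, age in self.pipe:
            if age + 1 >= DEPTH:
                self.regs[addr] = val
            else:
                aged.append((addr, val, age + 1))
        self.pipe = aged
        if op is None:
            return None
        kind, addr, val = op
        if kind == "write":
            self.pipe.append((addr, val, 0))
            return None
        # read: the bypass network scans in-flight writes, youngest first,
        # before falling back to the committed register file
        return next((v for a, v, _ in reversed(self.pipe) if a == addr),
                    self.regs.get(addr, 0))

def run(ops, gap_after):
    """Drive ops into a fresh DUT, inserting gap_after(i) idle cycles
    after op i; collect the values returned by the reads."""
    dut, results = ToyDut(), []
    for i, op in enumerate(ops):
        out = dut.tick(op)
        if op[0] == "read":
            results.append(out)
        for _ in range(gap_after(i)):
            dut.tick()
    return results

ops = [("write", "r1", 5), ("read", "r1", None),
       ("write", "r1", 7), ("write", "r2", 9),
       ("read", "r1", None), ("read", "r2", None)]

random.seed(0)
reference = run(ops, lambda i: DEPTH)                    # bypass never active
tested    = run(ops, lambda i: random.randrange(DEPTH))  # bypass exercised
assert reference == tested  # same answers whether or not the bypass fires
```

The final assert is the paper's self-referential check in miniature: any forwarding bug in the tightly packed run would make its read results diverge from the widely spaced reference run, with no hand-written reference model required.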

If you are tasked with bypass verification in any way, I strongly recommend that you review this paper.  It will give you a lot of food for thought in general, and there is a high probability that the methodologies they used can apply to your project as well. The paper is available at the Oski Technology web site.

Finally, congratulations to all the paper's authors for their well-deserved award!

Darrow Chu
Sr. Sales Technical Leader
For Team Verify

Reference Info: the paper's complete citation
8U.2 - Deploying Model Checking for Bypass Verification

This paper describes how we applied model checking, a formal verification approach, to establish correctness of the bypass logic in our design and how we found corner case bugs that are almost impossible to find with simulation. We used the RTL design both as the DUT and as a component of the reference model in the formal verification setup and experimented with different initialization approaches. Further, by adopting end-to-end verification, we saved time on writing and verifying complex functional constraints. Since bypass logic is prevalent in many processor and networking designs, we believe our methodology will benefit such designs.

Speaker:

Vigyan Singhal - Oski Technology, Inc., Mountain View, CA

Authors:

Prashant Aggarwal - Oski Technology, Inc., Gurgaon, India

Michelle Liu - Cisco Systems, Inc., San Jose, CA

Wanli Wu - Cisco Systems, Inc., San Jose, CA

Vigyan Singhal - Oski Technology, Inc., Mountain View, CA

 

Photo by Joe Hupcey III

Video: Oski Technology’s Courageous "72 hour Verification Challenge" Using Incisive Enterprise Verifier (IEV)


I've seen a lot of intriguing promotions over the years, but at DAC 2012 our partners at Oski Technology tackled a truly unique challenge. To show off their formal verification prowess, they took an IP block from NVIDIA sight unseen (actually, on the Sunday evening before DAC they received a spec and a 15-minute briefing), and over the course of 72 hours, from Sunday at 5pm to Wednesday at 5pm, they used Incisive Enterprise Verifier ("IEV") and their years of verification experience to deliver impressive results.  In this video Oski's CEO Vigyan Singhal gives a snapshot of the challenge while it was in progress, and then reports on the results after the dust has settled.

(Click here if the embedded video doesn't play)

Again, I can't recall any EDA vendor or services house ever attempting such a compelling and relevant challenge, let alone delivering such impressive and meaningful results.  If Oski (and IEV) can deliver results like this in 72 hours, imagine what they can do on a longer project ...

Joe Hupcey III
for Team Verify

On Twitter: http://twitter.com/teamverify, @teamverify

And now you can "Like" us on Facebook too:
http://www.facebook.com/pages/Team-Verify/298008410248534

P.S. Oski Technology was on fire at DAC: in concert with Cisco they also won the DAC User Track Best Paper award (for a project using Incisive Enterprise Verifier ("IEV"))!  Here is a brief review of this paper: http://goo.gl/5Bxbg

 

Video: DAC 2012 Discussion with EET's Brian Fuller on EDA and Video


Continuing our conversation on leveraging social media for EDA, at the Design Automation Conference (DAC 2012) I had the honor of being interviewed again by EETimes editor Brian Fuller -- this time the focus of the conversation was video. Specifically, we talked about which video formats have proven to be most popular, and which are most effective for delivering complex technical information.

 

To play the video, click on the photo above or click here: http://goo.gl/BtDOp (Until the intro material from the live feed is edited out, you might need to manually skip ahead to the 5:06 mark in the video.)

Looking back at this conversation, the key point that I hope you come away with is that product managers -- not just corporate marketers -- need to include video as part of their social media campaigns and overall promotional strategy for their products.  Like any other form of collateral, if the video is focused on a high value topic presented in a no-nonsense way, customers and prospects will watch.  And thanks to modern search engines, they can discover your videos months and even years after they are first posted.  Plus, these days high-quality video is relatively cheap to produce, and essentially free to distribute.

Product managers: are you seeing similar positive responses to your product-related videos?  Please share your thoughts below, tweet them, or contact me offline.

Joe Hupcey III

On Twitter: @jhupcey --  http://twitter.com/jhupcey

 

Reference Links
My YouTube channel where most of the videos referred to in the interview are hosted:
http://www.youtube.com/jhupcey

An example of a "snack sized" tech tip video:
http://youtu.be/gQ5ozp5NuO8

Our prior video discussion on effective social media channels for EDA:
http://bcove.me/oj7mkdrb

Brian's EETimes+AVNet "Drive For Innovation" home page, hosting numerous videos of innovators across the country:
http://www.driveforinnovation.com/

 

DAC 2012 Video: R&D Fellow Mike Stellfox on the Emerging Bottlenecks in SoC System Verification


R&D Fellow Mike Stellfox leads a group of trailblazers inside Cadence.  Specifically, Mike's group is tasked with moving our most promising prototypes and methodological theories out of their incubators and into production.  In this interview on the floor of the Design Automation Conference (DAC 2012), Mike gives a brief snapshot of how innovations in debug automation have moved from the lab to the show floor, and how ad-hoc hardware-software SoC verification processes are breaking down, thus calling for more repeatable, automated solutions.

If the embedded video doesn't play, click here.

Question: are you seeing similar trends in your company and/or customer base?   Please share your thoughts below, or contact me or Mike offline.

Joe Hupcey III


On Twitter: @jhupcey, http://twitter.com/jhupcey

 

Video: DAC 2012 Update on AMIQ’s DVT IDE – New RTL Design Work Flow Support


Readers of this blog and of Team Specman will recall that Integrated Development Environment (IDE) and verification services provider AMIQ has been in the vanguard of supporting functional verification methodologies and testbench creation for years.  The success of verification engineers using AMIQ's "DVT" IDE product has increasingly been noticed by their RTL designer colleagues, so AMIQ is now adding new capabilities to DVT to support RTL design work flows.  In this interview, shot on the DAC 2012 expo floor, AMIQ CEO Cristian Amitroaie describes how they have extended the DVT IDE to address the needs of design engineers, including powerful new capabilities to refactor and visualize the code and signal flow.

If the video doesn't play, click here.

Question: are your RTL design engineers becoming more concerned about integrating with and/or perhaps even adopting verification-style methodologies?

Joe Hupcey III

On Twitter: http://twitter.com/jhupcey, @jhupcey


Reference Links
DVCon 2012: AMIQ launches "Verissimo" - a verification-centric, UVM-aware SystemVerilog linter

The DVT website: http://www.dvteclipse.com/

Also recall AMIQ's evaluation process is a breeze -- just fill out this form:
http://www.dvteclipse.com/design_and_verification_tools_trial_request.php
and they email you a demo license.  That's it!
 

DAC 2012 Video: Dr. Kerstin Eder, University of Bristol, About Her Course on Functional Verification


Dr. Kerstin Eder, a Senior Lecturer in the Computer Science department at the University of Bristol, UK, teaches a course on functional verification.  In this interview she outlines how the course is structured, discusses what makes for a good verification engineer, and shares anecdotes of how students are getting snapped up by industry immediately upon graduation.

If the embedded video doesn't play, click here.

Brief digression in regard to the industry demand for her graduates:

Anecdotally, I can confirm the high demand for verification engineers -- fresh out of school or experienced -- here in the USA and in other geographies.  For example, I can tell you first-hand that here in Silicon Valley we are seeing an increase in poaching: a few weeks before DAC I had scheduled a meeting with a verification group based at the Santa Clara offices of a large, worldwide semiconductor company.  They had to cancel because the two key verification engineers we were going to meet with had just quit to go to another company!

In short, if you are an engineer or computer scientist between jobs, working your way through courses and/or training like that offered by Dr. Eder will give you a leg up in this tough economy -- the verification field seems to be about as recession-proof as it gets in the technology business.  If you can't go back to school, you can get a running start on your own by taking advantage of the many resources introducing the Universal Verification Methodology (UVM).  For starters, there is a ton of great, free material on the Accellera UVM World site -- http://www.uvmworld.org/.  Cadence has also published two books on verification: A Practical Guide to Adopting the Universal Verification Methodology (UVM) provides a great overview of UVM, and Advanced Verification Topics uses UVM as a framework for functional verification with mixed-signal, multiple languages, low power, metric-driven verification, and more.

Joe Hupcey III


On Twitter: @jhupcey, http://twitter.com/jhupcey

 

Reference Link
Dr. Eder's home page: http://www.cs.bris.ac.uk/~eder/

 

Using Flexible Specman License Searches


Until recently, Specman looked for its licenses in the following strict, hardcoded order:

Either

1. "Incisive Specman Elite"

2. "Incisive Enterprise Simulator"

3. "Incisive Enterprise Verifier"

Or

1. "Incisive Enterprise Simulator"

2. "Incisive Enterprise Verifier"
3. "Incisive Specman Elite"

Starting with Specman 12.1, Specman supports the -uselicense and -noievlic command-line switches. These switches give you a high degree of control over which licenses Specman will look for, and in what order:

  • -uselicense - allows you to explicitly list which licenses Specman should look for, in the desired search order. This switch can be abbreviated -uselic or -usel.
  • -noievlic - prevents Specman from looking for the "Incisive Enterprise Verifier" license. This switch can be abbreviated -noiev.

In typical Specman fashion, the same functionality is also provided via environment variables:

  • SPECMAN_USE_LIC - this variable is the equivalent of the -uselicense switch
  • SPECMAN_NO_IEV - this variable is the equivalent of the -noievlic switch

If you use both the switch and its corresponding variable (e.g., both the SPECMAN_USE_LIC variable and the -uselicense switch), the switch takes precedence. This precedence can be especially useful, because you can use the variables to set a permanent, long-standing policy, and then override them when desired via the switches or the env command.

When using Specman's -uselicense switch, you specify as a parameter a colon-separated list of mnemonics for the licenses you want searched, in search order (in this way, it works like NCSIM):

 

License key                      "-uselicense" mnemonic
-------------------------------  ----------------------
Incisive Specman Elite           SN
Incisive Enterprise Simulator    IES
Incisive Enterprise Verifier     IEV
Default lookup order             DEFAULT

 

Notice that you can specify DEFAULT as the mnemonic; this value causes Specman to switch to its default, hardcoded lookup order. Note: If you specify DEFAULT, any other values specified in the parameter are ignored.

There are, of course, multiple ways to invoke Specman, and each of them supports -uselicense and -noievlic, as the following examples illustrate:

  • specman -uselic SN:IEV
  • specview -uselic IES:SN
  • specrun -uselic SN:IES
  • irun -uselic SN:IES -gui top_file.e

Note that when irun is used, if only e files are passed to it, irun invokes standalone Specman rather than NCSIM; the -uselicense parameter is interpreted in Specman context here.

Now, let's examine some practical usages, add some environment variables to the mix, and look at Specman's behavior. We begin with the following:

  • env SPECMAN_USE_LIC=IES:IEV specman -uselic SN

In this case, the switch prevails, so Specman will look only for the "Incisive Specman Elite" license key.

Now let's look at another example:

  • env SPECMAN_USE_LIC=IES:IEV specview

No switch is specified, so the variable is in effect. Specman will look for "Incisive Enterprise Simulator," and then for "Incisive Enterprise Verifier."

And several more examples:

  • env SPECMAN_NO_IEV=1 specman

Specman will look for "Incisive Specman Elite" and then for "Incisive Enterprise Simulator".

  • env SPECMAN_NO_IEV=1 specman -uselic IEV:IES

In this case, the SPECMAN_NO_IEV variable takes effect together with the -uselic switch, so Specman will end up looking for "Incisive Enterprise Simulator" only.

  • env SPECMAN_USE_LIC=SN:IES SPECMAN_NO_IEV=1 specman -uselic IEV:IES:DEFAULT

The DEFAULT value overrides all other specified values; the default Specman lookup order (SN:IES:IEV) is used, and the rest is ignored.

And now, let's add a "cherry on top". What happens if the -uselicense switch contains inappropriate (e.g. IUS-only) mnemonics, or just plain garbage?

  • If SPECMAN_USE_LIC is set and its content is valid, it will be used.
  • Otherwise, the default lookup order will be used.

The following examples illustrate this point:

  • env SPECMAN_USE_LIC=SN:IES specman -uselic foo:bar

Falls back to SN:IES.

  • env SPECMAN_USE_LIC=foo:bar specman -uselic baz:bat

Falls back to default lookup order.

  • specman -uselic foo:bar

Falls back to the default lookup order.
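To make the precedence rules above easy to sanity-check, here is a small Python model of the resolution logic as described in this post. This is purely an illustrative sketch of the documented behavior, not Specman's actual implementation, and the function and variable names are invented.

```python
VALID = {"SN", "IES", "IEV"}
DEFAULT_ORDER = ["SN", "IES", "IEV"]  # Specman's hardcoded default lookup

def resolve(switch=None, use_lic_var=None, no_iev=False):
    """Model the -uselicense / SPECMAN_USE_LIC / SPECMAN_NO_IEV rules."""
    def parse(spec):
        if not spec:
            return None
        tokens = spec.split(":")
        if "DEFAULT" in tokens:
            return ["DEFAULT"]  # DEFAULT trumps the other listed mnemonics
        # any unknown mnemonic ("garbage") invalidates the whole spec
        return tokens if all(t in VALID for t in tokens) else None

    order = parse(switch) or parse(use_lic_var)  # the switch beats the variable
    if order == ["DEFAULT"]:
        return list(DEFAULT_ORDER)  # per the last example: DEFAULT wins outright
    if order is None:
        order = list(DEFAULT_ORDER)  # nothing valid specified: default order
    if no_iev:
        order = [t for t in order if t != "IEV"]
    return order

# The examples from this post, in order:
assert resolve(switch="SN", use_lic_var="IES:IEV") == ["SN"]
assert resolve(use_lic_var="IES:IEV") == ["IES", "IEV"]
assert resolve(no_iev=True) == ["SN", "IES"]
assert resolve(switch="IEV:IES", no_iev=True) == ["IES"]
assert resolve(switch="IEV:IES:DEFAULT", use_lic_var="SN:IES",
               no_iev=True) == ["SN", "IES", "IEV"]
assert resolve(switch="foo:bar", use_lic_var="SN:IES") == ["SN", "IES"]
assert resolve(switch="baz:bat", use_lic_var="foo:bar") == ["SN", "IES", "IEV"]
```

Every example from the post maps onto one call of this toy function, which makes it a handy crib sheet when deciding how to combine the switches and variables in your own scripts.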

Alex Chudnovsky

Specman R&D


My Clark Kent Moment – How I Discovered Aspect Oriented Programming in e (IEEE 1647)


Growing up on VHDL, moving on to Verilog and then to SystemVerilog, I eventually discovered e (IEEE 1647).

Initially I thought: "What is the fuss all about?"

While exploring the language during the development of the cowbell videos, it hit me -- I started to recognize the power of Aspect Oriented Programming (AOP). Indeed, it is the antidote to Verification-Kryptonite!

Let me explain my newfound capabilities. When you verify complex systems you will certainly end up in situations where you have to deal with unanticipated changes to the original requirements, as well as unanticipated requirements of your verification environment. These are just two of many areas where AOP can provide enormous flexibility and efficiency.

Using AOP you can take any environment and, with the flick of a finger, change the behavior of any component, transaction type, coverage model, or the entire system. The best part is, this can all be done without altering your existing code base. Therefore, code maintenance no longer needs to compete with flexibility.

Consider the following situation. It is summer and the project is well underway. You are writing and running some tests and you encounter a bug. In your team, the roles and responsibilities are clearly separated and you need information that the current verification environment does not provide. However, the person owning the verification environment is on vacation.

This dilemma can have serious consequences on productivity.

Even if a backup resource for the verification environment is available, it would still take time and effort to communicate what you need. In addition, you are introducing risk by altering code, and potentially adding bugs. Lastly, you are dependent on another player on the team. Altogether, you are in a bad place.

With AOP, however, you can mitigate the risk, break the dependencies, and create flexibility. In fact, you can add the required functionality to the verification environment from the outside, without having to bother anyone, and without touching the existing, stable environment.

To be more specific, let's say you need to log the start time when an APB transaction is driven. Traditionally, you would have to add a field to store the value of the transaction in the sequence item file, and then you have to change the method call in the BFM file. Instead, with AOP, you just create a new extension file for your particular debug purpose. For example:


// apb_trans_s.e (abridged file) - sequence item definition - unaltered

struct apb_trans_s like any_sequence_item {

   addr : uint;
   data : uint;

   ...
};


// apb_master_bfm.e (abridged file) - BFM definition - unaltered

unit apb_master_bfm like apb_bfm {

   ...

   drive_transaction_address(cur_transaction : apb_trans_s) @tf_phase_clock is {
      ...
   };
};


// debug_start_time.e - (entire file) - extension with AOP

extend apb_trans_s {
   start_time : time;
};

extend apb_master_bfm {

   !history_list : list of apb_trans_s;

   drive_transaction_address(cur_transaction : apb_trans_s) @tf_phase_clock is first {
      cur_transaction.start_time = sys.time;
      history_list.add(cur_transaction);
   };
};

In this example you added a new field, start_time, to the sequence item, and you added additional functionality to the beginning of the drive_transaction_address method of the BFM.

After you load this extension file in your next simulation run, you automagically gain new features in your environment using AOP without introducing risk to other users and without depending on another person.

After trying this on some examples I felt as if my inner Superman had been unleashed and I had gained a new superpower.

Unleash your superpower too, with AOP.

Axel Scherer
Incisive Product Expert Team
Twitter, @axelscherer

UVM SystemVerilog Class Library Overview Video – Inspired by 1600 Cowbells in Action


Just after releasing the original cowbell video series, I found that Ben & Jerry's had discovered a great way to combine cowbells and charity.  In April of this year, they held an event setting a new world record of over 1600 cowbells in action. It is a must-see for the cowbell aficionado.




Coincidentally, this happened up north in Burlington, Vermont, home of the University of Vermont. As the university has been using the acronym UVM much longer than we have, a lot of confusion can occur in Internet searches.





For example, if you google "UVM library" you end up with this. However, if you want to know more about key aspects relevant to UVM SystemVerilog library users, check out our latest cowbell video.


Axel Scherer
Incisive Product Expert Team
Twitter, @axelscherer

UVM Testflow Phase Debugging - Identifying Blocking Activities

UVM Testflow debugging capabilities have recently been enhanced through the addition of more information to the output of the show domain command. In this post, we demonstrate how this information can be used to answer questions such as:

1. What domains are in the environment? What units do they contain?
2. What phase is running now?
3. Why are we still in this phase? Which activity is still running and blocking us from proceeding?
4. ... and more ...
The following screenshot shows the output of a show domain command where a blocking activity prevented a domain from proceeding to the next phase.

 

Notice that from the output of this command, we can learn the following:

  • There is a domain named ENV_A_DOMAIN, and it contains three units: my_bfm-@2, my_driver-@1, and env_a-@3.
  • The domain is running the phase MAIN_TEST.
  • The domain has not proceeded to the next phase because of the following blocking activities:
    • The unit env_a is running the blocking thread tf_main_test.
    • There is also a blocking sequence: my_seq-@7.
    • As you can see, there are also blue hyperlinks to the sources of the two blocking threads.
  • We can also see that the timeout value of this phase is 1 millisecond, and that we are at the beginning of the phase -- still ~999 microseconds before the watchdog timer expires.

It may happen that a domain has finished all its current phase activities but is not proceeding to the next phase, because it is waiting for a domain it depends on. The show domain command gives this information as well.

In the following screenshot example, ENV_B_DOMAIN completed its FINISH_TEST phase activities, but waits for ENV_A_DOMAIN to finish its FINISH_TEST activities before it can proceed to the next phase.

To display all defined dependencies at any time during the test, use the show dependencies command. The following show dependencies screenshot example lists the dependencies between the three domains defined in this environment:

Read more about Testflow, defining domain activities and domain dependencies, in the UVM e Reference Manual.

Enjoy verification!

Efrat Shneydor,

UVM e

Global Cowbell Fever Spreads – We Are Launching 12 "UVM SystemVerilog Basics" Videos in Chinese


A little over two and a half months ago we started sounding the "cowbell" with the release of the UVM SystemVerilog Basics videos.

The resonance has been strong. As there can (almost) never be too much of a good thing, we are expanding this series by re-releasing the videos with the audio dubbed into Chinese.

We are kicking it off with the first 12 videos:

1. Introduction
2. DUT Example
3. UVM Environment
4. Interface UVC
5. Collector
6. Monitor
7. Sequence Item
8. Sequence
9. Driver
10. Sequencer
11. Agent
12. Agent types

I would like to thank my colleague Yih-Shiun Lin for his great job translating the audio. It is his voice you hear on these videos.

Besides releasing the videos to YouTube, we are also publishing them on YouKu.

http://www.youtube.com/playlist?list=PLA1A32A7461300910

http://www.youku.com/playlist_show/id_17869812.html

We plan to complete the audio translation for the remaining tracks in the future, so stay tuned to this blog so you don't miss any of them.

Axel Scherer
Incisive Product Expert Team
Twitter, @axelscherer

My Constraint was Ignored – Is it a Tool Bug? – Part 2

In a previous post we showed some cases of user code that can cause ignored constraints, and how to debug that code using the Gen Debugger. In this post, we demonstrate another important example -- where the user code violates IntelliGen's coding guidelines.

Incorrectly written constraints can negatively impact aspects such as generation order or input sampling, leading to incorrect or problematic generation. A typical example is a method that accesses a generatable field which has not been passed as a parameter to the method; not passing the parameter violates the coding guidelines and prevents IntelliGen's analysis from determining the correct generation order.

The following test case demonstrates such a problem. Note that the constraint calls calc_checksum() without passing 'data' as a parameter, resulting in the method being called before the actual list is generated.

<'
struct packet {

    checksum : uint;
    data : list of byte;
    zero : bool;
    one : bool;

    keep data == {1;1;1};
    keep soft !zero;
    keep soft !one;
    keep checksum == calc_checksum(); // calc_checksum() samples data
    keep checksum == 0 => zero == TRUE;
    keep checksum == 1 => one == TRUE;

    calc_checksum() : uint is {
        for each (b) in data {
            result = result ^ b;
        };
    };

    post_generate() is also {
        check that checksum == calc_checksum();
    };
};

extend sys {
    p : packet;
};
'>
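To see concretely why sampling 'data' inside calc_checksum() before the list is generated breaks the constraint, the same ordering mistake can be rendered in plain Python. This is only an analogy for the generation-order problem, not e semantics:

```python
def calc_checksum(data):
    # XOR-reduce the bytes, as in the e method above
    result = 0
    for b in data:
        result ^= b
    return result

# Mimic the bad generation order: the constraint samples 'data'
# while the list is still empty, i.e., before it is generated.
data = []
checksum = calc_checksum(data)   # sampled too early -> 0

data = [1, 1, 1]                 # 'data' is generated afterwards

# The post_generate() check then fails: 1 ^ 1 ^ 1 == 1, not 0
assert checksum != calc_checksum(data)
```

Passing 'data' as a parameter in the e code would let IntelliGen see the dependency and generate the list before the checksum is computed.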

The best way to detect and fix such code is with the IntelliGen guidelines linter. Running 'gen lint -g' will trace all such issues in the user code and issue a proper warning for each. For the given example, the following warning will be issued:

Specman my_cons2> gen lint -g

Gen Linter - analyzing...

Selected mode: guidelines mode

   *** Warning: WARN_GEN_LINTER_G42: Accessing a generatable me field me.data
in methods called from constraints (at line 11 in @my_cons2 ) is not
recommended.
           at line 16 in @my_cons2
        for each (b) in data {

To change the severity, type:

set notify -severity=<new-severity-level> WARN_GEN_LINTER_G42

To see possible severity levels use 'set notify -help'

Gen Linter - analysis complete

        In an ideal world, the linter can be used to keep the code 100% clean. However, in a world less ideal, the users do not often run the linter and are not always aware of the coding violations.

        So suppose you ran a test and a specific constraint was ignored. The question remains how to understand what the problem is, and how to fix it. If the problem is the one discussed here, the solution is simple and composed of the following steps:

        1. Open the Gen Debugger on the problematic Connect Field Set (CFS). As in the previous blog post, the simplest way is to run with ‘config gen -collect=ALL' (to keep all the generation information), and issue ‘show gen -instance' with the problematic generated field as a parameter. Here we should issue ‘show gen -instance me.checksum' after breaking on the error.

        2. When opening the Gen Debugger, choose the CFS in the process tree and trace the problematic constraint in its constraint list.

        3. After choosing the constraint, examine its Linter tab. If it displays linter warnings, these could indicate the source of the problem!
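        Putting the three steps together, a typical session might look like the following sketch (the prompt and the ‘test' invocation are illustrative):

        Specman my_cons2> config gen -collect=ALL
        Specman my_cons2> test
        ... breaks on the generation error ...
        Specman my_cons2> show gen -instance me.checksum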

         

        Note that this is not the only case in which the Gen Debugger interacts with IntelliGen's linter. When debugging ICFSs, you can use the "inconsistency view", which includes the output of the inconsistency linter.

        One final point: Starting with version 12.1, another Linter tab has been added -- this one appears in the displayed CFS information, and includes the CFS performance recommendations. (This is in addition to the constraint's Linter tab seen in this post.)

        The last post in this series will demonstrate a different case of unenforced constraint -- when the constraint is a soft constraint.

        Reuven Naveh

        IntelliGen R&D

        Video: DVCon 2012 Digital-Mixed Signal (DMS) Expert Neyaz Khan on UVM Mixed Signal (UVM-MS)


        E-mail reminders for the DVCon 2013 Call For Abstracts prompted me to look through my DVCon 2012 folder -- lo and behold I came across the following video interview.  It was shot during the show, but the official approval fell between the cracks and didn't come through until recently.   Regardless, the issues raised in the paper that's the subject of the interview (From Spec to Verification Closure: A Case Study of Applying UVM-MS for First Pass Success to a Complex Mixed-Signal SoC Design) are as challenging as ever.  Here the paper's author Neyaz Khan, a mixed signal verification R&D manager at Maxim Semiconductor, discusses what needs to be considered when the Universal Verification Methodology (UVM) is extended to support mixed signal verification projects, the implications for circuit modeling, and the optimal R&D team composition.

        If the embedded video fails to play, click here.

        Note: if you have never attended a DVCon, you can expect to meet design and verification experts like Neyaz everywhere you turn - clearly worth the price of admission!

        Question: if you are in the digital-mixed signal field, are you seeing similar trends in your company and/or customer base?  Please share your thoughts below, or contact me offline.

        Until next DVCon, may your throughput be high and your power consumption be low!

        Joe Hupcey III


        On Twitter: @jhupcey, http://twitter.com/jhupcey

         

        Reference Links
        DVCon 2013 Call For Abstracts

        Neyaz's DVCon 2012 paper, From Spec to Verification Closure: A Case Study of Applying UVM-MS for First Pass Success to a Complex Mixed-Signal SoC Design

        Richard Goering Industry Insights report on the book Neyaz co-authored: "Advanced Verification" Book Brings UVM to Mixed Signal, Low Power, Multi-Language

        My Photo Essay, Video Playlist, and Comments on DVCon 2012

         

        Product Update: New Assertion-Based Verification IP (ABVIP) Available Now


        Verifiers rejoice: R&D has just released all-new Assertion-Based Verification IP (ABVIP) code as part of Cadence's Verification IP (VIP) and SoC Catalog offerings.  Specifically, the ABVIP code in the July 2012 release has been completely re-architected to be: 

        • Higher performing for both Incisive formal and simulation engines (with gains from 1.5x to ~ 10x!)
        • Simpler to instantiate and configure
        • Easier to use with context-sensitive IP title support in the SimVision waveform debug environment
        • Inclusive of new protocols: APB4 and AXI4

        Here are the details:

        * The ABVIP code itself has been internally re-architected to reduce complexity, and thus provide higher performance and better quality of results.  For starters, the code has been re-implemented in SystemVerilog Assertions (SVA) to take advantage of performance enhancements made for the SVA engines in both Incisive formal tools (Incisive Formal Verifier (IFV) and Incisive Enterprise Verifier (IEV)) and Incisive Enterprise Simulator-XL (IES-XL).  In terms of the AXI3/AXI4 titles, the complexity is now controlled by the number of outstanding transactions rather than the width of the ID bus.

        * The new ABVIP is simpler to instantiate and configure than its predecessors.  The user simply instantiates the correct model of the ABVIP (master, slave, or monitor), and the constraints are automatically configured -- no more need for Tcl configuration.  Furthermore, there are additional capabilities depending on the title selected.  For example, an instance of the AXI3 Master module automatically sets all master properties as constraints without user intervention.

        * Waveform debug has been enhanced to automatically provide an IP-title, context-sensitive grouping of signals in all formal counter-example and witness waveforms.  Specifically, when IFV or IEV is being used, the tools are aware of the ABVIP's presence and they create interface signal groupings to aid in the viewing and/or debug of waveforms.  For selected ABVIPs, all instances of the ABVIP interfaces will have their signals available in the waveforms, and each instance will have a separate group of signals in the waveform.  As is evident in the screen shot included below, this is a huge time saver when trying to view witness waveforms or debug failures.

        * In addition to enhanced waveforms, for the AXI family of protocols, transaction tables are available to show the currently active transactions.  As shown in the following screen shot for the AXI3 ABVIP, this feature makes it easier to understand the currently active transactions and which state they are in.

        In this example, you can see that the ABVIP is configured for a transaction queue with a maximum depth of 2, with 2 valid write transactions in flight and one valid read transaction in flight, as indicated by the "Valid" column.  Hence, with the waveform cursor it's very easy to deduce the state of the bus at any time in the waveform.

        * The following table lists all the supported protocols and the features available with each protocol, including the new AXI4 and APB4 titles.  Please note that migration guides are supplied to help existing users migrate to the new ABVIP.

        In summary, the new ABVIP models incorporate enhancements to improve performance, simplify instantiation and configuration, provide a more productive debug environment, and expand the catalog to include APB4 and AXI4 protocols.


        Jose Barandiaran
        R&D Product Expert Team

        On Twitter: http://twitter.com/teamverify, @teamverify

        And now you can "Like" us on Facebook too, where we post more frequent updates on formal and ABV technology and methodology developments:
        http://www.facebook.com/pages/Team-Verify/298008410248534

         

        Reference Link: Cadence's Verification IP Catalog

         


        Video: Interview with Professional Teenage Technology Coach Kristine Bonhoff


        Over the past several years at various EDA trade events, one of the more popular forums has been panel discussions and interviews asking teenagers about the technology in their daily lives.  However, those forums have featured amateurs, whereas for this interview I've secured a professional technology consultant -- Ms. Kristine Bonhoff, a college student by day, and a paid technical coach and volunteer in her spare time.  Specifically, people of a certain age pay Kristine to coach them on how to get the most out of the various gadgets and related apps they own.  She also volunteers to give tech training courses to inner-city residents.

        In this interview Kristine shares her clients' most common FAQs, their biggest positive and negative misconceptions about various technologies, and her wish list for the future.

        If the video doesn't play, click here.

        I believe you will find much food for thought in her remarks.  My take-away is that there are two clear and very challenging implications for the EDA industry:

        * Apps will continue to drive the requirements and demand for their respective host devices, and not the other way around.

        * Enabling low cost to the end-consumer - whether it's a low retail price or via clever rent-to-own business models - is as important for our customers as ever.

        Joe Hupcey III


        On Twitter: http://twitter.com/jhupcey, @jhupcey

         

        Constrained Random Test Generation In e [IEEE 1647], Ernie * Duracell ≈ Infinity Minus


        Ernie & Duracell

        "I feel great" - long pause - "I feel great, I feel great".

        6 weeks later: "I feel great, I feel great, I feel great" - pause  - "I feel great".

        I hear this sound coming out of my son's room. What is going on in my house? Is there such a thing as too much euphoria? No, sometimes my son does utter this phrase, but most of the time it is coming from an Ernie toy he inherited from his cousin several years ago.

         

         

        This particular toy is over 14 years old and still "feels great". We have had it for over 5 years, so by now the batteries should have given up. Nonetheless, we still get these random, out-of-the-blue utterances of the phrase "I feel great". The sound is supposed to be triggered by some sort of child-toy interaction, but it has mutated into generation at random intervals. The phenomenon is quite bizarre, as the pauses are very long, making the operation appear completely random.

        The other day the toy went off again: "I feel great". My suspicion was that this might wake up my son in the middle of the night. As Ernie's electronics and wiring obviously have some issues, and as Ernie does not have an on/off switch, my first recourse was to remove the batteries.

        To my surprise, the batteries are more than 10 years old and we have never replaced them since we received the toy. The circuit is not supposed to draw a lot of current. However, it is always on and ready to "speak". Overall, this is pretty amazing battery longevity - Hats off to Duracell!

         

         


        Moving From Sesame Street to the Real World


        In verification, your goal is to put the DUT into all known scenarios, and as many unknown scenarios as possible. Constrained random test generation is particularly helpful in achieving the latter. In e [IEEE 1647] constrained random generation is front and center. It is a core principle of the e language and the associated methodology. By default, everything, every aspect and every field, is random in e. This ensures that you reach as many unknown and unanticipated scenarios as possible, test your device as thoroughly as possible, and identify the associated bugs during simulation.

        When everything is random by default you are not at risk of forgetting to randomize any aspects. Consequently, you are less dependent on the quality of your coverage model to detect flaws in the test generation ability of your environment.

        JL Gray, VP of Verilab North America, puts it like this: "Setting up a verification environment where you have to decide what not to randomize ends up being far more randomized in the end than one where you have to decide what to randomize."

        Another effect of default randomization is that it enables early bug detection. More randomization shakes out more bugs. Since detecting bugs early is less costly than detecting them later, this has a positive impact on the overall verification cost.

        Default randomization does not imply that you bring up your environment with the wildest transactions imaginable. You still control what you want to see initially. However, it does mean that you are typically moving to a high level of randomization more quickly.

        This principle of a default-randomized environment is called Infinity Minus (∞-) and is illustrated by the following code:

        // apb_trans_s.e (abridged file) - sequence item definition
        struct apb_trans_s like any_sequence_item {
            addr      : uint;
            data      : uint;
            direction : read_write_t;
            delay     : uint;
            ...
        };

        In this simplified and abridged example of an APB transaction definition, all fields are randomized by default. In other words, you get random addresses combined with random data, random direction (read or write), and random delays between transactions.  Subsequently, you impose rules by adding constraints and reining in the randomization to suit your particular testing needs.
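        For example, a test layer might impose such rules like this (a sketch; the specific values and ranges are illustrative assumptions, not part of the original example):

        // Illustrative test-layer constraints layered on the default randomization
        extend apb_trans_s {
            keep soft delay < 10;           // mostly back-to-back transactions
            keep addr in [0x0000..0xffff];  // stay within the peripheral's address range
        };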


        E * D   ≈ ∞-      Ernie * Duracell ≈ Infinity Minus


        The broken electronics in the Ernie toy, and the almost infinite longevity of the Duracell batteries, behave as if they were driven by an infinity minus stimulus generator. Ernie feels great at random times, but you will feel great all the time knowing your designs have been verified using the infinity minus verification approach.

        Keep on verifying!

        Axel Scherer
        Incisive Product Expert Team
        Twitter, @axelscherer

        SimVision Watch Window Now Accommodates Specman Watch Items


        Starting from version 12.1, the SimVision Watch Window accommodates Specman watch items together with HDL watch items. Now you can use the same window to inspect all your watches.

         

        Hyperlink support in the SimVision Watch Window is still on its way, so right now Specview is the default for Specman watches. Nevertheless, you are invited to try out the new feature and voice your opinion.

        To choose which watch window should display Specman watches, use the newly introduced "-use_watch_window" configuration option, which belongs to the "debug" category:

        config debug -use_watch_window=window

        The ‘window' parameter can have the following (self-explanatory) values:

        • specview
        • simvision
        • both

        You can change the configuration "on the fly". Your watch items will be moved to the appropriate window (or copied if you choose both).

        You can also change the configuration while still in batch mode. It will take effect whenever you connect to the GUI.

        Alex Chudnovsky

        Specman R&D

        A “Reflection” on Chip-Level Debugging with Specman/e and SimVision


        Last week, a favorite customer of mine called me in a panic, just days from tape-out of a large multimedia SoC. After a minor change in their RTL code, their Specman testbench started crashing, even though the e code hadn't changed. Could I help?

        Knowing that this customer compiles their e code, and that Specman doesn't tend to crash, the first thing I did was get them to recompile the e code with the debug switch passed to irun: "irun -sncompargs -debug". This turns off some of the compiler optimizations. Significantly, in this case it turns on null-pointer checks in the user's code, since these checks are normally turned off for higher performance. With debug enabled, the test was re-run and we quickly saw the point of failure: a call to the vr_ad register model was dereferencing a null pointer. Phew! At least I knew Specman's reputation wasn't about to be blemished by some random crash, but why would a minor RTL change cause such a dramatic effect in the testbench?

        Knowing the scale of the testbench, and the proximity of the tape-out, I figured we would track down the null pointer problem another day. The more immediate problem was to identify where the RTL had gone wrong so it could be fixed ASAP, but how to pinpoint that bug on such a big chip, from the other end of the phone without even knowing the design? A big challenge...

        First we tried running the simulation using UVM FULL verbosity and comparing the log against the previous iteration of the RTL, but this was slow and not particularly easy. I needed a better solution, and quick.

        What I hit upon was to take the binary search approach: split the problem into RTL and testbench, and determine which side of the boundary the problem occurred. To enable such an approach, I wrote a small e file that loaded on top of Specman, using the e language's powerful Reflection API to scan the entire design for any "simple_port" connections to the RTL. By fetching the ports' hdl_path() attributes, the script dynamically created "probe" commands for all the ports. My customer then loaded this e code into her "good" and "bad" RTL versions, saving the ~4000 waveforms into two separate database files.

        Next we loaded the databases into SimVision and used the powerful SimCompare feature to locate the differences between the two simulations. We nailed the problem in moments: the RTL change had left a register without a reset, leading to large X propagation problems once the reset signal was de-asserted.


        Figure 1: Results from a SimCompare analysis highlighting the differences between identical signals from two different databases

        With hindsight I found myself thinking, could we have debugged this any faster? Perhaps, if we'd known the exact nature of the RTL changes and where in the design those changed modules were instantiated, we could have started from the changed file and worked forward towards the testbench, but that doesn't work if you don't know every last detail about the design. As an e expert, could I have debugged back from a vr_ad call to understand why that pointer was null? Probably, but it was most likely not an easy thing to trace; after all we didn't know if we should be tracing a wrongly null pointer or an errant call that used a legitimately null pointer.

        As it stands, I'm happy that we took the most efficient route; the time to write the e code was minimal, and it made the analysis really simple. Best of all, it required no design knowledge and is totally reusable on any Specman testbench, which is quite a result! All thanks to Reflection and SimVision's SimCompare GUI...

        For the curious, here's the e code, a mere 83 active lines including a convenience macro and debugging output code.

        Steve Hobbs

        // Dumps an ncsim Tcl script for probing all the signals that Specman is connected to.
        //
        // Usage:
        //
        //   load sn_waveports;
        //   wave_ports [-db <name>] [-tb <name>] [unit_instance]
        //     Where -db is the ncsim SHM database name to generate.
        //           -tb specifies the VHDL testbench name to be stripped off.
        //           <unit_instance> is something like sys.tb_env.my_agent.smp
        //
        // Limitations:
        //
        //   - Macro options are parsed in a basic fashion and must be in the order shown.
        //   - Testing has been against mixed Verilog+VHDL designs,
        //     Verilog-only names may be mangled.
        //   - No support yet for dynamic use in interactive simulation,
        //     this is primarily meant for batch-mode use.
        //
        // Plans:
        //
        //   - Add support for sending commands direct to ncsim / simvision.
        //   - Add grouping to SimVision waveforms, based on unit tree.
        //   - Use default "ncsim" SHM if -db not given.
        //   - Allow multiple invocations to share the same SHM file.
        <'

        struct waveports_util {

          !all_signals : list of string;

          !user_tb_name : string;

          dump_ports( db : string, dump_unit : any_unit = sys ) is {
            var version   : string = "0.1";
            var nctcl     : file;
            var log       : file;
            var logName   : string = append(db,".log");
            var nctclName : string = append(db,".tcl");

            nctcl = files.open(nctclName, "w", "Tcl script");
            log   = files.open(logName,   "w", "Log file");

            all_signals.clear();

            files.write(nctcl, "# Auto-generated by sn_waveports utility");
            files.write(nctcl, append("database -open ",db," -shm"));

            files.write(log, append("# sn_waveports version ", version));
            files.write(log, append("#   db   : ", db));
            files.write(log, append("#   unit : ", dump_unit.e_path()));
            files.write(log, append("#   tb   : ", user_tb_name));
            files.write(log, append("#   tcl  : ", nctclName));

            for each (u) in rf_manager.get_all_unit_instances(dump_unit) {
              if u is not a message_logger {
                var subtype: rf_struct = rf_manager.get_exact_subtype_of_instance(u);
                var signals : list of string;
                for each (port_field) in subtype.get_fields().all(it.is_port_instance()) {
                  if port_field.get_type() is a rf_simple_port {
                    var port : any_port = port_field.get_value_unsafe(u).unsafe();
                    var full_hdl_path : string = port.full_hdl_path();
                    files.write(log,append("e_path    : ", port.e_path()));
                    files.write(log,append("hdl_path  : ", full_hdl_path));
                    files.write(log,append("connected : ", port.is_connected()));
                    files.write(log,append("agent     : ", port.agent()));
                    full_hdl_path = str_replace(full_hdl_path, "/~\//", "");   // remove ~/ prefix
                    full_hdl_path = str_replace(full_hdl_path, "/[./]/", ":"); // replace . and / with :
                    full_hdl_path = str_replace(full_hdl_path, "/::+/", ":");  // remove multiple colons
                    // ignore ports which are not connected or are already probed
                    if port.is_connected() and not all_signals.has(it==full_hdl_path) {
                      all_signals.add(full_hdl_path);
                      if user_tb_name != NULL {
                        // search and replace explicit VHDL-TB name if given
                        full_hdl_path = str_replace(full_hdl_path, appendf("/^%s/",user_tb_name), "");
                      };
                      signals.add(full_hdl_path);
                      files.write(log,append("hdl_path' : ", full_hdl_path));
                    };
                  };
                };
                if 0 != signals.size() {
                  files.write(nctcl, appendf("# Unit %s\n", u.e_path()) );
                  files.write(nctcl, appendf("probe -create -database %s %s\n",
                                             db, str_join(signals, " ") ) );
                };
              };
            };

            if 0 == all_signals.size() {
              out(
                "*** Error: No e ports were found",
                "    Try again after invoking 'gen' to construct the unit tree."
              );
            };

            files.close(nctcl);
            files.close(log);

            out("Wrote ", nctclName, " and ", logName, ".\nSourcing ", nctclName, "...\n");
            simulator_command(append("source ",nctclName));
          };
        };

        extend global {
          waveports : waveports_util;
        };

        define <wave_ports'command> "wave_ports[ -db <db'file>][ -tb <tb'any>][ <du'struct_member>]" as computed
        {
          if NULL == global.waveports {
            global.waveports = new;
          };
          if <tb'any> != NULL {
            out("Setting '",<tb'any>,"' as the VHDL testbench name.");
            global.waveports.user_tb_name = <tb'any>;
          };
          var db : string = "sn_waveports";
          if <db'file> != NULL { db = <db'file>; };
          var du : string = "sys";
          if <du'struct_member> != NULL { du = <du'struct_member>; };
          return appendf("global.waveports.dump_ports(\"%s\", %s);", db, du);
        };

        '>

        Report From Silicon Valley With Application Engineer Bin Ju


        Luckily I was able to track down my very busy colleague Bin Ju between assignments and interview her about her first-hand observations of what's going on here in Silicon Valley today.  Bin is an expert on formal and assertion-based verification (ABV), so her remarks focus on the trend toward increasing adoption of formal analysis, how users are leveraging "formal apps" to enable rapid adoption of this technology by all team members, and thus how customers are improving their return on investment.

        If the embedded video doesn't play, click here.

        Are you seeing these trends in your company?  Please share your thoughts below, tweet or Facebook them, or contact me offline.

        Joe Hupcey III
        for Team Verify

        On Twitter: http://twitter.com/teamverify, @teamverify

        And on Facebook:
        http://www.facebook.com/pages/Team-Verify/298008410248534

         

        Reference Links
        DVCon 2012: Product Engineer Chris Komar reviews the tutorial on formal apps

        February 2012: article by industry analyst Richard Goering on, "How Formal Analysis ‘Apps' Provide New Verification Solutions"
