Channel: Cadence Functional Verification

Improve Debug Productivity - SimVision Video Series on YouTube


Most verification customers report that they spend over 50% of their verification effort on debug. If that describes your team, check out these latest SimVision debug videos: in less than an hour of viewing, you will see how SimVision can make you much more productive.

Take the time to browse through these videos. Everyone will benefit, whether you are a new user looking for a debug solution or an experienced SimVision user looking for the new and enhanced debug functionality in our latest 12.2 release.

Cadence Debug Verification Expert Corey Goss has recently uploaded 13 SimVision videos to YouTube. The series focuses on a number of key debug features that support various debug flows (RTL, testbench, SystemC/C/C++ debug, etc.) and a common debug environment (HDL and testbench).

You can view the entire playlist of Debug videos from the link below:

http://www.youtube.com/playlist?list=PLYdInKVfi0KYzCjnkgRgDXFJcKyQRz6eM

Let's Debug with SimVision

Kishore Karnane

Team Debug


DVCon 2013 for the Specmaniac


At the upcoming DVCon (in San Jose, CA February 25-28), Cadence will cover all aspects of our verification technologies and methodologies (full list of Cadence-sponsored events is here).  Of course, Team Specman cannot resist drawing your attention to the many activities that will feature Specman and e language-related content, or be of general relevance to Specmaniacs.  Hence, if you are going to the conference, please consider printing out the following "DVCon 2013 Guide for the Specmaniac".

* Specman-centric posters at the poster session on Tuesday from 10:30-11:30am

1P.21   "Taming the Beast: A Smart Generation of Design Attributes (Parameters) for Verification Closure using Specman", presented by Meirav Nitzan of Xilinx, Inc., with co-authors Yael Kinderman and Efrat Gavish of Cadence R&D.

1P.25   "Maximize Vertical Reuse, Building Module to System Verification Environments with UVM e", presented by Horace Chan of PMC-Sierra, Inc., with co-authors Brian Vandegriend and Deepali Joshi, also of PMC-Sierra, Inc., and Corey Goss, a Solutions Architect in Cadence R&D.

The best part about the poster session is that you can easily interact with the authors, asking them questions on the fly in a way that would be awkward if they were presenting the paper in a lecture format.

* The Cadence booth at the free expo on Tuesday & Wednesday, Feb. 26-27, 3:30-6:30pm each day

As always, Specman technology is directly or indirectly a cornerstone of the various demos -- UVM, Verification IP, metric-driven verification & Enterprise Manager updates, ESL & TLM updates, etc.  This year we will be showcasing new automated debug technology - the Incisive Debug Analyzer - that works great with e/Specman testbenches. Even better: R&D leader Nadav Chazan will be present to walk through the tool with you and answer your questions.  Of course, at a relatively small show like DVCon there is often the opportunity to digress from the primary demo(s) and discuss Specman technology updates specifically - Nadav and other members of Team Specman will be happy to give you the highlights of the new capabilities released in Specman 12.2 and more.

* Thursday morning Feb. 28 tutorial (8:30am-Noon), "Fast Track Your UVM Debug Productivity with Simulation and Acceleration"

In this comprehensive tutorial, Specman R&D's Nadav Chazan, along with hardware-assisted verification expert Devinder Gill, will show how you can reduce the debug turnaround time of class-based, software-like environments (such as an e/AOP testbench).  Specifically, they will show how to leverage low-latency interactive debug techniques to improve debug efficiency, giving the user a much broader range of capabilities.  This includes interactive features such as forward and backward source-code single-stepping, searching for arbitrary values and types, and automated go-to-cause analysis. Come prepared to take plenty of notes, because Nadav and Devinder will walk through many detailed examples.

* Bonus: A free lunch on "Best Practices in Verification Planning" Wednesday Feb. 27!

On the Wednesday of DVCon, Cadence is hosting an expert panel on "Best Practices in Verification Planning".  Panel moderator, R&D Fellow Mike Stellfox (yes - *that* Mike Stellfox, who has been with the team since Verisity days), will kick off this important discussion on how creating and executing effective verification plans can be a challenging mix of art and science that can go sideways despite the best efforts of engineers and managers.  Note that this won't be confined to RTL verification planning only -- the panel also includes experts on analog/mixed-signal verification and formal analysis.

 

Panel discussion at DVCon 2012

We look forward to seeing you in-person soon!

Team Specman

 

Reference Links

The official DVCon website

Comprehensive list of Cadence-sponsored events & papers

Images from last year's show to give you an idea of what it's like, in case you have never been to a DVCon before.

DVCon 2012 video playlist 

60 second highlights video from DVCon 2012

On Twitter: http://twitter.com/teamspecman, @teamspecman

And on Facebook: http://www.facebook.com/teamspecman

DVCon 2013 for Formal and ABV Users


At the upcoming DVCon (in San Jose, CA February 25-28), Cadence will cover all aspects of our verification technologies and methodologies (full list of Cadence-sponsored events is here).  However, Team Verify would like to alert users of Cadence Incisive formal and multi-engine tools, apps, and assertion-based verification (ABV) to the following papers and posters focused on this domain.

* Session 2, Tuesday Feb. 26, 9-10:30am features two papers:

Paper 2.1, "Overcoming AXI Asynchronous Bridge Verification Challenges with AXI Assertion-Based Verification IP (ABVIP) and Formal Datapath Scoreboards".  Speaker: Chris Komar of Cadence; Authors: Bochra Elmeray - ST-Ericsson and Joerg Mueller of Cadence

Paper 2.3, "How to Succeed Against Increasing Pressure - Automated Techniques for Unburdening Verification Engineers".  Speaker: James S. Pascoe - STMicroelectronics; Authors: James S. Pascoe - STMicroelectronics, Steve Hobbs - Cadence, Pierre Kuhn - STMicroelectronics.  (Note: while it's not clear from the title, this paper covers the "Coverage Unreachability" app running on Incisive Enterprise Verifier (IEV) - more on this "app" below.)

* Session 3, Tuesday Feb. 26, 9-10:30am (Unfortunately a conflict with paper 2.1 - flip a coin?)

Paper 3.1, "How to Kill 4 Birds with 1 Stone: In a Highly Configurable Design Using Formal to Validate Legal Configurations, Find Design Bugs, and Improve Testbench and Software Specifications"
Speaker: Saurabh Shrivastava - Xilinx, Inc.; Authors: Saurabh Shrivastava, Kavita Dangi, Mukesh Sharma - Xilinx, Inc, Darrow Chu - Cadence Design Systems, Inc.

* Poster session on Tuesday from 10:30-11:30am

1P.6, "A Reusable, Scalable Formal App for Verifying any Configuration of 3D IC Connectivity"  Speaker: Daniel Han - Xilinx, Inc., Authors: Daniel Han, Walter Sze, Benjamin Ting - Xilinx, Inc., Darrow Chu - Cadence Design Systems, Inc.

(Ed. note: the best part about the poster session is that you can easily interact with the authors, asking them questions on the fly in a way that would be awkward if they were presenting the paper in a lecture format.)

* The Cadence booth at the free expo on Tuesday & Wednesday, Feb. 26-27, 3:30-6:30pm each day

Among the other demos available, Team Verify experts will be on hand to show you our Coverage Unreachability app, one of a number of free apps available to users of IFV and IEV.  [Ed. note: What do we mean by the term "app" in this context?  Verification apps in general put the focus on problems rather than EDA technology: a verification app is a well-documented tool capability or methodology focused on a specific, high-value problem.  In this instance - with IFV or IEV as the platform - the given problem is more efficiently solved using formal-based methods, or a combination of formal, simulation, and metric-driven techniques, than by simulation-based methods alone.  Finally, the barrier to creating the necessary properties, and the need for ABV expertise, is significantly reduced through either automated property generation built into the tool(s) or pre-packaged properties.]

* Bonus: A free lunch on "Best Practices in Verification Planning" Wednesday Feb. 27!

On the Wednesday of DVCon, Cadence is hosting an expert panel on "Best Practices in Verification Planning".  Panel moderator and R&D Fellow Mike Stellfox will kick off this important discussion on how creating and executing effective verification plans can be a challenging mix of art and science that can go sideways despite the best efforts of engineers and managers.  Note that this won't be confined to RTL verification planning only -- the panel also includes experts on analog/mixed-signal verification and formal analysis.  Specifically, the CEO of long-time Cadence partner Oski Technology, Vigyan Singhal, will be on the panel to share how advanced planning can greatly improve the efficiency and effectiveness of formal analysis and ABV.  (Recall that at the last DAC, Vigyan's team successfully verified a sight-unseen DUT from NVIDIA in 72 hours.  The key to their success was resisting the enormous temptation to jump in and start running IEV, and instead taking a whole evening to thoroughly understand the design and scope out the most critical areas for analysis.)

We look forward to seeing you in-person soon!

Joe Hupcey III
for Team Verify

On Twitter: http://twitter.com/teamverify, @teamverify

And on Facebook too:  www.facebook.com/cdnsteamverify

 

Reference Links

The official DVCon site

Comprehensive list of Cadence-sponsored events & papers

Images from last year's conference to give you an idea of what it's like, in case you have never been to a DVCon before.

DVCon 2012 video playlist: http://www.youtube.com/playlist?list=PL66DB89BCDB6E841A

60 second highlights video from DVCon 2012: http://youtu.be/qEzIUX9VvOc

 

Using the 'restore -append_logs' Feature


As described in the Specman Advanced Option appnote, Specman Elite supports dynamic load and reseeding. This allows the user to run the simulation up to a certain point (often right after reset) and save it. The user can then restore the simulation and run many different tests, either by changing the random seed (reseeding) or by loading additional e files that change the test, e.g., by adding constraints (dynamic load).

But many customers who use this new methodology have come across a problem. If a DUT error occurs in one of the new runs, and there is a need to debug the failure, usually the first step is to check the various log files. However, with this methodology we only have log files from the restore point and later; anything written to the log file from the original run until the save is lost. So we don't actually have the full log file, and this can make debugging more difficult.

To avoid this problem and retain the full log file, the user must first save the simulation together with its log files (do not worry about the size; the file is compressed). Then, when restoring the simulation, the user must add a switch that tells Specman to append the current log files to the previously saved ones.

To support this capability, the following switches were added:

  • Specman:
    • The save command will have an additional switch: -with_logs
    • The restore command will have an additional switch: -append_logs
  • Ncsim:
    • The save command will have an additional switch: -snwithlogs
    • The restart command will have an additional switch: -snlogappend
  • Irun:
    • The command line will have an additional switch: -snlogappend
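
For example, from the Specman command line, the two switches are used as a pair; the snapshot name my_session below is hypothetical, not from the original example:

save my_session -with_logs

restore my_session -append_logs

The irun and ncsim forms of the same flow are demonstrated step by step below.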

So, how do you use this feature? We will show you, using the basic xor example, which we have shortened to two operations. If we run using the command:

irun xor.v -snload xor_verify.e -exit

the Specman log file will look like this:

Starting the test ...
Running the test ...
Running should now be initiated from the simulator side
  it = operation-@7: operation   of unit: sys
        ----------------------------------------------  @xor_verify
0       %a:                             0
1       %b:                             0
2       !result_from_dut:               0
  p_out$ = 0
  (it.a ^ it.b) = 0
  sys.time = 150
  it = operation-@8: operation   of unit: sys
        ----------------------------------------------  @xor_verify
0       %a:                             1
1       %b:                             1
2       !result_from_dut:               0
  p_out$ = 0
  (it.a ^ it.b) = 0
  sys.time = 350
Calling stop_run() from at line 45 in @xor_verify.
Last specman tick - stop_run() was called
Normal stop - stop_run() is completed
Checking the test ...
Checking is complete - 0 DUT errors, 0 DUT warnings.

Now let's run the example with save and restore. First we'll do the save:

irun xor.v -snload xor_verify.e -tcl
ncsim> run 200ns
ncsim> save foo -snwithlogs
ncsim> exit

Now, if we run using the command:

irun -r foo -exit

the Specman log will contain:

Restored Specman state INCA_libs/worklib/foo/v/savedir/sn_save.esv
  it = operation-@8: operation   of unit: sys
        ----------------------------------------------  @xor_verify
0       %a:                             1
1       %b:                             1
2       !result_from_dut:               0
  p_out$ = 0
  (it.a ^ it.b) = 0
  sys.time = 350
Calling stop_run() from at line 45 in @xor_verify.
Last specman tick - stop_run() was called
Normal stop - stop_run() is completed
Checking the test ...
Checking is complete - 0 DUT errors, 0 DUT warnings.

However, if we run using the command:

irun -r foo -snlogappend -exit

the Specman log will contain:

Starting the test ...
Running the test ...
Running should now be initiated from the simulator side
  it = operation-@7: operation   of unit: sys
        ----------------------------------------------  @xor_verify
0       %a:                             0
1       %b:                             0
2       !result_from_dut:               0
  p_out$ = 0
  (it.a ^ it.b) = 0
  sys.time = 150
Restored Specman state INCA_libs/worklib/foo/v/savedir/sn_save.esv
  it = operation-@8: operation   of unit: sys
        ----------------------------------------------  @xor_verify
0       %a:                             1
1       %b:                             1
2       !result_from_dut:               0
  p_out$ = 0
  (it.a ^ it.b) = 0
  sys.time = 350
Calling stop_run() from at line 45 in @xor_verify.
Last specman tick - stop_run() was called
Normal stop - stop_run() is completed
Checking the test ...
Checking is complete - 0 DUT errors, 0 DUT warnings.

 

We see that in this latest run, the new log was appended to the log file from the run in which we executed the save command.

It is important to note that only Specman log files are affected by this switch; irun and ncsim log files are not affected.

Avraham Bloch

Specman R&D

P.S. Reminder: To discuss this feature, Specman in general, and the new Incisive Debug Analyzer, R&D’s Nadav Chazan will be at DVCon Feb. 26-28, 2013.  Ask for him by name in the booth, or sign-up for his tutorial on Thursday February 28 on “Fast Track Your UVM Debug Productivity with Simulation and Acceleration”.  More info: http://dvcon.org/2013_event_details?id=144-5-T

IBM and Cadence Collaboration Improves Verification Productivity


Technology leaders like IBM continuously seek opportunities to improve productivity because they recognize that verification is a significant part of the overall SoC development cycle. Through collaboration, IBM and Cadence identify, refine, and deploy verification technologies and methodologies to improve the productivity of IBM’s project teams. 

Tom Cole, verification manager for IBM’s Cores group, and I took a few minutes to reflect on verification productivity and discuss what the future holds.

Tom, can you describe the types of products your teams verify? 

Our groups develop IP cores for IBM internal and external customer SoC projects.  Among these are Ethernet, DDR, PCIe and HSS communications cores and memories. Our projects tend to be on the leading edge of performance and standards.

What are some of the verification challenges your teams face? 

Our verification challenges fall into three major categories – mixed-signal, debug, and product-level productivity.  All of our cores include PHYs, which makes mixed-signal intrinsic to their functionality, but we all know that transistor-level mixed-signal simulation is too slow for methodologies like OVM and UVM.  OVM and UVM increase productivity because they reduce the test-writing effort, but they create another challenge in debugging the enormous amount of data they produce.  A part of that data set - coverage - is a critical metric for us because it enables us to measure our verification progress. But it also leads to a capacity challenge due to the enormous data volume.

How are IBM and Cadence collaborating to address these challenges?

Several innovative projects are underway with Cadence to address these verification challenges.  For example, we have applied the metric-driven verification methodology as documented in Nancy Pratt's video summary. Another project, which has been running for more than a year, models analog circuits with digital mixed-signal models and shows an order-of-magnitude performance improvement in preliminary results.  As a result, we were able to use the same models in our pre-silicon verification and in our post-silicon wafer test harness.  As industry leaders, we also share knowledge derived from our collaboration through technical papers.  One example is the SystemVerilog coding-for-performance paper delivered at DVCon 2012 and the constraint optimization paper we will deliver at DVCon 2013.

What’s next for verification productivity?

Given the complexity of verification, there are several opportunities to improve productivity.  For example, a promising approach uses formal checks at the designer level to reduce the time to integrate the testbench and blocks of the design for verification.  We are currently collaborating to place these static checks in our code for reuse throughout the verification cycle.  This may catch unintended instabilities introduced by ECO design changes earlier in the verification process and further improve our overall verification productivity.

If you have questions for Tom or me, please post your comment and we’ll do our best to answer you quickly!

=Adam Sherer, Cadence

It’s Coming: Udacity CS348 Functional Hardware Verification Course Launches on March 12, 2013


On October 18, 2012, Google, NVIDIA, Microsoft, Autodesk, Cadence, and Wolfram announced their collaboration with Udacity. Working with Udacity, each of these companies is developing new massive open online courses (MOOCs).

The Cadence contribution is CS348 Functional Hardware Verification.

You can enroll in this course by clicking "Add to my Courses" on this page:

https://www.udacity.com/course/cs348

Today, we are happy to announce that our course will launch on Tuesday, March 12, 2013. The full course consists of 9 units and will include industry cameos from several distinguished engineers from different companies around the world. These engineers provide additional perspective to the topics of the particular units in the course.

To give you a little taste of the course, we are releasing the first clip of the first unit today.

As you watch the video, you will notice this is going to be different.

One key aspect that is not shown in the first clip is the high level of student engagement and interaction. Besides micro-lectures, the course will contain lots of interactive quizzes and many online coding exercises to ensure the concepts are well understood and can be put into practice immediately.

We will preview some of the interactive capabilities in the next weeks.

This is the list of units:

  1. Introduction to Hardware Verification
  2. Basic stimulus modeling and generation
  3. Interfacing to the Hardware Model
  4. Monitoring and Functional Coverage
  5. Checking
  6. Aspect Oriented Programming
  7. Reuse Methodology
  8. Debugging
  9. Conclusion and Exam

The course will be completely self-paced, which means you can take it at your own pace and leisure.

Finally, the course will close with a final exam and Udacity certificate to show your performance.

Get ready to verify and check for course news on Facebook and Twitter!

Axel Scherer
Incisive Product Expert Team
Twitter, @axelscherer

Planning to Go to DVCon 2013 Next Week? If So, Don't Miss the Debug Tutorial Feb. 28th!


TUTORIAL: Fast Track Your UVM Debug Productivity with Simulation and Acceleration

Session: 5T on Thursday, Feb. 28th from 8:30AM - 12:00PM

For more details on the debug tutorial, click here

This debug tutorial will highlight how customers can reduce their debug turnaround time by employing the most efficient debug tools available. Class-based, software-oriented environments are best debugged using interactive debug techniques, where the user has a much broader range of tools at their disposal. Traditional post-process debug techniques can be valuable; however, limitations such as performance and the lack of interactive features such as source-level stepping make debugging difficult. To be more efficient in post-process debug, these restrictions must be removed. The debug techniques presented in this tutorial will provide design and verification engineers with the latest techniques and tools available in the debug space, including a novel solution that combines the best features of both interactive and post-process debug.

Novel debug methodologies that will be discussed in this tutorial will allow users to:

  • Explore their test environment for static and dynamic information
  • Step forward or backward through the simulation, or jump to a specific point in the simulation
  • Investigate possible reasons why the simulation has reached a particular state through advanced go-to-cause features
  • Filter all messages coming from any platform (HVL and HDL code) and explore the cause of the messages

Additional topics to be covered in the tutorial are:

  • Advantages of interactive debug over traditional post-process debug
  • Preparing UVM environments for hardware acceleration
  • Advanced post-process debug techniques improving debug productivity by 40-50%
  • Unique advantages of class-based aware debug technologies 
  • Application of debug techniques to both simulation and acceleration engines, including assertions and coverage driven verification 

We will also explain how new data access to coverage and assertions in acceleration extends these methods to catch bugs unique to system verification. In short, we will provide design and verification engineers with the latest techniques and tools available in the debug space, including solutions that combine the best features of interactive and post-process debug using both simulation and acceleration engines.

So, if Debug is a big bottleneck in your overall verification effort, do not miss this debug tutorial on Thursday, February 28th at 8:30AM in the Donner Ballroom.

Looking forward to seeing you all at DVCon!

Kishore Karnane

 

JBYOB (Just Bring Your Own Browser): Interactive Labs on Udacity CS348 Functional Hardware Verification – No Installation Required


On February 19, we announced the launch date for our Udacity MOOC course, CS348 Functional Hardware Verification, which will launch exactly one week from now, on March 12, 2013.

When we communicated the launch date, we also released the first clip of the first unit.

Now we want to give you a glimpse of one of the coolest features of this course: interactive labs that execute in the web browser.

The best way to describe it is to give you a short demo that shows you how this works, even before you can try it out yourself.

The first time I saw this it totally blew my mind. No installation, no setup of labs, everything is fully sandboxed!

Just enroll here, and when the course is live log into the course on a web browser, and you are ready to go. It's simply amazing!

Get ready to code!

Axel Scherer
Incisive Product Expert Team
Twitter, @axelscherer


DVCon 2013: Functional Verification Is EDA’s “Killer App”


With another year of record attendance, DVCon has again proven that a functional verification-focused mix of trade show and technical conference is what customers need to get their jobs done.  Here are some of the highlights I took away from this informative event:

DVCon 2013 was a one stop shop for panels, papers, posters,
live demos, and tutorials on functional verification

* Great panels on Verification Planning and Drastically Improving D&V

Two panels at the conference provided valuable food for thought in their own ways.  First, in regard to the Cadence lunch panel on "Best Practices in Verification Planning", EDA industry observer Peggy Aycinena wrote:

Sometimes magic happens at panel discussions at technical conferences, and that was the case mid-day on Wednesday at DVCon in San Jose this week, where the conversation was lively, entertaining and informative on the pedestrian, albeit foundational, topic of "Best Practices in Verification Planning."  Ironically, the hour-long conversation did not appear to be planned at all, but to be organic and spontaneous ...

Granted, I'm biased, but I have to agree wholeheartedly.  The panelists were generous in sharing their experiences with the mixture of art and science required by verification project planning, and I urge you to review either Peggy's account of the panel or Richard Goering's in-depth Industry Insights report.

Later that day "panel magic" happened again at the Industry Leaders panel on "The Road to 1M Design Starts".  To everyone's delight, the panelists embraced the spirit of brainstorming how design and verification can be made significantly (think 20x, even 100x) more efficient.  Sound impossible?  One panelist gamely recalled that not many years ago there was a "software crisis" where the best software managers could expect was a net of 10 tested lines of code per day per engineer.  Fast forward to the present, and teenagers with a lot of imagination but limited programming experience are creating money-making apps on incredibly complex mobile platforms thanks to very well thought out development tools and libraries.  The panel challenged the audience to consider the lessons of such anecdotes in increasing abstraction and automation for EDA tool providers and their customers alike.

Richard Goering covers this panel in depth here in his Industry Insights blog.

* Apps as the new EDA paradigm

At last year's DVCon, one of my product teams ("Team Verify") introduced the idea of formal apps in our tutorial.  (In a nutshell, a formal app enables an engineer who has never used formal before to apply powerful formal engines "under the hood" to solve specific problems.)  At the time we were the only ones promoting this concept and offering the underlying product support.  What a difference a year makes -- not only have our immediate competitors adopted this approach, but the "app" term was being applied to formal, multi-engine, and pure dynamic simulation offerings and everything in between.  Of course, it's hard to be surprised by this given the obvious EDA-related appeal: because apps are focused on specific, painful problems -- i.e., they are customer-centric by definition and in practice -- they are a clear win for both end users and vendors.

* The e/Specman Surge

After years of having waves of Specman-related abstracts rejected seemingly out of hand, this year the assembly finally got to see what Specmaniacs have been eager to share with the verification community.  One look at the posters by Meirav Nitzan of Xilinx (1P.21, "Taming the Beast: A Smart Generation of Design Attributes (Parameters) for Verification Closure using Specman") and Horace Chan of PMC-Sierra (1P.25, "Maximize Vertical Reuse, Building Module to System Verification Environments with UVM e") and it's obvious that 'e' and Specman usage are thriving and remain at the forefront of verification innovation.

Until next DVCon, may your power consumption be low and your throughput be high.

Joe Hupcey III

On Twitter: @jhupcey, http://twitter.com/jhupcey

Reference Links

DVCon 2013 Proceedings, http://dvcon.org/

DVCon 2013 YouTube playlist of speaker and panelist video interviews:
http://www.youtube.com/playlist?list=PLYdInKVfi0Kantj1U3H8pk9NkxFykT0rG

Richard Goering Industry Insights report: DVCon 2013 Expert Panel: How to Succeed with Verification Planning
http://www.cadence.com/Community/blogs/ii/archive/2013/03/05/dvcon-2013-expert-panel-how-to-succeed-with-verification-planning.aspx

Richard Goering Industry Insights report: DVCon 2013 Panel: 1 Million IC Design Starts - How Can We Get There?
http://www.cadence.com/Community/blogs/ii/archive/2013/03/01/dvcon-2013-panel-1-million-ic-design-starts-how-can-we-get-there.aspx

Peggy Aycinena, EDA Café: DVCon 2013: Best Practices in Verification Planning
http://www10.edacafe.com/blogs/whatwouldjoedo/2013/02/28/dvcon-2013-best-practices-in-verification-planning/

 

Launch Time – Udacity CS348 Functional Hardware Verification Hits the Web Today, March 12, 2013


Coinciding with the first day of CDNLive! Silicon Valley, our Udacity MOOC course on Functional Hardware Verification goes live today! Developing this course has been a very rewarding experience, and we are happy this day has finally come.

Last week we gave you a sneak preview of the interactivity featured in the course. However, as you all know there is nothing like trying something by yourself to really get it.

So now it is your turn. Go ahead - enroll and check it out.

To give you more motivation to enroll, we are providing another glimpse of the course. This time the clip is from unit 2, where we model packets for a data router.

Let's verify!

Axel Scherer
Incisive Product Expert Team
Twitter, @axelscherer

Specman: Getting Source Information on Macros


When you write a define-as or define-as-computed macro, you sometimes need the replacement code to contain, or to depend on, source information about the specific macro call, including the source module and the source line number.

For example, a macro may need to print source information, or it may need to generate different code depending on the module in which it is used.

You can achieve this as follows.

Define-as macro

In a define-as macro, you can use the following two special kinds of replacement terms inside the replacement block:

<current_line_num>

This is replaced with the decimal numeric representation of the source line number in which the macro is called.

<current_module_name>

This is replaced with the name of the module in which the macro is called.

For example, the following macro sets the given left-hand-side expression (such as a field or a variable) to the given value, and prints an informational message reporting the change.

<'
define <my'action> "modify_and_print <field'exp> <value'exp>" as {
    <field'exp> = <value'exp>;
    out("*** The value of <field'exp> was changed to ", <value'exp>,
        " at line <current_line_num> in @<current_module_name>");
};
'>

Assume the following code is then written in a module called my_module.e:

<'
extend sys {
    !x: int;
    run() is also {
        modify_and_print x 10;
    };
};
'>

This code will assign the value 10 to the field x, and will print the following output:

*** The value of x was changed to 10 at line 5 in @my_module
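The same call-site trick exists in general-purpose languages, which may help make the idea concrete. Below is a rough Python sketch (illustration only -- the helper name mirrors the macro above, and the standard inspect module stands in for <current_line_num>):

```python
import inspect

def modify_and_print(obj, field, value):
    # Look one frame up the call stack -- the rough equivalent of
    # <current_line_num> in the define-as macro above. The caller's
    # module name (the <current_module_name> analogue) would be
    # caller.frame.f_globals["__name__"].
    caller = inspect.stack()[1]
    setattr(obj, field, value)
    return "*** The value of %s was changed to %s at line %d" % (
        field, value, caller.lineno)
```

As in the e version, the message is built from the caller's source location, not the helper's.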

 

Define-as-computed macro

In a define-as-computed macro, the following two pre-defined routines can be used to query the current source line number and the current module.

get_current_line_num(): int

This routine returns the source line number in which the macro is called.

get_current_module(): rf_module

This routine returns the reflection representation of the module in which the macro is called.

To get the module name string, similarly to <current_module_name> in define-as macros, use the get_name() method of rf_module.

For example, the following macro adds a new field of type int to the struct in the context of which it is called. The field name is constructed from the current module name and line number. However, if the module name has the "_xxx" suffix, no field is added.

<'
define <my_field'struct_member> "special_field" as computed {
    var m_name: string = get_current_module().get_name();
    if m_name !~ "/_xxx$/" then {
        result = append("f_", m_name, "_",
                get_current_line_num(), ": int");
    };
};
'>

The following code, if written in a module called some_module.e, adds two integer fields to sys: f_some_module_3 and f_some_module_4:

<'
extend sys {
    special_field;
    special_field;
};
'>

Note however that if the same code is written in a module called some_module_xxx.e, nothing is done.
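The branching logic of the macro can be mirrored in a few lines of any language. As a hedged illustration only (Python, with an invented function name standing in for the macro expansion), the decision looks like this:

```python
import re

def special_field(module_name, line_num):
    # Mirror of the macro body: skip modules with the "_xxx" suffix,
    # otherwise build the field declaration from module name and line.
    if re.search(r"_xxx$", module_name):
        return None
    return "f_%s_%d: int" % (module_name, line_num)
```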

Yuri Tsoglin

e Language team, Specman R&D

Incisive Debug Analyzer is a Finalist for EETimes and EDN ACE Software Product of the Year


Great news.... Incisive Debug Analyzer (IDA) is one of five finalists for the EETimes/EDN Annual Creativity in Electronics (ACE) Awards in the Software Product of the Year category. In addition to IDA, Lip-Bu Tan and Cadence are also finalists for ACE Executive of the Year and Company of the Year, respectively.

Check out the Press Release.

The awards program honors the people and companies behind the technologies and products that are changing the world of electronics. Winners will be announced April 23 during Design West in San Jose.

Companies today spend more than 50% of their verification effort in debug, because bugs are hard to find at both the HDL and testbench levels.  This has created a critical market need for sophisticated debug solutions that can find bugs quickly, thereby lowering design costs and speeding time to market.

In 2012, Cadence met this market need with the introduction of the Incisive Debug Analyzer (IDA) - a new and unique multi-language, "interactive" post-process debug solution that can help customers find bugs in minutes instead of hours.  As the only debug tool in the market to deliver comprehensive and innovative debug functionality in a single, integrated, and synchronized debug environment, the new IDA can cut customer debug time by 40-50% and by more than 2X on really complex bugs.

About IDA

IDA provides sophisticated debug solutions to address RTL, testbench and SoC verification debug needs. Additionally, IDA provides an interactive debug flow in a post-process debug environment. This means that customers have all the functionality of an interactive debug flow while debugging in a post-process mode. Since users have access to all the data files, they only need to run the simulation once -- a significant time saver in debug.

IDA has several unique debug capabilities which are all very tightly integrated and synchronized into a single, multi-pane debug window. Here are just a few:

  • Playback Debugger: Unique functionality which allows customers to either step or jump through time to any source code line or variable change, both forward and backward in time.
  • Cause Analysis: Intuitive, flow-oriented debug environment which presents suggestions about where to look, in order to debug.
  • SmartLog: Integrated message window that shows logfile messages from HDL, testbench, C/C++/SystemC/Assertions, etc.

Learn more about Incisive Debug Analyzer.

Cadence is taking the lead in debug, and there are loads of new features, improved ease of use, and better performance planned for Incisive debug solutions in 2013 and beyond.  Contact your Cadence representative for more information, in-depth demos/presentations, or hands-on technical workshops.

Happy Debugging!

Kishore Karnane

Develop for Debugability – Part 1


Debugging is the most time-critical activity of any verification engineer. Finding a bug is very often a combination of having a good hunch, experience, and the quality of testbench code that you need to analyze. Since having a good hunch and experience is something everyone needs to acquire for themselves, I am going to focus on potential code optimizations that help reduce debug time.

Encapsulate your Aspects

As in any other object-oriented language, modeling should be a planned rather than an ad-hoc process. However, as a verification engineer you are heavily reliant on others in the planning process for your testbench. As such, you will quite often be forced to do ad-hoc programming to model a new requirement, or to rewrite existing code to meet a slight change in an existing requirement. The UVM-e guidelines already provide a very solid basis; however, even within those guidelines, your scoreboard is very prone to becoming a dumpster for anything that you have to do on an ad-hoc basis.

You might be fine with just using your scoreboard for modeling all the RTL-to-testbench output checking. However, your testbench might have to handle more complex input-to-output transformations to provide the testbench output. This is where using the scoreboard as a dumpster for anything you can think of is a bad idea, and you should think about using a dedicated reference model to provide a well encapsulated input-to-output transformation or even an input predictor, based on the output you received.

As an e user you are in luck, because it is very easy to perform ad-hoc programming in e and avoid "the dumpster." In a series of steps, I am going to guide you through how to integrate your reference model into your block-level monitor unit.

  1. Declare your scoreboard

<'
// This is just a place-holder for your scoreboard
unit my_scbd_u like uvm_base_unit {
   // Place-holder method for input-to-output transformation
   transform_received_to_expected( src_tr: src_prot_tr_s ): target_prot_tr_s is empty;
};
'>

2. Instantiate your scoreboard in the Block-Level Monitor

<'
unit my_block_monitor_u like uvm_base_unit {
    // the scoreboard instance
    scbd: my_scbd_u is instance;
};

extend my_scbd_u {
    // reference, do not generate
    !p_block_mon: my_block_monitor_u;

    connect_pointers() is also {
        p_block_mon = get_enclosing_unit( my_block_monitor_u );
    };
};
'>

3. Create your reference model aspect feature and instantiate it in the monitor

<'
unit my_model_aspect_a_u like uvm_base_unit {
    // reference to your monitor unit
    !p_block_mon: my_block_monitor_u;
    // All your fields, events, methods etc. go in here
    // ...

    my_transformation_algorithm( src_tr: src_prot_tr_s ): target_prot_tr_s is {
        // algorithm that models the transformation from input to output
    };

    connect_pointers() is also {
        p_block_mon = get_enclosing_unit( my_block_monitor_u );
    };

    // Add a name for your model
    short_name(): string is also {
        result = "ASPECT_A";
    };
};

extend my_block_monitor_u {
    model_aspect_a: my_model_aspect_a_u is instance;
};
'>

4. Integrate the first model aspect into the scoreboard

<'
extend my_scbd_u {
    // Reference the aspect
    !p_model_aspect_a: my_model_aspect_a_u;

    // Add the transformation hook
    transform_received_to_expected( src_tr: src_prot_tr_s ): target_prot_tr_s is {
        result = p_model_aspect_a.my_transformation_algorithm( src_tr );
    };

    connect_pointers() is also {
        p_model_aspect_a = p_block_mon.model_aspect_a;
    };
};
'>

5. Add more aspects to the verification environment

<'
extend my_scbd_u {
    // Reference another aspect
    !p_model_aspect_b: my_model_aspect_b_u;

    // Add the transformation hook for another model aspect
    transform_received_to_expected( src_tr: src_prot_tr_s ): target_prot_tr_s is also {
        // This could be an example of a filter that alters or removes transactions
        result = p_model_aspect_b.apply_filters( result );
    };

    connect_pointers() is also {
        p_model_aspect_b = p_block_mon.model_aspect_b;
    };
};
'>

In this flow, steps 1 through 4 have to be done only once; step 5 simply extends your transformation hook method with any additional algorithms you need to add to your scoreboard.

 

By following these steps as a guideline, you can quickly add reference model aspects to your scoreboard without creating a dumpster, and this will help you tremendously when debugging issues. Don't forget to extend the short_name() method in your units for your messaging!
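The shape that these steps build up can be summarized language-neutrally. Here is a rough Python sketch (all class and method names are invented for illustration) of the same structure: the scoreboard stays a thin dispatcher, while each aspect owns one algorithm:

```python
class AspectA:
    # Encapsulates one input-to-output transformation algorithm
    # (the counterpart of a dedicated reference-model aspect unit).
    def transform(self, src_tr):
        return {"payload": src_tr["payload"], "kind": "expected"}

class AspectB:
    # A second aspect layered on top, e.g. a filter that alters
    # or removes transactions.
    def apply_filters(self, tr):
        return tr if tr["payload"] != 0 else None

class Scoreboard:
    # The scoreboard only wires aspects together -- no algorithms
    # live here, so it never becomes a dumpster.
    def __init__(self, aspect_a, aspect_b):
        self.aspect_a = aspect_a
        self.aspect_b = aspect_b

    def transform_received_to_expected(self, src_tr):
        result = self.aspect_a.transform(src_tr)
        return self.aspect_b.apply_filters(result)
```

Adding another aspect means adding another small class and one more line in the dispatcher, never new logic inside the scoreboard itself.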

 

Daniel Bayer

 

Develop For Debugability – Part II

Looking at Coding Styles for Debug

In this blog post we are going to discuss 3 different cases where coding style can help you debug easier:

1. Declarative vs. Sequential Coding

2. Method Call Depth

3. Pre-Calculating if-else Conditions

Declarative vs. Sequential Coding

When modeling your testbench you will need to write code that describes time-consuming or complex steps of some intended behavior. This will be considered sequential code.

You will also most likely need code that keeps the current state of your object accessible to other objects, as well as tracking these object states over time. This will be considered declarative code.

While developing your testbench you will often find yourself asking: "Should I make this declarative or sequential?" As a general guideline, reducing the number of sequential lines of code and keeping as much information as possible in declarative code will help you debug much more quickly.

Specman has a superb data-browser and step-debugger and it is important to keep this in mind. When debugging code you usually want to see where the testbench and the RTL go out of sync. The common approach would be to fire up your failing simulation and set the breakpoints:

Specman> break on error

Specman> break on gen err

This will make your simulation stop on an error and open up the debugger, highlighting the precise line where the error occurred. In an ideal case, you can already tell from the error description what went wrong. This, however, is rather rare, so you should get acquainted with the current state of your testbench and gather all the declarative information you can through Specman's data browser. This will give you an understanding of where the simulation has headed, and if you have tracked enough information in your declarative code, chances are good that you will understand the error. If you are fortunate, you can resolve the error right away, or at least have a conversation with the person who might have to fix the scenario.

However, there are still quite a few cases where you need to rerun the simulation. Here you will have to dive into the remaining sequential code and do a step-by-step debugging session. Step-debugging is very tedious and will get more cumbersome the more sequential code you have to examine. It gets even more cumbersome if you are relying on a lot of temporary variables inside your methods.

Method Call Depth

While developing sequential code you will be defining and implementing a bunch of methods. Crafting methods is usually a straightforward process. In verification you need to think about whether you need a time-consuming method (TCM) or a timeless method. As a general rule, if you are modeling event-based behavior or checks, you need a TCM -- otherwise try to stick with a regular, timeless method.

Mostly you will be developing methods for:

  • Interface and Virtual Sequences
  • Interface Monitors (not block-level monitors)
  • Bus-Functional Models (BFMs)
  • Reference Models
  • Scoreboards

Due to this separation of functionality, given by the UVM-e, there is already a given blueprint for how to integrate these methods with each other.

One issue that may come from developing methods is that one may feel tempted to create methods for reusability, and hence encapsulate even trivial steps in a method and call that method instead of writing the steps inline. The problem with debugging code that relies heavily on method calls is that you constantly step into new methods and lose the scope of the caller.

The opposite problem -- avoiding method creation altogether -- results in mega-monolithic methods. These kinds of methods are hard to debug and understand as well, since they usually carry the context of more than one modeled aspect.

Pre-Calculating if-else Conditions

The essence of code execution is handling conditional branching constructs. Generally there is nothing wrong with simply writing a condition directly into your if-else actions. However, complex Boolean expressions should be evaluated before entering the if action. By creating a temporary variable and assigning the Boolean evaluation to it, you gain two advantages:

  • Create a meaningful variable name
  • Break on the if-execution with the condition already evaluated
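A minimal sketch of both points (Python, with invented names): once the condition lives in a descriptively named variable, a breakpoint on the if line shows one meaningful, already-evaluated Boolean instead of several raw operands:

```python
def arbitrate(fifo_level, fifo_depth, credits, link_up):
    # Pre-calculate the complex condition into a named variable...
    tx_slot_available = fifo_level < fifo_depth and credits > 0 and link_up
    # ...so a breakpoint on the next line shows a single evaluated Boolean.
    if tx_slot_available:
        return "send"
    return "stall"
```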

To read part 1 of this blog series, click here.

Daniel Bayer

Mode Support for SimVision “Stop Simulation” Button


Prior to Incisive Enterprise Simulator (IES) 12.1, clicking the SimVision "Stop Simulation" button would stop the simulation both in an HDL context and in a Specman context if Specman was present in the simulation. To provide better flexibility in the exact place where you want to pause, the "Stop in Specman only" functionality has been introduced.  

As of IES 12.1, whenever Specman is present in the simulation, the SimVision "Stop Simulation" button provides a drop-down menu that lets you choose between two switchable modes: "Stop Simulation" (wherever the simulation is now) and "Stop in Specman" (see the following figure).

 

 

SimVision will immediately perform the "Stop" operation in the mode you select, and keep that mode persistent. It will also indicate the selected mode in the button icon and tooltip (see the following figure).

 

Note the "e" over the button. The "e" indicates that you selected "Stop in Specman" mode. (Had you chosen "Stop Simulation," no indicator would appear over the button.)

When you click the "Stop" button while SimVision operates in "Stop in Specman" mode, the simulation will stop only when it is in Specman, not before.

To toggle between the Stop Simulation modes, just make your new choice in the dropdown menu.

Note that if you run the simulation without Specman, or invoke standalone Specman, the drop-down menu is absent and the "Stop" button works in "Stop Simulation" mode (see the following figure).

 

 Alex Chudnovsky


New Specman Coverage Engine - Extensions Under Subtypes


This is the first in a series of three blog posts presenting some powerful enhancements that were added in Specman 12.2 to ease the modeling of a multi-instance coverage environment. In this blog we focus on the first enhancement; the other two will be described in the following coverage blogs.

Starting with Specman 12.2, you can define coverage options per subtype. With per-instance coverage, Specman checks the subtypes of each instance and applies only the relevant subtype options. We will demonstrate the power of this in both "per unit instance" and "per subtype instance" coverage.

  • I. Utilizing when extensions for modeling per subtype instances

One intuitive use of extensions of covergroups under "when" subtypes relates to covergroups which are collected per subtype using the per_instance item option:

type packet_size_t: [SMALL, LARGE];

struct packet{
   size: packet_size_t;
   length: uint(bits:4);
   event sent;
   cover sent is{
      item size using per_instance;
      item length;
   };
};

 

The results of the above covergroup are collected per each value of the size item, so in practice this covergroup is collected separately per each "size" subtype.

Let's assume that small packets can only have length < 8, and large packets can only have length >= 8. In pre-12.2 releases, the following code was needed to filter out the irrelevant values for each subtype:

 

extend packet{
   cover sent(size==SMALL) is also{
      item length using also ignore=(length >= 8);
   };
   cover sent(size==LARGE) is also{
      item length using also ignore=(length < 8);
   };
};

 

This code can now be replaced with a native "when" subtype extension:

extend packet{
   when SMALL packet{
      cover sent is also{
         item length using also ignore=(length >= 8);
      };
   };
   when LARGE packet{
      cover sent is also{
         item length using also ignore=(length < 8);
      };
   };
};

 

  • II. Utilizing when extensions for modeling per unit instances

Extension of covergroups under "when" subtypes can also be used to model the different instances of a covergroup that are collected per-unit instance, according to the exact subtype of the containing instance.

Let's see a code example that illustrates the power of this capability.  In this code we model a packet generator unit that generates packets of different sizes. The packet generator unit has a field which describes the maximal size of a packet that a packet_generator instance can generate:

 

type packet_size_t: [SMALL, MEDIUM, LARGE, HUGE];

struct packet{
   size: packet_size_t;
};

unit packet_generator{
   max_packet_size: packet_size_t;
   event packet_generated;
   cur_packet: packet;
   generate_packet() is{
      gen cur_packet keeping {it.size.as_a(int) <= max_packet_size.as_a(int)};
      emit packet_generated;
   };
};

extend sys{
   packet_gen1: packet_generator is instance;
   keep packet_gen1.max_packet_size == LARGE;
   packet_gen2: packet_generator is instance;
   keep packet_gen2.max_packet_size == MEDIUM;
   packet_gen3: packet_generator is instance;
   keep packet_gen3.max_packet_size == HUGE;
};

Oh, right, there's that coverage thing we need to define in order to check that each valid packet size was generated in each instance of the packet_generator :

extend packet_generator{
   cover packet_generated using per_unit_instance is{
      item p_size: packet_size_t = cur_packet.size;
   };
};

OK, so the above code enables the coverage collection of p_size separately for each instance of packet_generator. Let's generate 100 packets in each packet generator instance. Surely we'll get 100% coverage?

Well, we won't. When launching Incisive Metric Center (IMC), we see that the coverage instances are not all fully covered. For example, the grade of the instance under sys.packet_gen1 is only 75%:

 

The reason for that is the constraint that prevents the generation of HUGE size packets in instance sys.packet_gen1, so no matter how many packets are generated in that instance, the 'HUGE' bucket (bin) will never be covered.
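A quick back-of-the-envelope model (a Python sketch, not Specman output; names are invented) reproduces that arithmetic: with four size buckets and HUGE unreachable under a max of LARGE, the instance grade can never exceed 3/4:

```python
import random

SIZES = ["SMALL", "MEDIUM", "LARGE", "HUGE"]

def coverage_grade(max_size, num_packets=100, seed=0):
    # Generate only sizes allowed by the instance constraint, then
    # grade the instance as hit-buckets / total-buckets.
    rng = random.Random(seed)
    allowed = SIZES[: SIZES.index(max_size) + 1]
    hit = {rng.choice(allowed) for _ in range(num_packets)}
    return len(hit) / len(SIZES)
```

With 100 packets, every reachable bucket is hit, so the grade is fixed entirely by how many buckets the constraint leaves reachable.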

We need to refine the valid buckets according to the generatable packet sizes in each instance. We can use instance-specific covergroup extensions for that:

 

extend packet_generator{
   cover packet_generated(e_path==sys.packet_gen1) is also{
      item p_size using also ignore = p_size.as_a(int) >
         packet_size_t'LARGE.as_a(int);
   };
   cover packet_generated(e_path==sys.packet_gen2) is also{
      item p_size using also ignore = p_size.as_a(int) >
         packet_size_t'MEDIUM.as_a(int);
   };
   cover packet_generated(e_path==sys.packet_gen3) is also{
      item p_size using also ignore = p_size.as_a(int) >
         packet_size_t'HUGE.as_a(int);
   };
};

Now we can achieve 100% grade for each instance:

 

  

But in 12.2 we can use the following subtype extensions instead:

extend packet_generator{
   when SMALL'max_packet_size packet_generator{
      cover packet_generated is also{
         item p_size using also ignore = p_size.as_a(int) >
            packet_size_t'SMALL.as_a(int);
      };
   };
   when MEDIUM'max_packet_size packet_generator{
      cover packet_generated is also{
         item p_size using also ignore = p_size.as_a(int) >
            packet_size_t'MEDIUM.as_a(int);
      };
   };
   when LARGE'max_packet_size packet_generator{
      cover packet_generated is also{
         item p_size using also ignore = p_size.as_a(int) >
            packet_size_t'LARGE.as_a(int);
      };
   };
   when HUGE'max_packet_size packet_generator{
      cover packet_generated is also{ // for extend packet_size_t: [GIGANTIC]; ...
         item p_size using also ignore = p_size.as_a(int) >
            packet_size_t'HUGE.as_a(int);
      };
   };
};

At first look, the latter solution doesn't look more efficient than the former one. It includes 4 extensions of the covergroup instead of the 3 that were needed before. But what would happen if, instead of only 3 packet_generator instances, we had 100? If we extend each instance by itself, as we did in the first solution, we will need to extend each one of the 100 covergroup instances.

With the "when subtype extension" solution, the 4 extensions above satisfy the requirement for any number of instances.

Even more important, the solution which uses "when subtype extension" is reusable, since it doesn't use the full path of the covergroup instances. So it is much more suitable for verification IPs and for module verification environments which are later integrated into system level verification.

But before you run off and start extending your covergroups under subtypes, I'd like to mention that there is another newly supported option in the e language which is even better suited for the exact scenario described above -- it is called the "instance_ignore" item option.

  • What is this "instance_ignore" option?
  • Why it is better suited for the above scenario?
  • For which scenarios is the 'extension under when subtypes' better suited?

Answers for all of the above questions (and more) will be found in the next Specman coverage blog -- "Using Instance Based Coverage Options for Coverage Parameterization"

Team Specman 

 

Introducing UVM Multi-Language Open Architecture


The new UVM Multi-Language (ML) Open Architecture (OA) posted to the new UVMWorld is the result of a collaboration between Cadence and AMD.  It uniquely integrates e, SystemVerilog, SystemC, C/C++, and other languages into a cohesive verification hierarchy and runs on multiple simulators.  Moreover, the new solution is open for additional collaboration and technology enhancement.

Since Cadence introduced ML verification four years ago, the need for it has never been greater.  Complex SoCs are verified with a combination of industry-standard languages and frameworks including IEEE 1647 (e), 1800 (SystemVerilog), 1666 (SystemC), and Accellera UVM, as well as C/C++, VMM, Perl, and others.  The previous ML solution enabled the standard connections but had some limitations: it focused on "quick-stitch" integration, which allowed for data communication but required significant additional coding to synchronize that communication, and it was built primarily for the Incisive Enterprise Simulator.

In this UVM ML OA video, Bryan Sniderman, Verification Architect at AMD, introduces the requirements that drove the development, the limitations of existing solutions, and the features you can expect.  Bryan describes how the new solution enables hierarchical integration of the frameworks, seamless phasing, seamless configuration, and the ability to run on multiple simulators.

You can also learn more about the solution in our webinar, “Introducing UVM Multi-Language Open Architecture,” archived on Cadence.com. If you are at DAC, stop by the Cadence Theater on Wednesday at 4:30pm to hear Mike Stellfox present the solution and take part in our Q/A that will follow.  Of course, you can also stop by the booth to learn more as well or send an email to support_uvm_ml@cadence.com if you have any questions.  Finally, note that you will need to register in the Accellera Forums to download the UVM ML OA and that registration is open to all.

Think of the UVM ML OA as a new beginning.  As you read through and watch the background materials, you’ll probably see a mix of exciting new features and opportunities to further improve the solution.  We welcome that input.  The solution you see here represents a solid foundation, but there is more that we can do and we are happy to expand the collaboration to bring in those new ideas.

 

=Adam Sherer on behalf of the AMD and Cadence collaboration

 

 

How Can You Continue Learning About Advanced Verification at Your Desk?


How much time do you spend "playing" and "learning" before you try a new EDA tool, feature, or flow?
Do you really take a training class and sift through the documentation or books about the subject before you start project work? Or are you the type who has the knack of figuring things out on your own by taking a deep dive, head first?

Learning is an iterative and repetitive process.  Human beings spend most of their lives learning through a structured learning program in their school years, then an expensive and elective college adventure, leading to years of learning during their professional lives.  The big challenge that I have faced with learning is how to find the right learning vehicle that helps me discover what I didn't already know in a short period of time.   If you struggle with this aspect, you should look at Cadence Rapid Adoption Kits (or, RAKs).

Rapid Adoption Kits from Cadence help engineers learn foundational aspects of Cadence tools and design and verification methodologies using a "DIY" approach.   Don't get me wrong, instructor-led, structured training programs work beautifully if you can invest the time and money. But there is always demand for learning something simply and quickly in some corner of the world. 

Today, we have made available eight RAKs focused to help our users learn various aspects of digital IP and SoC functional verification methodologies and tools.

The RAKs provide an introduction to state-of-the-art verification solutions, including the Universal Verification Methodology (UVM) -- based on the industry-standard UVM Reference Flow donated by Cadence -- for digital and mixed-signal verification using Incisive Enterprise Simulator and the Metric-Driven Verification (MDV) methodology. The flow also uses Incisive Enterprise Manager, and covers SoC verification techniques such as I/O connectivity checking using Incisive Enterprise Verifier and optimization of simulation performance for large SoCs.

The examples referenced in the Rapid Adoption Kit exercises are based on the Cadence SoC Verification Kit.  You can view presentations, app notes, videos and/or download the package that also contains lab exercises, relevant scripts and instructions.

Download your RAK today at http://support.cadence.com/raks.

Happy Learning!

Umer Yousafzai

The Art of Modeling in e


Verification is the art of modeling complex relationships and behaviors. Effective model creation requires that the verification engineer be driven by a curiosity to explore a design's functionality, anticipate how it ought to work, and understand what should be considered an error. The model must be focused and expressed as clearly as possible, as it transitions from a natural language to a machine-understandable artificial programming language. Ideally, the process should be aided by the modeling language itself.

In this article, we'll highlight such a modeling process - one that describes the structure and problem of the popular Sudoku puzzle. A Sudoku puzzle is a three-dimensional problem, accompanied by a set of rules that actually define its full solution space.

Defining the data structure is the first step in our modeling process. The playing field consists of a grid of exactly N by N fields, where N may be an arbitrary integer (later constrained to be a square number).

The rules of the game are defined over lines, columns, and boxes, each of which contains a set of symbols. The size of each line, column, and box is N; therefore, we need N different symbols, which are shared across the playing field. We will have N lines, N columns, and N boxes.

The actual rules are that each line, column, and box contains every symbol exactly once. Duplicates and omissions are not allowed.

First, we want to create a configurable list of symbols. In e, we do this by creating a list and constraining that list properly:

symbols_l: list of uint(bits: 32);
keep SYMBOLS_L is for each in symbols_l { it == index + 1; };

To represent a set of N lines with N elements, we declare a two-dimensional matrix and ensure that this matrix has lines with one of each of the defined elements:

matrix_lin: list of list of uint(bits: 32);
keep MATRIX_LINES_C is for each in matrix_lin { it.is_a_permutation( symbols_l ); };

Now we do the same thing for the columns:

matrix_col: list of list of uint(bits: 32);
keep MATRIX_COLUMNS_C is for each in matrix_col { it.is_a_permutation( symbols_l ); };

And we'll do the same thing for the boxes.

matrix_box: list of list of uint(bits: 32);
keep MATRIX_BOX_C is for each in matrix_box { it.is_a_permutation( symbols_l ); };

Now we constrain the first dimension of each matrix to ensure that we are generating the right number of lines, columns, and boxes:

keep MATRIX_SIZES_C is all of {
  matrix_lin.size() == symbols_l.size();
  matrix_col.size() == symbols_l.size();
  matrix_box.size() == symbols_l.size();
};

The only thing now left to do is to connect the three different fields together:

keep CONNECT_LINE_COLUMN_C is
  for each (line) using index (i_y) in matrix_lin {
    for each (x) using index (i_x) in line {
      matrix_lin[i_y][i_x] == matrix_col[i_x][i_y];
    };
  };

Connecting the boxes with the lines and columns requires some thinking. We already described and constrained all of the boxes; however, mapping the boxes to columns and lines requires some arithmetic. We must first determine the strides needed to identify the box boundaries within the line coordinates. This is done by calculating the square root of N, which we will call n_sqrt. In terms of mapping this to the line coordinates, this means that we will have a new box every n_sqrt elements:

n_sqrt: uint(bits: 32);
keep FIELD_SIZE_C is symbols_l.size() == n_sqrt*n_sqrt;

Let's assume N := 9 and n_sqrt := 3

Line 0                  Line 1                  Line 2
line[0] == box[0][0]    line[0] == box[0][3]    line[0] == box[0][6]
line[1] == box[0][1]    line[1] == box[0][4]    line[1] == box[0][7]
line[2] == box[0][2]    line[2] == box[0][5]    line[2] == box[0][8]
line[3] == box[1][0]    line[3] == box[1][3]    line[3] == box[1][6]
line[4] == box[1][1]    line[4] == box[1][4]    line[4] == box[1][7]
line[5] == box[1][2]    line[5] == box[1][5]    line[5] == box[1][8]
line[6] == box[2][0]    line[6] == box[2][3]    line[6] == box[2][6]
line[7] == box[2][1]    line[7] == box[2][4]    line[7] == box[2][7]
line[8] == box[2][2]    line[8] == box[2][5]    line[8] == box[2][8]

Line 3                  Line 4                  Line 5
line[0] == box[3][0]    line[0] == box[3][3]    line[0] == box[3][6]
line[1] == box[3][1]    line[1] == box[3][4]    line[1] == box[3][7]
line[2] == box[3][2]    line[2] == box[3][5]    line[2] == box[3][8]
line[3] == box[4][0]    line[3] == box[4][3]    line[3] == box[4][6]
line[4] == box[4][1]    line[4] == box[4][4]    line[4] == box[4][7]
line[5] == box[4][2]    line[5] == box[4][5]    line[5] == box[4][8]
line[6] == box[5][0]    line[6] == box[5][3]    line[6] == box[5][6]
line[7] == box[5][1]    line[7] == box[5][4]    line[7] == box[5][7]
line[8] == box[5][2]    line[8] == box[5][5]    line[8] == box[5][8]

Line 6                  Line 7                  Line 8
line[0] == box[6][0]    line[0] == box[6][3]    line[0] == box[6][6]
line[1] == box[6][1]    line[1] == box[6][4]    line[1] == box[6][7]
line[2] == box[6][2]    line[2] == box[6][5]    line[2] == box[6][8]
line[3] == box[7][0]    line[3] == box[7][3]    line[3] == box[7][6]
line[4] == box[7][1]    line[4] == box[7][4]    line[4] == box[7][7]
line[5] == box[7][2]    line[5] == box[7][5]    line[5] == box[7][8]
line[6] == box[8][0]    line[6] == box[8][3]    line[6] == box[8][6]
line[7] == box[8][1]    line[7] == box[8][4]    line[7] == box[8][7]
line[8] == box[8][2]    line[8] == box[8][5]    line[8] == box[8][8]

This reveals the pattern:

matrix_lin[i_y][i_x] == matrix_box[((i_y/3)%3)*3 + (i_x/3)][((i_y%3)*3) + (i_x%3)]

The generalized mapping constraint would hence be:

keep CONNECT_LINE_BOX_C is
  for each (line) using index (i_y) in matrix_lin {
    for each (x) using index (i_x) in line {
      matrix_lin[i_y][i_x] == matrix_box
            [((i_y/n_sqrt)%n_sqrt)*n_sqrt + i_x/n_sqrt]   // box index
            [(i_y%n_sqrt)*n_sqrt + i_x%n_sqrt];           // element index
    };
  };
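Index arithmetic like this is easy to get wrong, so it is worth a sanity check outside of e. The following Python sketch (an illustration only; the function name box_coords is mine, not part of the environment) applies the same integer arithmetic and confirms that, for any n_sqrt, every (box, element) coordinate is hit exactly once, i.e., the mapping is a bijection:

```python
def box_coords(i_y, i_x, n_sqrt):
    """Same arithmetic as CONNECT_LINE_BOX_C, using integer division."""
    box  = ((i_y // n_sqrt) % n_sqrt) * n_sqrt + i_x // n_sqrt
    elem = (i_y % n_sqrt) * n_sqrt + i_x % n_sqrt
    return box, elem

for n_sqrt in (2, 3, 4):
    n = n_sqrt * n_sqrt
    seen = {box_coords(i_y, i_x, n_sqrt)
            for i_y in range(n) for i_x in range(n)}
    assert len(seen) == n * n  # every box cell is mapped exactly once

# Spot-check two entries against the 9x9 tables above:
print(box_coords(1, 4, 3))  # line 1, element 4 -> (1, 4), i.e. box[1][4]
print(box_coords(7, 8, 3))  # line 7, element 8 -> (8, 5), i.e. box[8][5]
```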

As you can see, e lets you describe data structures, and the rules that govern complex scenarios, in a concise way. Layering constraints is the key to creating stimulus, and as a verification engineer you will spend a good deal of your project time doing exactly that.

The above code is legal, valid e code. However, because the constraints are quite complex, you should run the generation linter after loading your e code and before generating your environment, to catch possible ICFS errors early. That, however, is an exercise for a different blog article.

 

Feel free to comment on the code and the process above. Perhaps you'll find a different, flexible way to describe the Sudoku game.

Daniel Bayer

 

Fujitsu Gets 3x Faster Regression with Incisive Simulator and Enterprise Manager


Verification regression consumes expensive compute resources and precious project time, so any speed-up has both a technical and a business impact. As announced July 17, Fujitsu was able to reduce both compute-resource usage and project time by using Cadence Incisive products and working closely with Cadence field resources to deploy them.  Results:  1.5x faster per test, 3x faster regression overall, and 30x storage reduction.  Wow.

The first step was to optimize each test.  Fujitsu upgraded to the Incisive 12.1 release (shipped June 2012) and applied a feature called "zlib".  This feature compresses the simulatable snapshot. The smaller executable is written faster, occupies less disk space, and loads faster.  Together with the performance improvements available out-of-the-box with the 12.1 release, each test was able to run 1.5x faster, on average.  The Cadence team expects further gains when Fujitsu moves to the latest 13.1 release.

The next step was to apply a technology called incremental elaboration.  The technology allows one or more elaborated "objects" to be created and then linked prior to simulation.  For an individual engineer, the technology means you can link the few blocks you change to the much larger subsystem or system without re-elaborating the unchanged code.  

Fujitsu employed the technology in a slightly different use model.  In regression, there is a matrix of tests and DUT configurations.  In Fujitsu's case, there were 190 tests but only 24 unique Standard Delay Format (SDF) test scenarios.  Before the incremental elaboration was applied, each test scenario was compiled and elaborated with each DUT configuration, resulting in 190 separate elaborations.  When the incremental elaboration was applied, the 24 SDF primary elaborations were linked to the appropriate DUT.  The resulting reduction in compute time for elaboration and storage for the snapshots was combined with the individual test improvements to yield 3x total regression time speed-up and 30x less disk storage.
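The elaboration savings are easy to see with a back-of-the-envelope sketch (Python, for illustration; the 190 and 24 figures come from the article, and the ratio counts elaborations only, which is why it is larger than the measured 3x total regression speed-up that also includes run time):

```python
tests = 190       # total regression tests (one DUT configuration each)
unique_sdf = 24   # unique SDF scenarios, i.e. primary elaborations needed

# Without incremental elaboration: one full elaboration per test.
full_elaborations = tests

# With incremental elaboration: elaborate each SDF scenario once,
# then cheaply link the matching primary into each test run.
incremental_elaborations = unique_sdf

ratio = full_elaborations / incremental_elaborations
print(f"{ratio:.1f}x fewer elaborations")  # -> 7.9x fewer elaborations
```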

The final step was automating this process.  The single-elaboration approach is slower but straightforward because each configuration is built from scratch.  Manually integrating the incremental "primaries" is challenging when there are 190 unique tests.

Fujitsu automated this process by applying the Incisive Enterprise Manager (IEM) in several ways.  First, the IEM test runner was able to automatically build the appropriate incremental primaries and link them for each test run.  Second, IEM was able to detect whether an individual test passed or failed, eliminating the "eye-ball" check of the log file or waveforms.  Finally, IEM was able to aggregate the results back to a verification plan (vplan) to show overall project-level progress for the entire regression.

What does the future hold?  Newer releases of Incisive Enterprise Simulator add more out-of-the-box performance improvements, black-boxing features, and the ability to link multiple primaries, all of which will make regression faster and automation even more important.  Keeping pace, the Incisive Enterprise Manager adds new analysis features to better automate the overall process.

Call us.  We can do the same for you.

=Adam Sherer, Incisive Product Management Director 

  
