Is it Time to Verify Your Chips in the Cloud? Part 3 of 3

Welcome back to our series on cloud verification solutions. This is the final part of a three-part blog—you can read part one here and part two here.

Now, for the moment you’ve all been waiting for: what can Cadence do for you in the realm of cloud verification? Are you ready?

In case you saw that list of qualities one should look for in a cloud partner in the last part of this blog, and wondered who ticks all those boxes—Cadence does. Tapping into 20 years of hosting experience, Cadence has just announced Cadence Cloud, a portfolio of solutions that meet the needs of companies large and small.  The first of these is the Cloud-Hosted Design Solution—a fully-managed offering that provides access to the full suite of Cadence software products. This offering is an EDA-as-a-Service solution that can help you—no matter who you are—meet your variable compute, time-to-market, and cost requirements without the difficulty of managing the cloud environment yourself.

If you’re concerned about the security of doing all your verification work in the cloud, don’t worry: we were concerned about that, too. Our IaaS partners meet global security standards like:

· FIPS 140-2 Level 2

· DoD Security Requirements

· ISO 27001, 27017, and 27018

· …and more!

In addition, the Cloud-Hosted Design Solution takes your usual infrastructure-as-a-Service (IaaS) security to the next level, with additional security apps, practices, and testing.

The next solution in the portfolio is Palladium Cloud. This gives users access to the Cadence Palladium emulation platform. Seasoned Palladium users who already have on-site hardware can use it when they need a little extra emulation capacity for their projects. New users find it great for getting instant access to emulation tech. Palladium Cloud supports the full range of emulation use models.

If you don’t like the idea of Cadence managing your environment for you, then the third solution in the Cadence Cloud portfolio is for you—the Cloud Passport model. With Passport, you can choose from a list of cloud-ready tools and manage your own environment using any of the top IaaS vendors.

Now, when you combine the scalability, security, and flexibility of Cadence Cloud with Xcelium’s amazing simulation technology, you get a radical increase in verification productivity that was inconceivable before.

If you’re worried about time to market—and who isn’t, really—Cadence Cloud is here for you. Verification is the biggest time-sink in the development process, so shrinking it should be your first concern—and it’s exactly there that Cadence Cloud packs the most power. Bringing together Cadence’s twenty-plus years of environment-management experience and a wide range of cloud-ready and cloud-optimized products, Cadence Cloud is ready to help you address your needs.

So, the question remains: are you ready?


Veriest to Host Verification Meetup in Serbia Featuring Specman Macros

Veriest, a member of the Cadence Verification Alliance, is holding a series of Meetups in Serbia to serve the growing technology community. The December 12th event will feature a session on Specman Elite macros and how they help with reusability and maintainability.

Veriest is a long-time Verification Alliance member, and recently co-authored a DVCon paper with Valens on the same topic: "Learn How Valens Uses Specman Macros to Automate Configuration of Verification Environments".

See the flyer below for information on the Serbian Meetup. Learn more about the event here.

Tales From DAC: How Syntiant Went From Zero to Tapeout in Six Months

Here’s something to chew on:

Syntiant is an AI startup involved in deep learning technology and semiconductor design. Their goal is to create exceptionally low-power designs for always-on devices, like those used in speech detection.

Syntiant went from empty air to tapeout in six months.

That sounds hard to believe, but it’s true. They got their first venture-capital funding in October of 2017, taped out initial test chips in December, hit first product-chip tapeout in March, and formally announced their company in May. This kind of timeline is completely unheard of—but thanks to Cadence technology, Syntiant’s scintillating success story—told at the Cadence Theater at DAC 2018—may soon be more common than you think.

What do you need to start a semiconductor company? Lots of things, obviously—tools, data center access, EDA experts—the whole nine yards. How does a fledgling company like Syntiant gain access to all of these things? Building your own data center is expensive, and finding qualified EDA experts isn’t exactly easy—never mind purchasing all those tool licenses.

Thanks to Cadence, though, Syntiant only needed one thing: the Cadence Cloud.

Syntiant wanted speed. They wanted instant ramp-up, fast releases, a quick transition to SoC development, and—above all—a fast tapeout to silicon. The Cadence Cloud-Hosted Design Solution got them all of those things, and it can get you all of those things, too. Cadence Cloud-Hosted is ready to go right out of the box, and you can use it while connected via VPN to your secure chamber. If you’re not sure what you need yet, there’s the DEMO chamber, where you can familiarize yourself with the design environment and any of the Cadence tools. Then there’s the POC (proof-of-concept) chamber, where you test out a chamber created to your unique design specifications and tool requirements to ensure you purchase what’s right for you.

Cadence Cloud-Hosted allowed Syntiant to easily produce their test chips with Virtuoso and use SystemC for their system modeling needs. They ended up using SystemC to build their full system model, taking advantage of SystemC’s object-oriented C++ class library. On top of that, Cadence Cloud-Hosted also allowed their source code to be co-compiled into Python for use with TensorFlow.

The Cadence Cloud-Hosted Design Solution changed everything for Syntiant. The Virtuoso and Stratus environment let them ramp up quickly, understand the technology, make sound decisions and build up to substantial models in record time. On top of that, using the Cadence Cloud-Hosted Design Solution gave them access to Cadence AEs for assistance.

Six months to tapeout is no laughing matter. If you’re done joking around with inefficient flows, read more on the Cadence Cloud here.

App Note Spotlight: Streamline Your SystemVerilog Code, Part III - SystemVerilog Data Structures

Welcome back to the third installment of a special multi-part edition of the App Note Spotlight, where we’ll continue highlighting an interesting app note that you may have overlooked—Simulation Performance Coding Guidelines for SystemVerilog. This app note overviews all sorts of coding guidelines and helpful tips to help optimize your SystemVerilog code’s performance. These strategies aren’t specific to just the Xcelium Parallel Simulator—in fact, they’ll help you no matter what simulator you’re using.

Today, we’ll talk about data structures, and how to make sure you’re using the best-optimized one for your use case.

1) Use static arrays instead of dynamic arrays when the size is relatively constant.

SystemVerilog has a couple of dynamic data structures, and static counterparts for each. The dynamic structures are heap-managed objects, so every access carries a certain amount of allocation and bookkeeping overhead. Static data structures don’t suffer from this issue, so for cases where the size is reasonably fixed, static arrays will perform better than their dynamic counterparts.
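
A minimal sketch of the difference (fixed_buf and dyn_buf are illustrative names):

module tb;
  int fixed_buf [256];   // static array: size fixed at elaboration, no heap management
  int dyn_buf [];        // dynamic array: a heap-managed object, sized at run time

  initial begin
    dyn_buf = new[256];  // run-time allocation overhead
    fixed_buf[0] = 1;    // direct access, no indirection
    dyn_buf[0]   = 1;
  end
endmodule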

2) Use associative arrays when you need to do lookup or random insertion/deletion

Associative arrays have more efficient lookup than other data structures. Often implemented using a tree, they have a lookup complexity of O(log n). This is much, much faster than a queue or array, which have a linear lookup complexity, O(n).

It’s not all fun and games for associative arrays, though. Since they’re implemented as trees, adding or removing an element from the front or back isn’t as simple a process; while queues and arrays can access those positions in constant time, O(1), associative arrays take O(log n) to add or remove an element at any point.

So—if your use case requires a lot of random lookup, and you won’t be inserting or deleting things from the front or back specifically all that often, consider an associative array.
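
A small sketch of that use case (scoreboard and the key values are illustrative):

module tb;
  int scoreboard [bit [31:0]];   // associative array with a fixed-size integral key

  initial begin
    scoreboard[32'hDEAD_BEEF] = 7;          // random insertion: O(log n)
    if (scoreboard.exists(32'hDEAD_BEEF))   // lookup: O(log n) rather than O(n)
      $display("found: %0d", scoreboard[32'hDEAD_BEEF]);
    scoreboard.delete(32'hDEAD_BEEF);       // random deletion: O(log n)
  end
endmodule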

3) Choose queues when insertions and deletions are mostly at the front or back

Like the above tip, if you only care about adding to the front or back of your structure, a queue is for you. Not only does it access the front or back of the queue in constant time—it can also access an element at any index in constant time, too. This makes it the absolute fastest option for that use case.
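
A minimal illustration of those constant-time operations (q is an illustrative name):

module tb;
  int q [$];   // queue of int

  initial begin
    q.push_back(2);           // O(1) insertion at the back
    q.push_front(1);          // O(1) insertion at the front
    $display("%0d", q[1]);    // O(1) access at any index
    void'(q.pop_front());     // O(1) deletion from the front
  end
endmodule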

4) Use built-in array functions when needed

This is a fairly simple one—don’t reinvent the wheel on your data structure’s functions. If you’re using a SystemVerilog standard data structure, there’s a good chance whatever operation you’re trying to perform can be done through a built-in function. This extends to querying functions, locator methods, ordering methods, and reduction methods.
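
For instance, each of those method families is a one-liner (data and idx are illustrative names):

module tb;
  int data [$] = '{4, 1, 3, 1};

  initial begin
    int idx [$];
    $display("size = %0d", data.size());      // querying function
    $display("sum  = %0d", data.sum());       // reduction method
    idx = data.find_index with (item == 1);   // locator method
    data.sort();                              // ordering method
  end
endmodule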

5) Don’t use the wildcard (*) index type for associative arrays

You’re allowed to use the wildcard (*) to make your key a generic integral type; however, this isn’t a good idea if you’re looking for maximum efficiency. Since the wildcard allows you to use any key type, whenever an item is stored in the associative array, the simulator uses a dynamic key type to account for the uncertainty. This has quite a bit of overhead versus a statically sized key. Likewise, the simulator must dereference the dynamic keys to check against the input when you’re doing a lookup or search, and this adds overhead as well.
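
A side-by-side sketch of the two key styles (slow_map and fast_map are illustrative names):

module tb;
  int slow_map [*];            // wildcard index: dynamically sized keys, extra overhead
  int fast_map [bit [15:0]];   // statically sized key: cheaper storage and lookup

  initial begin
    slow_map[16'h00FF] = 1;    // works, but pays the dynamic-key cost
    fast_map[16'h00FF] = 1;    // preferred when the key range is known
  end
endmodule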

That’s all we have for today—check back soon for the next installment!

Is It Time to Verify Your Chips in the Cloud? Part 2 of 3

Welcome back to our series on cloud verification solutions. This is part two of a three-part blog—you can read part one here.

The high-performance computing (HPC) market continues to grow. Analysts say that the HPC market will reach almost $11 billion by 2020—that’s an annual growth rate of almost 20%. Cloud providers and other related companies are putting more and more resources into building solutions for this rapidly growing HPC market, which is of great interest to the EDA community.

All of this talk is leading to action—cloud providers are increasing their attention to HPC, which in turn helps create an EDA-appropriate environment for users in the cloud. It’s also a nice confidence boost to the verification and semiconductor design team—it shows them that their needs are being addressed, and that there may be better options for them to do their future work in the cloud.

Now, if you’re in verification—as this is the functional verification blog—how can the cloud help you? There are a couple of important things to keep in mind as you do your research.

Make sure you’re fully aware of your long-term needs as well as your immediate needs before you partner with an EDA vendor. Just because something works for you now doesn’t mean it’ll work for you in the future. Companies and goals evolve; make sure you’re not getting stuck with subpar tools because you had simple needs when you wrote up your contracts.

Another thing to be aware of is a given provider’s expertise with variable computing requirements. You don’t want to be a new company’s guinea pig with something this vital to your workflow. Keep an eye out for:

1. Awareness of the best practices associated with creating a secure cloud environment

2. Products that are validated as cloud-ready

3. Experience selecting compute and storage instances in the cloud using EDA workload data

4. The ability to facilitate cloud orchestration

Beyond those things, make sure that your prospective EDA partner has plans to improve their cloud solutions, and to create new cloud solutions, in the future. You want a partner who will take this as seriously as you take it—don’t settle for anything less.

Now, you may be wondering where this is all going.

Tune in next time for our thrilling finale!

UVM-ML: Managers’ Freedom of Choice

Freedom of choice is a term we hear a lot, especially in the last 10 years. It is defined on Wikipedia as “an individual's opportunity and autonomy to perform an action selected from at least two available options…”.

Is having many choices always a good thing? Well, usually it is: who would not want to live in a world where they have options and can make choices? However, there are also downsides to having multiple options. In his TED talk, The Paradox of Choice, Barry Schwartz discusses the negative aspects of having too many choices.

For example, in the healthcare world, your physician might in some cases present you with a few options and let you make the medical choice for yourself. Is that necessarily a good thing? Do you always feel you have the tools to make this choice?

Freedom of choice is discussed in many fields, such as law, economics, and well-being. But what does freedom of choice mean in the context of verification? Among other things, it is the freedom to choose the right verification language for your project.

The world today is much more complex than it was a few years ago. Acquisitions, mergers, remote sites, and purchased third-party VIPs can all create challenging situations, especially for the managers who need to make choices.

Let’s talk about two common scenarios, integrating existing UVCs and choosing a language for new projects.

Integrating Existing UVCs

This is a very common scenario as a result of acquisitions. What would you do if you needed an existing UVM-SV UVC to interact with an existing UVM-e UVC? Would you rewrite one of them in the other language? Naturally, rewriting a UVC is a huge effort.

Fortunately, with UVM-ML you can reuse these existing UVCs and have them co-exist and communicate with the right topology (discussed in the following sections). As a manager, this saves you the effort of rewriting a UVC, and if you want each team to continue working with the language it is used to, UVM-ML lets you do exactly that.

Choosing a Language for New Projects  

You might find yourself in a similar situation even for a new project; for example, if you have two teams, each used to working with a different language. Here, you can either select the best language for all the teams or let each team continue working with the language it is used to. In principle, the former option is preferred. First, regardless of the power of UVM-ML, it is always easier to have everything written in the same language. Second, you would prefer having all your people work with the language that provides the best quality and productivity.

So, what is the best verification language?

Anyone who has worked with both e and SV knows from experience that e is superior for several reasons, but that is a subject for a different blog. It is true that Xcelium works around many limitations of the SystemVerilog LRM; still, using e as the verification language is easier and more effective.

So, the best option would be to have everyone work with e; however, that is not always possible. In reality, there may be other factors and considerations. For example, a team might push to continue with the language it is used to. In such a situation, as a manager you might decide to let each team select its preferred language. This is the flexibility you get with UVM-ML: you have a choice, and you can let each team make its own.

So what is UVM-ML exactly?

The UVM-ML library enables you to connect different UVCs written in UVM-e, UVM-SV, and UVM-SC. While we in R&D know it serves as the glue in several leading companies, it is always exciting to hear customers appreciate its capabilities. At DVCon US 2018, HP Enterprise won best poster with "Is e still relevant?". Beyond the fact that the poster's answer is "yes", in the poster (and its related paper) HP Enterprise describes UVM-ML as the enabler both for reusing existing projects written in different languages and for selecting the right language for each project (frankly speaking, we could not have said it better…).

Figure 1: HP Enterprise poster that won best poster at DVCon US 2018

The poster and paper include case studies and lessons learned from projects within the Silicon Design Lab (SDL) of HP Enterprise. They write: “UVM-ML has a proven track record within SDL. It has been used for several years, spanning numerous projects… Through these projects, SDL has been able to take advantage of many of the advanced testing features available in Specman/e while utilizing a variety of UVMSV content from internally developed VCs to externally purchased Verification IPs.”

UVM-ML was developed with AMD as an open-source library available on the Accellera site. Since Incisive 15.2, it has also been provided within Incisive and Xcelium. We encourage our customers to use the version provided within Xcelium, since it has enhanced integration with Xcelium; however, there are some cases in which customers choose to use the open-source version (you can always consult the UVM-ML support team: support_uvm_ml@cadence.com).

UVM-ML supports multiple topologies according to the user environment. In a side-by-side hierarchy (parallel trees), the environment contains multiple tops, each top containing the components of a single language. In a unified hierarchy (single tree), each component is instantiated at its logical location in the hierarchy. For simplicity, the two examples below contain two languages; however, they can be extended to three.

Figure 2: Side-by-side example

Figure 3: Unified hierarchy example

How does the magic work?

UVM-ML contains an inner backplane, services, and more, but the main part relevant from the user’s point of view is the adapter, which provides an API for connecting your UVC to the UVM-ML library. The API of each adapter is provided in the native language of the framework being connected, meaning there is a UVM-e adapter, a UVM-SV adapter, and a UVM-SC adapter. This means that when you connect your UVC to the library, you do it in the language you are familiar with.
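
As a rough sketch, here is what the SystemVerilog side of a side-by-side (multi-top) setup can look like, based on the Accellera UVM-ML-OA examples (the top names sv_env and top.e are hypothetical, and exact package and call names can differ between UVM-ML versions):

import uvm_pkg::*;
import uvm_ml::*;   // SV adapter package from the UVM-ML-OA library

module topmodule;
  initial begin
    string tops[2];
    tops[0] = "SV:sv_env";      // SV top component (hypothetical name)
    tops[1] = "e:top.e";        // e top file (hypothetical name)
    uvm_ml_run_test(tops, "");  // ML-aware replacement for run_test()
  end
endmodule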

 

Figure 4: UVM-ML inner blocks

There is a lot of material out there about UVM-ML. You can read more about it in the reference and user manuals on cdnshelp. If you want to get started quickly, we recommend the following blogs:

To wrap up, enjoy our multi-choice world of verification!

Orit Kirshenberg

Specman team

Renesas Brings Their Legacy Testbench Up to Speed Using the Cadence Verification Suite

Recently, Renesas Electronics Corporation faced a challenge. They were developing a new data conversion block, one that included an AHB bus bridge, which would be attached to a pre-existing DMA IP core. There was also a complicated finite state machine involved in this new block. Renesas didn’t have a whole lot of time on their hands—they needed a quick turnaround, but had only a limited number of engineers to accomplish it with. Because of that, they wanted to recycle a few in-house IPs, and the verification of those IPs, in their new project, despite having a team that wasn’t involved in those previous endeavors. Beyond that, Renesas also wanted to upgrade their verification methods with top-of-the-line tech. They wanted this verification environment to be a model which other IP core design projects could follow.

Quite a tall order—but it was easy with Cadence’s help.

Using Cadence tools and assistance from Cadence application engineers, Renesas was able to use Specman e’s native scalability to keep their legacy testbench, even though there were over 45 component files and loads of internal connections across different components. After that, Renesas used the Cadence Xcelium Parallel Logic Simulator alongside the Cadence Indago Debug Analyzer App, taking advantage of the e language to help them build complex and scalable testbenches.

“To make the best use of the existing IP cores, renewing the legacy verification environment with the most advanced tools available proved to be an effective approach. Positive and collaborative relationships with Cadence played a key role to achieve it,” said Takahiro Ikenobe, director of the peripheral circuit design department at Renesas Electronics.

Using Cadence products gave Renesas a 77% savings in labor while still meeting Renesas’s standard of high quality. With the vManager platform, Renesas was able to reach 100% combined coverage, allowing them to completely reach verification signoff. Using Cadence’s tools to revitalize their old verification environment was a resounding success, and greatly helped Renesas make the most of their existing work—and we’re looking forward to further endeavors with them in the future.

To read Renesas's full story, check here.

Tales from DAC: How Altia Systems Used Xcelium to Bring New Life to Virtual Meetings

We’re going to take a wild guess and say you’ve been in a meeting before. Maybe it was a virtual meeting—but those never really feel the same in person, do they? Attending a virtual meeting can feel cold and impersonal, especially since you can barely see everyone else. Everyone is there except you—and even then, saying you’re “there” is a bit of a stretch.

Nowadays, though, there’s technology to remedy this problem. Panoramic video systems fill this need by providing 180 degrees of video field range—and the newest offering from Altia Systems has so much more than just that: it is the world’s first 4K Plug-and-Play USB camera system, the PanaCast 2.  It delivers ultra-fast video with a natural human perspective, and in two years, has grown to over 1400 customers in 41 countries.

This thing has some serious specs: 300 million pixels, more than eight processors, 30+ patents behind its creation, and three imaging sensors—all for under $1000 a unit. Each camera has an angle of overlap that allows it to cover a 180-degree range, and those views are stitched together with an image stitching algorithm that pieces together the final video in 5 milliseconds. It’s UVC compliant, works right after plug-in, and is compatible with cloud or on-premises conference services.

Figure 1: The specifications of the PanaCast 2

So: how do you build something like this? Well, the parts are certainly complicated enough. You need a built-in AI for anti-aliasing, minimal distortion and a natural perspective to ensure a comfortable image, and everyone needs to be in the conversation—you’ve got to be able to see everything, and that means 100% space utilization.

That’s a serious task for the verification engineers—but luckily for them, Altia Systems used Xcelium Parallel Simulator for this, and got it done fast.

The PanaCast 2 required an exceptionally complex testbench—with 3-6 image sensor models, multiple MIPI interfaces, and DDR3/DDR4 models. On top of that, each frame of video is quite large, at 2-8 megapixels. The design also contained a complex mix of Verilog, SystemVerilog, and netlists, alongside several gate-level macros within a Xilinx FPGA.

Those engineers had quite the job ahead of them.

With Incisive, it would take 5-6 hours to simulate each frame; 19 hours if we’re talking about the 3-4 frames needed for video verification. On top of that, exported waveform files were 50 GB, creating a real strain on data storage.

Altia Systems worked with Cadence AEs to help pare down their load, bringing those waveform files down to between 1 and 10 GB and achieving a 1.25x speedup on their single-core simulations right out of the box using Xcelium.

So: what’s next for Altia Systems and their journey with Xcelium? They expect that their performance needs will vastly increase. Frame sizes are going to go from 2 megapixels to somewhere between 13 and 16; plus, there’ll be new features and DSP cores for the next generation of cameras.

To help, they plan to use some of the sweet new features Xcelium brings to the table: Save/restart functionality will give them checkpointing, to help speed up their debug turnaround. Multi-threading and multi-core waveform dumping will give them extra info to look at. Lastly, Xcelium’s famous multi-core speedup will be put to the test.

Altia Systems has only scratched the surface of what Xcelium can do for them with the PanaCast 2—as Altia Systems moves on to their next endeavors, the unmatched power and speed of Xcelium can only send them to new heights.

Verification of ML IP and Specman—Our Hackathon Project

If you are lucky and your company spends a few working days each year on a Hackathon, you know that it is usually a lot of fun. The latest 2018 Hackathon at Cadence was all about Machine Learning. We in Specman R&D debated a bit about how to approach the topic, since Machine Learning means a lot of different things in our industry. Take a look at the following interesting article, Where ML works best, in which Anirudh Devgan, president of Cadence, describes a few main areas.

When you talk about verification and Machine Learning, two main things usually come to mind:

  1. Using Machine Learning in Verification: We have encountered multiple teams that use Specman to implement machine-learning techniques to optimize their regressions. We have been getting requests for more notifications and hooks, especially around coverage, and some of these requests are already on our roadmap.
  2. Verification of Machine Learning IPs: For any team working on a machine-learning IP or chip, verification is a critical and challenging task, and if done right, it can give you a leading edge. It usually involves a significantly large dataset, which means you need to carefully decide which samples to test, pick the good corner cases, and get to sufficient coverage fast.

It is a well-known fact that Specman is the best verification engine: it has the best constraint solver and the easiest aids to define a coverage model. Therefore, we decided to challenge our tool and see it in action in a Machine Learning (ML) environment. This blog describes the small Hackathon project that a few of us from the Specman team worked on. The project presents Specman's strengths in an ML IP verification environment.

What model are we verifying?

For the Hackathon project, we decided to act as the verification team of a company that produces an IP implementing a weighted, trained Neural Network (NN). We took a black-box verification approach, meaning we were not verifying the NN internals but rather its outputs. Therefore, the exact usage of the NN was not that important to us, but since image recognition is so useful and popular, we decided to go with it. In addition, we wanted to take advantage of the knowledge of the Machine Learning experts at Cadence.

These experts provided us with a Convolutional NN (CNN) comprising 1 input layer, 5 hidden layers, and 1 fully connected layer, written in Python on top of TensorFlow. In addition, they provided us with the Fashion-MNIST dataset, containing small images of 10 different types of clothing items (shirt, dress, etc.). In our Hackathon project, the goal was for our IP to identify these clothing items. To make it more interesting, we decided that our IP would be used to identify clothing items in an image constructed of 4 different clothing items (for example, Shirt-Dress-Sandals-Trouser). Now, three things needed to happen:

  1. The Python SW model should be trained with a set of images. Our Machine Learning experts divided the dataset of images into a training set and a classification set and started training the CNN with the training set.
  2. The SW model should be converted into an HDL model, definitely a challenging and interesting task. However, since this is a time-consuming task and our main focus is on verification, we skipped this stage and decided to use some “dummy” HDL.
  3. The HDL model should be verified; this verification is the focus of our Hackathon project.

The verification

In our little project, we want to perform black-box verification of the HDL model. This means that, without knowing its inner implementation and algorithms, we just want to verify whether it does what it was designed to do: in our case, identify clothing items in an image of 4 items. The Python SW model is used as the reference model. In the real world there would be differences between what the HDL model can do compared to the Python SW model, but this only makes the verification effort more important and challenging.

What kind of images do we want to use for the verification?

Now, we have what we previously called the “classification dataset” of images, which we want to use to verify that our trained dummy HDL model can identify items just as our Python SW model does. As decided earlier, we need to identify images composed of 4 different clothing items, and we also need to verify all 10 possible clothing types. However, there are other important factors as well, such as the position of each item in the image, and whether there are specific types that are important to verify next to one another. This calls for defining a model with a few constraints, which is quite natural to do in e. Take the following constraints, for example, in which ‘items’ is a list containing 4 cloth types:

// each combined image should have 4 different cloth types
keep soft items.all_different(it.cloth_type);

// have many coats
keep soft cloth_type == select {
    30 : Coat;
    70 : others;
};

// never have an image containing both dress and skirt
keep not (items.cloth_type.has(it == Dress) and items.cloth_type.has(it == Skirt));

 

How will we construct this image of 4 different items?

Remember, we have a dataset of single clothing items, and we want to use Specman generation to pick the specific 4 small images and their positions, but “who” will construct the full image? Logically, the right thing to do, instead of reinventing the wheel, is to use some existing Python API. We found a suitable Python algorithm, ComputerVision (thanks again to our ML guys). Now we were left with the question of how to activate it from Specman. Python has a C interface, and Specman has two flavors of C interface: a basic one, and an advanced one called the FLI. The basic C interface is easy to use and is sufficient for our project, so we decided to use it. We declared the method that constructs an image out of 4 small images in e as a C routine, and in the C implementation we wrote some code to call the Python API. Once we had found the Python algorithm, it took one of our team members a few hours to get it working.

How do we activate the reference model?

Our CNN was written in Python using TensorFlow. Since we were already using the Python API to construct the image, we just needed to do the same to call the Python SW model. Calling Python from e the second time was obviously easier and faster.

How would we know which images and combinations were tested?

Naturally, we want to know whether we have covered all types, which combinations were covered, and what we did not cover, since there are a few different interesting combinations. The fastest way for us to do this was to use a Defined As Macro to generate the different coverage items. Using a coverage cross item would have been a more elegant and robust solution, but we assumed that using the macro was the fastest way.

Running the verification

Now that we had all the parts working, we only needed to push the simulation “play” button, and everything came to life in the following loop:

  1. Specman picks a combination of 4 items according to the defined constraints.
  2. Specman calls the ComputerVision API to construct a new image.
  3. Specman calls the Python SW model with this new constructed image and gets a result of 4 items identified.
  4. Specman sends this image also to the HDL Model and gets a result (hmmm… not really, as we said at the beginning, we used some dummy HDL model that usually produces the same result as the Python SW model, except in a few odd cases).
  5. Specman “checks” the two results and compares them.

 

When the test ended, we were left with the results of the passed and failed cases and our coverage model. For those of you using TensorFlow in parallel mode: yes, we might have been able to do things more productively in TensorFlow, but remember that this was a short project and we had time constraints…

What did the coverage report look like?

All 10 clothing types were covered.

For the different combinations, you can see that we never got the combination Dress-Sandal-Sneaker-T-shirt. Since we defined 20 as the “at_least” value for each combination, you can see that we got 4 hits of Dress-Sandal-T-shirt-Trouser (second line: 20%).

Summary

First things first: as a team, we had a lot of fun. The whole exercise was exciting and very different from what we usually do in our daily work. Moreover, it was very fulfilling to see Specman’s strengths in action in a relatively new domain. It is well known that Specman is the best verification engine, but as a tool it can especially shine in a domain that involves a massive amount of data, where you need to pick the most important pieces of data and monitor them. Furthermore, SW engineers may find these capabilities useful for validating their SW, and can use Specman as a standalone application without the simulator. More information will come later; stay tuned…

Orit Kirshenberg

Specman team

New Training Bytes Available Now: All About SystemVerilog Classes

If you’re leaving 2018 with the feeling that your SystemVerilog skills are lacking, don’t worry—there’s a new series of Cadence Training Bytes to help you hit the ground running in 2019. Here you’ll find eight new YouTube videos all about SystemVerilog classes.

You can find the first video here.

Here’s a quick table of contents:

SystemVerilog Classes 1: Basics

This video goes over the basics of what a SystemVerilog class is—how to create an instance of a class, what a class can and can’t contain, and what they can be used for. It also goes over how SystemVerilog classes compare to the C++ or Java classes you may be familiar with.
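
To give a feel for the basics, here’s a minimal sketch (the packet class and its members are illustrative names):

// A class bundles properties (data) and methods (behavior)
class packet;
  bit [7:0] addr;
  bit [7:0] data;

  function void print();
    $display("addr=%0h data=%0h", addr, data);
  endfunction
endclass

module tb;
  initial begin
    packet p;      // class handle, initially null
    p = new();     // construct the object
    p.addr = 8'h1A;
    p.print();
  end
endmodule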

SystemVerilog Classes 2: Static Members

Here, you can learn about the static properties and methods of a class—a property or method shared by all class instances. If you’ve ever wanted to access a member of a class you haven’t created an object for yet, the static keyword is here for you.
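
A small sketch of the idea, continuing the illustrative packet example:

class packet;
  static int count;   // one copy, shared by every instance

  function new();
    count++;          // each constructed packet bumps the shared counter
  endfunction

  static function int get_count();
    return count;
  endfunction
endclass

// packet::get_count() works even before any packet object exists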

SystemVerilog Classes 3: Aggregate Classes

An aggregate class is a class whose properties are themselves classes. It’s similar to module instantiation. An aggregate class lets you define a slightly different relationship than that of a parent and child class, since the member classes of an aggregate don’t derive anything from the aggregate itself.
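
A sketch of an aggregate, assuming illustrative header and payload classes:

class header;
  bit [7:0] addr;
endclass

class payload;
  bit [7:0] data [16];
endclass

// Aggregate class: its properties are themselves class objects
class packet;
  header  h;
  payload p;

  function new();
    h = new();   // the aggregate constructs its members
    p = new();
  endfunction
endclass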

SystemVerilog Classes 4: Inheritance

Inheritance in SystemVerilog is similar to what you may already know from other object-oriented programming languages. To inherit a parent’s properties into a child class, use the “extends” keyword in the class declaration. A child class can add more members to a parent’s class or override existing members in a parent class.
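
For example (illustrative names):

class packet;
  bit [7:0] addr;
  function void print();
    $display("addr=%0h", addr);
  endfunction
endclass

// The child inherits addr, adds parity, and overrides print()
class parity_packet extends packet;
  bit parity;
  function void print();
    $display("addr=%0h parity=%0b", addr, parity);
  endfunction
endclass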

SystemVerilog Classes 5: Polymorphism

You can only extend from one parent in SystemVerilog—how do you get around this? Polymorphism comes to the rescue! Using an array of base-class handles, you can dynamically select which subclass to use, without tedious per-type declarations.
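
A sketch reusing the illustrative packet/parity_packet classes from above:

module tb;
  initial begin
    packet        pkts [2];   // array of base-class handles
    packet        p0 = new();
    parity_packet p1 = new();
    pkts[0] = p0;
    pkts[1] = p1;             // a subclass object held through a base handle
    foreach (pkts[i])
      pkts[i].print();        // without 'virtual', both calls use packet::print
  end
endmodule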

SystemVerilog Classes 6: Virtual Classes and Methods

If you’re using polymorphism, you might be running into issues where a method called through a handle isn’t calling the right function along the inheritance line. Virtual classes and methods can help out with that—a virtual method is resolved according to the contents of the handle. For more information on that, check out this video.
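
The same illustrative example with virtual dispatch:

class packet;
  virtual function void print();   // resolved by the object, not the handle type
    $display("base packet");
  endfunction
endclass

class parity_packet extends packet;
  function void print();           // overrides the virtual method
    $display("parity packet");
  endfunction
endclass

module tb;
  initial begin
    parity_packet pp = new();
    packet p = pp;   // base handle pointing at a subclass object
    p.print();       // prints "parity packet" thanks to virtual dispatch
  end
endmodule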

SystemVerilog Classes 7: Class Randomization

If you want to randomize a class property, you can use rand or randc. Rand creates random content with uniform distribution, while randc is random-cyclic: it iterates through all values in random order, repeating none until every value has been used. Once you’ve declared a variable as rand or randc, you can call the randomize function, a built-in method native to all classes that cannot be redefined, which randomizes the data held by variables marked rand or randc.
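
A quick sketch of both qualifiers (illustrative names):

class packet;
  rand  bit [7:0] addr;   // uniformly distributed on each randomize()
  randc bit [3:0] kind;   // cycles through all 16 values before repeating any
endclass

module tb;
  initial begin
    packet p = new();
    repeat (4) begin
      if (!p.randomize())   // built-in method; it cannot be redefined
        $fatal(1, "randomize() failed");
      $display("addr=%0h kind=%0h", p.addr, p.kind);
    end
  end
endmodule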

SystemVerilog Classes 8: Class Constraints

If you don’t want a random value to be any value, you can use a constraint—this allows you to create a range in which random values can be generated. Constraint members are normal class members and can be inherited just like any other. Check out the video for more information on what constraints can and can’t do.
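
For example (illustrative names and ranges):

class packet;
  rand bit [7:0] addr;
  rand bit [7:0] len;

  // Constraint blocks are normal class members and are inherited like any other
  constraint legal_addr { addr inside {[8'h10 : 8'hEF]}; }
  constraint short_len  { len < 64; }
endclass

// The constraint set can also be tightened inline at the call site:
//   assert(p.randomize() with { len > 16; });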

There you have it—a selection of eight Training Bytes to get you started learning about SystemVerilog classes. To view other Training Bytes you might be interested in, check here.

Specman is Sweet – Bosch Sensortec's Story

Recently, Bosch Sensortec has been using Specman for their functional verification needs in their Inertial Measurement Unit, and they’re loving it.

Why is Specman so cool? Well, it implements the familiar UVM in e, which provides the tools and infrastructure to easily build extendable, maintainable, and reusable verification components. If you use Specman, you’ll see big productivity increases: for many common functions, Specman needs less code than other verification languages, speeding up your code writing. Specman is easy to use, and it’s very intuitive—your prior object-oriented programming language knowledge comes in handy here. Beyond that, if you are stuck on something, the excellent Cadence Support is just a click away, and will help you through any Specman-related difficulty you might face.

At Bosch Sensortec, it took only six short months to prepare and implement their first verification component using Specman. And that wasn’t a simple component, either—it was an interface eVC with agents, monitors, BFMs, drivers, sequencers, and coverage models. Using a different hardware verification language could make something that complicated a hassle—but with Specman, it’s easy.

Artemios Diakogiannis, a verification specialist at Bosch Sensortec, says that his favorite part of Specman is the flexibility—since Specman is aspect-oriented, it’s simple to extend components to add additional functionality.

So—what’s next at Bosch Sensortec? Artemios says he wants to try the reflection API, which provides an interface to program metadata. This would allow him to define user extensions to pre-existing languages, pushing Specman’s flexibility even further.

Bosch Sensortec used both functional and code coverage closure methods during verification, which helped ensure that they exercised all of the RTL code in their designs.

With another happy Specman user joining the ranks, maybe you should consider giving Specman a try, too. Unless, of course, you’re okay with using a less flexible, less efficient hardware testbench language.


Tales From DAC: Netspeed and the Cadence Interconnect Workbench Pair Up

$
0
0

Services like facial detection, efficient cloud server workload management, artificial intelligence, and image enhancement are all the rage these days; but creating a design to accommodate these needs can be incredibly taxing on your engineering resources. Luckily, Netspeed Systems is here to help.

The devices being designed nowadays are more complicated than ever, and the design requirements are more complex, too. Designers who want the highest performance, multi-core capabilities, mixed traffic support, and other features can no longer get them from CPUs or GPUs alone. Designers today need a mix of both, each for its strengths—they need heterogeneous computing.

Services like facial detection, efficient cloud server workload management, and image enhancement all call upon heterogeneous computing to meet their complex needs. Designers want to use GPUs for easily parallelized data and big data, while they also want CPUs for smaller, highly structured, non-parallelizable data. There are also issues with efficient memory management and maintaining cache coherency without compromising system-level quality of service.

This is where NetSpeed Gemini comes in.

Gemini is highly configurable—you can easily set your cache hierarchies and ensure caching and coherency participation. Its distributed architecture makes for lower latency and allows for floorplan-aware configurable cache hierarchies.

This is all pretty cool—but you still need software automation to help avoid deadlock in this complex computational platform. The burning question is: how do you know when you’re done with the verification? NetSpeed’s architectural design approach seeks to answer exactly that question. You begin with a specification of your architectural requirements; then, NetSpeed helps you weigh the different tradeoffs and explore the design space so you can find the best solution for your project. You can get design feedback at any step of the process. This way, you can create and reach concrete goals.

The NetSpeed platform has its own built-in simulator, called the Integrated Performance Simulator. This simulator—and its accompanying SystemC model—are great if you don’t have a solid grasp of your traffic requirements yet, and things are still a little abstract. But if you’re looking for something more precise, you want Verilog simulation—and the best way to get that is through the Cadence Interconnect Workbench (IWB). Cadence IWB gets you cycle-accurate performance analysis, protocol checking via VIP, data consistency checking via IVD, and loads more—and it’s easy to execute on Xcelium or Palladium XP II or Z1. You can get great graphs showing your workload-based traffic simulations, alongside other data analytics to help you identify and fix your performance bottlenecks.

Next-generation applications are driving us to next generation architectures. Devices have caches everywhere, and they’re all snooping each other—how can you expect to keep coherency in that kind of chaos? With next-generation performance analysis—the kind brought to you by Cadence IWB and NetSpeed.

For more information on NetSpeed Gemini, check here; for more information on the Cadence Interconnect Workbench, check here.

Adding a Patch Just in Time! — Or Can You Really Allow Yourself to Waste So Much Time?

One animation video - Patch Like The Wind - is worth a thousand words :)

If you don’t use Specman or don’t use Specman correctly, you spend most of your time waiting for compilation to finish.

One of the most frustrating (and common…) scenarios is when you know more or less what the fix should be (such as “wait an additional cycle before sending” or “the variable should be int and not uint”) and the fix can be done in a matter of minutes. However, you are forced to spend hours waiting for the compilation to end in order to analyze the results and decide whether you are satisfied with the fix. And since you usually don’t write exactly the right code the first time, you adjust your code – another matter of a few minutes – and then have a few more hours to wait for compilation.

Horrifying.

But not if you use Specman.

With Specman, you can fix a file loaded on top of the compiled environment, so you don’t need to wait hours for compilation. You create a small ‘patch’ file in which you implement the changes, and load this patch file on top of the compiled environment. Once you are happy with the changes you made, you move the fixed code to the relevant file(s) and compile. Loading one or two files instead of compiling the whole environment can save you hours, if not days, on every fix of your code.

If you want to save some more time, don’t run the test from the start: save the test before the interesting point (before sending the item, or when calling the checker), and dynamically load the fixed file.

The capability to modify the testbench with code loaded on top is also very helpful when it comes to getting fixes from other teams or companies. Instead of waiting for the VIP or any other tool provider to create a fixed version, they can send you one e file that you load on top. Even Specman itself, as it is written in e, can be extended with patches. No need to wait for an official hot fix - you can get a patch file from the Support team with the required fix.

Yes, this is no news to Specman users. In meetings with Specman users, when we ask “What is your favorite feature?”, one of the top answers is “the great patching capability”.

Next time you are asked “Are you still using Specman?”, you can reply “Sure, and are you still compiling?”

Come Join Us for "Deep Dive into the UVM Register Layer" - A Webinar From Doulos

Join us on September 14th for a free one-hour webinar on the finer aspects of the UVM register layer. We’ll be focusing on key aspects of the UVM Register Layer that can help you with your UVM modeling in ways you may not be aware of.

We’ll be covering the following topics:

  • How to use user-defined front doors and back doors to expand what the register layer can do
  • Understanding the role played by the predictor, and how to use it with the aforementioned user-defined front doors
  • Using register callbacks to help model quirky register behaviors, alongside the side-effects of register read/writes (a small sketch follows this list)
  • What changes you can or can’t make to UVM code while preserving the random stimulus generation.
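
To give a taste of the register-callback topic above, here is a hedged sketch using the standard uvm_reg_cbs base class from the UVM library; the clear-on-write behavior, class name, and register/field names are hypothetical, and the webinar's own examples will differ:

import uvm_pkg::*;
`include "uvm_macros.svh"

// Model a quirky side-effect: any write clears the mirrored field value
class clear_on_write_cbs extends uvm_reg_cbs;
  `uvm_object_utils(clear_on_write_cbs)

  function new(string name = "clear_on_write_cbs");
    super.new(name);
  endfunction

  virtual function void post_predict(input uvm_reg_field  fld,
                                     input uvm_reg_data_t previous,
                                     inout uvm_reg_data_t value,
                                     input uvm_predict_e  kind,
                                     input uvm_path_e     path,
                                     input uvm_reg_map    map);
    if (kind == UVM_PREDICT_WRITE) value = 0;
  endfunction
endclass

// Registration, e.g. in an env's connect_phase (regmodel.status.field is hypothetical):
//   clear_on_write_cbs cbs = clear_on_write_cbs::type_id::create("cbs");
//   uvm_reg_field_cb::add(regmodel.status.field, cbs);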

Combined, the information covered in these topics can make you a better user of the UVM register layer. Code examples shown during the webinar can all be run with our Xcelium Parallel Simulator.

Come join in!

For more information on this webinar, and for available times on September 14th, check out the link here.
