


Verification Management: The Path of Evolution

You have gigabytes of verification data—what does it all mean? Verification management is both the next hurdle and the next path in verification evolution.

By Rahul V. Shah, Director, ASIC Engineering Division, Sibridge Technologies and Darron May, Product Marketing Manager, Mentor Graphics


It is a universal truth that evolution is a never-ending progression. Whether biological or technological, evolution is all about overcoming the hurdles in the path of development. We evolve and cross one hurdle, soon face another, then evolve again. For example, new modes of transportation were invented to solve the travel-time problem, and now there is too much traffic. Whether in the air or on the freeway, congestion is just one of many hurdles we must still cross in transportation. The verification industry is no different.

“Verification effort increases exponentially with design complexity.” True as this statement is, there is another truth behind it: “Design complexity itself grows exponentially with each technological advance.” Exploding verification complexity and HVL-based random environments churn out so much verification data that it is impossible to track. Thus, verification management is both the next hurdle and the next path in verification evolution. Soon it will become one of the key factors in achieving first-spin silicon success.

In simpler times, when verification was predictable, 70‑90% of a typical ASIC was developed in-house with the remainder consisting of third-party IP blocks. Verification was accomplished with either purely directed test cases or a semi-random environment using Perl or some other scripting language. The complete verification team sat under one roof, encouraging a much higher flow of information between team members and their manager. Compared to what we face today, the complexity of these designs was much lower. Linking and tracking everything was fairly simple. Simple scripts instilled confidence that regression runs were clean, and well-reviewed, directed test plans and code coverage were good enough to sign off on verification.

Technological growth has meant more work for engineers, but the market is not willing to grant more time to tape-out. Verification teams must meet the same expectations, within the same schedule, on much more complex designs. To generate more data and exercise more scenarios in less time, new languages and concepts, such as constraint-driven randomization, have been introduced. Similarly, reusability has become an important component of both design and verification code.
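
For reference, the fragment below is a minimal SystemVerilog sketch of constraint-driven randomization; the packet_txn class, its fields, and the numeric bounds are hypothetical, chosen only to show how a few declarative constraints steer the randomizer toward legal, interesting stimulus.

  class packet_txn;
    rand bit [7:0] length;     // payload length in bytes
    rand bit [3:0] channel;    // destination channel
    rand bit       has_error;  // occasionally inject a corrupted packet

    // Constraints steer the randomizer toward legal, interesting stimulus.
    constraint c_length  { length inside {[4:64]}; }
    constraint c_channel { channel < 8; }
    constraint c_error   { has_error dist {0 := 95, 1 := 5}; }
  endclass

Each call to randomize() on such an object produces a new legal scenario, which is what makes it practical to exercise far more of the state space than hand-written directed tests could.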

Figure 1: The variety and volume of chip verification components that must be managed to reach verification closure continues to grow beyond the limits of manual methods.

However, all of this fuels an engine that generates more and more base code, which we want to reuse again and again, and every project gives birth to more reusable code. Additionally, because there is no scarcity of storage or processing power, it is possible to build complex libraries that keep growing from daily contributions of reusable base code. This is a good thing, as it leads toward a collection of reusable data, yet it also feeds a system that, if not managed properly, can turn all this data into a giant black box, misleading designers into reusing code that, while presumed to work correctly, can carry bugs into a design without their knowledge.

In sum, the issues confronting the verification industry today have less to do with technology and more to do with management (Figure 1). This reflects the character of the current design and verification environment, which features more third-party code, dispersed verification teams, vast volumes of data, and the need for automated verification management solutions. Fortunately, there are early indications that the EDA industry has already foreseen this and has started taking steps to help the verification industry evolve through verification management.

Higher Content of Third-Party Code

The current generation of highly complex chips contains a much higher number of third-party components than earlier generations. These can be design IP (DIP) as well as verification IP (VIP). The IP supplier can be another company or a different department within the same corporation. Although the supplier provides documentation, sample test cases, and a test environment with the IP, they probably know nothing about your environment.

If you know the history of a particular DIP, you can define how much verification is necessary for that design. Re-verifying silicon-proven DIP at the module level might not be a good use of verification time, yet ignoring integration issues when you combine silicon-proven DIP at the chip level can be a disaster. For example, there may be critical signal requirements that must be met for the DIP to work properly, such as a specific reset duration or the routing and handling of interrupt lines. These issues might not show up in module-level verification, where the environment is standalone, but once the DIP is integrated, the clock, reset, and interrupt logic is shared across the entire chip, so the DIP-specific requirements must be included.
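
One way to keep such integration requirements from being violated silently is to capture them as chip-level assertions. The property below is only a sketch: the signal names and the 16-cycle minimum reset width are assumptions standing in for whatever the DIP documentation actually specifies.

  module dip_integration_checks (
    input logic clk,
    input logic rst_n   // active-low reset feeding the integrated DIP
  );
    // Assumed requirement: reset must stay asserted for at least 16 clock cycles.
    property p_min_reset_width;
      @(posedge clk) $fell(rst_n) |-> !rst_n [*16];
    endproperty

    a_min_reset_width: assert property (p_min_reset_width)
      else $error("Reset released before the 16 cycles this DIP requires");
  endmodule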

Integrating various VIP blocks from the same or different third-party vendors at the chip level can also deliver unwelcome surprises, especially if you do not know the integration issues. Given this complexity, you need to track each of your third-party components to make sure that you are moving in the right direction. The fact that silicon IP is proven just does not cut it any more.

Verification Teams Are Global

Outsourcing is a cost-cutting measure, especially for verification. Yet, nothing is free. The reduced cost of outsourcing verification tasks is offset by the risks of having an outsourcing team thousands of miles away. You expect your outsourcing partner to understand your technology, know your pain, and communicate properly. My experience in the ASIC design and verification services industry during the last couple of years has taught me that the most common outsourcing concerns are visibility and communication.

If you are a verification manager, you have good reason to be anxious. What if the outsourced verification team wastes your critical schedule time and money building something you did not ask for, or presents a rosy picture on top of a lot of messy code that you will have to manage? Many verification managers get burned by incompetent outsourcing partners.

Even if the outsourcing arm does a good job on their block, it is very important that they understand the chip-level integration issues. They must also have clear ideas about the various dependencies and priorities and their impact at the chip level. Most of the time, it is up to the verification manager to manage this task remotely, purely at the mercy of the data shown by the external outsourcing arm.

Verification Data

The invention of hardware verification languages (HVLs) is one of the major events in the field of verification. With new randomization approaches, it is practical to run thousands of scenarios from simple base code. This is great, but again it is not free. The amount of data generated must be analyzed before you can make anything meaningful out of it.
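
For example, the loop below (again a hypothetical SystemVerilog fragment, reusing the packet_txn class sketched earlier) is all the base code needed to produce a thousand randomized scenarios; the hard part is no longer generating the data but making sense of it.

  module tb_random_stimulus;
    initial begin
      packet_txn txn = new();
      repeat (1000) begin
        if (!txn.randomize())
          $fatal(1, "randomize() failed: constraints are contradictory");
        // drive the transaction into the DUT here (driver call omitted)
      end
    end
  endmodule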

Instead of writing directed test cases, verification engineers are now moving toward functional coverage and coverage-driven verification. Top-level scenarios and the various features of the functional specification can be encapsulated in a well-written coverage plan. Engineers can then track the functional coverage report and map it back to the coverage plan to gain confidence in the generated stimulus.
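
In an HVL such as SystemVerilog, the coverage model is typically a direct translation of that plan into covergroups. The sketch below is illustrative only; the packet_txn fields and bin boundaries are assumptions, and a real model would take its bins from the functional specification.

  class packet_coverage;
    packet_txn txn;  // transaction currently being sampled

    covergroup cg_packet;
      cp_length  : coverpoint txn.length {
        bins small  = {[4:15]};
        bins medium = {[16:47]};
        bins large  = {[48:64]};
      }
      cp_channel : coverpoint txn.channel;
      cx_len_chan: cross cp_length, cp_channel;  // every size on every channel
    endgroup

    function new();
      cg_packet = new();
    endfunction

    function void sample(packet_txn t);
      txn = t;
      cg_packet.sample();
    endfunction
  endclass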

Creating a functional coverage plan and iterating random test cases will not produce a high degree of confidence unless the results are properly checked. For example, functional coverage directs the randomizer to generate more and more random scenarios, but it does not check whether the outcome of the generated stimulus is the desired response. A functional coverage plan that is not properly tied to verification results, including the coverage data and the pass/fail status of each run, can give false confidence, a serious pitfall for any verification manager. On top of stimulus data, features and requirements must be tracked according to their priority or importance, as opposed to finding bugs randomly and, consequently, missing critical bugs due to resource and schedule constraints.
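
One common way to tie stimulus coverage to checking is to sample the coverage model only after the response has been verified, so that every coverage hit corresponds to a passing scenario. The fragment below sketches that idea using the same hypothetical classes as before; the compare rule is a placeholder for a real scoreboard.

  class packet_scoreboard;
    packet_coverage cov = new();

    function void check_and_cover(packet_txn expected, packet_txn observed);
      if (expected.length  == observed.length &&
          expected.channel == observed.channel) begin
        cov.sample(expected);  // only checked, passing stimulus counts as covered
      end
      else begin
        $error("Response mismatch: scenario not counted toward coverage");
      end
    endfunction
  endclass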

Managing the complete verification schedule, tying all the milestones together, keeping track of the progress reports from all the teams, and looking at the bigger picture is a huge task. Many companies have a dedicated position, alongside the verification manager, solely for tracking the Microsoft Project plan.

The Big Picture

It is not that we do not track all of the data; it is just that we track the data in isolation. There are many bug tracking systems, but few automatically relate a particular bug to the delay in a timeline or a hole in the functional coverage plan. Teams still rely on an experienced verification engineer or manager to identify these things and make a manual link. There are timelines, but they do not get linked automatically with regression reports or unexpected releases. Again, someone has to do it manually, and the scope of this task is not only increasing but also becoming more and more complex.

Verification management is all about collecting and analyzing the right data at the right time. The EDA industry is beginning to understand these issues, incorporating things like functional coverage planning that can be automatically linked with the actual coverage results, so you do not have to make the link between the plan and the results manually. A new feature in Questa from Mentor Graphics allows meta-data to track verification information, such as the actual engineer working on a particular feature or the verification engine being used. This will definitely help managers track their resources automatically. It also helps evaluate resources in terms of their performance.

Steps are being taken to create a standardized database format, for example the Mentor Graphics Unified Coverage Database (UCDB), to store all simulation data. This in itself is a good indication that EDA vendors recognized the growing data problem early on and, as a result, started taking steps in the right direction.

At times I wonder whether verification is following the evolutionary footsteps of the software world. For example, verification engineers adopted object-oriented programming much later than their software counterparts, and now we are moving toward standard databases. Maybe now is the time when ASIC verification is becoming complex enough to follow the lead of software development cycles.

In the future, there may be some kind of single portal for verification managers that helps them see all the complex relations between timelines, distributed team impacts, bug reports, regression reports, functional coverage data, and so on. Until that becomes a reality, we run the risk of chip failures if we do not focus on verification management as the next, beneficial evolutionary mutation.

Sibridge Technologies
Fremont, CA
(510) 279-3755
www.sibridgetech.com

Mentor Graphics Corporation
Wilsonville, OR
(503) 685-7000
www.mentor.com

This article first appeared in the March 2008 issue of Portable Design. Reprinted with permission.


