Design Articles

Verification Methodology for Low Power—Part 3
Multivoltage Verification—Static Verification

This is the third of four weekly serialized installments from the Verification Methodology Manual for Low Power. Part 1 covered Multi-Voltage Testbench Architecture—Testbench Structure and Components. Part 2 covered Multi-Voltage Testbench Architecture—Coding Guidelines as well as Library Modeling for Low Power. Part 3 addresses Multivoltage Verification—Static Verification. Part 4 covers Multivoltage Verification—Dynamic Verification and Hierarchical Power Management.

By Srikanth Jadcherla, Synopsys, Inc.; Janick Bergeron, Synopsys, Inc.; Yoshio Inoue, Renesas Technology Corp.; and David Flynn, ARM Limited



This chapter takes a detailed look at both static and dynamic verification. We cover static verification first as part of the flow and then move on to dynamic verification. The flow at various design stages is also discussed.


In the previous chapter, we looked at the preparation for verification at various levels of abstraction, from testbench and RTL to post-layout. In this chapter, we cover the basic verification process and flow, including both static and dynamic verification. Chapter 7, “Dynamic Verification”, focuses deeper on the area of dynamic verification. While an immense amount of preparation and infrastructure is needed for dynamic verification, in power-managed designs a good amount of static verification is also needed to catch the bugs that can be detected without the effort of running vectors. So, the question is, what exactly are the goals of verifying power management?

This goal has gone through quite a bit of evolution, from a verification standpoint. Given the emergence of voltage-aware logic analysis, it is now possible for verification engineers to make sure that the DUT works as intended once plugged into a system. In Chapter 5, “Multi-Voltage Testbench Architecture”, all the effort was directed at ensuring that we have a test harness that actually mimics the system setup as well as the electrical effects brought about by voltage variations.

So, what exactly do we do to make sure that the DUT works as intended? First and foremost, we check that the design is indeed structurally connected correctly. This task involves numerous checks, but it can be done statically. Then, we proceed to verify that the power management unit functions as intended. However, that is only the first order of business. What we are really after is to ensure that the DUT works in all power states and can execute all power transitions and sequences as intended. This is a complex task. While it is quite difficult in current technology to prove that the DUT actually saves power, we have the additional burden of proving that the design never enters an electrically unsafe state. An IC that is functionally correct but consumes excess power—even burns out at times—is not very useful.

Rule 6.1 — Verification must first focus on the electrical safety of the design.

Broadly speaking, as the SoC operates, we need to verify that it is functional in all states and can execute all the intended transitions and sequences. Furthermore, we need to determine if there are any unsafe electrical situations or excessive current consumption scenarios as early as possible. The verification engineer's challenge is to ensure that these goals translate into effective coverage metrics, directed and random tests, and assertions.
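The coverage goal stated above—every state visited, every intended transition exercised, no illegal transition taken—can be sketched as a small coverage model. The following Python sketch is illustrative only: the state names and legal-transition table are assumptions, not taken from the book, and a real flow would implement this as testbench covergroups and assertions.

```python
# Hypothetical power-state coverage tracker. The state names and the
# legal-transition table below are illustrative assumptions.

LEGAL_TRANSITIONS = {
    ("ON", "STANDBY"), ("STANDBY", "ON"),
    ("ON", "OFF"), ("OFF", "ON"),
}

class PowerStateCoverage:
    def __init__(self, legal_transitions):
        self.legal = set(legal_transitions)
        self.states = {s for t in legal_transitions for s in t}
        self.seen_states = set()
        self.seen_transitions = set()
        self.current = None

    def observe(self, state):
        """Record a state observed during simulation; flag illegal transitions."""
        if self.current is not None and state != self.current:
            t = (self.current, state)
            assert t in self.legal, f"illegal transition {t}"
            self.seen_transitions.add(t)
        self.seen_states.add(state)
        self.current = state

    def holes(self):
        """Return (unvisited states, unexercised legal transitions)."""
        return (self.states - self.seen_states,
                self.legal - self.seen_transitions)
```

At the end of a regression, `holes()` reports exactly the states and transitions that still need directed or random tests, which is the coverage-metric translation the text calls for.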

Although we tend to think of static and dynamic verification as two separate activities, it helps to think of them together when it comes to coverage of the problem at hand. Static verification can be used to profile the design, which can then be subject to further dynamic tests. This is especially true when static tests are used to detect temporal bugs as opposed to purely structural ones. Equally beneficial is a flow where dynamic verification results or assertions related to them are used in formal analysis to find errors.

However, most IC design groups today are organized into verification and implementation teams. Static verification is typically run by implementation teams, since new netlists are generated at various points in the flow. This practice no longer works with multi-voltage verification. A team that performs dynamic verification alone may spend a lot of time detecting and debugging errors that could have been detected statically.


Recommendation 6.2 — Any errors that can be detected statically must be fixed before dynamic verification is performed. Dynamic verification must account for any errors that are not found by static checks.

Recommendation 6.2 is not a Rule, as much as it should be. This is for a practical reason: team organization may vary, and parallelism may be desired by some teams. However, the wisdom of launching a regression can be questioned when errors detectable statically are present, unless of course, the dynamic tests have intelligently focused their attention on other problems.


The first thing to remember about static verification is that it is mainly for structural violations, but not necessarily at the gate level—this is a common misconception. Another interesting aspect of static verification is that it is not necessarily the power intent that has to be fixed all the time. Static verification is really a cross product of three entities: design structure, power intent and library elements. Irrespective of the level of abstraction, static verification takes in these three components. Appendix B lists possible static checks comprehensively. In this chapter, we focus on the process.


Legacy flows involved the insertion of protection cells in RTL code by script or hand. In current power intent methodology, the protection cells are overlaid in a side file. In either case, the existing (or intention to insert) protection cells where needed must be verified to comply with “Rule 3.1 a” on page 49, which states that all spatial crossings must be suitably protected.
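The core of the "all spatial crossings must be suitably protected" check can be modeled as a walk over cross-domain connections. The sketch below is a simplified stand-in, assuming hypothetical data structures (`connections`, `domain_of`, `protected_nets`) in place of a real netlist and power-intent reader; commercial static tools implement far richer versions of this check.

```python
# Illustrative static check: every signal crossing between power domains
# must pass through a protection cell (isolation or level shifter).
# All data structures here are hypothetical stand-ins.

def unprotected_crossings(connections, domain_of, protected_nets):
    """connections: iterable of (driver_net, load_net) pairs.
    domain_of: maps a net name to its power domain.
    protected_nets: nets known to pass through protection cells.
    Returns the list of crossings that lack protection."""
    violations = []
    for src, dst in connections:
        if domain_of[src] != domain_of[dst] and src not in protected_nets:
            violations.append((src, dst))
    return violations
```

In a real flow the "protected" set would be derived from the side-file power intent (or the inserted cells in a legacy flow), so the same check applies whether the protection exists in the netlist or is only an intention to insert.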

This stage of the design process can also be used to verify that the power intent is complete and consistent. For example, a power intent file might fail to partition certain design modules into power domains, or might contain incorrect library element selection commands.
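A completeness check of this kind reduces to verifying that the domain partition covers every module exactly once. The sketch below assumes a hypothetical dictionary-based view of the power intent; real power intent formats express this hierarchically.

```python
# Sketch of a power-intent completeness check: every design module must
# belong to exactly one power domain. The data format is an assumption.

def check_domain_partition(modules, domain_members):
    """domain_members: dict mapping domain name -> set of module names.
    Returns (unassigned, multiply_assigned) module sets."""
    assigned = [m for members in domain_members.values() for m in members]
    unassigned = set(modules) - set(assigned)
    multiple = {m for m in assigned if assigned.count(m) > 1}
    return unassigned, multiple
```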

Most of the dynamic verification happens in RTL at this time. Hence, this stage of design is a great time to do formal analysis, especially in conjunction with simulation. Even without any such interaction, it is possible to analyze the architectural aspects of power intent to look for violations, such as not having an all-off state, or transitions that require too many rails to change at once. In some commercial implementations of static tools, the temporal ordering of islands can be used to derive legal state tables, or vice versa.
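The two architectural violations just mentioned can be checked mechanically from a power state table. This is a toy sketch under assumptions: the table format (state name to per-rail ON/OFF values) and the single-rail transition limit are illustrative, not a standard.

```python
# Hypothetical architectural checks on a power state table: the design
# should have an all-off state, and each transition should change only a
# bounded number of rails at once. Table format is an assumption.

def architectural_violations(states, transitions, max_rails_changed=1):
    """states: dict of state name -> {rail name: "ON"/"OFF"}.
    transitions: list of (from_state, to_state) pairs."""
    problems = []
    if not any(all(v == "OFF" for v in s.values()) for s in states.values()):
        problems.append("no all-off state")
    for a, b in transitions:
        changed = sum(1 for r in states[a] if states[a][r] != states[b][r])
        if changed > max_rails_changed:
            problems.append(f"transition {a}->{b} changes {changed} rail(s)")
    return problems
```

Run in the other direction, the same table can seed the legal state tables mentioned above for dynamic coverage.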

In the era of extensive IP integration, there is another useful aspect of RTL static verification. Dependencies on control signals that emanate from blocks in the off or standby state, but are consumed in destination blocks that are on, can be detected by static checks, even though these bugs are temporal in nature. Clocks, resets, power gating controls, and isolation controls form the essential list to check. However, each DUT is unique and needs specific signals to be checked. Consider, for example, Figure 6-1. There is a dependence on signal I/O EN to send PWR EN to the Always On block, which in turn sends a wakeup signal to the On/Off block. However, this puts I/O EN in an unknown state when its source block is turned off. Note that inserting an isolation device does not necessarily solve the problem: the signal has to be isolated to the appropriate value and, since a bi-directional pad is shown, we must make sure there are no hazards with the direction set by isolation.

Figure 6-1

Not all control dependencies are this simple, nor are all such dependencies direct functional errors. Even in the simple cases, a signal such as I/O EN in Figure 6-1 needs to be profiled automatically by tools and/or by the verification engineer. Some of the commercial tools today are capable of automatically identifying these dependencies. However, the complexity of IP integration, and the sometimes hidden or encrypted models involved, may elude automated analysis.
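At its core, this dependency profiling is a fanin-cone walk: starting from each critical control (wakeup, reset, isolation enable), trace backwards and flag any source that sits in a domain that can be off when the control must be valid. The sketch below uses hypothetical signal and domain names echoing Figure 6-1; a real tool works on the elaborated netlist, not a hand-built dictionary.

```python
# Sketch of the control-dependency check: walk the fanin cone of a
# critical control signal and flag sources in domains that may be off.
# The dictionary-based netlist view is an illustrative assumption.

def off_domain_sources(control, fanin, domain_of, off_domains):
    """fanin: dict mapping a signal to the signals it depends on.
    Returns signals in control's fanin cone sourced from an off domain."""
    bad, seen, stack = [], set(), [control]
    while stack:
        sig = stack.pop()
        if sig in seen:
            continue
        seen.add(sig)
        if domain_of.get(sig) in off_domains:
            bad.append(sig)
        stack.extend(fanin.get(sig, []))
    return bad
```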

Sequential dependence on software registers is even more complex to detect. These dependencies can be identified by rigorous identification and testing of the power modes in which the signals involved are not valid. For example, perhaps a read is made to an address that resides in an off part of the DUT. The isolated values are returned, thereby corrupting execution downstream. This situation is, in fact, quite critical. There is no runtime probe that tells the DUT that a particular block is off. It depends entirely on the software remembering context and/or probing some other elements in hardware, such as power switch controls and other devices.

Rule 6.3 — The testbench must have assertions to protect against transactions with a block that is in off or standby mode.

Rule 6.3a — Software addressable registers known to be in on/off islands must have assertions to verify that access happens only when they are in the on state.
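Rules 6.3 and 6.3a would normally be written as testbench assertions; the Python sketch below models the same monitor for illustration. The class name, its API, and the conservative default of treating an unknown island as off are all assumptions.

```python
# Testbench-style sketch of Rule 6.3a as a runtime monitor: reject reads
# or writes to registers whose island is not on. The API is hypothetical.

class RegisterAccessMonitor:
    def __init__(self, island_of_register):
        self.island_of = island_of_register   # register name -> island name
        self.island_state = {}                # island name -> "ON"/"OFF"/...

    def set_island_state(self, island, state):
        self.island_state[island] = state

    def check_access(self, register):
        """Assert that the register's island is on; conservatively treat
        an island with no recorded state as off."""
        island = self.island_of[register]
        state = self.island_state.get(island, "OFF")
        assert state == "ON", (
            f"access to {register} while island {island} is {state}")
```

The monitor makes the off-ness of a block visible to the testbench even though, as noted above, no runtime probe gives the DUT itself that information.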

The reader will also notice that such a dependency is not necessarily an error. Perhaps logic exists to override the offending signal in the mode where its source is shut down. However, for verification, this needs some tests or formal/property analysis.

Recommendation 6.4 — Identify critical control signals that originate in On/Off blocks and verify that there is no dependency on them to transition out of the state in which their source is off.

RTL static verification takes on more meaning as the number of islands and/or the size of the design grows, making the power intent specification and dynamic coverage much harder.


In most design flows, the insertion of isolation gates, power switch structures etc. is done at the netlist stage. Hence, a whole range of static checks become essential as netlist structure is repeatedly transformed through synthesis, floor planning, power switch insertion, scan chain formation, clock tree synthesis, buffer tree insertion and timing fixes. Each iteration of the netlist transforms the design and its structure, thus requiring verification, at least statically if not dynamically after each transformation.
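The iteration described above can be pictured as re-running the full static check suite after every netlist transformation. The sketch below is purely illustrative: the callable-based interface and stage names are assumptions, not a real tool API.

```python
# Illustrative sketch: after every netlist transformation (synthesis,
# floorplanning, switch insertion, scan, clock tree, buffering, timing
# fixes), the static check suite is re-run on the new netlist.

def verify_flow(netlist, transforms, checks):
    """transforms: list of (stage_name, function) applied in order.
    checks: static check functions run on each intermediate netlist.
    Returns the final netlist and a log of (stage, check, result)."""
    log = []
    for stage, transform in transforms:
        netlist = transform(netlist)
        for check in checks:
            log.append((stage, check.__name__, check(netlist)))
    return netlist, log
```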

Gate-level static verification also depends on the availability of accurate library models. In the case of static verification, this information is present in the Liberty format, as we discussed in Chapter 5, “Multi-Voltage Testbench Architecture”. Gate level simulation also depends on the availability of cell level models that comprehend changes in voltage rails.

The other aspect of gate-level verification is the insertion of I/O cells such as pads. These typically have multiple domains within them and are sometimes part of the power network. Adding to the complexity, some cells come with built-in level shifters and isolation cells. Power intent formats vary widely in their range of expression, their ability to communicate hierarchically, and their ability to impose rules on the overall integration. Overall, this is an area of extreme caution for the user. Even if complete automation were available, the amount of setup required of the user is quite high.

In designs that rely on external power supplies, it becomes impossible to perform verification without testing the partitions of the I/O structure along the power rails and accounting for any sequencing of power rails from the outside world. For such designs, I/O cells become part of essential coverage.

A quick glance at what static checks are needed can be found in Figure 6-2. A more exhaustive list is found in Appendix B.

Figure 6-2

The list in Appendix B is exhaustive, but not necessarily universally applicable or complete. Specific design structures and IP require their own checks, which need to be implemented for sign-off. However, it is interesting to note that all static structural checks emanate from a few simple requirements: a spatial crossing must be in an electrically safe state at all times, a power structure must be electrically viable at all times, and the design must not consume power in excess of what is needed.

It is increasingly common for users to adopt a methodology of reverse transformation from a power/ground connected netlist view to the power intent (spatial and partially temporal in nature). This exercise is helpful especially when hardened IP is integrated and enables robust checking in the back end.

So far, we have not broached the topic of power structure viability except in Chapter 1, “Introduction”. This brings us to the topic of a sign-off process for multi-voltage low power designs. As we mentioned in Chapter 1, power is as much a delivery and reliability problem as it is a density and leakage problem. On the delivery front especially, the ability of the power structures to handle peak current loads and current fluctuations is essential. Such checks must be part of an overall sign-off process.

Last but not least is the topic of equivalence. A reference design and its implementation may be equivalent when all islands are on and at equal voltages, but not necessarily equivalent as voltage changes are applied. Consider Figure 6-3: the multiplier block is moved from Domain 1 to an on/off domain in the implementation. An equivalence check that ignores the power-down state of Domain 2 will not detect the error in the implementation. Even if one were to perform equivalence checks in the power-down state, tying the isolation enable in the implementation to a constant "1" or inactive level for formal verification will still not detect the error. An active isolation value needs to be applied to detect the error.

Figure 6-3

Note—the design as implemented will pass a purely static electrical check on the spatial crossings. Hence, a comparison to the original design is needed to ensure that the original architecture is preserved. Overall, across all the multi-voltage design styles, equivalence issues are indeed a tricky problem. In recent times, commercial solutions to solve this important problem are increasingly available. We conclude this subsection with an essential sign-off rule.
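The pitfall can be illustrated with a toy model of Figure 6-3. The clamp value and the function signatures below are assumptions for illustration; the point is only that the reference and implementation agree while Domain 2 is on, and diverge only when compared in the powered-down state with an active isolation clamp.

```python
# Toy model of the Figure 6-3 equivalence pitfall. ISO_CLAMP is an
# assumed value driven by the isolation cell when Domain 2 is off.

ISO_CLAMP = 0

def reference(a, b, domain2_on):
    # In the reference, the multiplier lives in always-on Domain 1.
    return a * b

def implementation(a, b, domain2_on):
    if domain2_on:
        return a * b   # multiplier was moved into on/off Domain 2
    return ISO_CLAMP   # isolated (clamped) output when Domain 2 is down

def equivalent(a, b, domain2_on):
    return reference(a, b, domain2_on) == implementation(a, b, domain2_on)
```

A check run only with `domain2_on=True` (or with the isolation enable forced inactive) passes; only the powered-down comparison with the clamp active exposes the moved multiplier.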

Rule 6.5 — The implementation must be checked both as a standalone design and for equivalence to the original reference design across all power states.


Printed with permission from Jadcherla, et al, Verification Methodology Manual for Low Power (Synopsys, Inc.: Mountain View, CA). Copyright (c) 2009 by Synopsys, Inc., ARM Limited, and Renesas Technology Corp. All rights reserved.

Synopsys customers can download a free copy of the book at The companion Low-Power Methodology Manual is similarly available at

Next week: Multivoltage Verification—Dynamic Verification and Hierarchical Power Management
