Design Articles

Verification Methodology for Low Power—Part 2
Multi-Voltage Testbench Architecture: Coding Guidelines & Library Modeling

This is the second of four weekly serialized installments from the Verification Methodology Manual for Low Power. Part 1 covered Multi-Voltage Testbench Architecture—Testbench Structure and Components. Part 2 covers Multi-Voltage Testbench Architecture—Coding Guidelines as well as Library Modeling for Low Power. Part 3 addresses Multivoltage Verification—Static Verification. Part 4 covers Multivoltage Verification—Dynamic Verification and Hierarchical Power Management.

By Srikanth Jadcherla, Synopsys, Inc.; Janick Bergeron, Synopsys, Inc.; Yoshio Inoue, Renesas Technology Corp.; and David Flynn, ARM Limited


As can be expected, the impact of power management can be felt on how code is written as well, both for the DUT and testbench. This section contains coding issues and guidelines for low power designs. These are usually encountered when migrating either existing code or coding rules to low-power designs. They involve both testbench and DUT code.


Many testbenches are written to detect an X (unknown logic level) on various critical signals and to issue error messages or, at times, even abruptly end the testcase with a fail status. This conflicts with low-power design practices, which rely on X and Z corruption to reflect logic values in shutdown. Modifying such X-detection routines to account for shutdown is one of the most commonly seen changes to current coding practices in testbenches.
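A minimal sketch of such a modification follows, assuming the testbench can observe a power-state signal for the domain that drives the monitored net (all names here are illustrative, not from the original text):

```verilog
// Hypothetical X-detection monitor, modified for low power.
// `pwr_on` is an assumed testbench-visible power-state signal for the
// domain driving `critical_sig`.
module x_monitor (input wire pwr_on, input wire critical_sig);
  always @(critical_sig) begin
    if (critical_sig === 1'bx || critical_sig === 1'bz) begin
      if (pwr_on)
        // Only a real error when the driving domain is powered up
        $display("ERROR: X/Z on critical_sig at %0t", $time);
      // else: expected corruption during shutdown -- do not fail
    end
  end
endmodule
```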


RTL code is sometimes built for 2-state simulation and may not propagate X’s correctly. If the simulation semantic corrupts a register, say on improper restore, the X logic value placed in the register may never be observed by the testcase. Further, X propagation may not occur in RTL, but gate-level simulation may yield a different result. Fortunately, most of the structures that arrest X-propagation can be detected by linting tools.
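As an illustration of such a structure, an if/else coding of a mux silently converts an X on the select into the else branch, while the equivalent conditional-operator coding propagates the X (a sketch; signal names are invented for the example):

```verilog
// Illustrative only: two codings of the same 2-to-1 mux.
module mux_x_demo (input wire sel, a, b, output reg y1, y2);
  // if/else: an X on sel is treated as false, so y1 = b and the X
  // never propagates -- the corruption is hidden from the testbench
  always @(*) begin
    if (sel) y1 = a;
    else     y1 = b;
  end
  // ?: operator: an X on sel yields X on y2 whenever a != b,
  // so the corruption remains observable downstream
  always @(*) y2 = sel ? a : b;
endmodule
```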

Recommendation 5.4 — Constructs that inhibit propagation of X logic values should be avoided in RTL.

Recommendation 5.5 — Corruption may not be observable in simulation results; assertion failures must be used to detect such situations.


For starters, consider the ubiquitous 1'b1 and 1'b0 constants used all through the RTL. These were fine in the days when the entire chip (or at least the core) had a single supply voltage and a single ground. However, in today's multi-voltage designs, there is no such thing as a single Vdd or single ground connection.

In addition, rails such as back-bias nets or retention supplies may not even drive 1s and 0s. They may carry arbitrary voltage values that are not equal to the Vdd/Vss values, which makes it questionable for them to be declared as supply1/supply0 nets.

Note that emerging standards, such as the IEEE P1801 Unified Power Format (UPF), define power nets/rails to which a type and value can be assigned. This alleviates some of the difficulty in analyzing the power nets and their connections, but the burden of avoiding hardwired constants still rests with the RTL designer.

[Figure 5-4: a power-gated domain in which both the source Vdd and the switched Vdd are declared supply1]

In most cases, the answer seems to be to connect to the local Vdd/Vss of the standard cell, such as the supply1 declarations above. That may well work most of the time, especially in static multi-domain designs. However, in the case of power-gated domains, where both the source Vdd and the switched Vdd are considered supply1 as in Figure 5-4, it would be legal to connect either Vdd to a 1'b1 connection, but not necessarily correct.

Additionally, consider the case where the constant is connected across from another domain in the design. Placement and routing tools especially grapple with this issue in their "flat" view of the design. The worst of these is the case in which the parent module is in one power domain and then instantiates constants in the port map of a module that is partitioned to another domain.

Note that synthesis and physical synthesis optimize constants away. This may no longer be valid for some multi-voltage designs. This situation has to be treated differently depending on whether the constant is local and subject to being turned off and whether there is any interaction with other domains.

One solution that will work is the creation of TIE_HI_VDDx or TIE_LO_VSSx types of nets. This forces RTL designers to explicitly identify constants and think the implications through. It also serves as an unambiguous guide to the verification and implementation tools.
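A sketch of such tie-cell wrappers, with module and rail names chosen purely for illustration:

```verilog
// Illustrative TIE-cell wrappers; names and ports are hypothetical.
// Instantiating these instead of writing 1'b1/1'b0 makes the intended
// supply association explicit for verification and implementation tools.
module TIE_HI_VDD1 (output wire hi);
  assign hi = 1'b1;  // implemented as a tie-high cell on the VDD1 rail
endmodule

module TIE_LO_VSS1 (output wire lo);
  assign lo = 1'b0;  // implemented as a tie-low cell on the VSS1 rail
endmodule

// Usage: a constant input inside the VDD1 domain
// TIE_HI_VDD1 u_tie_hi (.hi(const_one));
```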

To summarize this section, the following are a few basic rules for multi-voltage low-power designs: Rule 5.5 and Rule 5.6.

Rule 5.5 — Instead of hardwired constants, use TIE_HI_<NAME> or TIE_LO_<NAME> to explicitly identify the intended connection.

Rule 5.6 — Make sure that constants do not cross domain boundaries. If they do, their behavior will need to be comprehensively analyzed across all source/destination state combinations. An additional level shifter may also be wastefully needed if this is done.


This refers to the practice of placing expressions in port maps. For example, consider the code contained in the following stub:

myreference Inst_myref (.input_pin(sigA && sigB), // ...

This is perfectly valid Verilog code, even if it is not good coding style. However, consider the situation where Inst_myref is partitioned into a different power domain than its parent. This leaves us with the mystery of where the expression in the port map belongs, and how level shifting and/or isolation is to be applied. In conventional methodology, this will most likely be a synthesized gate at the parent level. However, consider a further version of the code stub above.

myreference Inst_myref (.input_pin(sigA && sigB), // ...
                        .output_pin1(sigA),
                        .output_pin2(sigB)); // ...

Here, sigA and sigB are actually outputs of Inst_myref. The convention that the synthesized gate is placed in the parent is less justifiable in this case and outright complex to resolve.

Recommendation 5.7 — Avoid port map expressions at power domain boundaries. They are likely to cause improper specification and hence be difficult to verify.
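One way to follow this recommendation is to name the expression explicitly in the parent before the instantiation, so the power domain of the synthesized gate is unambiguous (a sketch; the instance and signal names follow the earlier stub, and the intermediate net name is invented):

```verilog
// Sketch: the expression is hoisted onto a named net in the parent,
// so the AND gate clearly belongs to the parent's power domain and
// isolation/level shifting can be specified on `in_expr` alone.
wire in_expr;
assign in_expr = sigA && sigB;

myreference Inst_myref (
  .input_pin (in_expr)
  // ...
);
```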

Likewise, expressions in power intent side files are extremely hazardous. They may not be synthesized correctly or verified/covered as needed.


There are designs in which the first stage of logic is a storage element. This used to help timing, but does not work very well if the power domain containing the logic is turned off and the sender of the data is still powered-on. In fact, it could be outright dangerous if the first stage of a flip-flop is a pass transistor.

[Figure 5-5: flip-flop with a pass-transistor first stage at its D/scan input]

Consider the case where the eventual target library has a pass-transistor connection at its first stage (D input or scan input), as shown in Figure 5-5. When a live/on domain drives this connection while the domain containing the first-stage flip-flop is off, there could be a sneak path for current, because the state of the gate is unknown. In rare and extreme cases, this can cause device breakdown, but the usual symptom is power wastage.

The state of the gate of the pass transistor depends on the clock condition. If the clock to the domain is wiggling, it potentially connects to a lot of first-stage CMOS gates. This means a lot of capacitance is switching even though the domain is off. If the external clock directly drives them, it could keep opening the pass transistors described in the earlier paragraph. This situation is a pure waste of power and must be avoided. The following are a few rules for coding IP blocks or hierarchical modules.

Rule 5.8 — Do not use first-stage flip-flops if the domain is going to be turned off, unless input isolation is used. Verification tools must ensure that this is the case.

Rule 5.9 — Verify that clocks are gated down to first-stage inactive if the domain is going to be turned off. First-stage inactive means that the connection must either be a CMOS gate connection, or the pass transistor it hooks up to must be closed.

Rule 5.10 — Verify that elements with first-stage pass transistors are not used at the domain boundaries.
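Rule 5.9 can be checked with a simple monitor, assuming the testbench can observe the domain's power state and gated clock (both names here are illustrative assumptions):

```verilog
// Illustrative check for Rule 5.9: when the domain is off, its clock
// must be held inactive (gated low in this sketch).
// `pwr_on` and `gated_clk` are hypothetical testbench-visible signals.
module clk_gate_check (input wire pwr_on, input wire gated_clk);
  always @(posedge gated_clk)
    if (!pwr_on)
      $display("ERROR: clock active while domain is off at %0t", $time);
endmodule
```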


It is customary to write testbench code to monitor various functions in the code. Similarly, assertions may be written either at the testbench level or deep in the code. Unfortunately, most of these assertions or monitors may have been written without planning for a multi-voltage architecture. The verification engineer is likely to encounter many tricky situations when migrating such a testbench/environment to the multi-voltage world.

First, consider monitor statements that directly access named nets hierarchically (which is bad coding to begin with). If the domain goes to shutdown, these nets may be assigned to Z or X, throwing off the code written in the monitor.

Similarly, assertions may not factor in shutdown conditions. It is not as simple as factoring X and Z values into the code. The reality is that a power state transition such as shutdown goes through a number of pre- and post-shutdown management events such as clock gating, multiple resets, and retention/restore sequences. The monitor or assertion set needs to stall or account for these transitional states. In fact, NEW assertion and monitor code may be needed to factor in the power state tables.

Broadly speaking, the change in monitor/assertion code is that there may be code that always monitors the block; code that is off when the block is off; code that is on when the block is off; and further code to monitor transitional states.
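In SystemVerilog assertions, the simplest form of such accounting is to suspend a property while the domain is down, for example with a disable clause (a sketch; `pwr_on`, `req`, and `ack` are assumed names, and real code must also cover the transitional states):

```verilog
// Sketch of a power-aware assertion: the req/ack handshake check is
// suspended while the block is shut down.
module handshake_check (input wire clk, pwr_on, req, ack);
  property p_req_ack;
    @(posedge clk) disable iff (!pwr_on)
      req |-> ##[1:4] ack;   // ack must follow req within 4 cycles
  endproperty
  assert property (p_req_ack)
    else $error("req not acknowledged within 4 cycles");
endmodule
```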

Extending this concept further, there may be force statements at the testbench level that make cross-module references. These are often used to set up pin strap options, device ID bits, etc. These force statements can conflict with any assignments the simulator is making, especially in shutdown and retention situations. Even without low-power design, cross-module force statements should be avoided. Low-power design adds further twists to the usage of this construct.


Almost every testbench infrastructure utilizes initial blocks. Often, initial statements are used to load memories, set constants, and set finish times/stop times.

If an initial block (along with a construct like readmem) is used for a block that is off by default and wakes up only later, any initializations must be deferred until the actual wake-up. Similarly, for a block that can be turned off, any memory initialization in it must be repeated after every power-up. In addition, such an initialization needs to be sensitive to any handshake with power-sensing and reset signals applied to the block. This handshake is often a source of bugs, so it is best to avoid such readmem-based initializations. At least a few tests must cover the actual hardware-based initialization sequence.
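A sketch of deferring the load until wake-up, and repeating it on every power-up (the `power_good` and `rst_n` handshake signals, the memory size, and the image file name are all assumptions for the example):

```verilog
// Sketch: memory initialization deferred to, and repeated at, each
// power-up instead of a one-shot initial block.
reg [7:0] mem [0:1023];

// Re-load the image on every wake-up of this domain. Real code must
// also sequence correctly against reset release for the block.
always @(posedge power_good) begin
  if (rst_n)
    $readmemh("image.hex", mem);
end
```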

On the contrary, registers such as non-volatile memory bits, laser fuse bits, and one-time programmable bits must NOT be corrupted by shutdown. Unfortunately, current HDLs do not provide a simulation semantic for such bits in the first place. In the era of low-power design, recognizing and supporting such bits is essential to accurate verification. Note that these bits do not wake up instantly. There is a point of activation along the rising power rail as it turns on. Also, the protocol often involves power-good and reset signals to latch these bits, adding further complexity to how this mechanism can be verified.

Extending this concept further, any PLI routines that form behavioral models or collect data, including debug/coverage routines, need to be aware of the shutdown conditions. For example, a CPU simulation model may be built in C and hidden inside a wrapper. A shutdown of the CPU may completely escape such a model. In fact, such a model may not shut down accurately, and it may also wake up or reset incorrectly.


State retention is an altogether new semantic that is being applied to sequential elements. Consider a sequential element such as a flip-flop being assigned to be a retention element. In this case, the flip-flop is probably coded in Verilog as an always block triggered at the posedge or negedge of the clock. However, the intended behavior is that when the domain goes to shutdown, there are additional save/restore signals hooked up to the actual sequential element that retain and restore the value of the bit.

There are numerous implementations of retention elements available. These change the protocol that is followed for save/restore and further impact the behavior of clock and reset (and scan in some cases). The same RTL may have to be simulated differently, depending on the actual behavior of the element being used in the context of instantiation. For example, a reset may clear the output of the flip-flop but not the retention element. Also, there may be a special reset pin needed to flush the retention element itself.
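A highly simplified behavioral sketch of one such scheme, a "balloon"-style retention flip-flop with explicit save/restore pulses, follows; as noted above, real library cells vary widely in protocol, so every name and behavior here is illustrative:

```verilog
// Simplified behavioral model of a save/restore retention flip-flop.
// SAVE copies q into an always-on shadow bit before shutdown;
// RESTORE copies it back after power-up. Clock/reset interaction and
// shutdown corruption differ between real library elements.
module ret_dff (input wire clk, rst_n, d, save, restore, output reg q);
  reg shadow;  // retention bit, assumed powered by an always-on rail

  always @(posedge clk or negedge rst_n)
    if (!rst_n) q <= 1'b0;   // reset clears the flop, not the shadow
    else        q <= d;

  always @(posedge save)    shadow <= q;       // before shutdown
  always @(posedge restore) q      <= shadow;  // after power-up
endmodule
```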

Another complexity is that the original RTL does not have save/restore pins instantiated locally in the first place. This implies that such a "connection" is made by the power intent file on the side. While this is extremely convenient and useful for the overall flow, RTL and gate-level simulation results may vary, based on how the save/restore signals are connected in the netlist.


It is common practice to synchronize asynchronous signals while crossing domains. Power management control loops, however, involve many asynchronous signals whose state is relevant to the PMU. While synchronizers can still be used, it must be recognized that there may be additional isolation latches in the path, which makes the synchronizers somewhat redundant. Furthermore, the design may enter a deadlock state by gating the clock to the synchronizers, while waiting for a wake-up event, which never makes it past the gated synchronizers.


Often, isolation cells are simply AND/OR gates or their inverted versions. However, it becomes difficult to distinguish between regular logic and isolation cells, especially to detect redundant isolation insertions. Signals at domain boundaries have less ambiguity with respect to isolation gates. Not detecting redundant gates can be quite dangerous functionally. Hence, it is best to have special isolation cells or create wrappers around the basic gates in case they are used for isolation. The wrappers enable static detection of redundant gates and form easily identifiable coverage points.
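A sketch of such a wrapper around a basic AND gate (module and pin names are invented for illustration):

```verilog
// Illustrative isolation-cell wrapper: functionally just an AND gate,
// but the distinct module name lets static tools find isolation
// instances (including redundant ones) and gives a clean coverage point.
module ISO_AND (input wire data, iso_n, output wire z);
  // iso_n driven low forces the output to a known 0 during shutdown
  assign z = data & iso_n;
endmodule
```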


This is not exactly a coding guideline but a language semantic of which to be wary. Imagine a stub of Verilog code as shown below, which implements a combinational equation. If the always block resides in a shutdown domain and the signal inputsig is sourced from an On domain, simulation results can be inaccurate. Consider the situation when inputsig changes from 0 to 1 while the block containing the expression is in shutdown state. A value of logic X is driven onto outputsig for the shutdown period, but once shutdown is removed and the block is woken up, the normal re-evaluation of logic is not defined in traditional HDLs. The issue can be worked around by simulator support and/or an assertion indicating an input toggle while in shutdown mode.

always @(inputsig)
   outputsig = inputsig;
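The workaround of an assertion flagging an input toggle during shutdown might look like the following sketch, assuming a testbench-visible power-state signal (all names illustrative):

```verilog
// Sketch: flag any change on a domain input while the domain is off,
// since the combinational logic will not re-evaluate it on wake-up.
// `pwr_on` and `in_sig` are hypothetical names for the example.
module shutdown_toggle_check (input wire pwr_on, in_sig);
  always @(in_sig)
    if (!pwr_on)
      $display("WARNING: input toggled during shutdown at %0t; output must be re-evaluated on wake-up", $time);
endmodule
```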

Similarly, Verilog does not natively provide a mechanism to code the complete behavior of an asynchronous reset. A reset operation can only be triggered on an edge of the reset signal. This causes a problem in a power management scheme that asserts reset before powering up an Off block, so that the block wakes up in a reset functional state. The edge of reset is masked while the logic is in the Off state. On wake-up, the reset is already low, but the Verilog description does not initiate a re-evaluation of the registers in that domain. Avoiding asynchronous resets altogether is not possible, so users must ensure that their low-power verification solution handles asynchronous reset properly.


As one can imagine, traditional representations for libraries will need to be updated for low-power designs. These updates are quite intertwined with both implementation and verification. Hence, both sets of tools need to use the same information consistently across the entire design flow.

We can primarily identify two major areas where libraries need to change. One is the addition of power management cells such as isolation cells and level shifters into the library. The second is the modification of existing standard cells to accommodate the fact that designs now have multiple voltage rails.


Cells like isolation cells look deceptively like AND/OR gates or other standard logic. Level shifters may be misconstrued as yet another buffer. However, the design and overhead of these cells tend to be quite different. And then there are elements such as power switches, which are entirely new. Not only do these cells need to be added, there needs to be a suitable identification of these cells in the library files. For example, the isolation cell may have an "is_isolation" attribute or similar set to true.

Further, the pins of such cells may need special identification. Imagine for example that the isolation cell has one of its inputs protected with a high Vt implant to better withstand the fluctuations of a floating input from the off island. This implies that the data to be connected must hook up with this particular pin and not be swapped with the isolation enable, though one might be tempted to think of the cell as a logically symmetrical AND/OR gate. Such a cell property needs identification. Tools hence need to understand this property and check for its correct realization in the design.

Another example is when the isolation enable is inverted in the cell. Not only should we not connect the data input to this pin (thereby causing an un-isolated input internal to the cell), but this information must also be factored into synthesis and both static and dynamic verification.

Cells with multiple rails, such as power switches, level shifters, charge pumps, etc., need strong identification of the source-side and output-side voltage rails. Logically, both networks may be represented as supply1, but there is an immense physical difference.

It is also typical to include a function attribute in traditional representations. Power management cells make this complex, because sometimes the function may be quite mixed-signal-like in its behavior. It must be recognized, therefore, that certain cells may not have the right functional model. Further, their simulation models must be built with care. It is a matter of debate whether the power and ground rails of a cell must be included in the simulation models. While the answer seems obviously to be yes in the case of standard logic cells, it is less clear in the case of power management cells. This is because these functions, by design, are complex and expected to vary with voltage dynamically. Further, existing simulation models are represented in the Verilog language, which is inadequate for representing mixed-signal behavior. At this time, various standards have been proposed to address this issue, since the ability to write such functions is essential.

The dynamic voltage-dependent behavior of these cells leads to another issue: how the timing arcs are represented. There are new sets of timing arcs to be standardized in the first place. These need to be characterized for and exported in a standard library format.

While all of the prior discussion looks like it is primarily focused on static verification, consider the example earlier where the data pin has different properties compared to the isolation enable. In this case, any assertions being written to compare the relative timing of these signals must be applied at the appropriate pins—this can be quite a task in a large design!

Retention cells represent another conundrum: how are save/restore relationships to be modeled? Given the vast variance in retention schemes from one library element to another, how are these to be represented in a common format that yields to consistent representation? Library vendors today have begun exporting retention cells with many new attributes and functions, which must be checked by both static and dynamic verification.


It is not immediately obvious that standard logic elements like buffers and NAND gates need to be changed because the design is made with multiple voltage rails. Changes on this front are primarily on the representation—the cells always had power and ground rails hooked up to them.

One motivation to include the power rails of the cells in the representation is the fact that they need to be explicitly hooked up to one of many rail networks, and that connection must be verified. Therefore, a verification process must independently infer power network connectivity from certain standard attributes. The introduction of back-bias is another such additional item: back-bias pins come with additional routing overhead, apart from the fact that the pin itself is now additionally present in the cell representation.

The other factor is that delay and power characteristics of cells change with voltage. This means that multiple sets of library data may be needed and factored into the analysis of various design stages.

Simulation models need to now account for the fact that the voltage applied to the cell could change: at this time, most models account only for on/off behavior. It is up to various simulation tools to impose voltage dependent behavior on these cell models beyond on and off.


Custom cells like I/O pads, memories and analog blocks pose some unique problems in multi-voltage flows. These cells typically get connected to multiple power rails, without the notion of a driving rail or primary supply. Within these cells, the rails may actually drive different domains or be non-functional reference rails, such as a reference analog power supply. Further, any digital inputs may need to be referenced to a certain power supply. Likewise, digital outputs may be referenced to a set of driving rails. Inputs may already be protected, i.e., contain level shifters and isolation cells in expectation of certain states. All in all, this is a problematic area with ad hoc representations at this time, but standards are emerging for pin-level attributes and simulation behavior.

In terms of simulation behavior, note that the generic voltage source model described earlier in this chapter can easily be extended to represent such cells. The key however, is the validity of such models; they must be made along with the design and export process of the macros.

[Figure 5-6: memory macro with a built-in back-bias mechanism, entered by asserting a standby pin]

With custom macros, an additional source of trouble is restrictions on their temporal behavior. Consider, for example, a memory with a built-in back-bias mechanism. The external world merely asserts a logic 1 on a standby pin, as shown in Figure 5-6.

Such a macro needs to reflect the true constraints of multi-voltage behavior in its integration, even though such a cell is not connected to multiple rails. Such a cell might come with a variety of implementation restrictions as well, such as power grid requirements, standby pin timing arcs, and clock gating conditions. Unfortunately, such properties are mostly expressed ad hoc. From a temporal perspective, we can expect to see a restriction on activity while the standby pin is asserted and for a certain period after its de-assertion. An IP exporter must therefore ensure that all such restrictions are adequately reflected to the end user.

Rule 5.11 — A block of IP must be verified to be compliant with its original design properties, even if the integration is not in violation of top-level power intent.

For example, consider the memory in Figure 5-6. If the original design prohibits power gating of this block when standby is on, such a “state” restriction may not be reflected in the power intent (legal states) of the top level integration. However, a mere coverage of power states and transitions will not be adequate to verify this aspect of the design.

Recommendation 5.12 — An IP exporter must provide adequate assertions and coverage points to the end-user to verify low-power states and functionality.


In a nutshell, multi-voltage design brings about significant changes to the way libraries are represented, modeled, and used. New library attributes and standards [3], [4] already reflect these changes; the key, however, is what we mentioned earlier: consistent interpretation and comprehensive testing of these new attributes is essential for both design and verification processes.

As the user can see, formulating a multi-voltage test harness can be quite a transition. A well planned testbench architecture and migration is essential for the success of verification, which is the subject of Chapter 6, “Multi-Voltage Verification” and Chapter 7, “Dynamic Verification”.

References and Recommended Reading:

[3] Synopsys, Power Management Verification User Guide (for MVSIM, MVRC)

[4] Accellera, IEEE-P1801


Printed with permission from Jadcherla, et al, Verification Methodology Manual for Low Power (Synopsys, Inc.: Mountain View, CA). Copyright (c) 2009 by Synopsys, Inc., ARM Limited, and Renesas Technology Corp. All rights reserved.


Next week: Multivoltage Verification: Static Verification.
