
Verification Methodology for Low Power—Part 1
Multi-Voltage Testbench Architecture: Testbench Structure and Components

This is the first of four weekly serialized installments from the Verification Methodology Manual for Low Power. Part 1 covers Multi-Voltage Testbench Architecture—Testbench Structure and Components. Part 2 will cover Multi-Voltage Testbench Architecture—Coding Guidelines as well as Library Modeling for Low Power. Part 3 addresses Multi-Voltage Verification—Static Verification. Part 4 covers Multi-Voltage Verification—Dynamic Verification and Hierarchical Power Management.

By Srikanth Jadcherla, Synopsys, Inc.; Janick Bergeron, Synopsys, Inc.; Yoshio Inoue, Renesas Technology Corp.; and David Flynn, ARM Limited


This chapter discusses the formation of, or migration to, a multi-voltage testbench. The various testbench components are identified and discussed, covering coding guidelines, power intent, and library modeling aspects as well. Overall preparation for the verification process is the focus of this chapter.


In the following pages, we focus on the formulation of a testbench architecture for a multi-voltage design, especially on the methodology of migrating from a non-multi-voltage environment. The primary objective of the testbench infrastructure is to provide effective and comprehensive testing of the multi-voltage feature set.

In the process of setting up and/or migrating to a multi-voltage test setup, many issues enter the picture, such as coding practices, modeling of various elements, and file formats; we will discuss these in detail.

In subsequent chapters, we will cover the actual usage of the test setup to generate the required coverage and assertions, along with related topics.


The essential components of the power management control system are illustrated in Figure 5-1.

  • PMU (power management unit): typically an RTL block or set of RTL blocks that may interact with software

  • SoC functions: effectively “controllees” that are monitored and managed

  • Block-level circuitry: level shifters, isolation devices and retention cells.

  • Mixed-signal circuitry: power switches, voltage regulators (VR), battery, and other components.

  • Asynchronous, mixed-signal sourced logic signals: power on reset (POR), and Non-Volatile Memory (NVM)/One Time Programmable (OTP) memories.

  • Software: code that executes on the CPU(s) often performing the function of power management

Consequently, the testbench formed to verify such a system must have a corresponding structure. Figure 5-2 identifies the mapping of this structure to a verification system. Note that there may be regular non-power related testbench structures and DUT entities, all integrated into one setup.


Let’s take a deeper look into the various components illustrated in Figure 5-2.


5.3.1 Software

This component is a testbench stimulus generator that mimics the fetching of instructions to the CPU. Most SoC testbenches today have such a harness. However, the difference with low-power testbenches is that the firmware that exercises power management must be appropriately covered.

Some of the typical routines that need to be tested are as follows:

  • Boot and initialization routines, especially those that are part of system power up and power down

  • Load prediction, detection and voltage scheduling routines

  • Interrupt service routines (related to power management)

  • Timer/status bit monitoring routines

It is not sufficient to merely have the firmware present and jammed in! For example, some tests force the execution of specific code stubs that turn power domains on and off or put the device into an appropriate state. While this approach has some merit, it does not really exercise the power management control loop. It also suffers from the drawback that the CPU that executes the software and the memory interface/storage may themselves be in some low-power or standby state. It is best to verify the overall control system: the hardware/software that triggers the transition to another state and the software that executes and monitors it.

Rule 5.1 — Power management software must be tested with the control loop that triggers it.

The other critical aspect of software testing is the verification of situations where conflicting resource requirements or power requirements occur. For example, a low-battery situation might shut down the device or put it in a standby state, but a phone call or chat message might come in that demands the user's attention. Such a situation, where the shutdown sequence is not complete, but conflicting demands abort the shutdown, is described briefly in the paper: Challenges of Multi-Voltage verification on a complex low-power design, SNUG San Jose 2008 [6].
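One way to picture such an abortable shutdown sequence is a small sequencer model that a wake event can interrupt mid-flight. The sketch below is purely illustrative; the states and names are invented and are not taken from the design in [6]:

```cpp
#include <cassert>

// Illustrative PMU shutdown sequencer: a shutdown request walks through
// ISOLATE -> RETAIN -> OFF, but a wake event (e.g., an incoming call)
// during an intermediate step aborts the sequence and unwinds it.
enum class PmuState { ON, ISOLATE, RETAIN, OFF };

class PmuModel {
public:
    PmuState state = PmuState::ON;

    // Advance the shutdown sequence by one step (no effect once OFF).
    void step() {
        switch (state) {
            case PmuState::ON:      state = PmuState::ISOLATE; break;
            case PmuState::ISOLATE: state = PmuState::RETAIN;  break;
            case PmuState::RETAIN:  state = PmuState::OFF;     break;
            case PmuState::OFF:     break;
        }
    }

    // A conflicting demand aborts an in-flight shutdown. Once the domain
    // is fully OFF, waking requires the full power-up path instead.
    bool wake_event() {
        if (state == PmuState::ISOLATE || state == PmuState::RETAIN) {
            state = PmuState::ON;  // unwind: de-isolate, restore
            return true;           // shutdown aborted
        }
        return false;
    }
};
```

A test that simply forces the final OFF state never visits the ISOLATE and RETAIN windows where the abort path lives; stepping the sequencer and injecting a wake event at each intermediate state is what exercises the conflict scenario described above.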

In designs with multiple processors, it is possible that software execution or hardware events in a subsystem trigger events in a different part of the system hierarchy. A common example of this would be plugging a digital multimedia device into a computer through a USB port. Such systems tend to be quite complex and can cause deadlock at the system level.

In general, it is difficult to measure meaningful coverage on power management software routines. However, the registers used by the software can serve as meaningful coverage elements, and a VMM application such as RAL can be used to manage this. RAL has the added advantage of being able to generate random stimuli and stress tests to simulate various conditions, and it makes it easy to manage subsystems, since the register space in the DUT is already hierarchical. It is also important to recognize branches in the software and measure their coverage.
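The idea of treating software-visible registers as coverage elements can be sketched independently of RAL (which provides this natively in SystemVerilog). The following C++ sketch is a hypothetical illustration; the register name and value bins are invented:

```cpp
#include <cassert>
#include <cstdint>
#include <map>
#include <set>
#include <string>
#include <utility>

// Minimal register-value coverage collector: declare the values of
// interest for each power-management register (e.g., the encoded power
// states of a domain-control register), sample writes observed on the
// bus, and report the fraction of declared bins that were hit.
class RegCoverage {
public:
    void declare(const std::string& reg, std::set<uint32_t> bins) {
        bins_[reg] = std::move(bins);
    }

    void sample(const std::string& reg, uint32_t value) {
        auto it = bins_.find(reg);
        if (it != bins_.end() && it->second.count(value))
            hit_[reg].insert(value);
    }

    double coverage() const {
        std::size_t total = 0, covered = 0;
        for (const auto& [reg, bins] : bins_) {
            total += bins.size();
            auto it = hit_.find(reg);
            if (it != hit_.end()) covered += it->second.size();
        }
        return total ? static_cast<double>(covered) / total : 1.0;
    }

private:
    std::map<std::string, std::set<uint32_t>> bins_;
    std::map<std::string, std::set<uint32_t>> hit_;
};
```

Because the DUT register space is hierarchical, the same structure maps naturally onto subsystems: each subsystem contributes its own set of declared registers.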

5.3.2 CPU

In most SoC verification processes, the CPU resides in the SoC itself, and an RTL model is usually integrated into the DUT. Problems arise, however, when a C or other precompiled model is used as a plug-in to the simulation, usually to improve simulation performance. Such simulation models typically have two drawbacks. First, they are cycle accurate and hence cannot easily respond to the asynchronous events in a power management sequence, so they may not reflect the effects of power management well. Second, current power specifications make it difficult to express a power partition inside a C model. Many CPU cores today shut down all but the cache, which is put in a low-Vdd standby state; such a partition or behavior cannot be easily reflected in the simulation model.

For example, imagine a control register in the CPU that is not appropriately reset after a power-down and wakeup. The C model will not display this behavior and will continue execution, whereas the RTL simulation will correctly stall based on the corrupted register.

Rule 5.2 — Behavioral models not covered by multi-voltage semantics must be modified accordingly to respond to power management events.

Recommendation 5.3 — Use RTL models for components inside the DUT for power management tests.


Cells like isolation elements, level shifters, power switches, and retention elements are not necessarily present in the RTL. Power intent standards, such as IEEE (P) 1801, allow users to specify these elements in side files. When the design undergoes simulation, appropriate semantics must be applied to ensure accuracy of results. While this in itself is not a problem, the verification process needs to account for differences between such semantics and the actual behavior of the technology library. This is especially pronounced in the case of retention cells. Equally troubling is the behavior of level shifters and power switches, which are essentially mixed-signal circuits. Isolation cells are usually not an issue; however, some power intent formats may not define resets on latch-based isolation (LKGS). Isolation cells with esoteric features such as multiple enable controls and test mode overrides may also need better modeling and coverage in RTL simulation.
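To make the applied semantics concrete, here is a minimal C++ sketch of the behavior a simulator typically imposes for an isolation cell and a retention register. The names are invented, and real power-intent semantics are richer (clamp polarity options, save/restore ordering checks, and so on):

```cpp
#include <cassert>
#include <optional>

// Isolation cell semantics: while iso_en is asserted, the output is
// clamped to a known value regardless of the (possibly corrupted) data
// arriving from the powered-down domain.
bool isolate(bool data, bool iso_en, bool clamp_value) {
    return iso_en ? clamp_value : data;
}

// Retention register semantics: save() captures the value before
// power-down, restore() reinstates it on power-up, and reading while
// the domain is off yields no value (mirroring X-corruption).
class RetentionReg {
public:
    void save(int q)          { shadow_ = q; }
    void power_down()         { live_.reset(); }  // main rail off: corrupt
    void restore()            { live_ = shadow_; }
    std::optional<int> read() const { return live_; }
private:
    std::optional<int> live_;
    int shadow_ = 0;
};
```

The verification task highlighted in the text is precisely to check that the library cells implement these idealized semantics, and to flag the cases (retention timing, mixed-signal switch behavior) where they do not.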

One of the most significant simulation models is that of the voltage regulator (VR), and it can be used equally effectively for power switches. Both are voltage sources that respond to control from the PMU. Figure 5-3 shows a generic voltage source model, along with the essential parameters listed below.

Note that both the master voltage source and the output voltage are real numbers, continuous in their variation. Depending on how the voltage source model is built and the languages used, the instantiation of this model varies. The key point is that the digital control needs to be sourced from logic that does not depend on the output voltage, whereas the digital status and power-on reset need to be referenced to the output voltage.

The parameters/inputs needed for this model are the following:

  • Conversion function from input voltage to output voltage

  • Trigger points in time and voltage for the digital status signals

  • Digital control interpretation and timing

  • Simulation step in terms of voltage and time, to produce a continuous variation effect

This modeling of voltage sources is quite convenient. It allows you to hook up a master source, such as a battery, while representing cascaded DC-DC regulators or power switches. It also allows the response, both digital and analog, to the digital controls from the PMUs to be integrated directly into coverage metrics.
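Such a voltage source can be sketched as a small behavioral model. The C++ sketch below is illustrative only; the class, parameter names, and numeric values are invented, and a real model would also carry timing annotations:

```cpp
#include <cassert>
#include <cmath>

// Generic voltage source model in the spirit of Figure 5-3: a master
// (input) voltage, a conversion function, a discrete ramp step, and a
// digital status flag referenced to the output voltage.
class VoltageSource {
public:
    VoltageSource(double step_v, double status_threshold_v)
        : step_(step_v), threshold_(status_threshold_v) {}

    // Digital control from the PMU: target the converted voltage or 0 V.
    void set_enable(bool on) { enabled_ = on; }

    // Conversion function from master voltage to target output voltage
    // (a fixed 1.8 V regulator in its valid range, as in the text).
    double convert(double vmaster) const {
        return (3.0 <= vmaster && vmaster <= 3.6) ? 1.8 : vmaster * 0.5;
    }

    // One simulation step: move Vout toward its target by at most step_,
    // giving the "continuous variation" effect on a discrete simulator.
    void step(double vmaster) {
        double target = enabled_ ? convert(vmaster) : 0.0;
        double delta  = target - vout_;
        if (std::fabs(delta) <= step_) vout_ = target;
        else vout_ += (delta > 0 ? step_ : -step_);
    }

    double vout() const       { return vout_; }
    // Status (e.g., power-good) asserts once Vout crosses the trigger.
    bool   power_good() const { return vout_ >= threshold_; }

private:
    double step_, threshold_;
    double vout_ = 0.0;
    bool   enabled_ = false;
};
```

Cascading is then just feeding one source's `vout()` into the next source's `step()` as its master voltage, which is how a battery feeding a DC-DC regulator feeding a power switch can be composed.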

The conversion function of the regulator is quite important. One might visualize the conversion function as something simple:

// voltage out function
if (0.5 <= Vmaster && Vmaster < 3.0)
    Vout = Vmaster * 0.5;
else if (3.0 <= Vmaster && Vmaster <= 3.6)
    Vout = 1.8;
else
    printf("Vmaster is out of range\n");
// simple function

However, more real-life complexity can be added:

  • Back-annotating delays in voltage development from actual parameters

  • Piecewise linear approximations of voltage development (or degradation) curves, especially post-layout

  • Injecting droops/spikes in response to specific stress conditions; e.g., the master voltage droops, say, 12% when a low-battery indicator is received, or Vout droops when a large domain is suddenly turned on

  • Injecting random variations to reflect actual voltage tolerances, such as producing a random Vout between 1.62 and 1.98 V for a 1.8 V supply with a specified 10% tolerance, as opposed to a constant 1.8 V.
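The last two items above can be sketched in a few lines. The numeric values come from the text's examples, but the function itself and its parameters are invented for illustration:

```cpp
#include <cassert>
#include <random>

// Draw Vout from a nominal value with a +/- tolerance, then apply a
// droop fraction when a stress condition (e.g., a large domain turning
// on, or a low-battery indicator) is active.
double vout_with_effects(double nominal, double tolerance,
                         bool stress, double droop_frac,
                         std::mt19937& rng) {
    std::uniform_real_distribution<double> dist(nominal * (1.0 - tolerance),
                                                nominal * (1.0 + tolerance));
    double v = dist(rng);                 // e.g., 1.62 V .. 1.98 V for 1.8 V
    if (stress) v *= (1.0 - droop_frac);  // e.g., a 12% droop under stress
    return v;
}
```

Randomizing the tolerance on every power-up, rather than using a constant nominal value, is what lets the same test catch corner-case behavior at the low and high ends of the supply specification.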

An interesting application of such voltage source models is to simulate their activation of non-volatile memory components such as configuration ROMs, laser-fused bits, or memory repair bits. Typically, these bits become valid as Vout is developing and are often sampled to wake up the chip or block in certain configurations. For example, a memory repair bit may activate one set of address decoder lines as opposed to another when programmed. The chip or block may wake up in a 16KB cache configuration instead of a 32KB configuration. Often these bits do not need special simulation models, unless the RTL instantiates special macros which then require a simulation model underneath. In most cases, it is sufficient to simulate the control system by appropriately activating the bits along the voltage development path (equivalent to re-initializing these bits in simulation) and verifying that the chip/block indeed lands in the desired configuration.

In the online examples provided with this book for downloading, readers can find source code for generic voltage source models. These are intended as a starting point to illustrate how closed-loop control of voltage sources is modeled.


Printed with permission from Jadcherla, et al., Verification Methodology Manual for Low Power (Synopsys, Inc.: Mountain View, CA). Copyright (c) 2009 by Synopsys, Inc., ARM Limited, and Renesas Technology Corp. All rights reserved.

Synopsys customers can download a free copy of the book from Synopsys; the companion Low-Power Methodology Manual is similarly available.

Next week: Coding Guidelines and Library Modeling for Low Power.
