Flexible Techniques for Low-Power 32/28nm Standard Cell and Memory Design
ARM Techcon – November 11, 2010 -- Low-power design encompasses an expanding set of loosely defined and widely deployed techniques for making electronic components more power efficient. Those techniques were the subject of a presentation by Wolfgang Heifricht, platform manager at ARM, during ARM TechCon this week.
In some cases, you’re trying to run the design at minimum power draw, leaving performance on the table. In other cases, you’re trying to run as fast as possible while minimizing the leakage and heat dissipation that speed creates.
Some low-power techniques are designed to address continuous operation of the processor, while others are designed to shut down some or all of the device altogether. Similarly, the low-power design techniques employed to maximize power efficiency in a processor will be fundamentally different from the techniques used on memories.
The Complexity Challenge
According to Heifricht, “An SoC can contain 1,000 different types of cells representing 150 functions or more, and all of these must be synthesized, placed and routed in a wide variety of EDA environments. Your goal is to achieve the highest density and highest performance for your digital implementation.”
One of the more interesting techniques he described involved varying the channel length as a means to control leakage. A key benefit of this technique is that it requires no additional mask layers. Heifricht said the technique has been validated on ARM Cortex IP at 38nm, 34nm and 30nm at two Vts for improved granularity.
“We can achieve large leakage improvements of up to 20 percent with very fine resolution,” said Heifricht, through judicious use of multiple cell types, including HVt, RVt and LVt.
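The trade-off he described can be illustrated with a toy cell-swapping pass: for each cell, pick the highest-threshold (lowest-leakage) variant whose extra delay still fits within the timing slack on that cell's path. This is only a sketch of the general multi-Vt technique, not ARM's actual flow; the delay penalties, leakage numbers and cell names below are made-up illustration values.

```python
# Hypothetical (delay penalty in ps, relative leakage) for each Vt variant.
VARIANTS = {
    "LVt": (0, 1.00),   # fastest, leakiest
    "RVt": (15, 0.40),
    "HVt": (35, 0.10),  # slowest, least leaky
}

def assign_vt(cells):
    """cells: list of (name, slack_ps). Returns {name: chosen variant}."""
    choice = {}
    for name, slack in cells:
        # Prefer the least-leaky variant whose delay penalty the slack absorbs.
        best = "LVt"
        for variant, (penalty, leak) in VARIANTS.items():
            if penalty <= slack and leak < VARIANTS[best][1]:
                best = variant
        choice[name] = best
    return choice

cells = [("u_alu", 5), ("u_dec", 20), ("u_ctl", 100)]
print(assign_vt(cells))
# → {'u_alu': 'LVt', 'u_dec': 'RVt', 'u_ctl': 'HVt'}
```

The critical cell keeps its fast LVt variant, while cells with timing margin are downgraded to RVt or HVt, trimming leakage without touching the critical path.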
To help designers achieve their goals quickly, ARM has created a power management kit and a leakage optimization retention kit for standard cells. This is critical as the cell density of digital designs increases, he said. “At 180 nanometers, you could expect to see 1,800 cells in a typical SoC design,” observed Heifricht. “At 28nm, that number goes up to 12,000 cells.”
Memories, on the other hand, have a completely different set of power design challenges.
“Every memory company has its own memory compilers and its own schemes for memory design,” said Heifricht. “ARM recognizes multiple optimized memory architectures.” As with digital low-power designs, power efficiency has to be built into the memory architecture right up front, he said.
“The best performance choice is a register file-based design,” said Heifricht. “You don’t want to do anything to compromise that performance. But you can save power by taking advantage of different modes, such as standby, nap, sleep retention and shutoff.” In addition, you can design the device so that portions of the periphery or the core can be shut off. However, he said, periphery power-up must be incremental to avoid a power surge.
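The mode set and the staged wake-up he described can be sketched as a small controller: the mode names come from the talk, but the class, bank count and one-bank-per-step schedule are illustrative assumptions, not an ARM interface.

```python
# Power modes from the talk, ordered from most awake to most powered-down.
MODES = ["active", "standby", "nap", "sleep_retention", "shutoff"]

class MemoryPowerCtrl:
    """Toy controller: gates periphery banks off in low-power modes and
    re-enables them incrementally on wake-up to cap inrush current."""

    def __init__(self, n_periphery_banks=4):
        self.mode = "active"
        self.n_banks = n_periphery_banks
        self.banks_on = n_periphery_banks

    def enter(self, mode):
        assert mode in MODES
        self.mode = mode
        if mode != "active":
            self.banks_on = 0  # periphery gated off while powered down

    def wake(self, banks_per_step=1):
        """Return the schedule of enabled-bank counts back to active."""
        schedule = []
        while self.banks_on < self.n_banks:
            step = min(banks_per_step, self.n_banks - self.banks_on)
            self.banks_on += step
            schedule.append(self.banks_on)
        self.mode = "active"
        return schedule

ctrl = MemoryPowerCtrl()
ctrl.enter("sleep_retention")
print(ctrl.wake())  # → [1, 2, 3, 4]: banks come up one step at a time
```

Turning banks on one per step spreads the wake-up current over several cycles instead of drawing it all at once, which is the surge-avoidance point Heifricht made.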
Also, he said, as memory density increases, the savings possible by tweaking periphery states become negligible. “Leakage dominates in large memories, so periphery gains are less noticeable,” he said.
Importantly, he noted that at very low power, memory devices become highly sensitive to Vdd reduction. Special features may be required to support safe operation of the bit cell and avoid data corruption. “If you are designing a device to operate at 0.9V +/- 10 percent, you’re in the range where this sensitivity occurs,” he said. “That’s when you need a write assist.”
The Low-Power Toolkit
The key takeaway, said Heifricht, is that there’s no shortage of techniques for achieving power efficiency.
“Cut down everywhere!” advises Heifricht. “We found good results by cutting down channel length, using power gating, back bias, flexible periphery design, low-voltage modes, low-power architectures and mixed-voltage designs. There’s a rich set of alternatives available.”