Jan Rabaey’s remarkable short course in Low-Power Design Essentials, Part 4

Note: This blog entry is the final installment of four covering Professor Jan Rabaey’s excellent short course in low power design given at the January 2012 meeting of the Santa Clara Valley Chapter of the IEEE Solid State Circuits Society.

Low-Power Design Essentials: Future Design Solutions.

“We are running out of options,” said Professor Jan Rabaey. “We have done the obvious.” That’s how Professor Rabaey opened the last part of his low-power design short course. However, we’re not at the end of the road yet. Not by a long shot. “There’s still a lot of waste in software stacks [for example],” explained Rabaey.

At this point in the process-technology curve, the minimum-energy set point is now set by leakage. “[Process] technology scaling is no longer helping.” In fact, said Rabaey, energy consumption might start back up the curve at 20nm. We can’t reduce supply voltages any more, he said, because we have to hold device thresholds where they are to halt [static] leakage increases.
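
To see why leakage now sets that floor, here’s a rough, purely illustrative sketch (a toy model, not something presented in the course): dynamic energy per cycle shrinks as CV² when you lower the supply, but the logic also slows down, so the always-on leakage current has longer to integrate each cycle. Sweep the supply in this model and you get the familiar U-shaped energy curve whose minimum is set by leakage. Every parameter value is an assumption chosen only to show the shape of the curve.

```python
import math

# Rough, purely illustrative model of the minimum-energy point: dynamic energy
# per cycle shrinks as C*Vdd^2, but lowering Vdd slows the logic, so the (fixed)
# leakage current integrates over a longer cycle and leakage energy grows.
# Every parameter below is an assumed, order-of-magnitude value.

C_GATE      = 1e-15    # switched capacitance per gate, farads
N_SWITCH    = 1_000    # gates switching in a typical cycle
N_TOTAL     = 100_000  # gates leaking all of the time
LOGIC_DEPTH = 20       # gate delays per clock cycle
I0, VTH, NVT = 1e-6, 0.30, 1.5 * 0.026   # simple exponential drive-current model

def energy_per_cycle(vdd):
    i_on    = I0 * math.exp((vdd - VTH) / NVT)   # drive current at this supply
    i_leak  = I0 * math.exp((0.0 - VTH) / NVT)   # off-state leakage current
    t_cycle = LOGIC_DEPTH * C_GATE * vdd / i_on  # cycle time stretches at low Vdd
    e_dyn   = N_SWITCH * C_GATE * vdd ** 2       # switching energy
    e_leak  = N_TOTAL * i_leak * vdd * t_cycle   # leakage integrated over the cycle
    return e_dyn + e_leak

for vdd in (0.9, 0.6, 0.4, 0.35, 0.30, 0.25, 0.20):
    print(f"Vdd = {vdd:.2f} V -> {energy_per_cycle(vdd):.2e} J per cycle")
```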

More problems loom on the horizon. The industry is starting to turn to FinFETs at 20nm for continued improvement. However, said Rabaey, capacitance between gates goes up with FinFETs and with the increased amount of interconnect wiring used for 20nm designs. (More gates = more wiring.) In addition, process variation and random effects are starting to dominate noise margins, and device scaling just makes the problem worse.

“Dennard Scaling died at 65nm,” said Rabaey, “and we are now in the era of power-limited [process] technology scaling.” In the most plausible scenario, continued Rabaey, circuit and system designers must now work hard to reduce system power and energy consumption because simple process scaling will no longer deliver the automatic benefits that it did during the golden age of scaling, when Dennard Scaling was alive. However, scaling isn’t completely used up. “It’s possible to operate gates at a mere 52mV,” said Rabaey, “and we are currently at 500mV.” With respect to energy, you need 10⁻²¹ J to operate a gate and we’re currently at 10⁻¹⁶ J. “Therefore, there is room for improvement.”
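
To put those figures in perspective, here’s a quick back-of-the-envelope calculation using the numbers Rabaey quoted. It assumes, purely for illustration, that switching energy scales as CV².

```python
# Back-of-the-envelope headroom estimate using the figures quoted above,
# assuming (purely for illustration) that switching energy scales as C*V^2.

V_NOW, V_LIMIT = 0.500, 0.052   # volts: today's supply vs. the quoted minimum
E_NOW, E_LIMIT = 1e-16, 1e-21   # joules per gate operation: today vs. the limit

voltage_headroom = V_NOW / V_LIMIT    # roughly 10x left on the supply voltage
energy_headroom  = E_NOW / E_LIMIT    # five orders of magnitude on energy
cv2_gain = (V_NOW / V_LIMIT) ** 2     # what voltage scaling alone would buy

print(f"Voltage headroom: {voltage_headroom:.1f}x")
print(f"Energy headroom:  {energy_headroom:.0e}x")
print(f"From C*V^2 voltage scaling alone: {cv2_gain:.0f}x")
```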

All is not lost, in other words. There are novel ways out of this bind through:

  • Novel transistor structures
  • Energy-proportional computing
  • Aggressive deployment
  • Ultra-low-power design

FinFETs and planar transistors on FDSOI (fully depleted silicon on insulator) substrates are the next step in our quest to develop transistors with lower leakage, but we will need devices with even less leakage in the future, said Professor Rabaey. The lack of doping in FinFETs and FDSOI transistors makes some of the device variability go away, but not all of it. Taking things several steps further, Rabaey said that you could get near-ideal components by cooling computers down to liquid helium temperatures, which might be OK for cloud computing but not especially practical for mobile devices.

“The most promising device out there right now for zero-power computation” is the TFET (tunnel FET), said Rabaey. The TFET employs charge tunneling between the FET’s source and channel. It therefore has very steep switching behavior.
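
For a sense of why a steep switching slope matters (textbook background, not part of the talk): a conventional MOSFET’s subthreshold swing cannot beat (kT/q)·ln(10), about 60mV per decade of current at room temperature, which puts a floor under the gate-voltage swing needed for a usable on/off ratio. A tunneling device can go below that limit. The 20mV/decade TFET figure in the sketch below is an assumed, illustrative value, not one quoted by Rabaey.

```python
import math

# Textbook background, not from the talk: a MOSFET's subthreshold swing cannot
# beat (kT/q)*ln(10), about 60 mV per decade of current at room temperature,
# which sets a floor on the gate-voltage swing needed for a usable on/off ratio.
# The 20 mV/decade TFET figure is an assumed, illustrative value.

k, q, T = 1.380649e-23, 1.602177e-19, 300.0
ss_limit_mv = (k * T / q) * math.log(10) * 1000.0   # mV per decade of current

def swing_needed_mv(ss_mv_per_decade, on_off_decades=4):
    """Gate-voltage swing needed to cover a given on/off current ratio."""
    return ss_mv_per_decade * on_off_decades

print(f"Room-temperature MOSFET limit: {ss_limit_mv:.0f} mV/decade")
print(f"Swing for 4 decades at 60 mV/dec (MOSFET): {swing_needed_mv(60):.0f} mV")
print(f"Swing for 4 decades at 20 mV/dec (TFET):   {swing_needed_mv(20):.0f} mV")
```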

Another possible candidate for a low-power switching device isn’t a transistor at all. It’s a MEMS relay. (Think of the Digital Light Processing micro-mirror chips from Texas Instruments.) Use of MEMS relays would usher in a new era of mechanical computing not seen since the 1940s. MEMS relays employ electrostatic switch activation, exhibit a very sharp on/off curve, and currently operate with 10ns switching times. They’re perfectly scalable, said Rabaey. Yes, they’re slow, but you can overcome that problem with massive concurrency, “and you get a 10x energy reduction with concurrency,” he added.
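
The concurrency argument can be sketched with the standard dynamic-energy model, where energy per operation goes roughly as CV²: several slower units running at a reduced supply can match the throughput of one fast unit at a fraction of the energy. The capacitance and voltages below are illustrative assumptions, not Rabaey’s numbers.

```python
# Minimal sketch of the concurrency trade: several slower units at a lower
# supply voltage can match one fast unit's throughput while spending much less
# energy per operation, since dynamic energy goes roughly as C*V^2.
# The capacitance and voltages are illustrative assumptions.

def energy_per_op(c_farads, vdd_volts):
    return c_farads * vdd_volts ** 2

C = 1e-15                          # switched capacitance per operation (assumed)
fast     = energy_per_op(C, 1.0)   # one unit at a nominal 1.0 V supply
parallel = energy_per_op(C, 0.45)  # four units at 0.45 V, each running ~4x slower

print(f"Energy/op, single fast unit:        {fast:.2e} J")
print(f"Energy/op, 4-way parallel at 0.45V: {parallel:.2e} J")
print(f"Reduction per operation: {fast / parallel:.1f}x at similar throughput")
```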

The second of Rabaey’s four novel approaches to reducing the power and energy requirements of computation is “energy-proportional computing.” Most systems operate with low loading and low activity, explained Rabaey. These systems should be designed to run optimally under these light computational loads—not at full load.

For example, he said, most data centers run at 30% to 50% loading. You optimize them for power and energy consumption by designing them to operate optimally at those loading conditions, not at 100%. In other words, said Rabaey, you should design these systems so that they “do nothing well”—fully shut down, and then go from there.
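
A toy power model makes the point. If server power is roughly P(load) = P_idle + (P_peak - P_idle)·load, then at 30% load the joules spent per unit of useful work are dominated by idle power unless the machine really does “do nothing well.” The wattages below are assumed, illustrative values, not figures from the talk.

```python
# Toy model of why "do nothing well" matters at 30-50% load. Server power is
# modeled as P(load) = P_idle + (P_peak - P_idle) * load; the wattages are
# illustrative assumptions, not figures from the talk.

def power_w(load, p_idle, p_peak):
    return p_idle + (p_peak - p_idle) * load

def energy_per_unit_work(load, p_idle, p_peak):
    # Useful work delivered scales with load, so joules per unit of work is P/load.
    return power_w(load, p_idle, p_peak) / load

P_PEAK = 300.0                # watts at 100% load (assumed)
for p_idle in (180.0, 30.0):  # conventional server vs. one that idles well
    e = energy_per_unit_work(0.3, p_idle, P_PEAK)
    print(f"Idle power {p_idle:5.1f} W -> {e:6.1f} J per unit of work at 30% load")
```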

“Attention-optimized systems” are designed with a top-down strategy starting with the application(s) and working down from there. “Don’t design cheetahs unless they’ll sleep a lot of the time,” advised Rabaey. Real cheetahs don’t dissipate maximum energy all of the time, he explained. Systems designed like cheetahs need to switch modes quickly and need to run optimally at every operational point.

“We need hugely scalable platforms,” he said and then gave an example. Bluetooth systems waste a lot of power keeping the wireless connection open. It would be better for them to wake up on interrupt and not poll all of the time.
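
The polling-versus-interrupt point is easy to quantify with a simple duty-cycle model: average power is d·P_active + (1 - d)·P_sleep, where d is the fraction of time the radio is awake. The figures below are illustrative assumptions, not measurements from the talk.

```python
# Rough sketch of polling vs. wake-on-interrupt: average radio power is
# d * P_active + (1 - d) * P_sleep, where d is the fraction of time awake.
# All of the numbers below are illustrative assumptions.

def avg_power_mw(awake_seconds_per_hour, p_active_mw=30.0, p_sleep_mw=0.01):
    d = awake_seconds_per_hour / 3600.0
    return d * p_active_mw + (1.0 - d) * p_sleep_mw

polling   = avg_power_mw(360.0)   # wakes briefly every few seconds to poll
interrupt = avg_power_mw(10.0)    # sleeps until an interrupt signals real work

print(f"Polling link:          {polling:.3f} mW average")
print(f"Interrupt-driven link: {interrupt:.3f} mW average")
print(f"Savings: {polling / interrupt:.0f}x")
```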

Then Rabaey talked about statistical computing, where the design trades robustness (design margins) for energy efficiency. Design margins, it turns out, are the big problem in energy-efficient design: small margins are good for power efficiency but not so good for reliability or accuracy. For example, if you reduce the supply voltage below a certain point, some critical paths become slow enough to cause timing errors in the circuit’s computational results.

However, that’s not always a bad thing. Some applications—audio and video processing, for example—can tolerate errors. In fact, audio- and video-compression algorithms intentionally introduce errors that “won’t be noticed” based on detailed models of the human audio and visual processing systems. In many other systems, we use lookup tables to speed computation by approximating the answers we need. The approximations are “good enough.” It’s possible to save energy by using approximations as substitutes for the correct answer when an error-detection mechanism determines that a computed answer is wrong. So clearly there are certain types of computation that can tolerate errors. System designers should always be asking the important question, “Do we really need digital accuracy here or can we save some energy?”
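
Here’s a minimal sketch of that idea, with made-up numbers: an “aggressive” exact unit occasionally produces a wrong answer (standing in for a timing error at low margins), a cheap checker flags it, and a lookup-table approximation is substituted as the good-enough result. The table size, error rate, and tolerance are all illustrative assumptions.

```python
import math, random

# Toy sketch of substituting an approximation when error detection flags a bad
# result (standing in for a timing error caused by aggressive voltage scaling).
# Table size, error rate, and tolerance are illustrative assumptions.

TABLE_SIZE = 64
SINE_TABLE = [math.sin(2 * math.pi * i / TABLE_SIZE) for i in range(TABLE_SIZE)]

def approx_sin(x):
    """Cheap lookup-table approximation: 'good enough' for audio/video-style uses."""
    index = int((x % (2 * math.pi)) / (2 * math.pi) * TABLE_SIZE) % TABLE_SIZE
    return SINE_TABLE[index]

def aggressive_sin(x, error_rate=0.05):
    """Exact unit run with thin margins: occasionally returns a corrupted value."""
    value = math.sin(x)
    if random.random() < error_rate:            # simulated timing error
        return value + random.choice((-1.0, 1.0))   # grossly wrong result
    return value

def checked_sin(x, tolerance=0.2):
    """Detect a wrong answer and substitute the approximation instead."""
    value = aggressive_sin(x)
    if abs(value - approx_sin(x)) > tolerance:  # error detector: compare to cheap estimate
        return approx_sin(x)                    # accept the "good enough" substitute
    return value

print(checked_sin(1.0), math.sin(1.0))
```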

The final novel approach covered by Professor Rabaey is ultra-low-voltage, sub-threshold design. A circuit’s optimal energy point is actually below its threshold voltage but there’s a huge performance penalty for operating at such low voltages. There’s also a large sensitivity to process variation at sub-threshold operating voltages. You can’t get GHz operation in the sub-threshold domain; performance drops exponentially here. However, you can run devices from the small amount of energy available from energy-harvesting mechanisms (heat, light, vibration, and ambient RF energy).

Researchers are currently running sub-threshold circuits at tens of MHz and you can get an order of magnitude worth of performance back by operating just above the device threshold voltage, at the expense of additional energy consumption. You can also exploit the low-power operation of sub-threshold logic by using asynchronous (self-timed) logic design where timing margins self-adjust to process variation.
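
A simple drive-current model shows the shape of that trade-off. In the subthreshold region, on-current falls off as exp((Vdd - Vth)/(n·kT/q)), so gate delay grows exponentially as the supply drops; moving back up to just above threshold recovers roughly an order of magnitude. The device parameters below are assumed values, and the exponential model is stretched slightly above threshold purely for illustration.

```python
import math

# Illustrative model of the exponential performance cliff below threshold:
# subthreshold drive current falls as exp((Vdd - Vth) / (n*kT/q)), and gate
# delay goes roughly as C*Vdd / I_on. Device parameters are assumed values,
# and the exponential is stretched a little above Vth purely for illustration.

VT_THERMAL = 0.026    # kT/q at room temperature, volts
N_SLOPE    = 1.5      # subthreshold slope factor (assumed)
VTH        = 0.35     # threshold voltage, volts (assumed)
C_LOAD     = 1e-15    # switched capacitance, farads (assumed)
I0         = 1e-6     # drive current at Vdd = Vth, amps (assumed)

def gate_delay(vdd):
    i_on = I0 * math.exp((vdd - VTH) / (N_SLOPE * VT_THERMAL))
    return C_LOAD * vdd / i_on

baseline = gate_delay(0.45)   # operating just above threshold
for vdd in (0.45, 0.35, 0.30, 0.25, 0.20):
    print(f"Vdd = {vdd:.2f} V -> {gate_delay(vdd) / baseline:7.1f}x the delay")
```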

In summary, said Rabaey, the future of low-power design is tied to:

  • A rethinking of low-power dogma
  • Run-time circuit optimization
  • Aggressive deployment of many low-power approaches
  • Realization that computational errors are not always fatal

In the end, we will need even more innovative design strategies to go further, concluded Professor Rabaey.

You can find Part 1 here, Part 2 here, and Part 3 here.
