Multicore server, PC, and embedded designs push memory power, drive use of advanced DDR3 SDRAMs

Systems designers try all sorts of methods to reduce system power consumption. For years, we’ve relied on circuit tricks and have been reducing logic supply levels from the 5V power supplies that were so common from the 1970s through the 1980s to the 1V levels we now employ with today’s advanced logic chips. Memory supply voltages have dropped as well. For example, the original DDR SDRAMs used a 2.5V supply voltage and DDR2 SDRAM employs a 1.8V supply voltage. That’s nearly double today’s SOC, processor, and microcontroller core voltages. The reason for this lag in supply-voltage reduction is that memory vendors prefer to stay in the economic sweet spot for IC lithography, whereas logic design prefers to stay on or near the bleeding edge. Consequently, memory’s share of a system’s power-consumption pie has been rising, and there really hasn’t been much attention paid to reducing memory power consumption. The advent of DDR3 SDRAM provides another opportunity to cut memory power through further reductions in memory supply voltage. Coupled with advanced process technology, Samsung has attained a supply voltage of 1.35V for its 40nm DDR3 SDRAMs. This drop in memory supply voltage can produce a 38% cut in server power consumption, according to Samsung.


Performance isn’t really the engine that drives DDR3 adoption. The real driver is bandwidth, and there are two design trends that force the quest for ever-increasing amounts of memory bandwidth. The first such design trend is the wholesale adoption of homogeneous and heterogeneous multicore architectures. As an industry, we’ve embraced multiple processor cores as the solution to the death of Dennard scaling. Most people attribute the increase in operating frequency and the decrease in per-transistor power consumption that accompany lithographic shrinks to Moore’s Law, which Gordon Moore codified in an article he published in 1965 while working at Fairchild Semiconductor, but that attribution is not factually correct. Moore simply predicted that the number of transistors on a chip would grow exponentially over time as lithographies shrank. It was IBM’s Robert Dennard who observed in 1974 that lithographic advances in IC manufacturing also consistently produced faster transistors that consumed less power. For decades, we’ve used Dennard scaling to produce faster and faster processors (while attributing the improvements to Moore’s Law).


The semiconductor industry has poured billions of dollars into keeping Moore’s Law alive, but Dennard scaling died at 90nm. We continue to get more transistors on a chip with each advance in IC lithographic scaling, but the transistors no longer get appreciably faster, so the MHz wars have ended. Worse, pushing transistors to their performance limit now produces leaky transistors that dissipate as much power when off as when on. We now recognize that the way to get more performance is to use the transistor bounty to increase the number of processors and to distribute the workload across those processors without striving for multi-GHz clock rates.


With all of these on-chip processors executing code and accessing data on a multicore chip, system designers must find a way to make large amounts of inexpensive memory available to those processors. For the last decade, the most cost-effective way to provide a system with large amounts of low-cost memory has been the SDRAM. The classic system design teams a multicore processor or SOC with one or more SDRAM channels. As memory bandwidth needs rise, the SDRAMs’ per-channel transfer rate and the number of SDRAM channels used have both increased. DDR transfer rates have now reached and exceeded 1600 Mtransfers/sec, and it’s not uncommon to find server processors with three SDRAM channels, for example. Because of this constant thirst for memory bandwidth, DDR3 SDRAM sales exceeded DDR2 SDRAM sales beginning with the first quarter of 2010, according to leading SDRAM vendor Samsung, and the company expects DDR2’s share of SDRAM market sales to drop below 20% by the end of the year.
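
As a rough illustration of how per-channel transfer rate and channel count translate into aggregate memory bandwidth, here’s a back-of-envelope sketch. The 1600 Mtransfers/sec rate and three channels come from the example above; the 64-bit channel width is an assumption based on standard DDR3 DIMM channels, not a figure from this article.

```python
# Back-of-envelope peak memory bandwidth for a multi-channel DDR3 system.
# Assumes a standard 64-bit (8-byte) data bus per channel; the three-channel
# configuration mirrors the server example mentioned above.

TRANSFERS_PER_SEC = 1600e6   # DDR3-1600: 1600 Mtransfers/s per channel
BYTES_PER_TRANSFER = 8       # 64-bit channel width (assumed)
CHANNELS = 3                 # e.g., a triple-channel server processor

per_channel_gbps = TRANSFERS_PER_SEC * BYTES_PER_TRANSFER / 1e9
total_gbps = per_channel_gbps * CHANNELS

print(f"Per-channel peak bandwidth: {per_channel_gbps:.1f} GB/s")  # 12.8 GB/s
print(f"Aggregate peak bandwidth:   {total_gbps:.1f} GB/s")        # 38.4 GB/s
```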


When you move that much data between a processor and memory, you’re likely to dissipate a considerable amount of power, and indeed, memory power consumption has been on the rise. Lowering memory power consumption can substantially lower system-level power consumption. For example, Samsung states that moving to 40nm, 2-Gbit DDR3 SDRAM with a 1.35V power supply can cut a server’s memory power consumption by 80% compared to the equivalent number of storage bits implemented with 60nm, 1-Gbit DDR2 SDRAMs running at 1.8V, and can even cut memory power consumption by 38% compared to equal-sized memory arrays consisting of 60nm, 1-Gbit DDR2 SDRAMs running at 1.5V.
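
How much of that saving comes from the lower supply voltage alone? Below is a first-order sanity check using the familiar approximation that dynamic power scales with the square of the supply voltage (capacitance and switching activity held constant). It deliberately ignores the 40nm process shrink, the doubled per-chip density, and DDR3’s interface improvements, which is presumably where the rest of Samsung’s quoted reductions come from; the helper function is just an illustrative sketch.

```python
# First-order estimate of the memory power savings attributable to the supply
# voltage drop alone, using the dynamic-power approximation P ~ V^2.
# This ignores process shrink, chip density, and interface differences, so it
# under-predicts the 80% and 38% figures quoted by Samsung.

def voltage_only_savings(v_old: float, v_new: float) -> float:
    """Fractional power reduction from the voltage drop alone (P ~ V^2)."""
    return 1.0 - (v_new / v_old) ** 2

print(f"1.8V DDR2 -> 1.35V DDR3: {voltage_only_savings(1.8, 1.35):.0%} from voltage alone")  # ~44%
print(f"1.5V DDR2 -> 1.35V DDR3: {voltage_only_savings(1.5, 1.35):.0%} from voltage alone")  # ~19%
```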

As a result, according to Samsung’s measurements, 40nm, 2-Gbit DDR3 SDRAMs running at 1.35V can cut power by an astonishing 38% at the system level for servers. To put that into economic perspective, says Samsung, the use of 1.35V DDR3 SDRAMs in a server can save 2564 kilowatt-hours per year. Samsung estimates that there will be 32 million servers operating in data centers worldwide by the end of this year. If they all were equipped with 1.35V DDR3 memory, the annual power consumption would be reduced by 82 terawatt-hours, worth an estimated $28 billion. That kind of money gets any data-center manager’s attention.
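
The fleet-level arithmetic checks out, as the quick calculation below shows. Note that the implied value of roughly $0.34 per kilowatt-hour is well above typical raw electricity rates, so Samsung’s dollar figure presumably folds in cooling and other data-center overhead; that’s an inference on my part, not something stated in Samsung’s numbers.

```python
# Reproducing the fleet-level savings arithmetic from Samsung's figures.

KWH_SAVED_PER_SERVER_PER_YEAR = 2564   # Samsung's per-server estimate
SERVERS_WORLDWIDE = 32e6               # Samsung's 2010 data-center estimate
CLAIMED_VALUE_USD = 28e9               # the quoted $28 billion

total_twh = KWH_SAVED_PER_SERVER_PER_YEAR * SERVERS_WORLDWIDE / 1e9  # 1 TWh = 1e9 kWh
implied_rate = CLAIMED_VALUE_USD / (total_twh * 1e9)                 # dollars per kWh

print(f"Total annual savings: {total_twh:.0f} TWh")    # ~82 TWh
print(f"Implied value per kWh: ${implied_rate:.2f}")   # ~$0.34
```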

The same sort of energy savings apply to any multicore system whether it’s a server, a PC, or an embedded system based on a heterogeneous multicore processor design.


4 Responses to Multicore server, PC, and embedded designs push memory power, drive use of advanced DDR3 SDRAMs

  1. ewertz says:

    So Steve, before I got to your last paragraph I was thinking (yeah, hard to believe already) that this probably means nothing to me with a PC.

    Wouldn’t Samsung’s quoted systems-level-savings numbers be predicated on having a machine (server) loaded with memory (I’m assuming ~128Gb), *and* that the machine is beating the daylights out of it?

    While it’s almost certainly the case that these are the best numbers for them to tout (but similarly applicable to a substantial part of their customer base), is it really the case that DDR3 is a slam-dunk for everyone else?

    All other things being equal, it seems like the vast majority of savings are only going to be realized when driving the bus full of passengers at high speed without stopping, you know, like Sandra Bullock/Dennis Hopper-style?

    Do my PCs and embedded systems really care (yet)? Unless I suspend-to-disk or whatever I have to do to let me turn my memory full-off, it doesn’t sound like I’m saving much as long as my SDRAM is simply powered and not beating the daylights out of it.

  2. sleibson321 says:

    Hey Eric, thanks for stopping by. Don’t know about you, but when I rest my hands on my laptop, it’s warm. When I look at the long list of background tasks running on my supposedly quiescent PC, say when I’m not typing into this comment box, I know the SDRAM is still getting banged. Samsung’s claims are made strictly for servers and server memory is either getting banged a lot of the time or else the servers are an underutilized or wasted capital asset. PCs and embedded systems running DDR memory can also stand to gain from lower memory supply voltage. Will they save terawatts? Maybe not. But they will save something. However, I don’t think even the power savings is the most important reason for creating new designs that use DDR3. Samsung’s prediction that we’ll exit 2010 with DDR2 memory representing less than 20% of SDRAM sales means that DDR3 memory (perhaps not 1.35V DDR3 but even 1.5V DDR3 dissipates less power than 1.8V DDR2) will be the cheapest on a per-bit basis. So any design that’s 6 months or more from production had better use DDR3 if it’s going to use a lot of RAM. Of course, for anyone running Linux on 90MHz Pentiums because that’s all that’s needed–well they don’t need to worry about DDR3. And no, I’m not referring to you in that last sentence.

  3. ewertz says:

    Phhht — my Linux Pentiums run *10 times* that fast! :-)

    Sure, DDR3 will naturally push out DDR2 on the store shelves — it’s always just been a matter of time. It’s just that I think that we’re still not seeing DDR3 interfaces in a lot of places that I’d expect to see them by now. DDR3 still seems to be looked at as merely an inevitability by everyone but the server market. The desktop market (the whaaaaaa?) seems not to be a driver, laptops sure, but I’m not sure I see it in much of anything else.

    I’m guessing that if it saved/gave us teraminutes of Facebooking-while-driving, we’d be all over it. But it seems like no one respects even a shred of the price premium, and that we’re just playing the margins until phase-bubble-nano-spin memory gets here for everything else.

    DDR3 uptake just seems to have been the slowest (or perhaps weakest) memory transition in, well… memory.

    I, for one, welcome our DDR3 overlords, when they eventually arrive in our pockets — more power to ‘em. Errrr… less, I mean.

    In the meantime, what’s good for Google, is what’s good for America. For Facebook, well…

    Thanks for the write-up.

  4. sleibson321 says:

    Eric,

    Thanks for the comments. Right now, DRAMexchange shows DDR2/DDR3 pricing essentially at parity, with one or the other costing more per bit on any given day. (See this blog on the Denali Memory Report: http://www.denali.com/wordpress/index.php/dmr/2010/04/06/ddr3-ddr2-price-crossover-reached) As DRAM vendors shift to DDR3 manufacturing, I expect the scales to tilt more fully in favor of DDR3 economics. I’ve written an as-yet unpublished White Paper that quotes a manager at a prominent PC vendor who says that his company will adopt DDR3 “as soon as it’s one cent cheaper” than DDR2. I’ve no reason to disbelieve that.
