Hands On: Evaluation Kit Eases Lighting Design Starts

Normally you order an evaluation kit to check out whether a particular microcontroller seems appropriate for a design you have in mind; if everything seems OK, you then order a more costly development kit to prototype your design. Cypress' CY3267 PowerPSoC Lighting Evaluation Kit manages to cross that line, enabling a quick out-of-the-box evaluation within a few minutes while including a full suite of tools, circuits, and programmable components for developing some sophisticated lighting control systems.

The CY3267 PowerPSoC kit includes a main board built around a CY8CLED04D PowerPSoC MCU in a floating load buck topology. The PSoC core drives four 1A internal MOSFETs that power a 10W 4-channel RGBA LED mounted on a separate daughter card sitting atop a large heatsink. A power supply, USB cable, LED diffuser, an assortment of jumpers, and a MiniProg programming connector complete the kit.

Within five minutes of opening the package I was able to connect the daughter card to the main board; connect the main board to my computer; power up both boards; and cycle through the different colors in the LED array using the two CapSense buttons. Five minutes later I had installed the Intelligent Lighting Control application included on the kit CD and could experiment with basic lighting control.

Figure 1: Intelligent Lighting Control GUI

The Intelligent Lighting Control application (Figure 1) works with the default firmware to demonstrate 4-channel color mixing. From the CIE Color Selection tab you can click on any point on the color gamut and watch the LED array output that color. You can set the intensity by moving the Requested Luminous Flux slider. You can also set the white intensity by moving the Color Temperature Control slider (up to 4000K).
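As a rough sketch of what the firmware does with those slider values, here is a toy 4-channel mixer that scales normalized RGBA levels and a global brightness factor into 8-bit PWM duty-cycle values. The function name and scaling are illustrative, not the Cypress API:

```python
def rgba_to_pwm(r, g, b, a, brightness=1.0):
    """Scale normalized RGBA channel levels (0.0-1.0) into 8-bit PWM
    duty-cycle values, applying a global brightness factor."""
    channels = (r, g, b, a)
    if not all(0.0 <= c <= 1.0 for c in channels):
        raise ValueError("channel levels must be in [0.0, 1.0]")
    return tuple(round(c * brightness * 255) for c in channels)

# Full red/green/blue plus half amber, at half overall brightness
print(rgba_to_pwm(1.0, 1.0, 1.0, 0.5, brightness=0.5))  # (128, 128, 128, 64)
```

Real color mixing maps a CIE gamut point into per-channel flux first, but the final step down to PWM duty cycles looks much like this.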

Clicking on the Direct LED Control tab you can move each of the four sliders to select the intensity of the red, green, blue, and amber LEDs. More…

Posted in Lighting, Microcontrollers, semiconductors | Tagged | Leave a comment

Storing Volts

While electric vehicles have been around since the late 19th century, they only became practical with the development of energy storage systems that sport a much better energy-to-weight ratio than bulky lead-acid batteries.

By the mid-90’s automakers had pretty much given up on being able to go very far on batteries alone, which led Toyota to introduce the Prius—the first commercial hybrid—in Japan in 1997. In EV mode the Prius is powered by a sealed 38-module 6.5 Ah/274V NiMH battery pack weighing 53.3 kg. That works out to 1.78 kWh total capacity. According to the EPA’s formula, one gallon of gasoline is equivalent to 33.7 kWh—almost 20x what the Prius’ battery alone can deliver. So it’s hardly surprising that the Prius relies primarily on its internal combustion engine for propulsion.
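The arithmetic behind those numbers is straightforward (figures taken from the text above):

```python
# Prius NiMH pack: 6.5 Ah at 274 V
pack_kwh = 6.5 * 274 / 1000   # Ah x V / 1000 = kWh
gasoline_kwh = 33.7           # EPA: energy in one gallon of gasoline

print(f"Pack capacity: {pack_kwh:.2f} kWh")                    # 1.78 kWh
print(f"Gallon-of-gas ratio: {gasoline_kwh / pack_kwh:.1f}x")  # ~18.9x
```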

The Chevrolet Volt features a much larger battery with considerably higher energy density than the Prius. The Volt uses a 16 kWh (197 kg) manganese spinel lithium-polymer prismatic battery pack, which alone can power the Volt for 35 miles (56 km). By weight, the Volt's lithium-ion battery has about 2.4x the energy density of the Prius' NiMH battery (0.081 vs. 0.033 kWh/kg). Considering that the volumetric energy density of lithium ion is only about 2x that of NiMH (140-300 Wh/liter for NiMH vs. 250-620 Wh/liter for lithium ion), that's well on the high side of what you would expect.
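Checking the gravimetric density claim from the pack figures quoted above (the ratio comes out just under 2.5x, depending on rounding):

```python
volt_kwh, volt_kg = 16.0, 197.0    # Volt Li-ion pack
prius_kwh, prius_kg = 1.78, 53.3   # Prius NiMH pack

volt_density = volt_kwh / volt_kg      # ~0.081 kWh/kg
prius_density = prius_kwh / prius_kg   # ~0.033 kWh/kg
print(f"{volt_density / prius_density:.1f}x")  # ~2.4x
```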

In addition to having a greater energy density than NiMH—in terms of both weight and volume—lithium-ion batteries also display a much lower self-discharge rate; a greater maximum number of charge/discharge cycles (i.e., they last longer); a more linear discharge rate, which enables more accurate prediction of remaining capacity; and they perform better at low temperatures.

As far as durability goes, both battery types are about the same: NiMH batteries can be discharged and recharged 500-1000 times, with Li-ion batteries being good for 400-1200 cycles. Since replacing an EV battery pack can be a very expensive proposition—currently about $8,000 for the Volt—manufacturers typically guarantee them for an extended period. GM guarantees the Volt’s battery bank for 100,000 miles or eight years.

Not Your Dad’s Li-Ion Battery

OK, assuming your Dad had Li-ion batteries, the ones in the Volt are better. The Volt's battery design is based on technology developed at Argonne National Laboratory. The Lab used x-ray absorption spectroscopy to study new cathode compositions. They came up with a manganese-rich cathode that resulted in a dramatic increase in the battery's energy storage capacity while at the same time making it less likely to overheat, and therefore safer and easier to maintain. To complete the trifecta, the new cathode material is also cheaper to manufacture.

Even if there isn’t much beyond Li-ion in terms of energy density—unless you’re comfortable with a thorium-based energy source—there’s still room for improvement. According to Khalil Amine, an Argonne senior materials scientist, “Based on our data, the next generation of batteries will last twice as long as current models.” Chances are your car would give out long before your battery does.

Recycling

When your Volt battery bank finally sends you an End of Life notice, what can you do with it? For one thing you could keep it and use it to help recharge your new Volt battery. Or you might rig it to an inverter bank as a backup source of electricity during power outages or at least peak billing times.

If GM gives you a credit for turning in your old battery on a new one, what can they do with it? The EPA claims that rechargeable batteries are not an environmental hazard if they’re not dumped in landfills; European governments aren’t quite so sanguine, since Li-ion isn’t exactly something you’d like to wind up in your water supply. Both the cathode and anode material can be recycled, which is what most jurisdictions require.

In the end the Volt’s energy storage system turns out to be as high-tech as the rest of the car. Considering how much more reliable electric motors are than internal combustion engines, Volt owners could wind up owning their cars for a very long time.

[This article is part of a series on the Chevy Volt for the UBM/Avnet series Drive for Innovation.]

Posted in Automotive, Batteries, Clean energy | 1 Comment

Get on the Drivetrain

There are a lot of reasons for thinking of buying a hybrid electric car—ecological, economic, political, and just getting cheesed off at seeing all those hybrids with one passenger whiz by you in the Diamond/HOV lane. Besides, admit it—the technology is cool. So just what is the technology inside the Chevrolet Volt?

You Want Gas with That?

There are two basic types of hybrid drivetrains: series and parallel. Series hybrids have a gas engine that turns a generator that charges a battery bank that powers an electric motor that powers the car; the engine is not connected to the drivetrain. The Chevy Volt—which GM refers to as "an extended range electric vehicle (EREV)"—is essentially a series hybrid, though with a twist that we'll describe in a moment.

In parallel hybrids both the electric motor and the gas engine are connected to the transmission through clutches that enable one or the other to power the vehicle.

Then of course there's the series/parallel hybrid. In this configuration the two power sources are joined in a planetary gear set that enables either the motor or the engine to power the vehicle, or to share the load as need be. Despite being primarily an electric vehicle, the Volt actually falls into this category. When you need rapid acceleration, the engine works in parallel with the motor until you let up on the accelerator. The gas engine also takes over from the motor when you exceed 70 mph. That's an appropriate place for the motor, which has its greatest torque at low rpm, to hand control over to the engine, which generates maximum torque at high rpm. Besides, at 80 mph you've ceased being an ecopurist and are just in a hurry.

Both the Volt and the Prius are essentially series/parallel hybrids. The main difference is that on the open road the Volt relies more on electrical power and the Prius more on its engine. The Volt as a result has a considerably larger battery bank: 16 kWh for the Volt vs. 5.2 kWh for the Prius. Not surprisingly the Prius has a larger gas engine: a 1.8 liter/98 hp engine vs. the Volt’s 1.4 liter/80 hp engine. OTOH the Volt’s 111 kW (149 hp) electric motor can generate 273 lb-ft of torque, considerably more than the Prius’ 80 hp, 153 lb-ft motor. You might think of the Prius as a gas/electric hybrid and the Volt as an electric/gas hybrid.
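The kW and hp figures cross-check easily with the standard conversion factor:

```python
def kw_to_hp(kw):
    """Convert kilowatts to mechanical horsepower (1 kW = 1.34102 hp)."""
    return kw * 1.34102

# The Volt's 111 kW motor, as rated in the text
print(round(kw_to_hp(111)))  # 149
```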

Looking at the table, the Volt has about the same power as my Mazda 3, though it gets >3x better gas mileage—and infinitely more for trips under 35 miles, where it’s purely electric. It’s also a lot quieter and more fun to drive.

Table 1

The Train is Leaving the Station

In late 2010 GM formally introduced the Voltec powertrain on which the Volt is based, though its roots go back to 2007. The basic design combines a small gas engine and a large electric motor that drives the vehicle, though they can work smoothly in tandem when it makes sense to do so. The large lithium-ion battery bank is designed to be recharged at home overnight—in 10 hours from a 110 VAC source or 4 hours from 220 VAC.
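Those charging times imply roughly the following average AC currents. This is a back-of-envelope sketch from the nameplate 16 kWh figure; it ignores charger efficiency and the fact that the pack is never cycled over its full capacity, so the real numbers are lower:

```python
def charge_current_amps(energy_kwh, hours, volts):
    """Average AC current needed to deliver energy_kwh in the given time,
    ignoring charger losses and end-of-charge taper."""
    return energy_kwh * 1000 / hours / volts

print(f"{charge_current_amps(16, 10, 110):.1f} A")  # ~14.5 A from 110 VAC
print(f"{charge_current_amps(16, 4, 220):.1f} A")   # ~18.2 A from 220 VAC
```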

The table shows the basic specifications for the 2011 Chevy Volt. GM has announced plans to use the Voltec powertrain in other cars, SUVs, and even trucks, bringing down the cost by using the platform across a much larger base of vehicles. Expect the Volt specs to scale for SUVs and trucks. Even Porsche is getting into the act, toying with the idea of an electric 911 (though not the Turbo GT2).

Maybe drivers won’t miss the roar of a big engine so much while they’re quietly zipping past yet another filling station advertising gas for $4/gallon.

Note: This article was first posted at http://www.driveforinnovation.com/get-on-the-drivetrain. Please check out the site if you’re at all into electric vehicles and follow Brian Fuller as he pilots the Chevy Volt across America–well, parts of it anyway. –JD

Posted in Automotive, Batteries, Clean energy | Leave a comment

SiliconBlue Rolls Out 40-nm Low-Power FPGAs

To date winning a cell phone socket has been a bridge too far for FPGA vendors. Xilinx’s CoolRunner CPLDs have been successful there by adding glue logic, but FPGAs have long been too bulky, expensive, and power hungry to get into anything smaller than a military manpack. Startup SiliconBlue intends to change that.

SiliconBlue Technologies has announced that it is sampling its Los Angeles family of low-power FPGAs – the LP series for smart phones and the HX series for tablets. The FPGA fabric routing in the LP series is optimized for low power and in the HX series for speed. Both product lines are based on TSMC’s 40-nm LP CMOS process and achieve, according to SiliconBlue CEO Kapil Shankar, static power of “tens of microwatts for LP and hundreds of microwatts for HX.”

SiliconBlue’s unique contribution is an SRAM-based FPGA fabric that, according to Shankar, “can operate from a 1.0V core and consume 50% less static power and over 50% less dynamic power than 1.8V ‘low-power’ PLD alternatives.” The Los Angeles family tops out at 16,192 logic cells (800K system gates), a good order of magnitude higher than CPLDs, opening up a far wider range of possible applications.

How Did They Do That?

Going to 40 nm certainly helps to reduce dynamic power, since you can drop the core voltage to 1.0V. On the other hand, quantum tunneling through very thin gate dielectrics increases leakage current, draining power from the SRAM cells that hold the configuration. SiliconBlue has introduced some 'secret sauce' CMOS process improvements and altered the gate geometries to minimize off-state leakage. Its iCE65L04 chip, with 3,520 logic cells or 2,700 equivalent macrocells, draws 26 µA in standby mode.

There are some other interesting tweaks to the usual SRAM FPGA fabric. Instead of constructing LUTs from N-channel transistors, SiliconBlue uses matching N- and P-channel transistors, effectively limiting leakage. The chips use a buffer-free interconnect, dispensing with the usual 4-6 buffers per interconnect. Finally, the routing fabric is designed for minimum leakage, not maximum speed.

Shankar told Low-Power Design that the chips have no static or full shutdown mode; instead, only the portions of the chip that are actually used are powered up, and the rest are shut down. Static power is measured at 0 Hz, that is, with the clock stopped.

SiliconBlue uses a 2T non-volatile SRAM memory—based on Kilopass’ XPM CMOS NVM process—that avoids the expense of embedded Flash or EEPROM. Traditional floating-gate memories such as EPROM, EEPROM, NOR and NAND Flash as well as SONOS store electrical charges near a transistor gate; at smaller geometries—helped by mobile ion contaminants—those charges can bleed off quickly. SiliconBlue’s Non-Volatile Configuration Memory (NVCM) “uses the controlled electrical change of transistor gate dielectric from insulator to conductor as the basis of the memory.” NVCM blocks are built on the same bulk CMOS die as the programmable fabric, reducing processing costs and die size while adding an ‘instant on’ capability to the chips. The company claims the NVCM blocks take up only 2-5% of the die area and draw 8 µA operating current.

Packaging also targets high-density PCBs. The smallest parts come in a 2.5 x 2.5 mm (0.4 mm pitch) micro plastic BGA package, made possible by using wafer-level chip-scale technology.

What’s a CMD?

You don't grab handset sockets selling FPGAs. QuickLogic, for example, doesn't make (OTP) FPGAs; it makes Customer Specific Standard Products (CSSPs), a sort of customizable ASSP. SiliconBlue, for its part, makes Custom Mobile Devices (CMDs). Its mobileFPGA chips are "ready-to-use devices that incorporate custom functionality as well as standard building blocks that are standard to handset applications." The entire chip is programmable, with SiliconBlue offering 50+ "mobileWARE customizable function blocks" to assist in custom designs. Basically there's nothing custom about Custom Mobile Devices until you customize them yourself or have SiliconBlue do it for you.

If all of this sounds like a marketing pitch, frankly it is. But with impressive power and density figures, coupled with a lot of cell-phone oriented IP, the company is trying to take their chips where no FPGA has gone before. They push the flexibility, time-to-market, and BOM cost reduction arguments, which are all legitimate; the FPGA camp has been making them since Day One, but they’ve only gained traction as power consumption declined and custom ASICs became a game only the big dogs could play.

Still, LA family devices have some clearly targeted uses. SiliconBlue wants its CMDs to be companion chips to existing mobile chipsets, targeting video and imaging, sensor management, memory management, and port expansion. MobileWARE IP blocks support a wide range of protocols useful on handsets, including SLIMbus, DBI, ECI, MIPI-DBI/DPI, WUXGA, DDR 133, SDIO 3.0, and USB 2.0. Considering the increasingly wide range of sensors found in cell phones, CMDs could find full employment interfacing them with the applications processor.

High-speed, high-definition video is another promising area for low-power FPGAs, whose massively parallel structure makes them a natural for an application where DSPs are starting to run out of steam. For imaging the iCE40 features flexible, cascaded BRAM and extra PLLs to support high-speed LVDS signaling. iCE40 CMDs can stream video at 525 Mbps, enabling HD720p (1280 x 720) at 60 Hz and HD1080p (1920 x 1080) at 30 Hz.
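Those streaming rates are easy to sanity-check against raw pixel bandwidth. In the sketch below, the 16-bit (RGB565) pixel depth and the lane math are my assumptions for illustration, not SiliconBlue's figures; depending on pixel depth, a given mode needs one or more 525 Mbps lanes:

```python
import math

def video_bandwidth_mbps(width, height, fps, bits_per_pixel):
    """Raw pixel bandwidth in Mbps, ignoring blanking intervals."""
    return width * height * fps * bits_per_pixel / 1e6

def lvds_lanes_needed(mbps, lane_rate_mbps=525):
    """Number of serial lanes needed at the given per-lane rate."""
    return math.ceil(mbps / lane_rate_mbps)

bw = video_bandwidth_mbps(1280, 720, 60, 16)  # HD720p60, assumed RGB565
print(f"{bw:.0f} Mbps -> {lvds_lanes_needed(bw)} lanes")  # 885 Mbps -> 2 lanes
```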

Despite having a low profile in the U.S., SiliconBlue has some major design wins in Asia. Shankar claims the company has shipped 7 million of their 65-nm devices to over 250 customers, including tier one customers like Samsung and Huawei. Their chips are found in 30-40 products to date, including smartphones, cameras, personal media devices, and e-books.

The iCE40LP8K and iCE40HX8K, the 8,000-logic-cell LP-Series and HX-Series devices, are available now, with the smallest package starting at $1.99 in high volume. The remaining members of the Los Angeles family are expected to be in full production by Q4 2011.

–John Donovan

Posted in semiconductors | Tagged , | Leave a comment

MEMS Sensors Still Center Stage

I recently wrote an article for EE Times titled Sensors: More Than MEMS reviewing the recent Sensors Expo in Chicago. That wasn’t meant to diss MEMS sensors, which are where the action still is as a number of sensor vendors reminded me recently. In addition to powering the Wii game console in my son’s bedroom, MEMS sensors find huge markets in consumer, automotive, networking, medical, and industrial applications. Sensors are already pervasive and on their way to being ubiquitous.

For starters take a look at your smart phone: it's likely to contain as many as a dozen sensors, including an accelerometer, magnetometer, gyroscope, camera, GPS/A-GPS, microphone, pressure sensor, light sensor, capacitive touchscreen, and temperature sensor. And coming soon: proximity sensors so your phone can turn off the screen when you hold it to your ear or wake up when you reach for it; and MEMS-powered pico-projectors for giving impromptu presentations or vacation recaps to a group of people. Not all of these sensors are MEMS based, but many of them are. According to IHS iSuppli the MEMS sensor content in mobile devices will increase fivefold in the next two years alone, with revenues for new CE and mobile MEMS devices reaching US$457.3 million in 2011.

There are numerous MEMS sensors in the average automobile, enabling airbags, electronic stability control, tire pressure monitoring, active suspension, and parking and braking systems. MEMS accelerometers are also a key part of forthcoming collision avoidance systems. Again according to iSuppli, the automotive market will consume over 700 million MEMS sensors in 2011.

In his recent Digi-Key/EE Times Sensors Virtual Conference keynote, Stéphane Gervais-Ducoret, Freescale’s global marketing director for sensors, detailed some of the interesting applications that suddenly become possible when you bring enough sensors to bear.

Gervais-Ducoret's key point is that when you integrate enough different types of sensors into a device you achieve context awareness. Your cell phone might be made aware of all the relevant elements in your immediate environment, including your exact location, elevation, relative motion and direction; the temperature, humidity and noise level; your activity patterns; and your connectivity options.

Sensors can enable a wide range of localized, highly targeted services. By being able to locate you with a high degree of accuracy (thanks perhaps to a combination of GPS, cell tower, and Wi-Fi triangulation), suddenly very detailed location-based services (LBS) become possible. For example, you might be on the second floor of a shopping mall and want to find a music store. Instead of searching for a directory at the far end of a long hall, you query your phone, which pops up a mall map and walks you to the store. Or perhaps the store puts out an ad to the phones of just those people in the mall (or even just those near the store) advertising its sale of ZZ Top posters (you pass on that one). Of course an application could quickly look you up on Facebook, determine that you were a big Rolling Stones fan, and pitch you on that poster instead. That's when these things may be getting a bit too intrusive.
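The decision logic in that scenario is simple enough to sketch. Everything here, from the function name to the distance threshold, is invented for illustration and doesn't correspond to any real LBS API:

```python
def pick_offer(user_interests, store_offers, distance_m, max_distance_m=50):
    """Toy context-aware ad filter: only surface an offer if the shopper
    is near the store AND the item matches a known interest."""
    if distance_m > max_distance_m:
        return None  # shopper too far away; stay quiet
    matches = [o for o in store_offers if o["artist"] in user_interests]
    return matches[0] if matches else None

offers = [{"artist": "ZZ Top", "item": "poster"},
          {"artist": "Rolling Stones", "item": "poster"}]

print(pick_offer({"Rolling Stones"}, offers, distance_m=20))
# {'artist': 'Rolling Stones', 'item': 'poster'}
```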

Gervais-Ducoret's larger point is that adding intelligence to MEMS devices opens up a lot of possibilities for devices and services that weren't previously available. We used the example of context awareness in a shopping mall, but context awareness is critical in remote patient monitoring, another market where smart embedded MEMS devices are having a dramatic impact.

With silicon vendors now offering a seemingly endless stream of high-precision converged sensor devices (so-called sensor fusion), if you're designing an embedded system that needs to be aware of its environment, you're now limited not by the silicon but by your imagination.

Think in other categories.

Posted in Sensors | Tagged , | Leave a comment

More than MEMS

The fact that I spend too much time focusing on consumer electronics was brought home to me vividly last week by a visit to the Sensors Expo 2011 in Chicago. Far from the niche show that I expected, it was swamped by over 4,000 attendees checking out 140 exhibiting companies, making navigating the aisles a good application for GPS, LIDAR, a 3-axis accelerometer and a collision avoidance system.

While the bulk of the $9.7B U.S. sensors market is MEMS-based accelerometers—the not-so-secret sauce empowering the 34 million Wii game consoles sold to date—there were plenty of other sensor technologies on display, including proximity, light, piezo-electric, thermal, pressure, touch, gas, chemical, IR and probably more that I missed. The applications consisted of a wide range of consumer, industrial, medical, environmental and security devices, all of which relied on sensor data for input. If you can’t measure it, you can’t control it—the problem this show addressed.

On with the Show

Instead of rolling out individual products, ROHM Semiconductor chose to showcase a number of them at once with its Sensor Race Track, which featured a model Hummer circling a track populated with nine different sensors: 3-axis accelerometers, an ambient light sensor, a UV sensor, a Hall effect sensor, an optical proximity sensor, and an inclinometer. All of these inputs fed into a sensor hub and then to a wireless networking module, which in turn presented the data in real time on a large screen.

Digi International used Google Earth to demonstrate its “cloud-based wireless sensor network,” which enables centralized monitoring and control of disparate resources worldwide—from rotating solar panels to tracking trucks to monitoring vending machines—all using wireless sensors nodes connected to the internet.

Some companies such as ROHM, Epson, MEDER and many others displayed numerous individual sensors; others showed products that could integrate data from different sensors—so called sensor fusion. STMicro highlighted its iNEMO inertial measurement unit (IMU) devices, which combine data from various motion sensors with magnetic (compass), barometric/altitude and GPS data to enable location-based services. ST stressed the low-power angle, a theme echoed by TI, Maxim, Microchip, Linear Tech, Analog Devices and most other vendors. The chip companies, by and large, focused on managing the power going to and the data coming from remote sensor devices.

Energy Harvesting

A number of companies focused on extending the useful life of remote sensor nodes by using energy scavenging techniques. Cymbet uses a combination of tiny solar panels backed up by their proprietary thin-film batteries to supplement coin cells in wireless sensor nodes; Microchip and TI, among others, rely on Cymbet’s board to power their energy scavenging kits.

Powercast pulses RF from a central source to top up power in and gather data from remote sensor nodes. The Powercast P2110 receiver is an RF energy harvesting device that converts RF to DC and stores it in a capacitor. The Powercast transmitter can power an array of battery-free receivers throughout a building for industrial monitoring, HVAC and smart-grid applications—all of which resembles a wide-area active RFID system.
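To get a feel for the power levels involved in RF harvesting, a free-space Friis estimate is a reasonable first cut. The transmit power, frequency, and distance below are illustrative assumptions, not Powercast specifications, and real indoor propagation (multipath, obstructions) will differ:

```python
import math

def received_power_w(pt_w, gt, gr, freq_hz, distance_m):
    """Friis free-space estimate of RF power at the harvester's antenna."""
    wavelength = 3e8 / freq_hz
    return pt_w * gt * gr * (wavelength / (4 * math.pi * distance_m)) ** 2

# Hypothetical 3 W EIRP transmitter at 915 MHz, unity-gain receive antenna, 5 m away
pr = received_power_w(3.0, 1.0, 1.0, 915e6, 5.0)
print(f"{pr * 1e6:.0f} uW")  # ~82 uW, before rectifier losses
```

Tens of microwatts is plenty to trickle-charge a capacitor between sensor readings, which is exactly the regime these systems operate in.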

Nextreme’s miniature, embedded thermoelectric generators (eTEG) are essentially thin-film thermocouples that fit between a heat source (MCU, PA, etc.) and its heatsink. Converting temperature differences of as little as 5°C into electrical power, the eTEG is designed for powering gas sensors; trickle charging wireless sensors in dark or remote places; and improving fuel efficiency in automobiles.

Posted in Energy scavenging, Sensors, trade shows | Tagged , | Leave a comment

How Green Is Your MCU?

With energy-efficient, 'green' designs being all the rage, embedded developers need to be asking semiconductor vendors, "How green is your MCU?" (OK, so it's black. Work with me here.)

Ever since Intel hit the Power Wall in 2004, when the Pentium 4 drew 150W and approached 1000 pins, low-power design has come into its own. Over the past decade smart engineers have come up with a seemingly endless number of innovative tricks to stave off the frequently predicted death of Moore's law, which was supposed to happen first at 90 nm, then 65 nm, then 40 nm, etc. Still, when gate doping variations of a few atoms can cause a transistor to fail, the laws of physics are finally asserting themselves. As one wit observed recently about Moore's law, the party isn't over but the police have arrived and the volume has been turned way down.

On one level better process technologies have gone a long way toward enabling low-power design. Smaller geometries enable lower voltage cores, which helps exponentially on the power front. Strained silicon, silicon-on-insulator, high-K metal gates and other clever process innovations have all enabled the continuing push to smaller geometries and more energy efficient designs.

On the system level, design engineers have developed a long succession of power management techniques. Modern MCUs typically rely on power gating, clock gating, and more recently dynamic (even adaptive) voltage and frequency scaling to minimize power consumption in both active and inactive modes. With the number of sleep modes and voltage islands proliferating, fine-grained power management becomes so complex that most CPUs now rely on separate power management ICs (PMICs). Since MCUs are more self-contained, much of the power management burden is shifted from the embedded developer back to the chip designer.

Low Power –> Ultra-Low Power

Process tricks aside, the 'race to the bottom' (in terms of power) between MCU vendors is getting heated. With the numbers they're hitting, it's hard to deny that the newest MCUs are indeed 'ultra-low power'.

TI promotes its 16-bit RISC ‘ultra-low power’ MSP430 line in a wide range of applications, including a wireless sensor circuit that can operate from a single coin cell for up to five years (thanks in part to a very short duty cycle). The MSP430C1101—with 1kB of ROM, 128B RAM, and an analog comparator—draws 160 µA at 1 MHz/2.2V in active mode, 0.7 µA in standby mode, and 0.1 µA in off mode. This week TI announced its Grace software platform, a free plug-in for Code Composer Studio that provides a detailed graphical user interface to simplify low-level programming of MSP430 MCUs.
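The five-year coin-cell claim is easy to sanity-check. Assuming a typical 220 mAh CR2032-class cell (my assumption, not TI's figure) and the MSP430C1101 currents quoted above, you can back out the duty cycle the application must hold:

```python
def max_duty_cycle(budget_ua, active_ua, sleep_ua):
    """Largest fraction of time the MCU can spend active while keeping
    average current within the budget."""
    return (budget_ua - sleep_ua) / (active_ua - sleep_ua)

cell_mah, years = 220, 5
budget_ua = cell_mah * 1000 / (years * 365 * 24)  # average current budget
d = max_duty_cycle(budget_ua, active_ua=160, sleep_ua=0.7)
print(f"budget {budget_ua:.1f} uA -> max duty cycle {d * 100:.1f}%")
# budget 5.0 uA -> max duty cycle 2.7%
```

In other words, the MCU can be awake less than 3% of the time, which is why TI notes the "very short duty cycle" caveat.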

Microchip's answer to the MSP430 is its eXtreme Low Power PIC Microcontrollers with XLP Technology. XLP processors include 16 to 40 MIPS PIC24 MCU and dsPIC DSC families with up to 256 KB of memory and a variety of I/O options. On its web site Microchip emphasizes how little power its devices draw in deep sleep mode, comparing the PIC24F16KA102 favorably to the MSP430F2252 in LPM3 at 3V. Comparing power in active modes is considerably more complex, being highly application dependent. That's what evaluation kits are for.

Silicon Labs claims that its C8051F9xx ultra-low-power product family includes "the most power-efficient MCUs in the industry," with both the lowest active and sleep mode power consumption (160 µA/MHz and 50 nA for the C8051F90x-91x) compared to "competitive devices." Comparing data sheets is often an exercise in apples and oranges, but the numbers do justify the impression that 'ultra-low power' is a lot more than marketing hype.

NXP is definitely into green MCUs with its GreenChip ICs that "improve energy efficiency and reduce carbon emissions." NXP's recently announced LPC11U00—being a Cortex-M0-based MCU—is decidedly low power, but this one focuses more on connectivity, incorporating a USB 2.0 controller, two synchronous serial port (SSP) interfaces, I2C, a USART, a smart card interface, and up to 40 GPIO pins.

STMicroelectronics features 8- and 32-bit families of ultra-low-power MCUs, apparently skipping over the 16-bit migration path that Microchip needed to fill. The 8-bit STM8L15xx CISC devices can run up to 16 MIPS at 16 MHz but still only draw 200 µA/MHz in active mode and 5.9 µA down to 400 nA in various sleep modes. Like NXP, ST is into connectivity, including a wide range of options on different devices.

Connectivity and flexibility are the main selling points for Cypress' programmable system-on-chip, or PSoC. PSoC 5 is based on a 32-bit Cortex-M3 core running up to 80 MHz. Incorporating a programmable, PLD-based logic fabric, the CY8C54 PSoC family can handle dozens of different data acquisition channels and analog inputs on every GPIO pin. The chip draws 2 mA in active mode at 6 MHz, 2 µA in sleep mode (with RTC) and 330 nA in hibernate with RAM retention.

Grill the Gurus

If after reading all the datasheets you still have questions, this Thursday you can 'grill the gurus' online in real time as EE Times presents the Digi-Key Microcontroller Virtual Conference: New Directions in MCU Designs, from 11-6 EDT. From 11:15-12:15 EDT I'll be moderating the panel "Low-Power Design—Keeping Hot Designs Cool," and questions from the audience are encouraged.

From 12:30-1:30 EDT Scott Roller, Vice President and General Manager, Microcontrollers at Texas Instruments will deliver the keynote, “What Will Make The Biggest Impact: Low Power? Connectivity? Simplicity? Yes.” TI sees the market for embedded MCUs exploding over the next several years, and it’s working on some interesting innovations that should open up new markets for developers.

Throughout the day there will be series of panels, webcasts, chats and exhibits at (virtual) pavilions of interest to the embedded design community. Click here to check it out. I hope to see you there.

Posted in Clean energy, Microcontrollers, Power management, trade shows | Tagged , , | Leave a comment

MEMS Motion Sensors: The Technology Behind the Technology

MEMS accelerometers and gyroscopes are all the rage in portable design, putting the ‘smarts’ in smart phones and a new level of fun in gaming consoles. But exactly what are they, how are they made and how do they work?

By John Donovan, Low-Power Design

Despite the fact that MEMS accelerometers have been built into automotive airbags since the mid-90s, few people were aware of their existence until 2006 when the Nintendo Wii game consoles started taking over their living rooms. MEMS motion sensors are now widely used in automotive electronics, medical equipment, hard disk drives, and portable consumer electronics. Today a smart phone can hardly be called ‘smart’ if it doesn’t include a MEMS accelerometer, gyroscope and possibly a compass, too. A small niche product five years ago, MEMS sensors now constitute a multi-billion dollar industry.

So what exactly are MEMS motion sensors and how do they work?

MEMS Motion Sensors

Form follows function, and there are several different types of MEMS motion sensors, each with unique construction and best suited to a particular range of applications.

Accelerometers

Single-axis accelerometers (Figure 1) detect a change in velocity in a given direction. They are almost universally used to inflate automotive airbags in the event of crashes. They are also used as vibration sensors to detect bearing wear in machinery, since vibration can be thought of as acceleration and deceleration happening quickly in a periodic manner. Analog Devices, Freescale and Bosch Sensortec all make single-axis MEMS accelerometers that are widely used in these applications.

Figure 1: Analog Devices ADXL150 single-axis accelerometer

Two-axis accelerometers add a second dimension, which can be as simple as detecting tilt by measuring the effect of gravity on the accelerometer’s X and Y axes. Accelerometers come in low-g and high-g sensing ranges, where low-g typically means less than 20 times the force of gravity and high-g can range as high as 100 g. Low-g MEMS accelerometers are used in handheld devices; high-g ones find a place in industrial, military and aerospace applications, where the g-forces are well in excess of what humans could either generate or withstand.

Three-axis accelerometers can detect motion in three different directions. They are widely used in mobile devices to implement tap, shake and orientation detection, each of which can trigger a different action on the part of a cell phone.

Figure 2: Freescale MMA7660FC 3-axis accelerometer block diagram

The Freescale MMA7660FC 3-axis accelerometer (Figure 2) targets handsets by incorporating a range of user-programmable interrupts and sample rates in a small footprint (3 x 3 x 0.9 mm) DFN package. The MMA7660FC communicates 6-bit X-, Y- and Z-axis information to the processor over an I2C interface, eliminating the need for an A/D converter. The device draws as little as 47 µA in active mode at one sample per second; 2 µA in standby mode; and 0.4 µA in off mode.
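As a concrete illustration of what a 6-bit output format implies for firmware, here is a minimal sketch of converting a raw sample to g. The ±1.5 g full-scale range and two’s-complement encoding are assumptions based on this class of device rather than values stated in the article; check the datasheet before reusing the scale factor.

```python
# Sketch: converting a 6-bit two's-complement accelerometer sample
# to g. Assumes a +/-1.5 g range spread over 64 counts, giving
# roughly 21.33 counts per g -- an illustrative figure, not a
# verified datasheet value.

COUNTS_PER_G = 64 / 3.0  # 64 counts across a 3 g (+/-1.5 g) span

def raw_to_g(raw6: int) -> float:
    """Convert a 6-bit two's-complement sample (0..63) to acceleration in g."""
    if not 0 <= raw6 <= 0x3F:
        raise ValueError("expected a 6-bit value")
    signed = raw6 - 64 if raw6 & 0x20 else raw6  # sign-extend bit 5
    return signed / COUNTS_PER_G

# Example: 0x15 (21 counts) is close to +1 g; 0x2B (-21 counts) close to -1 g.
print(raw_to_g(0x15), raw_to_g(0x2B))
```

With only 64 steps across the full range, resolution is coarse (about 0.047 g per count), which is fine for orientation and gesture detection but not for precision measurement.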

Gyroscopes

Accelerometers measure linear motion, so they’re found in applications that measure acceleration, vibration, shock, and tilt. Gyroscopes, on the other hand, respond to rotation, measuring angular rate rather than linear acceleration. Multi-axis MEMS gyroscopes are often embedded along with three-axis accelerometers in inertial measurement units (IMUs).
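The complementary strengths of the two sensors are often combined in software. The sketch below shows a simple complementary filter, one common fusion technique (by no means the only one): the integrated gyro rate tracks fast rotation but drifts, while the accelerometer’s gravity-derived tilt angle is slow and noisy but drift-free. The sample interval, blend coefficient and axis conventions are all illustrative assumptions.

```python
import math

def complementary_filter(angle_deg, gyro_dps, accel_x_g, accel_z_g,
                         dt=0.01, alpha=0.98):
    """One update step: blend the integrated gyro rate with the
    accelerometer's gravity-derived tilt angle."""
    gyro_angle = angle_deg + gyro_dps * dt                        # fast, but drifts
    accel_angle = math.degrees(math.atan2(accel_x_g, accel_z_g))  # slow, noisy
    return alpha * gyro_angle + (1 - alpha) * accel_angle

# Held still (no rotation, gravity on the Z axis), an initial estimate
# of 10 degrees decays toward the accelerometer's 0-degree reading:
angle = 10.0
for _ in range(200):
    angle = complementary_filter(angle, gyro_dps=0.0,
                                 accel_x_g=0.0, accel_z_g=1.0)
print(round(angle, 3))
```

Real IMU fusion code typically uses a Kalman filter and handles all three axes, but the division of labor between the sensors is the same.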

To read the full article, click here.

Posted in MEMS

Power Management in USB 3.0

USB has become the most successful PC peripheral interconnect ever defined, with over 10 billion USB 2.0 products installed today. Still, despite its convenience, USB has never been either the fastest or the lowest-power interconnect protocol out there. USB 3.0 seriously attempts to address both of those problems.

Facing competition from other high-speed interconnect protocols like 400- and 800-Mbps IEEE-1394 (FireWire) and HDMI—both of which target high-data-rate video streaming—in 2008 the USB Implementers Forum (USB-IF) formalized the specification for USB 3.0, which promises a “SuperSpeed” data rate of 5 Gb/s, a 10x improvement over USB 2.0, while at the same time reducing power consumption.

How can they do that, you ask?

For starters, by eliminating polling. A USB 2.0 host continuously polls all peripheral devices to see if they have data to send to the host controller. All devices must therefore be on at all times, which not only wastes power but adds unnecessary traffic to the bus. In USB 3.0 polling is replaced by asynchronous notification. The host waits until an application tells it that there is a peripheral with data it needs to send to the host. The host then contacts that peripheral and requests that it send the data. When both are ready, the data is transferred.

USB 2.0 is inherently a broadcast protocol. USB 3.0 uses directed data transfer to and from the host and only the target peripheral. Only that peripheral turns on its transceiver, while others on the bus remain in powered-down mode. This results in less bus traffic and a considerably lower power profile.

SuperSpeed USB enables considerable power savings by enabling both upstream and downstream ports to initiate lower power states on the link. In addition multiple link power states are defined, enabling local power management control and therefore improved power usage efficiency. Eliminating polling and broadcasting also went a long way toward reducing power requirements. Finally, the increased speed and efficiency of USB 3.0 bus – combined with the ability to use data streaming for bulk transfers – further reduces the power profile of these devices. Typically the faster a data transfer completes, the faster system components can return to a low-power state. The USB-IF estimates the system power necessary to complete a 20 MB SuperSpeed data transfer will be 25% lower than is possible using USB 2.0.

The SuperSpeed specification brings over Link Power Management (LPM) from USB 2.0. LPM was first introduced in the Enhanced Host Controller Interface (EHCI) to accommodate high-speed PCI-based USB interfaces. Because of the difficulty of implementing it, LPM was slow to appear in USB 2.0 devices. It’s now required in USB 3.0 and for SuperSpeed devices supporting legacy high-speed peripherals. LPM is an adaptive power management model that uses link-state awareness to reduce power usage.

LPM defines a fast host transition from an enabled state to L1 Sleep (~10 µs) or L2 Suspend (after 3 ms of inactivity). Return from L1 sleep varies from ~70 µs to 1 ms; return from L2 Suspend mode is OS dependent. The fast transitions and close control of power at the link level enables LPM to manage power consumption in SuperSpeed systems with greater precision than was previously possible.

Link Power Management

Link power management enables a link to be placed into a lower power state when the link partners are idle. The longer a pair of link partners remain idle, the deeper the power savings that can be achieved by progressing from UO (link active) to Ul (link standby with fast exit), to U2 (link standby with slower exit), and finally to U3 (suspend). The table below summarizes the logical link states.

Link State Description Key Characteristics Device Clock Exit Latency
U0 Link active   On N/A
U1 Link idle, fast exit RX & TX quiesced On or off µs
U2 Link idle, slow exit Clock gen circuit also quiesced On or off µs-ms
U3 Suspend Portions of device power removed Off ms

Most SuperSpeed devices, sensing inactivity on the link, will automatically reduce power to the PHY and transition from U0 to U1. Further inactivity will cause these devices to progressively lower power. The host or devices may then further idle the link (U2), or the host may even suspend it (U3).

Both devices and downstream ports can initiate Ul and U2 entry. Downstream ports have inactivity timers used to initiate Ul and U2 entry. Downstream port inactivity timeouts are programmed by system software. Devices may have additional information available that they can use to decide to initiate Ul or U2 entry more aggressively than inactivity timers. Devices can save significant power by initiating Ul or U2 more aggressively rather than waiting for downstream port inactivity timeouts.

Backward Compatibility

While the advantages of SuperSpeed USB are impressive, these devices are just beginning to appear in a world dominated by USB 2.0. For backward compatibility SuperSpeed devices must support both USB 2.0 and 3.0 link speeds, maintaining separate controllers and PHYs for full-speed, high-speed and SuperSpeed links. By maintaining a parallel system to support legacy devices, SuperSpeed’s designers accepted higher cost and complexity as a price worth paying to avoid compromising the speed advantage of their new architecture.

Posted in Energy Efficiency, Power management | Tagged | Leave a comment

How Will Consumers Benefit from the Smart Grid?

smart_gridGeorge Arnold from NIST gave a talk with that title over lunch today at the Smart Energy Summit in Austin. The benefits of the Smart Grid may be obvious at NIST but they’re a lot less so to consumers. In fact as Arnold noted wryly at the beginning, “Neither consumers nor utilities are quite sure why we’re doing this.” As the National Coordinator for Smart Grid Interoperability at the National Institute of Standards and Technology (NIST), it’s Arnold’s job to get the word out, which this talk definitely did.

The basic structure of the electric grid today is not much different than it was 100 years ago, other than the fact that it supplies AC rather than DC. Reasons for modernizing the grid include reducing costs; using more renewables; improving reliability; and supporting electric vehicle recharging.

The arguments on the cost side are compelling. Half of all U.S. coal plants are over 40 years old, and the cost of upgrading or replacing them is estimated at $560 billion by the year 2030. Smart Grid technology can reduce both peak and average electrical usage, reducing the required investment. There’s also considerable leeway for conservation. In the United States per capita annual electricity usage is 13,000 kWh. In Japan the per capita usage is 7900 kWh. By providing feedback to consumers on their usage patterns and enabling them to shift loads to nonpeak – and therefore lower cost – hours, smart grids provide the feedback loop to consumers both enabling and incentivizing them to conserve electricity.

The US has nowhere to go but up in the use of renewable energy sources. The vast majority of our electricity comes from coal-fired power plants. According to the Department of Energy renewables account for only 8.4% of US electrical generation, with Hydro contributing 5.95%, wind only 0.83% and solar even less.

On the reliability issue: the average U.S. utility customer experiences 125 min. of power outages per year; the average Japanese consumer only has to put up with that for 16 min. per year. The estimated cost of these power outages to the US economy according to the department energy is approximately $80 billion per year.

Turning to the demand side, where does the power go? Residential use accounts for 37%, commercial usage is 36% and industrial applications account for the remaining 27%. On the residential side 17% of your electricity goes to air conditioning, 15% to lights, 9% to heating and the balance to other devices. Getting your kids to turn off the lights will help, but only so much.

Arnold spent some time discussing smart appliances. Smart appliances will need home control systems in order to store your preferences for them; it won’t be up to the electric utility to determine when and which appliances you run. This event was heavily supported by numerous players in the smart appliance and home control markets, including the HomePlug Powerline Alliance, the HomeGrid Forum, the Z-Wave Alliance, the Wi-Fi Alliance and numerous semiconductor, system and utility providers. The stakeholders came to share ideas and hear what NIST had to say.

As well they might. As Arnold asked rhetorically, “With a dozen different communications interfaces, how do you do a national Smart Grid?” Good question. Right now just about every RF protocol you’ve heard of – and some you may not have – is vying to be part of the smart grid. Lacking any kind of standardization, and with plenty of money invested in proprietary solutions, utilities are understandably reluctant to move forward with Smart Grid implementations, and consumers are at least as confused.

NIST has now finished reviewing the various protocols and is now passing that information back to industry to work out standards. In Arnold’s words, “Things are about to become very contentious and argumentative” as standards are hashed out. As Bismarck once remarked, “If you like laws and sausages, you should never watch either one being made.” The same certainly applies to electronics standards.

There actually is real progress being made, and we’ll report on that shortly. Meanwhile don’t despair, the Smart Grid really is happening. It’s just not going to be happening next week.

Posted in Clean energy, Energy Efficiency, Smart Grid | Tagged , | Leave a comment