Hands-on Review: $12.95 Freescale Freedom Platform for Freescale Kinetis L microcontroller based on ARM Cortex-M0+ processor

I received a new low-cost microcontroller development board only a few days after writing up the TI Stellaris LaunchPad Eval Board last month. (See “Hands On Review: Texas Instruments’ Stellaris ARM Cortex-M4F LaunchPad Eval Board—$4.99!!! (sort of)”) The new board, a Freescale Freedom Platform (FRDM-KL25Z), features a low-cost, low-power Freescale Kinetis L microcontroller based on the new ARM Cortex-M0+ processor core. I wrote about the Freescale Kinetis L microcontroller and the ARM Cortex-M0+ processor core back in March. (See “How low can you go? ARM does the limbo with Cortex-M0+ processor core. Tiny. Ultra-low-power.”)

The ARM Cortex-M0+ processor core resembles ARM’s Cortex-M0 core, but it reduces the number of processor pipeline stages from three to two, which cuts operating power (and reduces the maximum clock rate in a given process technology). The jump from M0 to M0+ also adds some useful features, including an optional 8-region Memory Protection Unit, one non-maskable interrupt and as many as 32 physical interrupts, sleep modes (with an optional data-retention mode), an optional 32×32-bit single-cycle hardware multiplier, optional CoreSight JTAG and debug ports, and an optional Micro Trace Buffer. More importantly, the ARM Cortex-M0+ core adds a single-cycle I/O port that provides very high-speed access to tightly coupled peripherals. The fast I/O port is accessible by both loads and stores, from the processor and from the debugger.
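To make the single-cycle I/O port concrete, here’s a minimal sketch of the kind of pin-toggling code it speeds up. This is not production code: the FGPIO/GPIO register addresses are my reading of the Kinetis L memory map and the pin number is purely illustrative, so check the KL25 reference manual before borrowing any of it.

    #include <stdint.h>

    /* Assumed addresses: on the Kinetis L parts, the Cortex-M0+ single-cycle I/O
     * port shows up as the "FGPIO" alias at 0xF800_0000, while the same port A
     * registers also appear on the normal peripheral bus at 0x400F_F000.
     * Verify both against the KL25 reference manual. */
    #define FGPIOA_PTOR (*(volatile uint32_t *)0xF800000CU) /* toggle via fast I/O port  */
    #define GPIOA_PTOR  (*(volatile uint32_t *)0x400FF00CU) /* same register, slower bus */

    #define TEST_PIN    (1U << 18)  /* hypothetical pin, for illustration only */

    void toggle_slow(void)  /* multi-cycle access over the regular peripheral bus */
    {
        GPIOA_PTOR = TEST_PIN;
    }

    void toggle_fast(void)  /* single-cycle access through the M0+ I/O port */
    {
        FGPIOA_PTOR = TEST_PIN;
    }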

The specific Kinetis L microcontroller soldered on the Freescale Freedom Platform development board is a KL25Z128 with a 48MHz ARM Cortex-M0+ processor core, 128Kbytes of Flash memory, 16Kbytes of SRAM, a Freescale-specific power-management controller and a low-leakage wakeup unit, and a slew of digital and analog peripherals, including a really interesting capacitive touch controller. Two pins from the microcontroller connect directly to a pair of capacitive touch pads etched into the copper of the Freescale Freedom Platform’s PC board. The touch pads form a capacitive slider.
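For what it’s worth, a two-pad slider like this one is usually read by comparing the touch signal on the two electrodes. Here’s a rough sketch of that idea; tsi_read_electrode() is a hypothetical helper standing in for whatever touch-sense driver you end up using, and the threshold is a guess you’d have to tune.

    #include <stdint.h>

    /* Hypothetical helper: returns the raw touch count for one electrode,
     * minus its untouched baseline. Supplied by your TSI driver of choice. */
    extern uint32_t tsi_read_electrode(int electrode);

    /* Approximate finger position along the two-pad slider, scaled 0..100,
     * or -1 if nothing is touching it. */
    int slider_position(void)
    {
        uint32_t a = tsi_read_electrode(0); /* pad at one end of the slider */
        uint32_t b = tsi_read_electrode(1); /* pad at the other end         */

        if (a + b < 10U)                    /* touch threshold: tune for your board */
            return -1;

        return (int)((100U * b) / (a + b)); /* ratio of the two signals gives position */
    }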

Like the TI Stellaris LaunchPad board I reviewed last month, the Freescale Freedom Platform also includes a 3-color LED that’s used in the power-up demo of the board. There’s also a 3-axis accelerometer (a Freescale MMA8451Q) soldered to the board and connected to the Freescale KL25Z microcontroller over an I2C bus. Oh yes. There’s also a reset button.
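If you want to poke at that accelerometer yourself, here’s roughly what the I2C traffic looks like. The register details (7-bit address 0x1D with SA0 tied high, WHO_AM_I at 0x0D reading back 0x1A, six bytes of 14-bit data starting at OUT_X_MSB) are from the MMA8451Q data sheet as I recall it, and i2c_read_regs() is a hypothetical helper for whatever I2C driver you use, so treat this as a sketch.

    #include <stdbool.h>
    #include <stdint.h>

    #define MMA8451Q_ADDR 0x1DU /* 7-bit I2C address with SA0 high; check the board schematic */
    #define REG_WHO_AM_I  0x0DU /* should read back 0x1A on an MMA8451Q                        */
    #define REG_OUT_X_MSB 0x01U /* X/Y/Z results: 6 bytes, 14 bits each, left-justified        */

    /* Hypothetical I2C helper supplied by whatever driver library you use. */
    extern bool i2c_read_regs(uint8_t addr, uint8_t reg, uint8_t *buf, uint32_t len);

    bool accel_present(void)
    {
        uint8_t id = 0;
        return i2c_read_regs(MMA8451Q_ADDR, REG_WHO_AM_I, &id, 1) && id == 0x1AU;
    }

    bool accel_read_xyz(int16_t xyz[3])
    {
        uint8_t raw[6];
        if (!i2c_read_regs(MMA8451Q_ADDR, REG_OUT_X_MSB, raw, sizeof raw))
            return false;
        for (int i = 0; i < 3; i++) /* reassemble the 14-bit left-justified samples */
            xyz[i] = (int16_t)((raw[2 * i] << 8) | raw[2 * i + 1]) >> 2;
        return true;
    }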

Here’s a photo of the board:

Freescale Freedom Platform Board

 

You can see from this image that there are two mini-USB ports along the board’s edge, shown at the bottom of the board in the image. One mini-USB port is connected to the USB controller in the Freescale Kinetis L microcontroller. The other mini-USB port is a Freescale OpenSDA debug port, which is operated by a separate Freescale microcontroller, also installed on the board. The Freescale Freedom Platform board will draw power from either mini-USB connector.

According to the Freescale Freedom Platform user guide, OpenSDA “is an open-standard serial and debug adapter. It bridges serial and debug communications between a USB host and an embedded target processor… The hardware circuit is based on a Freescale Kinetis K20 family microcontroller (MCU) with 128 KBytes of embedded flash and an integrated USB controller. OpenSDA features a mass storage device (MSD) bootloader, which provides a quick and easy mechanism for loading different OpenSDA Applications such as flash programmers, run-control debug interfaces, serial-to-USB converters, and more.”

Here’s a block diagram of an OpenSDA port from the Freescale Freedom Board manual:

OpenSDA Debug Port Block diagram

 

Like the TI Stellaris LaunchPad board, the Freescale Freedom Platform has an array of 0.10-inch headers that provide connectivity to the various I/O pins on the Freescale Kinetis KL25Z microcontroller. However, unlike the TI board with its proprietary header pinout, the Freescale Freedom Platform taps into the vast and growing array of Arduino “shields” (expansion boards). It does this through its outer header rows, which are compatible with the Arduino Rev 3 standard.

Here’s the pinout for the Freescale Freedom Platform headers. The outer rows are Arduino-compatible while the inner rows offer additional connectivity to the wealth of I/O pins available on the Freescale Kinetis KL25Z microcontroller.

Freescale Freedom Platform Board Pinouts

 

So much for the silicon; what about the software? You get four choices here:

  • Freescale’s CodeWarrior 10.3 Development Studio
  • The IAR Embedded Workbench for Freescale Kinetis microcontrollers
  • The Keil MDK-ARM toolkit (“Available soon!” for the Kinetis-L family says the Web site)
  • Code Red’s Red Suite (an evaluation copy available upon request)

Note—These can be BIG downloads. Here’s what Curt Carpenter wrote in October:

“Step Two: Download the latest (beta) version of the Freescale CodeWarrior IDE. This is a HUGE download (1GB), and after installation it occupied 3.5GB on my hard drive. Be forewarned that you won’t be able to save an image of this software on a standard CD, which is a bit inconvenient given that it takes almost an hour to download it over DSL, and you’ll want a backup.

The good news is that once you’ve installed this IDE, you will have IDE support (and more, including a limited version of the MQX RTOS) for just about every MCU Freescale has ever supplied. And it’s Eclipse-based, so you’ll probably find it a familiar environment to work with. And—the documentation and support is extensive, and very good—once you find it.”

You will definitely want to read all that Curt Carpenter wrote using the above link. Let him find the potholes for you; that’s my advice. I strongly agree with another thing that Carpenter wrote: Freescale needs to make all the files and documents associated with this board much easier to find and download. I felt like I was on a treasure hunt. I do not understand why all of the various files are distributed over several Web pages. Hey, Freescale: put all the needed files, manuals, etc. on one Web page. Also, if you are interested in more information about the Freescale Kinetis L series, Carpenter recommends this Avnet site, which offers four training modules on the new microcontroller family.

People are already starting to do some interesting things with the Freescale Freedom Platform. For example, Erich Styger in Lucerne, Switzerland wrote a blog post about pairing the Freescale Freedom Board with an existing Arduino data-logging shield, which has an on-board Flash SD card. The data-logging shield is from AdaFruit, a prominent player on the Arduino scene. You can read about Styger’s efforts here.

I’ve not been nearly as enterprising as Styger or Carpenter. I powered up my Freescale Freedom Platform through a USB connection to my laptop PC. To do that, I first had to locate a USB-to-mini-USB cable because Freescale doesn’t supply one. I finally found such a cable on a Tom Tom GPS unit, which I temporarily appropriated. It seems the smartphone world has moved on to micro-USB cables, which are plentiful around my place, but mini-USB cables could be scarce where you are, so you might have to spend some extra time finding the right cable, as I did. (I later bought a spare USB-to-mini-USB cable at the local Dollar Tree store for, of course, $1.)

My Freescale board powered up and immediately started to cycle colors on the 3-color LED. Tilt the board and the sample code installed in the Kinetis L microcontroller on the board senses the movement and changes the LED color. Stick your finger on the capacitive slide sensor and the LED changes to white and the slider works like a dimmer. Cool demo! OK, the Freescale Freedom dev board works well enough for me to review it here.

The Freescale Freedom Platform is shockingly inexpensive. How cheap? Now that’s a little hard to answer. The “official” price is $12.95, which is about what dinner costs at your corner eatery. That’s not much for the amount of learning you’ll get out of the product. However, as of this writing, Element14/Newark lists the Freescale Freedom Platform for $10.95. Even better. Down to dinner for two at In-N-Out Burger. I got mine for free directly from Freescale in exchange for an hour of my time attending a Freescale Kinetis L presentation at the recent ARM TechCon 2012 event held in Silicon Valley. So the price may not be exact, but “cheap” covers all three possibilities. No matter which way you end up going, I don’t see how you can go wrong with this board.

 


Will your low-power design run on batteries for 30 years? This design from 1981 did.

I hang out on the discussion forum for the hpmuseum.org Web site, mostly to soak up the ambiance of people who really love old HP calculators. I own an original HP 35 (the world’s first pocket scientific calculator) that I bought as a college junior, which I used to plow through Circuits 3 exams and iterate on the problems while my peers slaved over their slide rules just to run through one set of calculations. I also own an HP 41CX, given to me by a good friend, which is way too capable for my computational needs these days but it was a landmark pocket calculator in its day. Once in a while, I find a gem of a discussion that I like to share in my blogs and such a gem just appeared.

It’s about a member of the HP Voyager series of small, credit-card-sized calculators with CMOS guts (that’s what the “C” means in “HP 10C”). The earliest HP calculators pre-dated CMOS LSI and used LEDs rather than low-power LCD displays, so they employed rechargeable NiCd batteries that lasted a few days rather than decades. The most famous Voyager calculator from HP was the HP 12C financial calculator, which is still being manufactured after three decades because it’s the de facto calculator for financial calculations. Voyager machines were powered by primary coin cells.

Here’s the conversation that seemed blog-worthy to me:

 

“It struck me that my HP10c (SN 2247A02668) is now 30 yrs old. It has lived all these years on the original batteries! Anybody else experienced a long battery life?”

 

“Yes, some (a few?) users have witnessed about 3 decades of battery life in their Voyagers, something closer to what you have. Indeed, these are somehow rare experiences and I believe some factors, when happening together, are the most important:

  • calculator chipset – probably one amongst a set is tunned this way;
  • battery set – the three ones you just choose, together with the calculator chipset;
  • battery contacts – if the calculator has a power supply control that compensates battery voltage drop with current increasing (SMPS), having the battery contacts clean and with very low resistance leads to lower current drain and batteries will live longer;
  • usage – the way you use your calculator and how often you do that;
  • climate conditions;
  • as many others as needed (geographic location for magnetic fields?)

Or else it is just something that happens… ;)

Cheers. And congratulations! Your HP10C is one of a kind!”

 

“The coin batteries are awesome. I wish rechargeable batteries lasted that long.”

 

Now for today’s question. With 30 years of VLSI advancements and Moore’s Law progress, do you think you could design anything that might still be operating off of its original coin cell batteries 30 years after being manufactured and sold? That seems like a really tall order to me.

In the intervening three decades, device geometries have shrunk, leakage has gone up, and clock rates have risen significantly as well. According to the hpmuseum.org Web site, HP had developed a new processor used in the Voyager series that was “an 85,000-transistor circuit, which drew 0.25 milliwatts and had a standby leakage of 5 – 10 nanoamperes. The process was meant to allow calculators to run for a year from a set of small batteries but several owners have reported that they are still running on their original batteries after 20-22 years.”

Apparently, those words were written a few years ago, because the forum discussion above puts one set of batteries at 30 years and still going. Quite an accomplishment! Try finding a microcontroller with a sleep-current rating of 5-10 nanoamps. The new Freescale Kinetis L series of 32-bit microcontrollers, which is really very good, is rated at 381 nanoamps typical (943nA max) in deep sleep mode and 40µA/MHz in run mode. The run-mode figure stacks up well, but that low-current sleep mode in the old, old HP calculator chip seems pretty darn good even today.
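Here’s the back-of-the-envelope arithmetic, assuming a small button cell of roughly 150mAh (my guess at the capacity of the cells in a Voyager; the exact number doesn’t change the conclusion):

    #include <stdio.h>

    int main(void)
    {
        const double capacity_mAh    = 150.0;         /* assumed coin-cell capacity      */
        const double hours_per_year  = 24.0 * 365.25;
        const double hp_sleep_nA     = 10.0;          /* old HP calculator chip, standby */
        const double kinetis_deep_nA = 381.0;         /* Kinetis L deep sleep, typical   */

        /* 1 mAh = 1e6 nAh, so years = capacity / current / hours-per-year */
        printf("HP chip standby:    %.0f years\n",
               capacity_mAh * 1.0e6 / hp_sleep_nA / hours_per_year);
        printf("Kinetis deep sleep: %.0f years\n",
               capacity_mAh * 1.0e6 / kinetis_deep_nA / hours_per_year);
        return 0;
    }

Either way, the cell’s own shelf life, not the chip, sets the limit while the calculator is off; the real separation between old and new shows up once you factor in actual usage.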

That 1981 semiconductor technology was nothing to sneer at, was it?

 


Hands On Review: Texas Instruments’ Stellaris ARM Cortex-M4F LaunchPad Eval Board—$4.99!!! (sort of)

This is truly a bargain-of-the-year kind of story. A few months ago, I ordered two of the new TI Stellaris LaunchPad Eval Boards, which incorporate a TI Stellaris LM4F120H5QR microcontroller. I ordered one for me and one for my new boss (as an ethical bribe, but that’s a different story). The TI Stellaris LM4F120H5QR microcontroller is based on an 80MHz version of the ARM Cortex-M4F processor core. The ARM Cortex-M4F processor core is a full-fledged 32-bit processor core and the “F” designation says it includes a hardware floating-point unit. The TI LM4F120H5QR microcontroller also incorporates 256Kbytes of Flash EPROM, 32Kbytes of SRAM, and 2Kbytes of EEPROM for memory.

When combined with the ARM Cortex-M4F’s largely 16-bit Thumb-2 instruction set, that’s a lot of program and data space crammed into a low-cost, 32-bit microcontroller. The price for the bottom-most member of the Stellaris microcontroller family is listed at $1.53 in 10K quantities. The LM4F120H5QR microcontroller that’s soldered to the $4.99 Stellaris LaunchPad Eval Board resides in the LM4F120 microcontroller series that’s one notch up from the bottommost Stellaris LM4F110 series.

Key features of the TI Stellaris microcontrollers include:

  • IEEE754-compliant, single-precision floating-point capability at 80 MHz
  • SIMD instructions
  • As much as 256Kbytes of embedded flash memory and 32Kbytes of SRAM
  • Low-power modes including power-saving hibernate
  • As many as two 12-bit, 1MSPS ADCs with as many as 24 analog input channels
  • As many as two CAN controllers
  • Optional full-speed USB 2.0 with device, host, and OTG
  • Advanced motion control capability, with as many as 16 motion control PWM outputs and two quadrature encoder interfaces
  • As many as eight UARTs, six I2C ports, and four SPI/SSI ports

Here’s a composite block diagram of the TI Stellaris LM4F microcontroller series that shows the large number of goodies you can get on one inexpensive piece of silicon:

TI Stellaris LM4F Microcontroller Composite Block Diagram

But this is a review of the incredibly low-cost Stellaris LaunchPad Eval Board, not the TI Stellaris microcontroller itself. Here’s a photo of the board with some key features called out:

TI Stellaris LaunchPad Eval Board

 

You can see the 64-lead Stellaris LM4F120H5QR microcontroller soldered diagonally in the center of the LaunchPad Eval Board so that most of its I/O pins can be more easily fanned out to the two 20-pin, dual-row, unisex, tenth-inch headers that flank the microcontroller on either side of the board. This diagonal mounting seems to have become common for surface-mount microcontroller packages. What’s that other 64-lead device at the top of the board? It’s a second LM4F120H5QR microcontroller, supplying the second USB port I/O and, I’m guessing here, operating the board’s serial debugging interface. There’s also a set of landing pads for a mini JTAG port on the back side of the board. That’s a nice touch that cost essentially nothing to add.

The Eval Board’s two dual-row headers have pins that stick up from the component side of the board and female headers on the opposite side of the board. These headers allow you to add auxiliary boards in the same way that you can plug shields onto the very popular Arduino series of microcontroller boards—another meme that’s becoming common in small microcontroller boards like the TI LaunchPad series and the Arduino series.

The TI Stellaris LaunchPad series uses proprietary header definitions that fit TI’s own “BoosterPack” series of add-on boards. TI sells Stellaris-specific BoosterPacks including a variety of input, output, and display boards and it also has a line of BoosterPacks for its MSP430 series of microcontrollers. These MSP430 BoosterPacks are also pin-compatible with the Stellaris LaunchPad board. At the time of this writing, the TI.com Web site shows 38 available BoosterPacks.

However, this is a review of the TI Stellaris LaunchPad board and not the BoosterPacks.

If you look at the above photo, you’ll see that the TI Stellaris LaunchPad board also includes two micro-USB connectors, an RGB LED, a power switch, a reset switch, and two user switches. The board comes preprogrammed with a simple program that lights the RGB LED and cycles through several colors, with the behavior based on user switch presses. One switch activates a continuous sweep and the other steps through the colors with each switch press.

How easy is it to get the demo working? Pretty darn easy. I took the supplied USB-to-micro-USB cable and plugged it into the laptop I’m using to write this review. I plugged the other end into the top micro-USB connector, threw the power switch to the right, and the board booted. It immediately started cycling the RGB LED through the spectrum.

Actually, that’s what I was supposed to do but it’s not quite what I did. I mistakenly plugged the micro-USB connector into the LaunchPad board’s other USB connector and when it didn’t power up as expected, I threw the power switch to the left, connecting board power to the left-hand micro-USB connector. That’s the connector that the LM4F120 microcontroller is supposed to drive. Even so, the board powered up as expected and I had just proven that it’s foolproof. Even a fool like me can get it working without reading directions.

Duh!

You’re supposed to plug the micro-USB cable into the top micro-USB connector, which is clearly marked “Debug.” This port includes the In-Circuit Debug Interface (ICDI) and serves as a virtual JTAG port for debugging purposes. I confirmed that doing it the right way also seems to work.

The board doesn’t come with any development software at all. There’s no CD included, which I might have expected if I hadn’t paid $4.99 (including shipping) for this board. Not to worry, however. Everything you need, including docs, I/O drivers, and specs, can be downloaded for free from this page on the TI Web site.

You can also download free evaluation versions of four different software-development platforms including IDEs for the TI Stellaris microcontroller series:

  • TI’s Code Composer Studio (full version free for developing code for TI microcontrollers)
  • IAR Embedded Workbench—KickStart Edition (free version limited to 32Kbytes of code)
  • Keil RealView Microcontroller Development Kit (Eval version)
  • Mentor Sourcery CodeBench (30-day limited eval, full version)

Now for the bad news. I’m sorry to say that the $4.99 deal that includes shipping is over. Sadly, you can’t get a TI Stellaris LaunchPad Eval Board for a fiver any more. You can, however, buy the board from the TI eStore for $12.99, which is still darn cheap for what you get! The TI eStore Web page says that the board is now shipping—confirmed, I just received two of them—and to allow six to eight weeks for delivery.

Now, finally, here’s a bit of information on the low-power aspects of this microcontroller. The TI Stellaris LM4F microcontroller series includes a hibernation module that manages microcontroller power. When the processor and peripherals are idle, power can be completely removed from the device with only the Hibernation module remaining powered. Power can then be restored based on an external signal or at a certain time using the built-in, 32KHz Real-Time Clock (RTC). The Hibernation module can be independently powered from an external battery or an auxiliary power supply and the TI Stellaris LaunchPad Eval Board includes a jumper that supplies separate power to the hibernation module. If you pull that jumper and insert an ammeter in series with the jumper pins, you can measure the hibernation current.

The Hibernation module’s power pin is named VBAT and the Hibernation current, VDD3ON, is nominally rated at 5 μA on the microcontroller’s data sheet when the Hibernation module is enabled and running at 32KHz with the processor clocks stopped. One of the key strategies for using 32-bit microcontrollers like the TI Stellaris LM4F is to program them to get their business done quickly using their superior 32-bit processing power so that they can hibernate most of the time.
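Here’s roughly what that do-the-work-then-hibernate strategy looks like in code. The calls are from TI’s StellarisWare driver-library hibernate API as I remember it (names may differ in later library releases), so treat this as a sketch and check driverlib/hibernate.h before using it.

    #include <stdbool.h>
    #include <stdint.h>
    #include "driverlib/sysctl.h"
    #include "driverlib/hibernate.h"

    void do_work_then_hibernate(void)
    {
        /* ...get the real work done quickly at full clock speed... */

        HibernateEnableExpClk(SysCtlClockGet()); /* clock the Hibernation module        */
        HibernateRTCEnable();                    /* keep the 32KHz RTC running in sleep */

        /* Wake on the external WAKE pin or an RTC match (programming the RTC
         * match value for a timed wake-up is omitted here). */
        HibernateWakeSet(HIBERNATE_WAKE_PIN | HIBERNATE_WAKE_RTC);

        HibernateRequest();                      /* power down everything but VBAT */
        /* Execution resumes from reset after wake-up. */
    }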

In conclusion, I think that the TI Stellaris LaunchPad Eval Board is a true bargain even at the new “higher” price of $12.99. I heartily recommend you get one to dip your toe into the world of low-cost, low-power, 32-bit microcontrollers. At $4.99 I would have suggested you skip lunch at Taco Bell and buy a TI Stellaris LaunchPad Eval Board. Instead, at $12.99, you can skip the lunch special at El Torito one day and buy a TI Stellaris LaunchPad Eval Board with the money you save.

 

 


Replace an old IDE HDD with an SSD emulator to cut noise and power? The Korg D8 experiment, Part II—Compact Flash

Earlier, I reported on my attempt to swap in a solid-state disk emulator for a small IDE hard drive in an old Korg D8 multitrack audio workstation. (See “Replace an old IDE HDD with an SSD emulator to cut noise and power? The Korg D8 experiment.”) The Korg uses the hard drive to store digitized sound while keeping its OS in an on-board Flash memory chip. It’s a bit unusual in that it does not use a PC motherboard chipset to operate the drive. It’s based on a Mitsubishi (now Renesas) microcontroller that tickles the hard drive’s IDE port in a way that gets data onto and off of the drive. I doubt that the microcontroller from the early 1990s had a built-in IDE port, so the IDE port emulation is likely done in firmware running on the microcontroller.

My first attempt at IDE drive emulation involved a low-cost (less than $10) emulation board that accepts SD cards and makes them look like IDE drives. The Korg D8 had a hard time initializing the SD card (I’m not really sure that it did) and had no luck in using the drive emulator for recording.

When I ordered the SD-card emulator board from eBay, I also ordered a similar board that took Compact Flash (CF) cards. CF cards have built-in IDE emulation and so the emulation board is really just an adapter that changes the 2.5-inch, 44-pin IDE connector into a CF card socket.

IDE to CF Card Adapter and 1Gbyte CF card

I had hoped that the built-in nature of the CF IDE emulation would have delivered more satisfactory results. It didn’t.

I found this interesting discussion on a site operated by an Australian company named Yawarra Information Appliances that discusses different levels of IDE compatibility among CF cards from different vendors. Here’s an excerpt:

“However, not all CF cards are created equal. As compact flash was originally intended for photographic applications, some manufacturers have created CF cards that work well in digital cameras, but not so well in computers, which need good IDE emulation. The SanDisk cards have very good IDE emulation.”

I had CF cards from several different vendors including SanDisk in my collection of cards I use for my Canon 20D DSLR. Capacities ranged from 512Mbytes to 8Gbytes. I’d hoped that one of these cards in the CF adapter would satisfy the Korg D8 but that proved not to be the case. The Korg had just as much trouble initializing the various CF cards as it had with the SD-card IDE emulator.

The key indicator of a problem was that the Korg would eventually return to operational mode after trying to initialize the drive, and there would be wonky characters in the name of the first song. Clearly, there’s a problem with data transfer between the emulated IDE port in the Korg and the emulated IDE ports in the SD-card and CF-card IDE adapters.

At this point, it’s not worth it for me to continue this particular experiment. The Korg is a terrific piece of equipment but it is long in the tooth compared to what you can do with a laptop these days. The Korg’s built-in 4Gbyte limit for all song files per attached disk drive pinches a bit. (There’s a 25-pin SCSI port for external drives on the Korg D8 to save more songs and tracks.) However, I’m convinced that this sort of HDD-to-SSD conversion would be a smart move for bringing older embedded systems based on more standard PC hardware into a more reliable operating mode that draws less operating power in the bargain.


Replace an old IDE HDD with an SSD emulator to cut noise and power? The Korg D8 experiment.

I’ve got an old Korg D8 digital multitrack audio workstation. It was state of the art for recording sound back in the mid 1990s. The whole concept of digital audio workstations was new back then. The products were just starting to replace analog audio workstations that used compact audio cassettes to store sound and music, and the sound quality you could get from a digital audio workstation was a big jump up from analog tape audio.

Korg D8 Multitrack Digital Audio Workstation

The Korg D8 was the top of the line digital audio workstation for prosumer gear back then. It was equipped with a 2.5-inch hard drive to store the multitrack audio. You could store 34 minutes across eight audio tracks with 16 bits of uncompressed resolution at a 44.1KHz sampling rate with the supplied 1.4Gbyte drive. Back then those specs were hot stuff. Now almost any self-respecting laptop can record two tracks of that type of audio for hours and hours at a time on fractional Terabyte drives.
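Those numbers hang together, by the way. Here’s the quick sanity check (uncompressed 16-bit audio at 44.1KHz across eight tracks):

    #include <stdio.h>

    int main(void)
    {
        const double tracks        = 8.0;
        const double bytes_per_sec = 2.0 * 44100.0; /* one track: 16-bit samples at 44.1KHz */
        const double seconds       = 34.0 * 60.0;   /* the quoted recording time            */

        double gbytes = tracks * bytes_per_sec * seconds / 1.0e9;
        printf("8 tracks for 34 minutes = %.2f Gbytes\n", gbytes); /* about 1.4 Gbytes */
        return 0;
    }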

I boosted the hard-drive capacity of my Korg D8 a long time ago by switching out the 1.4Gbyte hard drive and replacing it with a 4Gbyte Fujitsu hard drive. (The Korg D8 used a proprietary operating system stored not on the hard drive but on board-mounted Flash memory and the OS has a built-in 4Gbyte limit on the size of the hard drive.)

I’ve still got the Korg D8 packed away and I thought it would be fun to try upgrading the hard drive to a Flash drive. There are a few reasons you might want to revisit an old embedded design based on a hard drive and update it to an SSD. The first reason is acoustic noise. It’s not loud but I can certainly hear the hard drive spinning in the Korg D8. That’s not something you want in an audio workstation. You don’t want any extraneous acoustic noise in the audio recording zone if at all possible. The same holds true for many embedded designs. Acoustic noise—whether it’s the hard drive or a fan—calls attention to the embedded system. SSDs make no noise.

A second reason you might want to make the conversion is for ruggedness. Hard disk drives can be broken by physical shock or vibration. SSDs cannot.

A third reason you might want to convert an embedded design from HDD to SSD is for power reduction. It does cost power to spin the disk(s) of a hard drive and it generally takes less power to move electrons in and out of the floating gates of Flash memory. In addition, you can power down and power up an SSD much faster than an HDD.

A final reason you might want to convert an embedded design from HDD to SSD is volume. An SSD can be nothing but a circuit board. In fact, SSDs in Ultrabooks are already heading in that direction using the mSATA board format. HDDs are bigger.

With so many reasons for making such a conversion, I decided to see if I could pull one off on the Korg D8. Now the IDE disk interface used in the Korg D8 is already obsolete and it’s getting hard to find 2.5-inch HDDs with that interface, much less SSDs that emulate 2.5-inch IDE HDDs. In fact, the 2.5-inch IDE SSDs I found were damned expensive, like this one by Transcend from Memory.com for $67.35. That was too steep for my experiment. You can’t sell a Korg D8 for that much on today’s market, even one in pristine condition like mine.

I searched for other alternatives and came up with this slick little IDE to SD Card adapter on eBay for $7.79 including postage. This board accepts an SD or SDHC card and exposes a 44-pin IDE interface to the host system. I already had a 4Gbyte SD Card on hand, so this approach looked ideal for my experiment. It was cheap.

IDE to SD Adapter Card and 4Gbyte SD Card

I ordered the IDE to SD Card adapter board from China. It arrived in about two weeks in a small bubble-pack envelope. A couple of days later, during a weekend, I had time to try the experiment. I removed the board from its bubble pack. Shipping across the Pacific Ocean had not been kind to the board. One of the pins on the connector was bent (as shown in the above photo) and the SD Card sheet metal enclosure on the back of the board was also slightly bent. The bent pin would prevent easy connection with the IDE cable and the bent SD Card enclosure prevented the insertion of the SD Card.

Neither of these injuries was fatal. A pair of pliers and a flat-bladed screwdriver fixed the problems. I then inserted the 4Gbyte SD Card into the IDE to SD Card adapter board and was ready for the HDD to SSD transplant.

I opened the Korg D8’s drive bay on its bottom panel and folded out the 4Gbyte Fujitsu HDD.

Korg D8 HDD, 50-pin connector detail

The first problem reared its head. The Fujitsu drive uses a 50-pin IDE connector and the IDE to SD Card adapter board has a 44-pin connector. The difference appears in the diagram below. The 50-pin IDE connector has four extra vendor-specific pins and two key pins, shown at the top of the diagram.

50-pin IDE header

These six pins were added to the conventional 44-pin IDE standard for the 2.5-inch drive format. The original 44-pin format was derived from the 40-pin connector used by 5.25- and 3.5-inch IDE HDDs, with extra pins added for power. Well, these six extra pins present no real problem as long as you’re careful to bottom-justify the connector pins on the IDE to SD Card adapter board and are also careful about polarity. Polarity is a bit of a problem because the IDE to SD Card adapter board comes with no instructions, but there’s both a triangle on the board’s silkscreen and a square pad on the connector pins to denote pin 1 of the 44-pin connector.

I lashed the whole thing together and switched on the power. The 5V LED and another LED lit on the IDE to SD Card adapter board. That was a good sign. The Korg D8’s LCD displayed “Working” and then indicated that it did not recognize the internal drive, which I’d expected. It then asked for permission to initialize the drive, which I gave. The Korg D8 chewed on that command for a while and then displayed an “almost” correct indication of readiness but then the audio workstation locked up.

I tried reformatting the SD Card in a laptop computer and I also tried some other things but in the end I concluded that the IDE to SD Card adapter board was not emulating an IDE HDD with enough fidelity to satisfy the Korg D8. It might be good enough for old laptop computers with standard chipsets and BIOS ROMs but it wasn’t good enough to fool the proprietary OS in the Korg D8.

A noble experiment, but a failure. Sigh.

However, the reason for the experiment is nevertheless valid. Perhaps one of the actual IDE SSDs on eBay might have worked. I’ve since found 4Gbyte, 2.5-inch SSDs (not adapter boards for SD Card or Compact Flash cards) in the $25 to $28 range. Had I found one of those before, I might have tried one of them. Perhaps I will at a later date.


Diagnosticians Beware: The Internet can Mess Your Mind

My wife’s car, a 2002 Saturn L300, has a problem. It’s losing coolant. I just added about three inches of bright orange GM Dexcool coolant to the overflow reservoir under the hood after having done something similar just a few weeks ago. I started to troubleshoot the problem so I could decide whether or not I could handle it. If not, if the problem exceeded my two semesters of automotive technology at De Anza College in Cupertino, I would need to take the car in.

Coolant loss can be caused by a number of problems. One frequent cause is a blown head gasket that allows the coolant to seep into an engine cylinder. The wayward coolant burns up in the cylinder and you can get white smoke out of the tailpipe. I didn’t see any white smoke and this car only has 35,000 miles on it (after 10 years!). A blown head gasket wasn’t likely and wasn’t indicated.

Another way for coolant to leak out is to, well, leak out—onto the ground. However, I didn’t see any puddles under the car’s parking space.

Time for more ideas so I turned to my good friend, Mr. Google. Googling “Saturn L300 coolant loss” turned up several possible culprits:

  • If the carpet in front of the passenger seat was wet, then I had a leak associated with the car’s heater core. Nope, no wet carpet.
  • It’s possible for the water pump to leak coolant but I didn’t see any encrustation on the engine and, again, I didn’t see any puddling under the car.
  • Now the biggie: This particular engine, a V6, has an oil cooler located in the valley between the cylinder rows on the engine block, underneath the intake manifold. The oil cooler is bathed in coolant and there’s an access cover over the oil cooler assembly that keeps the coolant inside of the engine block. That cover is sealed with a bead of RTV silicone instead of a gasket. That silicone bead apparently fails often in this engine design, at least according to multiple reports on the Internet.

Ugh.

If my wife’s car had this last problem, then we’re looking at 8 to 10 hours of labor (according to “The Book”) for removing the intake manifold, removing the coolant cover, scraping off the bad old RTV, putting on a bead of better new RTV, replacing the cover, replacing the intake manifold, and testing the repair. I’m not doing that in my condo parking space and at current repair rates, that’s more than $1000 for the repair labor, plus $1.98 for the new silicone.

Ugh.

Time for another opinion.

This morning, we took the car to a local repair shop in walking distance of our condo in downtown San Jose. This shop had more than 100 good and excellent ratings on Yelp.com, where we found it thanks to Mr. Google.

Once upon a time, we would have taken the car to the Saturn dealer. However, it’s been a couple of years since there were any Saturn dealers. GM closed the division and shuttered the dealerships. Our Saturns are now orphans. Oh, and my other brand of car is a DeSoto. Those dealerships have been shuttered for 50 years and people in Chrysler dealers’ parts departments look at you like you’re from that Pluto planetoid if you mention “DeSoto.”

In its infinite wisdom, GM has designated some of its remaining dealers as Saturn repair depots. In San Jose, we’re so lucky that GM has chosen a Cadillac dealer as the Saturn repair center.

No thanks. I didn’t buy a practical, plastic-sided, middle-of-the-road GM vehicle to have it repaired by a Cadillac dealer’s repair department.

Time to try an independent.

The repair shop we visited was open on Saturday. The highly caffeinated owner came out and talked with us about the problem. We opened the hood of the Saturn.

“Oh, it’s one of those,” said the repair shop owner looking at the V6 shoehorned transverse-wise into the engine compartment. That didn’t sound good. “I really don’t like Saturns” he continued. Double uh-oh.

“What’s wrong?” he then asked.

“Coolant loss,” I said. “It’s not coming out of the tailpipe. No white smoke. It’s not pouring on the ground. No puddles.”

“Good. You have observations. Do you have a theory?” asked the repair shop owner. That stopped me cold. What kind of repair shop invites you to opine about the problem?

I told him about the Saturn V6 coolant cover problem I’d discovered on the Internet.

“Good,” he said. “That’s your theory. Now I’ll look.”

He went to get a flashlight.

Then he looked carefully under the intake manifold. He didn’t see anything. “Clean, clean, clean” he said. “This is the cleanest 10-year-old engine I’ve ever seen. I don’t see anything that looks like coolant under the intake manifold. With the amount of coolant loss you describe, I’d expect to see something. Let’s look some more.”

He used the flashlight in a few more places and then called my attention to the side of the radiator.

“See those white streaks?” he said. “Now we’re getting somewhere.”

“You think it’s the radiator?” I asked.

“Wait,” he said, “let’s look some more.”

Then he called my attention to the area just below the junction of the upper radiator hose and the radiator. It looked like clusters of red algae were growing there. It was encrusted coolant. Orange GM Dexcool.

“It’s the right color,” he said. “I think you have a radiator hose problem. It’ll run about $25 for a replacement hose and about $80 in total.”

Glug.

The first thing you learn about coolant leaks in automotive technology class is to check the hoses. Just as it is with electronics (it’s always the connectors—unless it’s not), in cars the leak’s always caused by the hoses—unless it’s not.

Lesson learned. You can get a lot of help from the Internet, but you can’t troubleshoot a car or a circuit design just by surfing the Web. You need to exercise your troubleshooting skills first. Make sure to ignore Mr. Google’s siren song while making your initial assessment of any problem.


Charging the Chevy Volt

This is the second blog post describing my experiences with a Chevy Volt that GM loaned to me for three days in August. I’m going to spend a lot of time describing the charging of the Chevrolet Volt because that’s what I had to spend most of my time doing: figuring out how I was going to charge the vehicle. It seemed silly to test drive an electric car and drive it in gasoline mode all of the time. I needed to charge it. (For Part 1, see “Driving the Chevy Volt.”)

My first charging attempt was to use the included 110v Level 1 charging cord. This seemed like a simple alternative. One of my condo parking spaces has a convenient 110v outlet nearby so I plugged in the outlet end of the cord. The charging module in the cord has three red/green LED banks to tell you the state of the charging cord, as you can see in this image:

The top two LED banks indicate the state of the 110v electrical outlet and the state of the charging cord. If both glow green (as shown in the image), you’re good to go. When I plugged the cord in, both of these LED banks glowed red briefly, and then switched to a welcome green. So far, so good.

The bottom LED bank indicates the selected charging rate. If all four green LEDs are on, you’ve selected the highest charging rate for the 110v cord. If only the left two LEDs are lit, you’ve selected the lower charging rate. The rate selector is the orange pushbutton below the four LEDs. In the image above, you can see that the highest charging rate is selected.

I plugged the charging cable into the Chevy Volt and an amber light lit up on the top of the Volt’s dashboard. It soon changed to green and the car horn honked briefly. That was according to plan. Then, after a few seconds, the light changed back to amber and then again back to green. The car honked again. That wasn’t in the plan.

I left the car for an hour and had dinner. When I came back, there was no light on the dashboard and the top two LED banks on the charging cord had changed to an angry flashing red. Uh oh, it seemed that something was wrong. I pulled the charging cord’s plug from the 110v socket and the ground-fault interrupter popped. Clearly, something was wrong.

I reset the ground-fault interrupter and tried again. Again the 110v charging failed and the angry red blinking LEDs returned.

Some research on the Internet told me that others had seen this same problem and that the fault was with the wiring, the outlet, the ground-fault interrupter, or the charging cord. Not much help. Whichever one it was, I didn’t have an alternate electrical socket to try near my parking space so I gave up on this approach. I was pretty sure a 50-foot extension cord wasn’t going to help matters any.

Time to try one of the public charging stations. That was my only other charging option during the 3-day trial period.

At 7:30 pm, I drove the car over to the first of three public charging stations located directly on the street in front of San Jose’s ultramodern City Hall on Santa Clara Avenue. These charging stations are only a few months old. I saw them go in. GM had lent me a ChargePoint card, which has an integral RFID tag in it, to pay for electricity during the trial period. I held the tag next to the charging station and the station unlocked the charging cord, which I then plugged into the car’s charging port. The car started to charge.

Much better!

Note that the Chevy Volt’s battery-charging port is on the driver’s side, which is not great for street charging. When you’re in a gasoline station, the fuel-filler port can be on either side and you only need to know which side of the pump to park on. As it turns out, the charging port’s location is more problematic when it comes to public charging stations. With the charging port on the driver’s side, you will need to pay attention to oncoming traffic when connecting and disconnecting the charging cord.

By the way, the Chevy Volt’s fuel-filler port is on the passenger side, on the rear fender, far away from the electrical charging port. That’s probably a very good idea and may well explain the charging port’s location.

While I was at city hall setting up the charging session, someone tried to panhandle money from me. It’s that kind of a downtown location. Getting panhandled wasn’t fun, it usually isn’t, but I’ve been panhandled while fueling my Saturn VUE at a downtown San Jose gasoline station and this experience wasn’t all that different. Gasoline stations are public too. Nevertheless, I was flustered enough to leave the car switched on but locked as I walked the block back to my condo.

I returned in just two hours because I really didn’t want to leave a $45,000 loaner car charging in that unprotected and very public location for much longer. It was getting late. The car’s lights were still on, just as I had left them. The car was fully powered up but locked. After two hours of charging, there were now 12 miles of charge on the battery—about a third full. I drove the car back to my condo for the night after this partial success.

Just to be fair, I tried the 110v charging cord again. Maybe I’d get a better result now that the battery was partially charged. If so, I could top off the battery overnight. Nice try, but no. Same result. Angry red flashing LEDs. I pulled the charging cord from the car and accidentally set off the car alarm. Must remember to unlock the car before detaching the charging cord.

The next day, my second with the Chevy Volt, I drove to work and then back home in the evening. I’d used some battery charge to take a colleague to lunch in the Volt, so halfway home the car switched to gasoline power. Seamlessly. This time, I had a plan. I drove straight into the city-owned parking garage at 4th and San Fernando Streets and looked for the ChargePoint stations. They were just inside, past the parking gate. You have to pay for parking while charging your car. I parked, waved the ChargePoint card at the charger, and hooked up the car. It started to charge so I walked home in less than five minutes.

Fortunately, that parking garage has a restaurant named “Flames” on the ground floor and the restaurant validates parking so my wife and I decided to have dinner there. That knocked my 5-hour parking fee down from $14 to $3, which is OK for a one-time battery charge but I would definitely not make a habit of charging my car this way. I can’t afford to eat dinner at Flames every night. Dollars or calories.

Later, I walked back to the city garage. After five hours, the Chevy Volt’s battery was full—the battery gauge showed 35 miles of range. I pulled out the charging cord and promptly set off the car alarm again. Oh, well. I unlocked the car and drove home for the evening.

On my third day with the Chevy Volt, I drove the car to work confident that I’d have enough battery power to drive both ways using only electricity. A full charge would take me back and forth to work two days straight assuming I didn’t run any errands during the day. Consequently, I didn’t charge the car again before they took it away on Friday afternoon.

As you can see, I spent a lot of time worrying about charging the Chevrolet Volt—almost as much time as driving it. That’s unfortunate, because it really is a fun car to drive. However, my experience with charging the Chevy Volt underscores how the logistics of driving an electric car are not at all the same as for a gasoline-powered car. In the early days of gasoline-powered automobiles, gasoline wasn’t that easy to find. However, the infrastructure for gasoline-powered cars has now been in place for many decades and it works well for most of us. You don’t need anything special at home to support a gasoline-powered car, you just need a place to park the car and you need to remember to fill it up. A gasoline fill-up takes five or ten minutes these days using “pay at the pump” and self-service refueling.

The same is not true of an electric vehicle. You will need support infrastructure at home to charge the battery. You’re better off if you also have support infrastructure at or near your place of business. There’s just more thinking involved with an electric car, for now. You need to be thinking about where your next charge is coming from. This is not so true for a range-extended vehicle like the Chevy Volt, but in my experience over three days, it is still true.

Perhaps in a few decades, things will be different. We’ll have batteries with more capacity that charge faster. We’ll have far more public charging stations. Homes will come with charging infrastructure built in. Businesses will supply charging stations to employees. Condominiums will figure out how to provide residents with charging infrastructure. However that’s the future, not now. Not quite yet.


Mars Science Laboratory rover runs on plutonium—uses somewhat less than 1.21 gigaWatts

Although both employ plutonium power sources, Doc Brown’s time-traveling DeLorean needed 1.21 gigaWatts to zip between time periods while the freshly landed Mars Science Laboratory (MSL) rover gets by with 125W to power its dual-redundant 133MHz BAE RAD 750 CPUs, various sensors and science experiments, radios, and its 6-wheeled power train.

Plutonium Pellet for RTG

The 125W comes from about 2kW of thermal power thrown off by the slow radioactive decay of plutonium 238 inside a radioisotope thermal generator (RTG) sticking out of the back of the MSL rover. The decaying plutonium generates heat that’s converted to electricity by PbTe/TAGS thermocouples from Teledyne Energy Systems. With waste heat to spare, the MSL rover doesn’t need to use electricity to heat its electronics in the Martian cold, as did the solar-powered Spirit and Opportunity Mars rovers.

The MSL RTG is formally called the multi-mission RTG or MMRTG. It’s a standardized atomic battery used in many space missions where solar power is just not sufficient. The MMRTG was developed by Boeing’s Rocketdyne Propulsion and Power Division. During its 14-year expected life, the MSL MMRTG electrical output will fall to approximately 100W as the plutonium decays. Plutonium 238 has a half-life of 87.7 years and is mostly an alpha emitter, so it’s more easily shielded than other nuclear fuels.
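A quick calculation shows that most of that droop isn’t the fuel itself. Using only the 87.7-year half-life quoted above (thermocouple aging, which I believe accounts for the rest, isn’t modeled here):

    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        const double p0        = 125.0; /* electrical watts at the start of the mission */
        const double half_life = 87.7;  /* Pu-238 half-life in years                    */
        const double years     = 14.0;  /* the MMRTG's quoted design life               */

        /* Electrical output remaining if isotope decay were the only loss mechanism. */
        double decay_only = p0 * pow(0.5, years / half_life);
        printf("After %.0f years, decay alone leaves %.1f W\n", years, decay_only);
        return 0;
    }

That works out to roughly 112W, so isotope decay explains only part of the fall to 100W; the balance presumably comes from gradual degradation of the thermocouples.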

Nevertheless, there are always concerns when lofting many kilograms of hot nuclear material on a rocket and even more concerns when depositing that material on another “pristine” planet. When considering the tradeoffs here, you should be aware that Mars has higher ambient radiation levels than Earth because it lacks any significant atmosphere or planetary magnetic field to deflect radiation coming in from space.

According to Wikipedia, “radioisotope power has been used on 8 Earth orbiting missions, 8 missions travelling to each of the outer planets (including the Pioneer, Voyager, Ulysses, Galileo, Cassini, and Pluto New Horizons missions) as well as each of Apollo missions following 11 to Earth’s moon.” In fact, NASA has flown RTGs for 35 years when solar power just could not satisfy the mission power requirements. (Note: For a look at NASA’s RTG efforts, see “An Overview and Status of NASA’s Radioisotope Power Conversion Technology NRA”)

Even with a reliable power source, 125W of electricity—falling to 100W over a 14-year expected battery life—isn’t a lot of power to run something the size of a new Mini Cooper. So I am certain that there was a lot of work on making the MSL rover’s power consumption fall into line.

When I found out about the MSL rover’s nuclear battery, I felt that I just had to write a blog post about it, especially after just writing about my own personal experience with an electric vehicle: a Chevy Volt. After the thrilling and simultaneously uneventful landing of the MSL rover earlier this week, it’s sort of comforting to know that this rover will not freeze to death after its solar cells are covered by Martian dust. If the MSL rover lasts, it could still be exploring Mars by the time the first humans land on the red planet.

 


Driving the Chevy Volt

General Motors loaned me a 2012 Chevrolet Volt to try out for three days at the beginning of August. It’s a fun car to drive and three days—really not quite two and a half days—isn’t enough time to fully explore this vehicle but I can give you a flavor of what the car is like to drive and operate. Quite simply, it drives like a smooth, well-bred car and it’s clear that GM engineers worked really hard to make this range-extended electric car (GM’s preferred terminology over “hybrid”) feel and drive like a conventional—and very nice—internal-combustion vehicle, minus the engine noise.

The Chevy Volt is a 4-seater. A console containing the 16kWh battery runs down the center of the interior and divides the front and back passenger space into four seating areas: one for the driver and three for passengers. The battery also occupies the floor of the cargo area, raising the floor of the cargo compartment and reducing its capacity a bit. The Volt’s battery weighs 435 pounds and the car seems to ride a bit heavy compared to the compact cars I am used to driving. (For details about the Chevy Volt’s battery system, see “Storing Volts,” by John Donovan.)

Nevertheless, there was plenty of acceleration available if you put your foot in it. I never had trouble achieving the speeds I wanted to reach in the amount of time I expected. Steering was tight and accurate. Braking was normal.

The driver’s cockpit centers on an LCD with all of the various gauges needed for driving, such as the speedometer and odometer. There’s also a scale on the right side of this display that gives you a real-time view of your acceleration or deceleration. A light foot on the accelerator or brake is rewarded with a swirling green ball. Heavy-footed driving turns the ball solid gold to let you know you’re wasting energy.

There’s a second LCD screen in the center of the dash console that displays navigation, climate-control, and entertainment information depending on the mode setting. While the screen is showing climate-control information, it’s also telling you how much energy the climate-control system is using. The Volt’s air conditioning system would draw as much as 24% of the total energy being used by the car. That means that almost a quarter of the battery capacity would go to cooling the car’s interior on hot days. With just the fan going, the climate-control system drew only 4% of the total energy. Even that seemed like a lot.

For the three days that I had the car, energy use was of prime concern and played a large role in the management of the car. The official EPA battery range rating for the Chevy Volt is 35 miles and when I was able to fully charge the battery, that’s what the battery gauge said. The car had 7 miles worth of charge on the battery when it was delivered, which was precisely enough to drive from the delivery point—my home—to work. I arrived at work with an “empty” battery. (In reality, the Chevy Volt doesn’t allow the battery to drain fully. There’s a reserve charge in case you also use up all of the gasoline.)

Although an empty battery sounds bad, or at least not so good, the Chevy Volt also has an on-board 1.4L gasoline engine that kicks in automatically when the battery is drained. The gasoline engine runs an electric generator that charges the battery and the combined range of the gasoline and electric energy sources is 379 miles, so there really isn’t any “range anxiety” with the Chevy Volt. (Note, the gasoline engine also works in parallel with the electric motor under heavy acceleration and when the car exceeds 70 mph, as explained in “Get on the drivetrain” and “Interfacing the motor with the engine” by John Donovan.)

You have to listen carefully to hear the gasoline engine when it’s running. It’s very quiet. The car manages the switch from electricity to gasoline all by itself. The only sign that there’s been a switch from electricity to gasoline is that the battery gauge on the left side of the control panel changes into a gas gauge. There’s a secondary energy gauge above the primary position to show you the state of the other on-board energy source, which you can see in the image above.

I drove home that first day using gasoline.

One important aspect of the Chevy Volt’s dual energy source is that the gasoline engine supplies the electricity to run the car only when the battery is depleted. There’s no surplus on-board electrical generation capacity to charge the battery while driving. During my first drive home, the car handled just as it did under battery power but I arrived home with the same amount of energy in the battery as when I left work—“zero.”

To add energy to the battery, you must charge the car from an external electrical source. Here, you have two choices. The Chevy Volt comes with a 110v charging cord (called a Level 1 charger) stored beneath the floor in the cargo area. The cord plugs into “any” electrical outlet and connects to the car using a charging port just in front of the driver’s door. The port and charging cord use a special connector set developed for electric vehicles.

Using the 110v charging cord, you are supposed to be able to charge the Chevy Volt’s battery in 8 to 10 hours, depending on how much charge is required. You could use this cord to charge the car overnight after work, for example.

Your second charging option is a 240v charging station, a Level 2 charger. Because there’s a lot more energy available using a Level 2 charger, bringing the battery to full charge takes only 4 to 5 hours. The fast charge is so much more desirable that I’m certain most Chevy Volt owners will opt to install a 240v Level 2 charging station in their home garages. However, that was not an option for me with a 3-day loaner vehicle.

(For more details on various electric car charging stations, see “The electric gas station” by John Donovan.)
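Those charge times are roughly what you’d predict from the outlet ratings. Here’s a sketch of the arithmetic; the current draws, usable battery fraction, and charging efficiency are my assumptions, not official GM figures:

    #include <stdio.h>

    int main(void)
    {
        const double usable_kwh = 10.5;                  /* assumed usable slice of the 16kWh pack */
        const double level1_kw  = 110.0 * 12.0 / 1000.0; /* 110v cord at its higher-rate setting   */
        const double level2_kw  = 3.3;                   /* typical home Level 2 station           */
        const double efficiency = 0.85;                  /* rough allowance for charging losses    */

        printf("Level 1: about %.1f hours\n", usable_kwh / (level1_kw * efficiency));
        printf("Level 2: about %.1f hours\n", usable_kwh / (level2_kw * efficiency));
        return 0;
    }

That lands near the 8-to-10-hour and 4-to-5-hour figures quoted above, which suggests the numbers are at least self-consistent.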

Even if I bought a Chevy Volt, I would have a problem installing a 240v Level 2 charging station because I live in a condominium and cannot simply install 240v wiring and a charging station in the common area parking garage. The Homeowners Association would need to approve the installation and we have yet to figure out how to fairly meter the electricity used to charge a car because the parking garage lacks individually metered electricity. The situation may well be different for new condos built after electric cars become more popular, but my condo is 12 years old and was not built with electric cars in mind.

There is yet another charging alternative: public charging stations. Although I am at a disadvantage when it comes to installing my own electric charging station in my condo parking garage, I am fortunate to have more than a dozen ChargePoint “for-pay” public charging stations within a block and a half of my home in downtown San Jose.

Here are the ChargePoint stations located near my home:

ChargePoint is the market-facing arm of Coulomb Technologies, which manufactures, installs, and operates public charging stations. As of this writing, ChargePoint has 8626 public charging stations operational in the US and other countries. An app on the ChargePoint Web site can tell you the location of the nearest charging station and whether or not it is currently in use. You can also search for a public charging station using the Chevy Volt’s OnStar system. In one sense, then, keeping track of public charging stations is actually easier than keeping track of gasoline stations because there’s help available. However, there are far fewer charging stations than gasoline stations at this time. For example, there are no charging stations within a 30-minute walk of my employer in North San Jose, which makes it impossible for me to charge the car while I’m at work.

Next month: Charging the Chevy Volt


Want to peek over the shoulders of engineers in a real low-power microcontroller debate? Must-read info for all designers of low-power, microcontroller-based systems!

Recently, I published a blog post on my EDA360 Insider blog about the ARM Cortex-M0 processor core and its expected influence on mixed-signal, low-power IC design. (See “What effect does the ARM Cortex-M0 core have on mixed-signal microcontroller design?”) As usual with my blog posts, I also posted a note about the blog as a discussion in several LinkedIn groups including the “ARM Based Group.” What has followed in that particular group is a really interesting technical discussion about 8- and 32-bit microcontrollers used for low-power designs. This is a compelling set of arguments and you really should read through them if you have anything to do with microcontroller design. I guarantee that these perspectives will help you with your next design.

I reproduce a substantial part of the LinkedIn discussion thread here because there is so much meat in this discussion that it is a shame to keep it confined to one relatively small social-media sphere. Watch carefully as the discussion becomes more and more technical:

Bill Giovino: This is an old argument. There is no migration from 8-bit to 32-bit. To claim a migration is being driven by 8-bit owners is also false.

Growth of 8-bit microcontrollers continues to surpass growth of 32-bit microcontrollers. Witness 2008-2010 where 32-bit growth faltered but 8-bit growth remained strong amongst companies focused on 8-bit.

Andy Neil: @BGiovino – have you told the guys at @NXP? They’re killing off the 8-bitters in favour of 32…

Bill Giovino: Andy, I’m not sure what is the basis of your statement. No one is “killing off 8-bitters”

It’s simple – if 8-bit sales continue to grow, then no one is killing them off because they are thriving.

Benoit Dupuy: Hi Bill and Andy,

Today it sends me into a flat spin to see 32-bit processors in a watch:

http://www.youtube.com/watch?v=Azy5kpiU3_s,

http://www.youtube.com/watch?v=5xa_GUzTb00&feature=related (with MTK6516, http://forum.xda-developers.com/wiki/index.php?title=Chinese_Clones_MTK6516)

A decade ago, an 8-bit architecture would have been the right architecture for this market. That is not the case today. I am not saying the bicycle is going to disappear just because there is now an electric-car market or an electric-motorcycle market, for example. The bicycle was here and will remain, but it is no longer the market’s priority. It once was. Manufacturing only bicycles does not put a company in the top ten in terms of net profit.

I have spoken about the watch, but I could also mention domestic electrical appliances and the future metering market. Certainly, there will always be bicycle manufacturers.

Bill Giovino: Benoit, 8-bit put Microchip in the top ten in terms of net profit. The same could be said for NEC/Renesas and Freescale.

http://microcontroller.com/Modern_Microcontroller_Market_Part_2.htm

The fact is, for interrupt-driven applications an 8-bit architecture will be more efficient and lower power. Even the most efficient 32-bit architecture can use more than twice as many instruction cycles as an 8-bit when vectoring to an interrupt.

Andy Neil: @Bill – sorry, I meant specifically *NXP* are killing off *their* 8-bitters in favour of *their* 32-bitters – Cortex-Mx in particular.

See: http://www.8052.com/forum/read/181200 – noting an NXP presentation in which the NXP Product Marketing Manager says they have “no roadmap” for 8-bit.

There have been more NXP 8-bit discontinuations since then – and, as we know, Microchip have been quick to capitalise on that…

Andy Neil: Sorry – the link in the 8052.com post to the NXP presentation now points to something completely different.

Bill Giovino: @Andy, you are absolutely right – thanks for clarifying.

However, the problem NXP is having is that they are inviting in their competition to scalp their 32-bit business. See, many complex systems have a 32-bit along with an 8-bit that does peripheral processing. Since NXP doesn’t bid on 8-bit anymore, the customer buyer has to invite NXP’s competitors that are 8-bit suppliers like ST, Atmel, etc. to bid on the 8-bit. And while they are there, the competitors gaze at NXP’s 32-bit socket and lick their chops.

It’s not a technical issue; it’s a sales strategy issue. An 8-bit or low-end 16-bit is necessary in the portfolio to prevent poaching of the 32-bit socket, even if you never win the 8-bit socket. ST knows this. Texas Instruments has an official strategy based around this.

There have been times when I’ve had to quote business I knew I couldn’t win just to get visibility into the project and protect business I had already won.

Zoltán Kócsi: @Bill Giovino:

Well, I don’t know. I can get a 32-bit ARM from NXP at the price of an 8-bitter. It has as good as or better peripherals. Has the same amount or more FLASH/RAM. It has a much more powerful core. Why would I need to have an 8-bitter?

There are special cases when you need a *particular* 8-bit or 16-bit chip, because it has some specific peripheral unit or some feature which is a must and only that chip offers it. But for general microcontroller applications the low-end ARM offering beats the 8 and 16 bit chips in price, peripherals and computing power while it is not more expensive and doesn’t have higher power consumption.

If I have a 32-bit ARM system which needs a peripheral processor for whatever reason, I would slap on one of the low-end 32-bit ARM processors. Same architecture, same compiler, same everything. That is, code maintenance is simpler, sharing code between the chips is simpler, everything is simpler. Why would I struggle with an 8-bitter when I can have an ARM for the same price?

Let’s compare the NXP LPC11xx series and the Atmel 8-bit AVR chips, an arguably very popular family.

For the same amount of FLASH/RAM the NXP is cheaper. However, the FLASH consumption of the AVR will be higher, because many things which cost you a single 16-bit instruction on the ARM will cost you several 16-bit instructions on the AVR. The ARM runs at 50MHz, the AVR at 20. The ARM has 32-bit timers, a high-speed SPI with a FIFO, a UART with a FIFO, and a 12-bit ADC. The AVR’s SPI is slower and has no FIFO, the UART has no FIFO, the timers are 8 or 16 bits (and you have far fewer of them), and the ADC is 10-bit and slower. The NXP chip has pin-compatible drop-in variants with USB, or with more analogue stuff, or with a dual CAN controller on board. The ARM has a unified address space, making life very simple; the AVR has an explicit Harvard architecture, where data access to the FLASH requires special instructions and all sorts of hackery.
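[Editor’s aside, not part of Zoltán’s post: here is a minimal sketch of the Harvard-architecture point, showing how a small constant table in flash is read on an AVR built with avr-gcc versus on a flat-address-space Cortex-M0 part such as the LPC11xx. The table name and values are hypothetical.]

#include <stdint.h>

#ifdef __AVR__
#include <avr/pgmspace.h>

static const uint8_t lut[4] PROGMEM = { 0, 90, 180, 255 };

uint8_t lut_read(uint8_t i)
{
    /* Data lives in program flash; it must be fetched with pgm_read_byte(),
     * which compiles to LPM instructions. */
    return pgm_read_byte(&lut[i]);
}
#else
static const uint8_t lut[4] = { 0, 90, 180, 255 };

uint8_t lut_read(uint8_t i)
{
    /* Flat address space: the linker places the const table in flash and an
     * ordinary load reads it; no special instructions or macros required. */
    return lut[i];
}
#endif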

The only reason I’d use the AVR is if I absolutely, positively needed 5V operation: the NXP is 5V tolerant, but can’t drive 5V logic levels directly. That’s about it, but even then I’d be tempted to use the ARM and slap some cheap level-translator chip on the lines which drive 5V logic.

I used a lot of 8-bit chips, 8051, Z80, 6502 derivatives, 68HC1x, AVR. I also used 683xx microcontrollers a lot. When the 683xx was my 32-bit (well, 16/32 bit) main processor, indeed I probably would have used an 8-bitter for mundane slave processing functions or as the main processor for simple devices, basically because of the huge difference between the 8-bitter and the 683xx in terms of price, power consumption, board real-estate and circuit complexity. There is absolutely no complexity with the NXP Cortex-M0 based series. You give it power, that’s it. Doesn’t even need a crystal. Simpler than an 8-bitter. It is small. It is cheap. It is low-power.

I don’t really see any reason to keep using 8-bit microcontrollers except in very specific cases where a feature of the chip is needed – but that feature is independent of the core’s word width.

Bill Giovino: @Zoltán, I don’t think you understand the 8-bit segment. 8-bit applications aren’t driven by performance. Let me state that again – 8-bit applications aren’t driven by performance.

See, the vast majority of 8-bit applications can be handled by a mere 8MHz PIC16. So, for most 8-bit applications most any 8-bit microcontroller can do the job. Understand?

The technical aspects of an 8-bit application are driven by EFFICIENCY of architecture and interrupt response. For example, if it’s low power, every single clock cycle counts.

For example, interrupts on the Cortex-M0 require at least 12 clock cycles. Some 8-bitters require only four. In some interrupt-driven applications, those 12 clock cycles are a very long time. It’s actually possible for some 32-bit microcontrollers to be slower than an 8-bit for a particular application because of interrupt response.

So, you have a socket looking for an 8-bit where 95% of the many thousands of 8-bit micros are ridiculously overpowered for the socket. What does a 32-bit bring to the socket if performance is unimportant?

I’ve been in semiconductor sales & marketing for 20 years and I can tell you with confidence that the most powerful microcontroller almost never wins the socket. The final decision is always based upon the channel.

Also, remember my post on how 8-bit is a defensive strategy.

And let’s not get started on 16-bit. TI’s 16-bit MSP430 is possibly the fastest growing microcontroller out there – its sales are staggering and by itself it is bigger than many mid-sized semiconductor companies. The MSP430’s sales growth alone is bigger than most semiconductor startups.

Benoit Dupuy: Thank you, Bill, for your interesting answers. I am following this discussion closely. At the same time, I have tried to find some documentation on the same subject. Here are some links to that documentation:

http://eproductalert.com/digitaledition/8bit/2011/Engineers%20Guide%20to%208%2016%20Bit%20Technologies.pdf

Now Bill, in the following STMicroelectronics presentation, on slide 23, we can see that the 32-bit MCU trend (in millions of dollars) is much higher than the 8- and 16-bit markets:

https://www.kth.se/social/upload/4f3e6fe1f27654359b000004/KTH%202012%20Feb.pdf

I advise many people to read this presentation, called “Microcontrollers – Basics and Trends,” written by Anders Pettersson, FAE Manager Nordic and Baltic.

Bill Giovino: Benoit, first, the ST presentation contains many mistakes. Slide 7 incorrectly defines CISC vs RISC.

Second, the MCU trend data is on slide 21 (not 23) and unfortunately it is from IC Insights. They have been discussed here before – to put it politely, their figures don’t add up and disagree with everyone else’s. I don’t know what IC Insights’ methodology is, but they even get the past trends wrong.

Zoltán Kócsi: @Bill:

Well, I believe you, however:

The 12 clock cycles of the IRQ response on the Cortex cost you 12 * 20ns = 240ns. That corresponds to 4 cycles on a 12.5MHz 8-bitter. Plus, due to the higher clock frequency, the interrupt latency jitter is lower on the Cortex, and if you are really touchy about the IRQ response time, the latency jitter is an important factor. Furthermore, if it then takes 2 more instructions to actually *respond* to the interrupt, that would take 40ns on the Cortex but 160ns on the 12.5MHz chip, so you’d need a 22MHz 8-bitter to match the Cortex and your jitter would still be higher.

Also, I stated that there can be specific requirements which would warrant the use of a specific chip. You need a particular feature more than anything else, you find a chip which has that feature. If the chip happens to be 8-bit, so be it. If it is 32-bit, the better.

We can argue how many embedded applications require some extra-special feature. Power consumption is one such feature for battery powered devices; that is a frequent enough thing. Extra fast interrupt response, I believe, is not a requirement for the overwhelming majority of cases and where it is, most of the time you need a fast processor anyway and/or you use HW for the sub-microsecond responses (e.g. engine management systems).

I don’t care if the 32-bit chip is ridiculously overpowered and double overkill for blinking a LED. As long as it is as cheap as the 8-bit chip and its overall power consumption (for 99% of the time the CPU will idle with its clock stopped) is within my power budget, it achieves its design goals and is therefore adequate. If I then factor in the development time, tools, ease of use and other features which are not directly technical, the 32-bit chip will win. Yes, it is wasted silicon and as an engineer I don’t like wasting resources. But I consider that waste the price to pay for a simpler, cleaner system with more reserves in it.

Again, as I mentioned, if you have a specific problem which mandates the use of a specific chip or architecture, that’s one thing. But your garden variety consumer and industrial embedded systems do not have extra special requirements and in that case, I think, the low-end 32-bit chips will, more often than not, offer a cheaper and simpler solution.

It is my personal viewpoint. If you have proof that the market thinks otherwise, then that means that the majority of embedded engineers disagree with me. I am open to arguments: convince me that when there are no specific requirements which force you to use a specific chip, there is still merit in using an 8-bit chip instead of a faster, cheaper, simpler 32-bit one.

Bill Giovino: @Zoltán, where power consumption is king (and it is for many of today’s 8-bit sockets) efficiency is what rules. Cranking up the clock speed to make those 24 cycles go faster isn’t an efficient solution if an 8-bitter can do it at a slower clock speed.

Plus, an 8-bitter will always be lower power because it is a simpler architecture with one-quarter the bus size.

As I wrote before, the vast majority of 8-bit applications can be handled by an 8MHz PIC16. If performance isn’t the issue, and efficiency is, then a 32-bit offers no advantage over an 8-bit.

As you wrote, if you need a particular feature more than anything else, you find a chip which has that feature. If the chip happens to be 8-bit, so be it. If it is 32-bit, so be it – it is not “better”, it just is.

Now, this is based upon my understanding and experience. I suppose that if I worked for ARM I might be privy to a wider range of examples that would make me see things differently. Because after all this, there are some 8-bit applications that CAN be served better by a 32-bit – usually when either the 8-bit reaches a 20MHz clock speed or the code has so many threads it finally requires a more complex RTOS.

To me the time when an 8-bit needs to move up in architecture is when the firmware has so many tasks that it needs a more complex RTOS. Without giving away proprietary numbers, I can tell you that most 8-bit sockets are at or below 16K program memory. Above that, unless it is linear code the 8-bit is encroaching on 16- and 32-bit territory.

But at that point the technical considerations take a back seat to political considerations. You have a buyer for 8-bit MCUs at a large corporation who is soliciting bids for 8-bit MCUs. Trying to get that buyer to replace them with a 32-bit part is to threaten his job.

Jonny Doin: @Bill Giovino:

> “[...] if performance isn’t the issue, and efficiency is, then a 32bit offers no advantage over an 8bit”

Much to the contrary. Efficiency has very little to do with register size. Efficiency, in a given semiconductor process, has to do with energy-delay product (EDP). It is related to the amount of energy taken to perform some logic activity versus the time taken.

EDP is a function of the process (i.e. the feature size for a transistor) and also of the architecture of a computing circuit.
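[Editor’s aside, not part of Jonny’s post: a tiny illustration of the metric he refers to, with made-up numbers. EDP is simply energy multiplied by delay for a given task, where energy is average power times delay, so a core that burns more power but finishes much sooner can still come out ahead.]

#include <stdio.h>

/* energy (nJ) = power (mW) * delay (us); EDP = energy * delay */
static double edp(double power_mw, double delay_us)
{
    return (power_mw * delay_us) * delay_us;   /* units: nJ*us */
}

int main(void)
{
    /* Hypothetical task: core A burns 5 mW for 10 us, core B burns 2 mW for 40 us. */
    printf("core A EDP: %.0f nJ*us\n", edp(5.0, 10.0));  /*  500 */
    printf("core B EDP: %.0f nJ*us\n", edp(2.0, 40.0));  /* 3200 */
    return 0;
}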

Speaking specifically of an 8bit processor like the 8MHz PIC16 versus a 32bit core based on the ARM Cortex-M0, there are large differences in process (feature size) and architecture.

From an EDP (efficiency) perspective, a smaller process that is optimized for low-power has smaller transistors with lower gate capacitances and much lower energy per bit. It is also much faster, meaning that the same amount of logic processing can be done in less time. For the comparison of PIC16 vs Cortex-M0, the latter is much more energy efficient, due to the differences in process of the two chips.

Now, from the architecture perspective, both are simple architectures, with small pipelines, but very different core designs.

The ARM core has a large number of 32bit registers that are very close to the core, and take very little energy to be used. The instruction set can take advantage of those close registers and perform complex operations with just a few instructions. Furthermore, several chains of operations can be performed with different registers, so for example you can use multiple pointers to manipulate structs very efficiently, and several temporary registers to perform long expressions. Another architectural benefit is that the ALU can perform much more work in a single clock, and all operations are 32bit, meaning that the program needs no stitching of multiple registers to perform real-world math. Yet another important thing is that memory is a flat addressing space, so the program needs no banking selection to perform jumps or tables, or subroutine calls.
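[Editor’s aside, not part of Jonny’s post: a short hypothetical C fragment illustrating the kind of work he describes. On a Cortex-M0 the two pointers, the loop index and the 32-bit running sum can all live in core registers, and every 32-bit load, store and add is a single Thumb instruction; an 8-bit core must synthesize each of those operations from byte-wide steps with carry propagation.]

#include <stdint.h>

typedef struct {
    uint32_t timestamp;
    uint32_t count;
} sample_t;

/* Copy timestamps from src to dst and sum the counts. */
uint32_t accumulate(const sample_t *src, sample_t *dst, int n)
{
    uint32_t sum = 0;
    for (int i = 0; i < n; i++) {
        dst[i].timestamp = src[i].timestamp;  /* one 32-bit load/store pair */
        sum += src[i].count;                  /* one 32-bit add             */
    }
    return sum;
}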

The PIC16 core architecture, on the other hand, was created at a time when the process size was huge, and it was designed with severe limitations on core logic gates to keep it small and fast. Those limitations, however, translated into a very constrained core architecture, with a single working register, limited ALU operations, a single index register and a heavily banked addressing scheme. As with every other 8bitter, you have to combine registers in memory to perform any useful numerical computing, using carry chains and a single-operation ALU. The ALU has only ADD/SUB and logical ops, making even simple multiplication and division extremely slow operations. Another severe limitation is that the stack is only 8 calls/interrupts deep, which makes interrupt code especially crafty and limits a call chain to very shallow functions.

The effect of the too-simple architecture of the PIC16 is that just about any operation is almost an order of magnitude less efficient than on a Cortex-M0, while some operations are thousands of times less efficient. Even interrupt response, as you mentioned, is much longer on a PIC16. At a 200ns cycle time (max core clock rate = 5MHz), it is 10x slower than a Cortex-M0 at 50MHz, and it takes 4 cycles (pipeline latency) + 8 instructions (minimum context save + bank), or 2400ns, for any interrupt to reach the first instruction.

Comparatively, a Cortex-M0 will take 12 cycles, including the context switch, or 240ns. That is a 10x longer latency for the PIC16. Furthermore, any interrupt routine will take many more instructions on the PIC16, due to the instruction set, ALU and register limitations. Additionally, all the firmware in the PIC16 will need to be written in assembly, due to the very compiler-unfriendly microarchitecture. As a comparison, the Cortex-M0 is C-compiler efficient, so even interrupt handlers can be written in C with high code efficiency.
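[Editor’s aside, not part of Jonny’s post: a minimal sketch of that last point. Because the Cortex-M0 NVIC stacks R0-R3, R12, LR, PC and xPSR in hardware on interrupt entry, a handler can be an ordinary C function; the CMSIS-style SysTick handler below is a generic example, and the actual handler names come from the vendor’s startup code.]

#include <stdint.h>

volatile uint32_t tick_count;   /* shared with the main loop */

/* Plain C interrupt handler: no assembly prologue or epilogue needed. */
void SysTick_Handler(void)
{
    tick_count++;
}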

In any scenario where efficiency is involved, it is impossible to argue that an 8bit PIC16, or any 8bit processor, is better than an ARM Cortex-M0. The M0 is 10 to 100 times more efficient, depending on the 8bit processor being compared.

In any scenario when performance is necessary, the M0 advantages are even clearer.

Maybe the only scenario where a PIC16 would be better than an ARM Cortex-M0 is for very high temperatures and noise, due to its inherently more robust (and older) silicon process.

So, except for extremely specific applications, choosing an 8bit over an ARM these days is hard to defend.

- Jonny

Jonny Doin: @Bill Giovino:

Perhaps some hard data makes the point very clear, without any adjectives.

PIC16F87xA:

  • Idd @ 20MHz, 5V: 7mA(typ), 15mA (max)
  • Idd @ 4MHz, 5V: 1.6mA(typ), 4mA (max)
  • Idd @ 4MHz, 3V: 600uA (typ), 2mA (max)

 

NXP LPC1100XL:

  • Idd @ 50MHz, 3.3V: 5.5mA (typ)
  • Idd @ 12MHz, 3.3V: 1.4mA (typ)
  • Idd @ 6MHz, 3.3V: 850uA (typ)
  • Idd @ 3MHz, 3.3V: 600uA (typ)

So, simple and direct, you have more processing power, for less energy.

- Jonny

Bill Giovino: @Jonny, C’mon now. Let’s play fair.

You are spec’ing an NXP part introduced four months ago against a Microchip part introduced NINE YEARS AGO.

http://microcontroller.com/news/microchip_nanowatt_pic16f.asp

I’m not sure why you chose to compare a modern NXP part against an obsoleted Microchip part.

This PIC16(L)F1939 is two years old and still beats the pants off the LPC1100XL. Plus, the PIC16 has a higher degree of integration, including an LCD controller.

Here is some honest hard data:

PIC16(L)F1939 (March 2010)

  • Idd @ 4MHz, 3.3V: 380uA (typ), 450uA (max)
  • Idd @ 4MHz, 5V: 450uA (typ), 520uA (max)
  • Idd @ 3MHz, 3.3V: 320uA (typ), 390uA (max)

http://www.microchip.com/wwwproducts/Devices.aspx?dDocName=en538148
http://microcontroller.com/news/microchip_enhanced_midrange_cores.asp

PIC16(L)F1939 Sleep current: 20nA

LPC1100XL Sleep current: “below 2uA”

As you can see, the PIC16 uses half the run current of the NXP part, and 1% (one percent) of the NXP’s sleep current.

Bill Giovino: Efficiency is the instruction cycles needed to get work done. In heavy interrupt-driven applications (a typical 8-bit app) interrupt latency is key. For the PIC16(L)F1939, interrupt latency is spec’ed at 3-4 instruction cycles for synchronous and 3-5 instruction cycles for asynchronous.

So for a typical synchronous interrupt, the PIC16(L)F1939 needs a maximum of 10 cycles to round-trip service the interrupt versus 24 for the NXP part. If we assume only 8 cycles to process a simple request, then the PIC16 has an overhead of 125% versus 300% for the NXP part. Nest those interrupts and the NXP part looks weaker and weaker.

If you look at the above linked article, you’ll see that Steve bases his argument on the statement:

“Microcontrollers typically have applications where [they] wake up, take a sensor reading, and go back to sleep. Processors in the Cortex-M range are able to do this in fewer cycles and effectively reduce the amount of the active duty cycle for the device. A communications stack typically has 32-bit addresses. Moving this around with an 8-bit microcontroller, an 8051 for example, is going to take more cycles, so the entire device is powered up longer.”

As we can see, for modern 8-bit microcontrollers, this statement is false.

Zoltán Kócsi: @Bill:

Well, what is that application category where you need to respond to interrupts very fast but otherwise you do not need to do any calculations, processing, communication or any other activity? I understand from what you said that that’s the main application field of the 8-bit processors, but I wonder what that field is?

Anyway, let’s take a look at your claims.

I am not a PIC man, so I downloaded the PIC16F1939 datasheet and looked at it in a bit more detail. It turns out that the oscillator of the PIC can run at 32MHz. Very respectable. Except that a CPU cycle is 4 clocks.

Therefore, a PIC running at 32MHz will have an effective CPU cycle rate of 8MHz. In your example the PIC services the interrupt in 18 cycles, that is, 2.25us. The ARM at 50MHz needs 32 clocks (and 1 clock there is 1 cycle), i.e. 0.64us. The interrupt latency on the PIC, according to the datasheet, is up to 5 instruction cycles, that is, 625ns. On the ARM it is at most 3+12 clocks, that is, 300ns. The PIC latency timing diagram in the datasheet, on pages 89 and 90 shows the relationship between the clock and the CPU cycles pretty clearly.

You said “Nest those interrupts and the NXP looks weaker and weaker”. Well, you brought it up, so let’s examine this issue in a bit more detail then.

The PIC does not have a stack to save the context; it saves it in shadow registers. Which means that you can’t do nested interrupts, unless you save the shadow registers in memory explicitly, in which case of course your assumption of 8 instructions per interrupt is blown to pieces.

In fact, your 8 instructions per interrupt is rather questionable in the first place. The PIC does not have vectored interrupts, you need to determine the source of the interrupt by reading the pending interrupt registers and see which bit is set in which register. Since the chip is 8 bit and there are 20 interrupt sources, it means going through 3 registers. This is done in your interrupt routine, by your code, not by hardware. You are burning clock after clock just to work out which peripheral to attend to.

In contrast, the ARM has a proper vectored, nested interrupt controller, with user programmable interrupt priorities and all. Each interrupt source has its own interrupt service routine, thus you do not need to work out who asked for the IRQ. Furthermore, the ARM does interrupt chaining, saving lots of clocks when the interrupt load on the chip is high.
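[Editor’s aside, not part of Zoltán’s post: the difference between the two dispatch models is easier to see in code. This is a self-contained, purely illustrative sketch with hypothetical handler names; the first function models the single-vector, poll-the-flags dispatch the PIC16 firmware must perform, and the function-pointer table models what the Cortex-M0 NVIC does in hardware.]

#include <stdint.h>
#include <stdio.h>

static void handle_timer(void) { puts("timer serviced"); }
static void handle_uart(void)  { puts("uart serviced");  }

/* Single-vector style: one ISR must test the pending flags to find the
 * source, burning cycles on every test. */
#define FLAG_TIMER (1u << 0)
#define FLAG_UART  (1u << 1)

static void single_vector_isr(uint8_t pending)
{
    if (pending & FLAG_TIMER) handle_timer();
    if (pending & FLAG_UART)  handle_uart();
    /* ...and so on through every interrupt source... */
}

/* Vectored style: an indexed table of handlers, which the NVIC walks in
 * hardware, jumping straight to the right routine. */
typedef void (*isr_t)(void);
static const isr_t vectors[] = { handle_timer, handle_uart };

int main(void)
{
    single_vector_isr(FLAG_UART); /* software walks the flags to find the source */
    vectors[1]();                 /* hardware indexes directly to the handler    */
    return 0;
}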

Now, as for power consumption. Considering that your aim is to minimise your interrupt latency and response time, I assume you run the processor at its maximum clock speed. Thus, you run the PIC at 32MHz. It consumes about 3mA @ 3.3V and the core runs at 8 million cycles per second. Alas, you can’t put the chip into low-power sleep mode, because it needs 1024 clocks, or 32us, to get out of sleep, which of course is not an acceptable interrupt latency figure. To match the interrupt response time, the ARM has to run at only 14.22MHz. In sleep mode at that frequency (the ARM can be put to sleep, as it wakes up immediately), the ARM needs just a tad more than 1mA, i.e. one third that of the PIC. If you can afford the PIC’s power budget, then you can run the ARM at almost its full speed, in which case of course the PIC’s interrupt latency is over twice as long and, even with the 8-instructions-per-IRQ assumption, its response time is 4 times slower than that of the ARM.

The LPC1100XL sleep (well, deep-sleep, actually) current is specified as below 2uA. The deep power-down mode, however, when you stop everything except the circuitry needed to wake up, is only 220nA. Admittedly, the PIC’s sleep current is still only about 10% of the ARM’s; I’ll give you that.

So, as we can see, your selected modern 8-bit microcontroller is still massively inferior to the ARM in all your selected measures, except the deep power down power consumption.

Jonny Doin: @Bill Giovino:

Continuing the analysis @Zoltán started, let’s pick a Cortex-M0 that is really low power, for example the EFM32ZG103, a very new part from EnergyMicro.

This chip’s process is optimized for low power, and it has a top frequency of 32MHz.

Its figures are very comparable to the PIC16LF1939’s, with 45uA/MHz at 32MHz.

All the conditions that @Zoltán mentioned are applicable. The enhanced PIC will save the minimum context, but usually the user needs to detect the interrupt source, dispatch the interrupt and set the BSR, before running any ISR code. That can add an average of 10 cycles to the 4 cycles of hardware latency. At 32MHz (8MHz core), these 14 cycles take 1750ns. If nested interrupts are enabled, the new interrupt will need to save context to memory and dispatch to the new ISR, manually. That will add at least 10 more cycles, or 1250ns.

Comparatively, the ARM at 32MHz (with one clock per cycle, but at the same current) will take 12 cycles to vector to user code, or 375ns. For tail-chaining of nested interrupts, the ARM takes only 6 cycles, or 187ns. So compared, this ARM is from 6 to 10 times faster at the same clock as the PIC, and can be put to sleep at 600nA current, and 2us wakeup.

This is both faster and lower power than the best PIC16 cores.
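[Editor’s aside, not part of Jonny’s post: to see what the quoted 45uA/MHz and 600nA figures imply for a duty-cycled application, here is a back-of-envelope sketch. The workload numbers (1000 wakeups per second, 500 cycles of work per wakeup) are my assumptions, not figures from the thread.]

#include <stdio.h>

int main(void)
{
    const double f_mhz       = 32.0;
    const double run_ua      = 45.0 * f_mhz;  /* 45uA/MHz at 32MHz -> 1440uA while running */
    const double sleep_ua    = 0.6;           /* 600nA sleep current                        */
    const double wakeups_s   = 1000.0;        /* assumed: 1000 interrupts per second        */
    const double cycles_each = 500.0;         /* assumed: 500 cycles of work per wakeup     */

    double active = wakeups_s * cycles_each / (f_mhz * 1e6);   /* fraction of time awake */
    double avg_ua = run_ua * active + sleep_ua * (1.0 - active);

    printf("active duty cycle: %.2f %%\n", active * 100.0);    /* about 1.56 %  */
    printf("average current : %.1f uA\n", avg_ua);             /* about 23.1 uA */
    return 0;
}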

Despite some improvements aimed at better compiled C code, the PIC instruction set is still much less efficient than the 16-bit Thumb instruction set, and it takes many more instructions to execute the same operations.

All that still points to the same answer, be it based on performance or on efficiency.

I must say that I used the PIC cores for several years, having designed dozens of circuits with them, and I am very fond of them. However, today I cannot use PICs or any other 8bit cores, even for the smaller functions. Using ARMs everywhere, I can reuse code like communication protocols, filters, and core functions.

- Jonny
