Low Energy ANTs?

As we noted elsewhere, Nordic Semi has just launched its nRF51 Series of multi-mode, ultra-low-power (ULP) wireless SoCs. The first two ICs to debut in the nRF51 Series are the nRF51822, a multi-protocol Bluetooth low energy/2.4 GHz proprietary RF SoC, and the nRF51422, the world’s first ANT/ANT+ SoC. Both chips combine flexible RF front ends with ARM Cortex-M0 MCUs—no, not 8051s as you might assume. With 10x the processing power and half the power consumption of Nordic’s previous (2nd generation) chips, they’re indeed ultra-low power. But the real story is in the software.

SoC vendors often, if not typically, require embedded developers to use their proprietary software frameworks in order to take advantage of all the hooks they’ve embedded in their hardware. You need to follow their APIs, compile and link your code with the stacks they provide, and then spend a lot of time trying to resolve unexpected dependencies. Nordic claims to have created a software architecture that cleanly separates application code from the protocol stacks, which it provides as linkable, verified, and qualified binaries. The idea is to let developers who are familiar with the ARM architecture develop application code using the Keil, IAR, or other tools they already know rather than wrestle with vendor-specific tools. Nordic relies on calls to ARM’s Cortex Microcontroller Software Interface Standard (CMSIS) library to handle the interface between application code and its chips. The stacks are 100% asynchronous, event driven, and run-time protected so you can’t accidentally blow them up.
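To make that concrete, here’s a minimal sketch (mine, not Nordic’s actual API) of what the resulting application model looks like: the stack posts events from interrupt context while the application drains them and sleeps in a CMSIS wait-for-event loop. The event structure and stack_evt_get() binding are hypothetical; only the CMSIS __WFE() intrinsic is standard.

    #include <stdint.h>
    #include <stdbool.h>
    /* A real project would include the CMSIS device header, which supplies __WFE(). */

    /* Hypothetical event record posted by the vendor's binary stack. */
    typedef struct {
        uint8_t type;                 /* e.g., connected, data received */
        uint8_t payload[20];
    } stack_evt_t;

    /* Hypothetical binding to the linkable stack: false = no event pending. */
    extern bool stack_evt_get(stack_evt_t *evt);

    static void app_handle(const stack_evt_t *evt)
    {
        /* Application logic lives here, fully decoupled from the stack. */
        (void)evt;
    }

    int main(void)
    {
        stack_evt_t evt;
        for (;;) {
            while (stack_evt_get(&evt))   /* drain pending stack events */
                app_handle(&evt);
            __WFE();   /* CMSIS intrinsic: sleep until the next event/interrupt */
        }
    }

The point of the pattern is that nothing in the application touches stack internals; the binary stack stays qualified no matter how the application code changes.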

Nordic has set out to make the nRF51 Series a real platform by making all the chips code compatible and the chips within each group pin compatible. This should enable embedded developers to reuse their code base across multiple hardware platforms—well, OK, Nordic’s, but with the major rollout of chips planned for this family, that should eventually cover a lot of devices.

About the chips themselves: the nRF51822 is a flash-based SoC that combines Bluetooth Low Energy and Nordic’s widely used (if not widely touted) proprietary Gazell 2.4 GHz protocol. The chip includes 256 kB of on-chip flash and 16 kB of RAM. Nordic provides a Bluetooth Low Energy stack that requires less than 128 kB of code space and 6 kB of RAM, leaving more than 128 kB of flash and 10 kB of RAM for application code. Included are low-energy profiles, services, and example applications. The nRF51822 draws 13 mA peak in RX and 10.5/16 mA peak in TX at 0/+4 dBm, with an on-chip LDO that operates down to 1.8V.

The nRF51422 is the first single-chip ANT solution. The very small stack requires only 32 kB of code space and 2 kB of RAM, leaving 224 kB of flash and 14 kB of RAM for application code. ANT is primarily intended for ULP sensor and control applications in personal area networks (PANs). It does ad hoc networking and can handle six simultaneous master/slave pairings with a burst speed of 60 kbps. At 16 MHz the MCU draws 4.4 mA while executing from flash; the ANT protocol uses active mode sparingly and can go back to sleep in 2.5 µs.

There are numerous two-chip wireless sensor solutions out there, but Nordic has launched a couple of ULP single-chip implementations that are deserving of a close look. Samples are in limited circulation now, with full sampling planned for September and full production late this year.

The low-power wireless market continues to heat up as the chips power further down. It’s a wide-open market where even smaller fish like Nordic Semi can come up with a good product and expect to do well. Best of all, we’re still in the early stages of the “wireless revolution.” There are a lot more useful devices, not to mention fun toys, coming along that we’ve yet to even imagine.


Low Power to the People!

No, this isn’t another of Donovan’s rants; it’s a DAC Pavilion Panel on low-power design that I’ll be moderating on Monday morning. Here’s your chance to grill the gurus–well, three of them anyway–on numerous aspects of low-power design. It’s in Booth 310 right after Gary Smith’s kickoff presentation, so check out Gary’s prognostications, grab some coffee, and stick around for a very informative panel session.

DAC Pavilion Panel
Monday, June 4
10:30 – 11:15 am
Booth space #310 on the show floor

Low Power to the People!

Power management has become the single biggest design challenge. Methods used to manage power will depend on the interaction among process technologies, IP, hardware design, and embedded software for the targeted application. New technologies such as 3D present further power management opportunities and challenges. Come hear these experts discuss power management struggles and solutions.

Chair/Moderator: John Donovan, Editor/Publisher, Low-Power Design

Panelists:

  • Clive Bittlestone, Fellow, Texas Instruments, Inc.
  • Bob Patti, CTO, Tezzaron Semiconductor
  • Aveek Sarkar, Vice President, Product Engineering & Support, Apache Design, Inc., subsidiary of ANSYS, Inc.

Trust and Verify

While system-level design has been proceeding apace for many years—approaching the goal haltingly and asymptotically—system-level verification remains the Achilles heel of that enterprise. SoC design in particular increasingly consists of assembling IP from a variety of vendors, reusing some of your own IP, writing new code, and then praying that all these elements work together smoothly—which they almost certainly won’t. You can trust the IP you licensed to work as advertised, but verifying that it works properly in your design has always been a time sink at best. Many, if not most, EDA companies have point tools for verification, but to date it hasn’t been possible to get even the tools from any one vendor to work together smoothly across the entire design flow.

Last May Cadence took aim at the problem by announcing its System Development Suite, which introduced its Rapid Prototyping Platform and Virtual System Platform and integrated them with its Palladium XP Verification Computing Platform and Incisive Verification Platform. The goal was a unified system for top-down hardware/software co-design. While still not the Holy Grail of a seamless “algorithm to GDSII” design flow, the combination of four connected platforms did move the ball a lot closer to the goal, creating an integrated simulation-acceleration and emulation environment. Cadence claimed at the time that its approach reduced system integration time by 50%, thanks in large part, no doubt, to Palladium’s hardware-assisted emulation. But there was still work to be done.

Today Cadence announced new in-circuit emulation capabilities for its Incisive and Palladium XP platforms as part of its System Development Suite (SDS), as well as acceleration and emulation extensions to its Verification IP Catalog. The result is what Cadence calls “a single heterogeneous environment for system-level verification,” combining the speed of in-circuit emulation with the analysis that’s possible with RTL simulation. Design teams will now have a common, unified verification environment, which Cadence claims can result in “up-to-10x increased efficiency during system-level validation and root cause analysis.”

The addition of Universal Verification Methodology (UVM)-compatible accelerated verification IP (VIP) further smooths the transition from simulation to acceleration, in-circuit acceleration, and in-circuit emulation, enabling designers to verify systems that are too large to verify effectively using RTL simulation.

According to Gary Smith, principal analyst at Gary Smith EDA, “The overall plan looks great—they really have done a good job. They’ve got probably a third of it done with this announcement.” What’s the third they’ve accomplished? “Well, they’ve tied the rapid prototype together with the emulator and the simulator. That’s a big breakthrough.” And the remaining two-thirds? “Connecting the remaining boxes.” How long will that take? “It’ll take them a couple of years to put it together. It’s a big job…but they really have this whole ESL thing figured out pretty well now. Expect some further announcements later this year.” Stay tuned.

Figure: Cadence’s System Development Suite, 2012


Dual Core AMP for Embedded MCU Applications

Symmetrical dual-core processors—using two identical cores—are hardly novel; in fact, they’re a bit passé by now. And asymmetrical multicore processors (AMP)—usually combining a CPU and a DSP—have also been around for many years. What’s new is an AMP MCU that combines two cores that share a similar architecture but very different performance and power profiles, enabling the pair to work in tandem, each performing just those tasks to which it’s best suited. NXP’s interesting new LPC4350 is an AMP MCU that can get a lot of work done on a small power budget.

NXP recently asked me to review their new LPC4350 microcontroller, which combines a Cortex-M4 core with a Cortex-M0. This chip is not your basic “cheap 32-bit upgrade for your 8-bit legacy projects.” The LPC4350FET256 that I tested includes:

  • an ARM Cortex-M4 and a Cortex-M0, both running at up to 204 MHz;
  • a memory protection unit (MPU);
  • up to 264 kB of SRAM for code and data (multiple SRAM blocks with separate bus access; two of the blocks can be powered down individually);
  • a floating point unit;
  • JTAG and Serial Wire Debug;
  • programmable serial GPIO (SGPIO);
  • a DMA controller supporting up to eight DMA channels;
  • and literally dozens of high-speed interfaces.

Not big.LITTLE—But Sort Of

Despite combining “big” and “little” processor cores in the same chip, this isn’t the same as ARM’s big.LITTLE approach that I wrote about earlier—combining Cortex-A15 and Cortex-A7 cores that tag team each other on a single chip (Figure 1). Instead the LPC4350 (Figure 2) takes a “divide and conquer” approach, sharing the OS when there is one and dividing application chores between the two cores. The -M0 offloads supervisory tasks from the -M4, letting it focus instead on high-speed data processing.

Figure 1: big.LITTLE simplified block diagram


Figure 2: LPC4350 block diagram

The Cortex-M4 incorporates a 3-stage pipeline, uses a Harvard architecture with separate local instruction and data buses as well as a third bus for peripherals, and includes an internal pre-fetch unit that supports speculative branching; it also supports single-cycle digital signal processing and SIMD instructions. The Cortex-M0 is a stripped-down, low-power, general-purpose 32-bit processor with a 3-stage pipeline and a von Neumann architecture. The two cores communicate using shared SRAM as a mailbox: one processor raises an interrupt on the other’s nested vectored interrupt controller (NVIC), and the receiver responds by raising an interrupt back on the sender’s NVIC.
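A bare-bones sketch of that mailbox pattern might look like the following. The shared-SRAM address, IRQ number, and message layout are my assumptions for illustration; NVIC_SetPendingIRQ() is a standard CMSIS core function, though the actual cross-core interrupt plumbing on the LPC4350 should be checked against NXP’s user manual.

    #include <stdint.h>
    /* A real project would include the CMSIS device header for NVIC_SetPendingIRQ(). */

    /* Assumed layout: a mailbox at an address both cores agree on. */
    typedef struct {
        volatile uint32_t cmd;   /* command from the sending core */
        volatile uint32_t arg;   /* one payload word */
        volatile uint32_t ack;   /* set by the receiver when consumed */
    } mailbox_t;

    #define SHARED_MAILBOX ((mailbox_t *)0x10088000)  /* hypothetical SRAM address */
    #define MAILBOX_IRQn   1                          /* hypothetical IRQ number */

    /* M4 side: post a command, then raise an interrupt on the M0's NVIC. */
    void m4_send(uint32_t cmd, uint32_t arg)
    {
        SHARED_MAILBOX->cmd = cmd;
        SHARED_MAILBOX->arg = arg;
        SHARED_MAILBOX->ack = 0;
        NVIC_SetPendingIRQ(MAILBOX_IRQn);
    }

    /* M0 side: the interrupt handler consumes the message and acknowledges,
       optionally raising an interrupt back on the M4's NVIC. */
    void MAILBOX_IRQHandler(void)
    {
        uint32_t cmd = SHARED_MAILBOX->cmd;
        uint32_t arg = SHARED_MAILBOX->arg;
        /* ...dispatch on cmd/arg... */
        (void)cmd; (void)arg;
        SHARED_MAILBOX->ack = 1;
    }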

This model isn’t as elegant as the big.LITTLE approach of using shared interrupt and cache controllers, but it’s decidedly less complex, more power efficient, and more appropriate for a small embedded MCU. Under the big.LITTLE architecture, the lower-power Cortex-A7 does all the work until it runs out of steam, at which point both the code and the OS are switched over to the Cortex-A15. In contrast, applications using the LPC4350 simply divide operations between the -M0, which primarily runs control code, and the -M4, which does high-speed signal processing. The -M4 only supports single-precision floating-point DSP functions, but the floating-point capability alone still makes it a lot faster than a comparable general-purpose processor.

What’s New?

While the LPC4350 carries 264 kB of SRAM for code and data, you can also load and run code directly from external serial flash as if it were executing from internal RAM—a unique feature that’s made possible by the quad SPI Flash Interface (SPIFI), which moves 1-, 2-, or 4-bit-wide data at rates of up to 60 MB per second. After an initialization call to the SPIFI driver, the entire flash content is accessible as normal memory using byte, half-word, and word accesses by the processor and/or DMA channels. You might well design your application to keep the M4 application in external flash connected through the SPIFI interface while internal RAM holds and executes the M0 application. This lets you run much larger applications than you could on more memory-limited MCUs.
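In code, that might look something like the sketch below. The init call and mapping address are placeholders (NXP supplies the actual SPIFI driver), but the punch line is real: once initialized, code and data in serial flash are reached with ordinary loads, stores, and jumps.

    #include <stdint.h>

    /* Placeholder for NXP's SPIFI driver initialization, which configures the
       quad-SPI interface and maps the external flash into the address space. */
    extern int spifi_driver_init(void);

    /* Hypothetical base address of the memory-mapped external flash. */
    #define SPIFI_BASE 0x14000000u

    typedef int (*app_entry_t)(void);

    int run_from_serial_flash(void)
    {
        if (spifi_driver_init() != 0)
            return -1;

        /* After init the flash reads like normal memory... */
        uint32_t first_word = *(volatile const uint32_t *)SPIFI_BASE;
        (void)first_word;

        /* ...and you can jump straight into code stored there
           (bit 0 set for Thumb execution on Cortex-M). */
        app_entry_t entry = (app_entry_t)(SPIFI_BASE | 1u);
        return entry();
    }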

Another interesting peripheral is the state configurable timer (SCT), which enables a wide variety of timing, counting, output modulation, and input capture operations. You can limit, halt, stop, or start operations depending on the result of an operation, even sequencing across multiple counter cycles. The SCT is especially useful for getting your program to respond to complex, dynamic changes in the operating environment.
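For a taste of what SCT programming looks like, here’s a rough single-output PWM sketch. Register and bit-field names follow NXP’s LPC43xx CMSIS header and user manual as I read them, but treat the details (clocking, pin muxing, exact field positions, and whether your header spells it EVENT or EV) as assumptions to verify:

    #include "LPC43xx.h"   /* NXP device header: defines LPC_SCT et al. */

    /* Rough PWM sketch: match 0 sets the period, match 1 the duty cycle. */
    void sct_pwm_init(uint32_t period, uint32_t duty)
    {
        LPC_SCT->CONFIG |= 1;                    /* UNIFY: one 32-bit counter */

        LPC_SCT->MATCHREL[0] = period - 1;       /* reload value for match 0 */
        LPC_SCT->MATCHREL[1] = duty;             /* reload value for match 1 */

        LPC_SCT->EVENT[0].STATE = 1;             /* active in state 0 */
        LPC_SCT->EVENT[0].CTRL  = 0 | (1 << 12); /* match 0, COMBMODE = match */
        LPC_SCT->EVENT[1].STATE = 1;
        LPC_SCT->EVENT[1].CTRL  = 1 | (1 << 12); /* match 1, COMBMODE = match */

        LPC_SCT->OUT[0].SET = (1 << 0);          /* event 0 drives the pin high */
        LPC_SCT->OUT[0].CLR = (1 << 1);          /* event 1 drives the pin low */

        LPC_SCT->LIMIT_L = (1 << 0);             /* event 0 restarts the counter */
        LPC_SCT->CTRL_U &= ~(1 << 2);            /* clear HALT: counter runs */
    }

The interesting part is that events, not software, drive the outputs; the CPU only sets up the state machine and can sleep while it runs.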

How Low Is “Low Power”?

I checked out the LPC4350 using a not-inexpensive but decidedly full-featured Hitex eval board and Keil uVision4 software tools. As of this writing NXP has yet to publish power specs for the chip, and trying to determine them proved to be no easy matter for my beloved but dated test equipment—until Jack Ganssle tipped me to inserting a small resistor in the negative supply lead followed by two op-amp gain stages. (BTW, a $300 BitScope is a great poor man’s substitute for a $5,000 DSO.)
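The arithmetic behind that trick is worth spelling out, using made-up but plausible numbers: a 1 Ω shunt in the supply return turns a 10 mA sleep current into a 10 mV drop, which is hard for a dated scope to resolve cleanly, but two op-amp stages of gain 10 each turn it into a healthy 1 V signal. The catch is dynamic range: resolving the microamp-level currents of the deep power-down modes would take roughly 1,000x more gain, or a much larger shunt at the cost of burden voltage.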

There’s no provision on the Hitex board for measuring the power consumption of each core independently, though both cores running together in Sleep mode draw about 10 mA; active mode current is highly application dependent, though the M0 seems to typically draw about ¼ as much power as the M4.

I’m still trying to accurately measure the current in Power Down not to mention Deep Power Down mode, which may require that $5,000 DSO after all—or a trip down the road to National Instruments to hook up to $12,000 worth of their test equipment. The last time I did that I pressed the wrong button on a test board and caused a 100 mA spike when we were expecting something in the nanoamp range. I hope they’ll let me come back.


big.LITTLE is Big

Asymmetrical multiprocessing is a great idea that’s challenging to execute. It’s relatively straightforward to process high-speed, high-definition video with symmetrical multiprocessing (SMP) using an array of identical DSPs instantiated in an SoC or FPGA. However, doing asymmetrical multiprocessing (AMP) – with some cores processing data and others acting as supervisors – presents a real challenge for systems architects and embedded developers. ARM has gone a long way toward addressing those issues with the announcement last October of its big.LITTLE architecture.

ARM’s first big.LITTLE system combines a ‘big’ Cortex-A15 MPCore with a ‘little’ Cortex-A7. Looking at its datasheet, it’s almost amusing to think of the Cortex-A7 as a ‘little’ core. According to the venerable Steve Leibson, ARM’s latest 28 nm iteration of the Cortex-A7 delivers 20% more processor performance with 30% less power consumption than the 45 nm Cortex-A8. This isn’t your Dad’s Cortex-A7; it’s only ‘small’ when compared to the Cortex-A15.

More entertaining is the idea of using a Cortex-A15 MPCore-based chip – initially targeted at servers – in a cell phone. OTOH ARM’s stated purpose with the big.LITTLE architecture is to provide “both high-performance as well as extreme power efficiency to extend battery life.” ARM is actually serious about having you use this dynamic duo in battery powered embedded designs.

Twins—Sort Of

The central tenet of big.LITTLE is that both cores must essentially be architecturally identical so that all instructions will execute consistently across both cores. At first glance this hardly seems possible. The Cortex-A7 is an in-order, non-symmetric dual-issue processor with an 8-10 stage pipeline; the Cortex-A15 is an out-of-order, sustained triple-issue processor with a 15-24 stage pipeline. However, the Cortex-A15 and Cortex-A7 both share the full ARM v7A architecture including virtualization and Large Physical Address Extensions, so above the micro-architecture level they’re fully compatible.

On the micro-architecture level the pipeline differences alone guarantee that the Cortex-A15 will be faster and consume more energy than the Cortex-A7. However, with both processors operating at 1 GHz, when the Cortex-A7 runs out of steam you can migrate both the task and the operating system from the Cortex-A7 system to the Cortex-A15 system (Figure 1) in 20 µs and keep right on processing. ARM claims that “by selecting the optimum processor for each task big.LITTLE can extend battery life by up to 70%.”


Figure 1: Cortex-A7/-A15 operating curves

ARM provides a software switcher that provides all of the mechanisms required for task migration between the Cortex-A7 and Cortex-A15 systems, including saving and restoring states; bringing the processors in and out of coherency; and migrating interrupts. The switcher also hides the minor differences between the cores from the operating system.
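ARM hasn’t published the switcher’s internals in this announcement, but conceptually the migration sequence looks something like the sketch below; every name in it is an illustrative placeholder of mine, not ARM’s actual switcher API.

    /* Conceptual sketch of a big.LITTLE "upswitch". All functions and types
       are illustrative placeholders, not ARM's actual switcher code. */

    typedef enum { CORTEX_A7, CORTEX_A15 } core_t;
    typedef enum { A7_CLUSTER, A15_CLUSTER } cluster_t;

    extern void power_up(cluster_t c);
    extern void power_down(cluster_t c);
    extern void enable_coherency(cluster_t c);   /* join the CCI-400 domain */
    extern void disable_coherency(cluster_t c);
    extern void save_cpu_state(core_t c);        /* architectural + VFP/NEON state */
    extern void restore_cpu_state(core_t c);     /* 1:1 register mapping */
    extern void migrate_interrupts(core_t from, core_t to); /* reprogram GIC-400 */
    extern void resume_os(core_t c);

    /* Outbound = A7, inbound = A15: the "run out of steam" switch. */
    void switch_to_big(void)
    {
        power_up(A15_CLUSTER);                /* bring the inbound cluster up */
        enable_coherency(A15_CLUSTER);        /* caches now coherent via CCI-400 */

        save_cpu_state(CORTEX_A7);            /* snapshot the outbound core */
        restore_cpu_state(CORTEX_A15);        /* replay it on the inbound core */
        migrate_interrupts(CORTEX_A7, CORTEX_A15);

        resume_os(CORTEX_A15);                /* OS continues, none the wiser */
        disable_coherency(A7_CLUSTER);
        power_down(A7_CLUSTER);               /* outbound cluster goes dark */
    }

Note that only architectural state actually moves; code and data stay put in coherent memory, which is what makes the quoted 20 µs switch plausible.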


Figure 2: big.LITTLE simplified block diagram

Tight system integration is key to making this all work together (Figure 2). The Cortex-A7 cores all share a common Level 2 cache, as do the Cortex-A15 cores. The Cortex-A15 and Cortex-A7 pairs also share a programmable Generic Interrupt Controller (GIC-400), which distributes up to 480 interrupts among the various cores. Both banks of cores share memory controller and system ports through a common Cache Coherent Interconnect (CCI-400). In addition there’s a one-to-one mapping between the state registers in the inbound and outbound processors, and all registers are read and written in an architecturally consistent manner.

Expect to see a number of big.LITTLE SoCs starting later this year from ARM’s numerous licensees.


Share the Air(waves)

With seemingly everyone in the world over the age of six owning a smart phone, the FCC estimates that the demand for wireless services will continue to increase over 50% year over year. While the cellular network load due to voice traffic has remained relatively flat, data traffic has soared. Cisco reports that almost half of all data traffic is streaming video, and AT&T has discovered that the killer app on its network – as in network killer – is the iPhone, with 4% of its iPhone customers recently accounting for more than half of the data traffic on its 3G network. This is a problem that isn’t going away anytime soon, and cellular network operators are desperately trying to deal with a huge surge in demand they hadn’t anticipated.

Spectrum utilization (Courtesy of the FCC Office of Broadband Development)

Network operators have a number of ways to address the capacity problem, none of them attractive and all of them expensive. First, they can buy more spectrum at auction from the FCC or from another company, such as AT&T’s purchase of spectrum from Qualcomm; or they can buy another operator for the same reason – most notably Cingular’s merger with AT&T Wireless and AT&T’s abortive acquisition of T-Mobile USA. Or operators can just splurge and build out their purported 4G networks as quickly as possible – witness the current race between Verizon and AT&T. Finally, they can introduce tiered data pricing plans – and we all know how popular those have proven to be.

A pending move by the FCC could go a long way toward addressing the problem, but not as far as the dynamic spectrum access (DSA) made possible by cognitive radio techniques.

Enter the FCC

The FCC has done its part, selling large blocks of spectrum at auction and opening up TV white spaces for unlicensed portable devices. In addition, its National Broadband Plan promises to “find” an additional 300 MHz of spectrum within the next three years and 500 MHz within five years—though it’s unclear where those frequencies are hiding. Since the RF spectrum isn’t getting any larger, this promises to be largely a zero-sum game that involves moving existing users to less used portions of the spectrum. But not entirely: the FCC has ordered all land mobile radio (LMR) stations below 512 MHz to further narrowband their channels, which could result in up to twice as many available channels below that frequency. Also, opening up the TV white spaces to unlicensed portable devices could have the same dramatic impact that creating the ISM bands did with the introduction of Wi-Fi, Bluetooth, etc. The combination of digital frequency-hopping techniques and free spectrum made all that possible.

The economic impact of freeing up spectrum in what was previously thought of as “junk bands” like 2.4 GHz has been considerable. According to FCC Chairman Julius Genachowski, “The economic benefit created by unlicensed spectrum is estimated at up to $37 billion a year.” The reward from selling off licensed spectrum isn’t exactly bupkis, either. Again according to Genachowski, “Spectrum auctions have raised more than $50 billion for the U.S. Treasury, and economists regard the economic value created by FCC auctions as being about 10 times that number, or $500 billion in value.”

Always keen to turn a quick buck, Congress stepped into the act yesterday (February 16). Assuming that the current bill becomes law, the FCC will be mandated to auction off large chunks of spectrum, thereby raising an estimated $25 billion, of which $15 billion goes to the Treasury; $7 billion to build a national public safety network; and $1.75 billion to compensate TV stations for giving up the spectrum they own but no longer use thanks to the transition from analog to digital TV.

The FCC did get its wrist slapped, however, in my opinion for not allowing AT&T and Verizon to turn the last auction into something resembling a 19th-century land grab by the railroads. This time around they are explicitly entitled to bid on reclaimed chunks of spectrum. The newly aggregated public safety bands won’t be a freebie, either; they’ll “be developed by cellphone companies that would agree to give first priority to public safety transmissions during an emergency.” Hopefully they’ll be given a lot of priority.

There is one major plus to the pending auction other than money. According to the Times article, “The legislation also provides for the creation of bands of unlicensed airwaves, so-called white space, around each segment of auctioned spectrum for use in building large Wi-Fi networks in urban areas and for use by cellphone companies in temporarily easing crowding on their networks.” Assuming that the devil isn’t in the details, this could open a whole new chapter in wireless development.

Dynamic Spectrum Access

The most promising solution to spectrum congestion is cognitive radio networking. Cognitive networks move intelligence to the edge of the network, enabling different transmitters to dynamically change their frequency or modulation in order to avoid interfering with other stations sharing the same portion of the spectrum. Cognitive radios need to be able to sense and respond to the presence of other signals in their intended operating bands, using advanced software radio techniques – known as dynamic spectrum access – to minimize interference.

Spectrum management can be achieved one of two ways: by reference to a central database or by dynamically responding to other signals. Spectrum Bridge has developed a database approach to frequency reuse for the TV white spaces, an approach which the FCC recently approved. Under Part 15 of the FCC rules if you want to use an unlicensed TV band device (TVBD) on these frequencies, you must first check Spectrum Bridge’s database for a list of authorized channels at your location and input the exact location of your device into their database before proceeding.

While the database approach is very helpful, it’s a static solution to the problem. The Shared Spectrum Company (SSC) was recently granted four patents that cover the basics of dynamic spectrum access: determining spectrum availability within a network; monitoring and detecting channel occupancy; detecting and classifying signals within a channel; and implementing an efficient method for reusing spectrum while mitigating interference. SSC has developed DSA-enabled cognitive radios that can operate in the TV white spaces without causing interference to other devices, thereby greatly increasing spectral efficiency and improving quality of service beyond what has been possible to date.
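To see why “monitoring and detecting channel occupancy” is the heart of the problem, consider the toy energy-detection scan below. It’s a sketch of my own (the radio hooks are hypothetical, and real DSA classifiers such as SSC’s are far more sophisticated), but it shows the basic sense-then-tune loop:

    #include <stdbool.h>

    #define NUM_CHANNELS  40
    #define OCCUPIED_DBM (-85.0)   /* assumed detection threshold */

    /* Hypothetical radio hooks: measure in-band energy, retune the front end. */
    extern double measure_rssi_dbm(int channel);
    extern void   tune_to(int channel);

    /* Pick the quietest apparently-free channel; return -1 if none qualify. */
    int select_free_channel(void)
    {
        int best = -1;
        double best_rssi = 0.0;

        for (int ch = 0; ch < NUM_CHANNELS; ch++) {
            double rssi = measure_rssi_dbm(ch);
            if (rssi < OCCUPIED_DBM && (best < 0 || rssi < best_rssi)) {
                best = ch;
                best_rssi = rssi;
            }
        }
        if (best >= 0)
            tune_to(best);   /* dynamic spectrum access, crudely */
        return best;
    }

A real cognitive radio must also classify what it hears (a TV broadcast, a wireless mic, another cognitive node) and keep re-sensing while transmitting, which is where the hard engineering lives.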

xG Technology Inc. claims to have built “the world’s first carrier-grade cognitive radio cellular network” in Fort Lauderdale, FL. The company’s xMax network automatically reallocates mobile units to different frequencies in order to minimize interference and optimize network utilization. While xG’s network technology is still in the trial stage, it’s passed some preliminary military testing and looks to be getting ready for prime time.

Whether it’s xG’s cognitive radio network or someone else’s, dynamic spectrum access—made possible by cognitive radio technologies—is a potential game changer that could enable billions of humans—not to mention the billions of machine-to-machine (M2M) devices now starting to come online—to ‘share the air’ without fear of bringing networks to their knees.


802.11 to the nth Degree

It seems like every major wireless protocol is coming out with a variant that can make it under the low-power limbo bar. Bluetooth has spawned Bluetooth Low Energy, and ZigBee now has a low-power healthcare profile. Not to be outdone, the 802.11 community developed 802.11n as a high-speed, lower-power alternative to 802.11a/b/g, and it’s been rapidly adopted. Recently even lower-power versions of 802.11n chips have been coming on the market. But the Big Kahuna is 802.11ac, for which first silicon is just starting to appear.

Operating in the 5 GHz band, 802.11ac chips will

  • have 2-4x the bandwidth of 802.11n (80 and 160 MHz channels vs. 40 MHz for 11n);
  • achieve a data throughput of up to 1 Gbit/s—~10x better than 11g and about 3x better than 11n for 2- and 3-stream implementations (see the arithmetic after this list);
  • support multi-user MIMO with up to 8 data streams (vs. 4 in 11n);
  • support up to 256-QAM vs. 64-QAM in 11n;
  • theoretically result in a considerably better power profile than 11n.
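That gigabit figure is easy to sanity-check against the draft PHY numbers. An 80 MHz channel carries 234 data subcarriers; with 256-QAM (8 bits per subcarrier), rate-5/6 coding, and a 3.6 µs short-guard-interval symbol, one spatial stream delivers 234 × 8 × (5/6) / 3.6 µs ≈ 433 Mbps. A 160 MHz channel doubles that to about 867 Mbps, so two streams at 160 MHz, or three at 80 MHz, comfortably clear 1 Gbit/s.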

The “theoretically” hinges on the fact that the 802.11ac specification has yet to be ratified. The Initial Technical Specification Draft 0.1 was confirmed by IEEE 802.11 TGac on January 20, 2011. The specification isn’t expected to be finalized until mid-year at the earliest, at which point the Wi-Fi Alliance expects to ratify it, though IEEE ratification will take longer.

Are We There Yet?

That hasn’t stopped a rush to market with ‘pre-ac’ silicon, exactly the same thing that happened before the 802.11n specification was ratified. Last time the first out of the chute was Broadcom, whose ‘pre-n’ 802.11 chips hit the market well before the warring camps in the IEEE working group had ironed out their differences.

At CES earlier this month Broadcom announced that it is sampling 802.11ac silicon—the BCM43xx family, which it refers to as ‘5G WiFi’—though it has yet to announce a date for full production. Early adopters of Broadcom’s 11n chips took a big chance but came out unscathed. Will they be as lucky this time? According to Michael Hurlston, senior vice president of Broadcom’s Home and Wireless Networking business unit, “I’m confident that any changes to the spec beyond this point and before final ratification will be window dressing, and relatively small.” History, hype, or hope? Only time will tell. Still, having pulled it off before—and pushing a lot of chips, as it were, onto the table—it would be foolish to bet against Broadcom.

Also joining the ‘pre-ac’ race is Redpine Signals, currently sampling its Quali-Fi™ 802.11ac chip. The Quali-Fi product is accompanied by Redpine’s software framework, which includes an access point, a Wi-Fi certified client, and Redpine’s Wi-Fi Direct™ functionality. Redpine CEO Venkat Mattela tells Low-Power Design that modules with 802.11ac chipsets will be available late this year or early in 2013.

I’d be very surprised if Qualcomm/Atheros and Samsung—who co-chair the IEEE 11ac Task Group—as well as committee members Cisco, Intel, LG, Marvell, Mediatek, and others—didn’t announce 11ac chips shortly after the specification is ratified—if not before.

With even once power-hungry Wi-Fi now joining the low-power race, low-power wireless is no longer just a trend, it’s mainstream. We may not be ‘there yet’—and never will be, since the goal is one you can only approach asymptotically—but silicon vendors are making an impressive amount of incremental progress. Stay tuned for more exciting developments.


Bluetooth Goes Ultra-Low-Power

There’s hardly a cell phone on the planet that doesn’t have a Bluetooth transceiver for connecting to a wireless headset. Most new PCs now incorporate Bluetooth chips for the same purpose, letting you type while you talk or listen. Many, if not most, new cars have Bluetooth to let you talk hands free while driving. That’s all well and good, but there is a wide range of applications for which Bluetooth isn’t appropriate – or at least it wasn’t until now.

Bluetooth is a connection-oriented protocol designed to handle continuous streaming of data at relatively high speeds, making it well-suited to connecting wireless headsets to cell phones. While attempting to remain low power, most changes to the Bluetooth specification have concentrated on boosting the data rate. The basic rate (BR) enables synchronous and asynchronous connections at up to 720 kbps. Bluetooth Version 2.0 (2004) added an extended data rate (EDR) of 3 Mbps (in practice more like 2.1 Mbps). Bluetooth 3.0 (2009) added a high-speed (HS) data capability of up to 24 Mbps by using an alternative MAC/PHY (AMP) that communicates over a co-located 802.11 link. Despite some clever engineering, the quest for higher speed necessarily resulted in higher power consumption.

Bluetooth Low Energy, in contrast, was designed from the beginning to be an ultra-low-power (ULP) protocol to serve short-range wireless devices that may need to run for months or even years on a single coin cell battery. Introduced in Bluetooth Version 4.0 (2010), Bluetooth Low Energy uses a simple stack that enables asynchronous communication with low-power devices, such as wireless sensors that send low volumes of data at infrequent intervals. Connections can be established quickly and released as soon as the data exchange is complete, minimizing PA on-time and thus power consumption.
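The coin-cell claim is easy to sanity-check. A CR2032 holds roughly 220 mAh, so a sensor that keeps its average drain to 25 µA (say, millisecond-scale connection events a few times a minute, with single-digit-microamp sleep in between) runs 220 mAh / 25 µA ≈ 8,800 hours, or about a year on one cell; halve the average current and you double the battery life.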


Synopsys Buys Magma—But Will the Marriage Last?

By John Donovan

Synopsys yesterday (11/30/2011) announced that it has signed a definitive agreement to buy Magma Design Automation for $507 million, the largest acquisition in the EDA industry in many years. The acquisition will strengthen Synopsys’ position in both analog and digital EDA tools, at the same time removing a struggling competitor with whom it’s had a less than friendly relationship.

On the financial side Magma represents an easy acquisition for cash-rich Synopsys and an unexpectedly good exit strategy for Magma. Synopsys has agreed to acquire Magma for $7.35 per share in cash, a 27.8% premium over the $5.75 at which LAVA was trading on the NASDAQ the day before the acquisition was announced (it immediately jumped to $7.09 after the announcement). For Q1 of this year Magma reported a GAAP net loss of $(0.1) million, or $(0.00) per share, compared to a net loss of $(3.3) million, or $(0.06) per share, for the year-ago first quarter—a slow crawl back from a bad year, but hardly enough to keep up with giants Synopsys, Mentor, and Cadence, all of whom are experiencing healthy growth. Needless to say, Magma’s Board of Directors unanimously approved the merger.

On the analysts’ call announcing the merger, Synopsys CEO Aart de Geus was upbeat about the synergies between the companies, highlighting the acquisition as an opportunity to expand their R&D talent pool. John Chilton, senior vice president of marketing and strategic development at Synopsys, underscored de Geus’ point, adding, “We really are getting more requests for more technology. Deep-submicron CMOS is very complex in terms of materials, the number of transistors and the parasitics. Tools have to do more.”

The question of overlapping tools loomed large and will remain a matter of speculation until the deal closes in the middle of next year. Chilton said Synopsys would not discontinue any Magma products at the time of the deal closing, though analysis of how to integrate them into Synopsys’ product lines will clearly be front and center for the next several months.

While de Geus said on the call that Synopsys was not motivated by Magma’s strength in any one particular product area, readers are permitted to take that with a grain of salt. Magma’s FineSim Pro simulator for analog/mixed-signal SoCs has reportedly been gaining key accounts that had previously been using Synopsys’ HSPICE simulator for RF and analog design, an area where Synopsys hasn’t really taken off. On the digital side Magma doesn’t offer logic simulation—much to its detriment—but in Talus and Titan it does have a very capable tool flow from RTL synthesis right through silicon implementation, with Talus’ timing analysis capability being especially attractive to Synopsys. Magma’s yield management tools will also be a plus, though Synopsys isn’t lacking there.

According to Gary Smith, chief analyst at Gary Smith EDA, “It’s a great deal for Synopsys,” not to mention Magma. Also according to Smith, FineSim and Talus fill important gaps in Synopsys’ product offerings, “making them whole.” Over the next few years the Magma acquisition should prove to be quite successful for Synopsys. Longer term, though, Smith foresees possible problems integrating both Magma’s tools and its engineers. On the tools side, Magma’s data-driven development paradigm differs considerably from Synopsys’, raising the question of whether their tools, however synergistic, can indeed be integrated; if not, which ones survive and which ones receive an End of Life notice? And will the engineers at Magma, arguably not enamored of Synopsys after their long-running legal battle, stay on after the deal closes, or will they cash out and start their own companies? Smith agrees with de Geus that the engineers are the crown jewel of the acquisition. Keeping them happy and on board will be key to the merger’s long-term success.

How all this will shake out remains to be seen. Will Mentor or Cadence respond quickly by acquiring one of the dozens of small, capable EDA companies to fill gaps in their own tool flows (probably)? Will Rajeev wind up in an office next to Aart (<1% chance), or will he take the money and start a new company (>90% chance)? Only time will tell. The one certainty is that the big three EDA companies will continue their acquisition binge, becoming stronger and more capable while at the same time providing a happy landing for some intrepid entrepreneurs.


Electric Flight—the Ultimate Energy Efficiency Challenge

If you think electric cars are impressive, how about an electric 747? On a smaller scale, that flight of fancy just became a reality.

Last month in Santa Rosa, CA, an electric-powered 4-seat light plane won the NASA/Google Green Flight Challenge by flying over 200 miles non-stop at over 100 mph while achieving 403.5 passenger miles per gallon (mpg), using the equivalent of less than one gallon of gasoline. Compare that to the Chevy Volt—the current state of the art in electric (land-based) vehicles—which gets the equivalent of 112 mpg in all-electric mode while driving slowly over flat roads. And even with the benefit of wheels and a 435 lb. battery, the Volt can only keep that up for 35 miles, at which point it reverts to its gas engine, which gets 37 mpg.

The winner of the $1.65 million prize was Team Pipistrel from Penn State, flying a Taurus G4 manufactured in Slovenia. The G4 is a four-seat, twin-engine plane with a wingspan of 69’2” and a weight of 2,490 lb., slightly less than a Volkswagen Beetle. The two 145 kW (194 hp) motors can drive the Pipistrel to about 114 mph, so it won the Challenge race running almost flat out.

Detailed data on the custom-built G4 is hard to come by, but not for the production model Taurus Electro G2. The body is a composite of epoxy resin, fiberglass, carbon fibers and Kevlar in a honeycomb structure. The motor is a high-performance synchronous 3-phase outrunner with permanent magnets, delivering 40 kW on takeoff and 30 kW continuous. The best glide ratio is 1:41, which really qualifies it as a powered glider. To put it in perspective, the typical glide ratio for a two-seat general aviation plane is about 1:10. Aside from getting unimpressive mileage, you really don’t want to run out of gas while flying your Piper Cub. Or in a 747 for that matter.

Electric gliders have been around for a while. The first commercial one was the AE-1 Silent, which first flew in 1997. Weighing a mere 430 lb., the AE-1 is easily powered by its 13 kW (17 hp) electric motor, which in turn works from a 4.1 kWh, 77 lb. Li-Ion battery. If you’re so inclined, the AE-1 is FAA certified as an ultralight aircraft, and it’s still being produced.

More high powered is the Antares 20E from Lange Aviation GmbH, in production since 2004. The 20E is powered by a 42 kW (56 hp) BLDC electric motor weighing 64 lb. Energy storage consists of 72 Li-Ion cells, each rated at 44 Ah at 3.7V, for a combined capacity of 12 kWh @ 266V. With a wingspan of 65 ft. and weighing in at 1,455 lb., this is a serious airplane—though still a one-seater. The 20E can self-launch, climb to 3,300 ft. in four minutes, and continue on up to 10,000 ft., where it can fly for 1.5 hours. Assuming you’ve covered 93 miles at that point, the maximum glide ratio of 1:56 (!) from 10,000 ft. (roughly two miles up) adds another 2 × 56 = 112 miles, for a maximum range of 93 + 112 = 205 miles.

Now let’s figure the mileage for just the powered portion of the flight. Assuming your flight fully depleted the 12 kWh battery, that works out to 12 kWh/93 miles, or 12.9 kWh/100 miles. Using the same formula the EPA applied to the Chevy Volt—where 36 kWh/100 miles = 93 mpg-e—the Antares comes in 2.8x better, at 260 mpg equivalent! That’s a pretty energy-efficient way to travel.

In an interesting twist Lange is now producing the Antares DLR-H2, which is powered by hydrogen fuel cells, with the tanks slung in pods under the wings. The actual motive force is a 42 kW BLDC motor. The 130 lb. fuel cells can generate 20 kW continuously, twice the 10 kW required for level flight. The DLR-H2 can attain a height of 12,000 ft and has a top speed of 105 mph and a range of 1,240 miles.

Using solar cells to recharge your batteries while in flight can greatly extend your range. In 1990 the solar powered plane Sunseeker flew across the U.S. powered by a 250W array of thin-film solar cells. Since solar cells obviously don’t work at night, it took two weeks to accomplish this task.

The first solar powered plane to complete a 24 hour flight was Solar Impulse. Claiming to have “the wingspan of an Airbus [208 ft.]…the weight of a family car [3,500 lb.]…and the power of a scooter [40 hp],” its designers plan to fly it around the world in 2012. The solar cells on the wings of Solar Impulse cover 650 sq. ft. and can generate 6 kW (8.2 hp), which is stored in Li-Ion cells during the night. All things being equal, this should be enough to keep the 1.6 ton plane aloft day and night while traveling at just over 40 mph.

Even electric commercial airliners are in the works. In Europe EADS, Airbus’ parent company, has proposed the VoltAir ducted fan engine that would power commercial airliners. To achieve the energy density required to move such a massive aircraft, the VoltAir motor would be constructed of high-temperature superconducting (HTS) materials, cooled by liquid nitrogen. HTS motors are expected to reach power densities of 7-8 kW/kg, comparable to 7 kW/kg for today’s turboshaft engines. The batteries will still be Li-Ion, which EADS hopes will become more efficient, or Li-Air should it become commercially viable by then.

Coming to an Airport Near You

While electric flight is both fun and interesting—especially to engineers—it may impact you sooner than you think. Every major city and most smaller ones have general aviation airports. The Taurus G2 and numerous others like it would make quiet, inexpensive air taxis practical. Not only are the planes inexpensive—about the cost of a high-end car—they’re extremely inexpensive to operate, highly reliable, quiet, and essentially non-polluting. Instead of fighting the traffic between New York and Boston or San Jose and Sacramento you would be able to hop a quick, cheap flight there and gaze smugly down at the congestion below.

So there you have it. Electric boats and cars—been there, done that. Stay tuned for electric aircraft. You hopefully won’t have to stay tuned for long, and it will be worth the wait.
