Weightless Weighs In

Now that PCs are old news and seemingly everyone on earth has a cell phone, the Next Big Thing promises to be machine-to-machine (M2M) communication, giving rise to the Internet of Things (IoT)—presumably a parallel universe to the Internet of People (IoP).

Whether you believe AT&T’s prediction of 50 billion connected devices by the year 2020 or IBM’s of 1 trillion devices by 2015, the numbers are huge. Every vendor with a vested interest is arguing that their wireless solution is the best way to connect these devices, at least in certain applications. Now suddenly there’s a new entrant in the race—Weightless. As one wag joked, “Weightless is not 1G, 2G, 3G or even 4G – it is ZERO G!”

Weightless is a new low-cost, low-power, long-range wireless protocol designed for M2M communications. The design is the brainchild of Professor William Webb, co-founder of Neul Ltd., CEO of the Weightless SIG, and author of Understanding Weightless. First announced in 2011, Weightless has picked up some serious backers, including ARM, CSR, and Cable & Wireless. The goal is to make it the first global standard for M2M communications.

Its proponents claim Weightless has a number of advantages vs. other protocols:

  • Cost—Module cost is comparable to that of Bluetooth modules, less than $2. The cost of the infrastructure would also be a lot less than for cell phones, since the protocol can reach up to 10 km—all things being equal—meaning you need a lot fewer base stations. Finally, Weightless reuses the unlicensed white space between TV channels, so there's no massive upfront investment in spectrum (unless the FCC decides to put it up for auction).
  • Power consumption—Weightless devices are designed for a minimum of 10 years of battery life, since remote wireless sensors aren't amenable to frequent battery replacement. This is possible in part because Weightless is a very lightweight protocol that spends little time in active mode. Also, it uses spread spectrum technology, which minimizes output power. Finally, Weightless devices have allocated time slots, so they aren't constantly listening to the network and can stay asleep most of the time. Weightless basestations only page connected devices every 15 minutes, varying the symbol rate based on signal strength. (A back-of-the-envelope battery-life sketch follows this list.)
  • Range—Using sub-GHz frequencies, Weightless devices have very good propagation and penetration characteristics vs. Wi-Fi, Bluetooth, and other protocols that utilize the 2.4 GHz ISM band.
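To see why a 10-year battery life is at least plausible, here is a back-of-the-envelope duty-cycle calculation in Python. Only the 15-minute paging interval comes from the protocol description above; the sleep and active currents, the awake time, and the battery capacity are assumptions invented for illustration, not published Weightless figures.

```python
# Battery-life sketch for a Weightless-style duty cycle. Only the 15-minute
# paging interval comes from the post; every other number is an assumption.
SLEEP_UA = 2.0         # sleep current, microamps (assumed)
ACTIVE_MA = 20.0       # radio-on current, milliamps (assumed)
ACTIVE_S = 1.0         # awake time per paging interval, seconds (assumed)
INTERVAL_S = 15 * 60   # paging interval, from the post
BATTERY_MAH = 2_400    # roughly a pair of AA cells (assumed)

duty = ACTIVE_S / INTERVAL_S
avg_ma = ACTIVE_MA * duty + SLEEP_UA / 1000
years = BATTERY_MAH / avg_ma / (24 * 365)
print(f"average draw {avg_ma * 1000:.1f} uA -> about {years:.0f} years")
# average draw 24.2 uA -> about 11 years
```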

How does it work?

Weightless is designed to work in the so-called "white spaces" previously occupied by analog TV signals; typically this is in part of the UHF band spanning approximately 470 MHz to 790 MHz, depending on the country. The FCC has ruled that these bands can be used for unlicensed devices, but only if they can detect the presence of other users and not interfere with them. That pretty much rules out Wi-Fi, which comes up short on interference detection and frequency agility.

Weightless uses time division duplexing (TDD), so both the uplink and downlink occupy the same channel. Since Weightless devices are assigned a particular time slot, they can spend most of their time asleep and needn't constantly poll the channel, waking up to transmit only at preset intervals.

Weightless uses either phase shift keying or quadrature amplitude modulation (QAM) depending on signal strength and the amount of interference. It also utilizes a "whitening" algorithm to spread the signal and make it appear more like white noise, thus reducing interference. The data rates for the downlink range from 2.5 kbps to 16 Mbps.
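Whitening typically amounts to XORing the payload with a pseudo-random bit sequence so that long runs of identical bits still look noise-like on the air. Here is a minimal sketch in Python; the LFSR polynomial and seed are arbitrary choices for illustration, not the actual Weightless whitening sequence.

```python
# Minimal whitening sketch: XOR data with a pseudo-random LFSR sequence.
# The 9-bit polynomial (x^9 + x^5 + 1) and seed are illustrative only.
def lfsr_bits(seed=0x1FF):
    state = seed
    while True:
        bit = ((state >> 8) ^ (state >> 4)) & 1   # taps at bits 9 and 5
        state = ((state << 1) | bit) & 0x1FF
        yield bit

def whiten(bits):
    prng = lfsr_bits()
    return [b ^ next(prng) for b in bits]

payload = [1, 1, 1, 1, 0, 0, 0, 0] * 4    # highly repetitive payload
print(whiten(payload))                     # looks random on the air
assert whiten(whiten(payload)) == payload  # XOR whitening is its own inverse
```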

[Table: Weightless modulation schemes]

As the table indicates, Weightless uses a spreading algorithm to create a longer data sequence when signal levels are weak. It reduces the data rate and shifts to a simpler modulation scheme in order to reduce the error rate or to gain additional range.
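The adaptation logic boils down to trading bits per symbol and spreading factor against link quality. The sketch below is purely illustrative—the SNR thresholds, spreading factors, symbol rate, and resulting data rates are invented, not the actual Weightless rate table (which spans roughly 2.5 kbps to 16 Mbps).

```python
# Illustrative link adaptation: pick (modulation, spreading factor) from SNR.
# All thresholds and parameters are invented; this is NOT Weightless's table.
SYMBOL_RATE = 4e6  # channel symbols/sec, assumed for illustration

SCHEMES = [  # (min SNR dB, modulation, bits/symbol, spreading factor)
    (18, "16-QAM", 4, 1),
    (9,  "QPSK",   2, 1),
    (3,  "BPSK",   1, 4),
    (-3, "BPSK",   1, 64),
    (-9, "BPSK",   1, 1024),  # heavy spreading buys range at low SNR
]

def pick_scheme(snr_db):
    for min_snr, name, bits, sf in SCHEMES:
        if snr_db >= min_snr:
            return name, sf, SYMBOL_RATE * bits / sf
    raise ValueError("link too poor even for maximum spreading")

print(pick_scheme(20))  # ('16-QAM', 1, 16000000.0) -- strong signal
print(pick_scheme(-5))  # ('BPSK', 1024, 3906.25)   -- weak signal, long range
```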

Despite relying on a TDD scheme, Weightless also implements frequency hopping in order to reduce interference and maximize the data rate. This also helps to reduce the effects of Rayleigh fading. In its preferred implementation Weightless makes use of narrowband uplink channels in order to balance the link budget between relatively high-power base stations and low-power terminals.

Weightless utilizes root raised cosine (RRC) pulse shaping to convert the square pulses from the digital baseband into smooth, band-limited waveforms that can be fed to the RF PA. This would typically be handled by a DAC, but Weightless provides a software approach should you choose to go direct from baseband to antenna.
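For the curious, here is what RRC pulse shaping looks like in a few lines of Python. The roll-off factor and filter span are arbitrary choices for illustration—the actual Weightless filter parameters aren't given here.

```python
# Root-raised-cosine pulse shaping: upsample the symbol stream, then filter.
# beta (roll-off) and span are illustrative, not Weightless's actual values.
import numpy as np

def rrc_taps(beta, sps, span):
    """RRC filter taps; sps = samples/symbol, span = length in symbols."""
    t = np.arange(-span * sps // 2, span * sps // 2 + 1) / sps
    taps = np.empty_like(t)
    for i, ti in enumerate(t):
        if ti == 0.0:
            taps[i] = 1.0 + beta * (4 / np.pi - 1)
        elif abs(abs(4 * beta * ti) - 1.0) < 1e-9:   # singular points
            taps[i] = (beta / np.sqrt(2)) * (
                (1 + 2 / np.pi) * np.sin(np.pi / (4 * beta))
                + (1 - 2 / np.pi) * np.cos(np.pi / (4 * beta)))
        else:
            taps[i] = (np.sin(np.pi * ti * (1 - beta))
                       + 4 * beta * ti * np.cos(np.pi * ti * (1 + beta))) / (
                          np.pi * ti * (1 - (4 * beta * ti) ** 2))
    return taps / np.sqrt(np.sum(taps ** 2))         # normalize to unit energy

sps = 8
symbols = np.random.choice([-1.0, 1.0], 64)          # BPSK symbol stream
upsampled = np.zeros(len(symbols) * sps)
upsampled[::sps] = symbols                           # insert zeros between symbols
shaped = np.convolve(upsampled, rrc_taps(beta=0.25, sps=sps, span=10))
```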

Weightless systems operate in master-slave mode, with the basestation as the master and the terminals as slaves. Basestations have separate IP addresses and backhaul capability. When they go live they contact a master database, which knows their location, power, and estimated coverage radius. When a terminal within the coverage range of a basestation announces its presence, the basestation queries the database for a clear frequency, which it then assigns to that terminal. When the terminal starts transmitting the basestation sends that information back to the database server along with signal levels.

As conditions change the basestation negotiates changes in frequency and modulation with the terminal as needed. The central database—with a few already in place in the U.S. and the U.K.—is key to making this all work: sub-GHz signals can travel over the horizon, so a station can cause interference it is entirely unaware of, since intervening mountains might prevent it from hearing the distant transmitter it is stepping on. If this happens the database server will be aware of the problem and instruct the basestation to shift to another frequency.
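A toy sketch of that registration flow appears below. The class and method names are invented for illustration—the actual white-space database API isn't described here—but the sequence (basestation queries the database for a clear frequency, assigns it to the terminal, then reports the assignment and signal level back) follows the description above.

```python
# Toy model of the basestation/database interaction described above.
# Names and the channel plan are invented; only the sequence follows the post.
class WhiteSpaceDatabase:
    def __init__(self):
        self.assignments = {}  # freq_mhz -> (basestation, terminal, dBm)

    def clear_frequency(self, location):
        # A real database consults location, power, and coverage records.
        for freq in range(470, 790, 8):   # 8 MHz channel raster (assumed)
            if freq not in self.assignments:
                return freq
        raise RuntimeError("no clear channel at this location")

    def register(self, freq, basestation, terminal, signal_dbm):
        self.assignments[freq] = (basestation, terminal, signal_dbm)

class Terminal:
    def __init__(self, ident):
        self.ident, self.freq = ident, None
    def tune(self, freq):
        self.freq = freq
    def rssi(self):
        return -95  # placeholder measured signal level, dBm

class Basestation:
    def __init__(self, name, db, location):
        self.name, self.db, self.location = name, db, location

    def admit(self, terminal):
        freq = self.db.clear_frequency(self.location)  # query master database
        terminal.tune(freq)                            # assign the frequency
        self.db.register(freq, self.name, terminal.ident, terminal.rssi())
        return freq

db = WhiteSpaceDatabase()
bs = Basestation("bs-1", db, location=(52.2, 0.1))
print(bs.admit(Terminal("meter-42")))  # -> 470 (first clear channel)
```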

Launching an Open Standard

Webb and the Weightless SIG folks argue that only wireless protocols that have been standardized will be truly successful, since different ends of a wireless link will come from different vendors, and without a universal standard that connection isn't likely to work. However, seeing an immediate market opportunity, the Weightless SIG chose to develop their own standard rather than wait years for the IEEE or ETSI to hash one out. This was the approach taken by the Bluetooth SIG, and that's worked out well.


In late 2011 Neul, a founding member of the Weightless SIG and a member of the Weightless Promoter Group, presented v0.6 of the Weightless Specification to a small group of companies so that ongoing development work could commence. The Weightless SIG currently has a draft specification (version 0.9) under review by its members, and it plans to formally release version 1.0 on April 3, 2013. The specification will be open to all but with licensing arrangements that are yet to be formalized. Once the specification is published the Weightless SIG proposes to pass it to ETSI for consideration as a formal specification; presumably an IEEE specification would follow at some point.

Weightless moved beyond the concept stage last month when Neul announced first silicon of Iceni, which it bills as “the world’s first TV White Space ASIC.” Iceni operates over the entire TV white space frequency range from 470 MHz to 790 MHz, supporting both 6 MHz and 8 MHz channel bandwidths. It features the adaptive modulation schemes listed above; data encryption; programmable I/Os for controlling an external RF front end; an on-board, low-power MCU; and a memory-mapped parallel bus interface and discrete interrupt lines for waking an applications processor.

Will the M2M Future be Weightless?

The danger is that the Weightless SIG is basically a startup, with only an early specification and limited vendor support. That is changing rapidly, with over 500 members registering in the last 12 months. But then again there is no lack of capable standardized protocols that never gained much market traction, including—fairly or not—ultra-wideband (UWB), HiperLAN, 802.22 (WRAN), and WiMAX. Technical success doesn’t ensure market success, and it’s too early to judge either in this case.

Weightless seems to be a well-designed protocol for M2M communications, though it’s not without competition; and, standardized or not, it’s not necessarily the obvious choice for all applications. However, as an alternative to cellular it makes a lot of sense. Far from being weightless in that domain, it may well turn out to be a heavyweight. But that will take time, and only time will tell. Still, the Weightless SIG is off to a good start, and we wish them well.

 

Posted in ARM, RF/Wireless, semiconductors, Spectrum, Wireless

Mmm—Raspberry Pi!

Having had great fun playing with Beagle ($125) and Panda ($175) boards, I was happily surprised when my backordered Raspberry Pi suddenly arrived a few days ago. At $35 for a tricked-out, credit-card-sized single-board computer, it was too good a deal to pass up. Despite its diminutive size and price, the little puppy stacks up well against its heftier competitors.

Actually “competitors” may be the wrong choice of words, since the Beagle and Panda are pitched to embedded developers and the Raspberry Pi’s purpose is to teach young students about computers. The computer was developed by the Raspberry Pi Foundation, a charitable organization set up by a group from the University of Cambridge to develop a tiny, cheap, programmable computer for kids.

The Raspberry Pi is built around a Broadcom BCM2835 SoC, which contains an ARM1176JZFS, with floating point, running at 700 MHz, and a Videocore 4 GPU. The GPU is capable of Blu-ray quality playback, using H.264 at 40 Mbits/s. It has a fast 3D core accessed using the supplied OpenGL ES2.0 and OpenVG libraries. There are two versions of the board available: the Model A ($25), which has 256 MB of RAM and no Ethernet connection; and the Model B ($35) with 512 MB of RAM, two USB ports, and an Ethernet port. Both are available exclusively through Newark/element14 and (outside of America) RS Components. In addition there are lots of accessories, including cases, cables, expansion cards, and starter bundles. I bought the Model B board and a clear plastic box ($7.35) into which it neatly snapped.

Baking the Pi

Before you can boot the board you need to pre-load the operating system onto an SD card. The operating system is Raspbian, a version of Debian Linux optimized for the Pi. To get started you need to download an image file with the uninspiring (though possibly appropriate) name of “wheezy.” Once you download the zip file you need to run a checksum program (sha1sum) to verify that the checksum of the downloaded file corresponds to the one shown on the download site. After spending 23 minutes downloading the 482 MB zip file I wasn’t happy that the checksums didn’t match. I did a second download and they still didn’t match. I wasn’t prepared to try it a third time, so I figured what the hell, let’s try it as is.
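If you'd rather not hunt down a standalone sha1sum utility on Windows, a few lines of Python do the same job. The file name and expected digest below are placeholders—use the values from the actual download page.

```python
# Verify a downloaded image against the SHA-1 digest from the download page.
# The file name and expected digest are placeholders, not the real values.
import hashlib

def sha1_of(path, chunk=1 << 20):
    h = hashlib.sha1()
    with open(path, "rb") as f:
        while block := f.read(chunk):  # hash in 1 MB chunks
            h.update(block)
    return h.hexdigest()

expected = "0123456789abcdef0123456789abcdef01234567"  # from the site
actual = sha1_of("wheezy-raspbian.zip")
print("OK" if actual == expected else f"MISMATCH: {actual}")
```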

Since I did the download on my Windows PC I next had to download, unzip, and run Win32DiskImager. Then I was able to copy the Wheezy image file to the SD card and transfer it to the Pi. I connected the board via a USB cable to an external power supply—drawing up to 750 mA, it can’t be powered from a PC’s USB port—and connected a monitor to the HDMI port. After rebooting, logging in, and starting the desktop, “You will find yourself in a familiar-but-different desktop environment,” according to the Getting Started guide. Right on both counts.

The Pi comes with a minimal set of programs and games, though through the Pi Store you can get a lot of different programs, most of them free. Of course you first need to get online, which I was able to do pretty easily by plugging an 802.11n WLAN adapter into one of the two USB ports (a wireless keyboard went into the other). But where was the browser? The only likely candidate was this funny looking green thing in the upper right-hand corner of the screen, which I double clicked. That didn’t explain much, since all the labels and explanations were in Arabic (or possibly Devanagari)! The same was true for the labels in most of the other programs, though fortunately the Pi Store and the Debian Reference were in English. Those mismatched checksums were starting to worry me. I’ve never encountered a Linux virus before, but that doesn’t mean it can’t happen.

Once I hacked at the mystery program I figured out that it was the browser, and typing a URL in the appropriate place took me where I wanted to go. That worked well, if slowly—well, at least by an unfair comparison to my 2.5 GHz PC. The Pi is supposed to be able to display HD video, so I went to the Amazon site to run a couple of movie trailers to check it out. When I tried to run one, I was told I needed to install the latest Adobe Flash player. I downloaded the Linux (YUM) 32-bit Flash player, but when it didn’t automatically install I opened the downloaded file—only to find all the instructions in Arabic/Devanagari. Aargh!! Time to bite the bullet, download Wheezy from another source, and try it again—later.

Buggy downloads notwithstanding, the Raspberry Pi is a great little computer with a large network of developers finding a lot of new consumer and even commercial uses for it—it is a general-purpose computer after all. It doesn’t have the ecosystem of Arduino, for example, but give it time. There are limits to what you can do with the Pi—you’re stuck with the 512 MB of memory, for example—but hey, you can overclock it and run some pretty cool games. Not bad for 35 bucks!

Posted in ARM, Raspberry Pi

Does Your Rice Cooker Have Its Head in the Clouds?

Recently I read an amusing article in IEEE Spectrum titled “Android My Rice Cooker: Gateway to Future Home Invasion?” Why would my rice cooker need to be part of the burgeoning Internet of Things (IoT)? Because it’s lonely and wants to talk with other rice cookers? To conspire with my refrigerator and diss me on Facebook about that green glop in the back of the fridge?

Well, it was amusing until you actually read it and thought about it. If hackers can invade your fancy desktop computer, they sure shouldn’t have much trouble taking over the tiny 8-bit brain in your rice cooker. And once into your home network they can reprogram everything in ways you wouldn’t care to imagine. Equipping a small appliance with an operating system and giving it Internet access strikes me as a solution that went looking for a problem, and not finding one, created one. A potentially big one.

The Spectrum blog post picked up on a Bloomberg article “Google Android Baked Into Rice Cookers in Move Past Phone.” As the author explained, “Google Inc. (GOOG)’s Android software, the most widely used smartphone operating system, is making the leap to rice cookers and refrigerators as manufacturers vie to dominate the market for gadgets controlled via the Internet.” OK, I guess I could turn on my rice cooker from my cell phone on the way home from work, but if I put the rice in the water in the morning before I left then by 5:00 PM (or 7:00 PM+ if you live in Silicon Valley) it would be mush. Not well thought out.

Be that as it may, last year in Japan Panasonic introduced its $600 Android-controlled SR-SX2 rice cooker that lets users search for recipes on their Android phones and then transmit them to the cooker. The SR-SX2 works with FeliCa-enabled smartphones, FeliCa being an RFID smart card system developed by Sony. Through a downloadable app users can specify the type of rice they’re cooking, timer lengths, and other settings, all by touching their phone to a blue icon on the cooker’s lid. How exactly the humble rice cooker—well, at $600 I guess it isn’t exactly humble—can add different ingredients and spices is not specified.

Google, of course, is in it to collect more data and no doubt run ads on your cell phone app or possibly even your rice cooker, where you might see an ad from the local Target for Mahatma long grain rice on sale and touch the screen to add it to your cart—however that would work. Do you ever get the feeling that technology might just be getting a bit too intrusive? If you answered yes, you’re probably not an engineer.

Now don’t get me wrong—I’m a big fan of machine-to-machine (M2M) communications, for which there are lots of industrial, medical, and consumer applications. This is a large market about to become huge. According to IDC the number of Internet-enabled devices will double from more than 1.8 billion units in 2011—representing more than $1 trillion in revenue—to almost 4 billion units in 2015. Just as data communication over the Internet has long since passed voice traffic, the number of machines communicating via the ‘cloud’ will shortly exceed the number of people doing the same.

This brings with it security concerns, and Android—while it’s more secure than Windows—is hardly bulletproof. I’ve programmed applications on Android and it’s a very capable OS. Still, with billions of machines going online, security will be a major concern for the IoT. Large cloud-based services should be quite secure, but small MCU-based devices like rice cookers are another matter. I don’t think this is an issue that has been adequately addressed.

I have a very nice, if dumb rice cooker that I bought from Target for $19.99. It cooks rice perfectly every time. I’m no Luddite but I can buy a lot of recipe books for the $580 I’m saving by not buying the Panasonic SR-SX2.

Posted in Embedded, IoT

Bluetooth Low Energy Gets Smart

Bluetooth has long been one of those technologies we take for granted—it’s in all our wireless headsets and increasingly in hands-free audio in our cars. But until the emergence of Bluetooth Low Energy it had trouble breaking out of the ‘headset ghetto’ beyond migrating to wireless mice and keyboards. That’s now rapidly changing, thanks to the emergence of “Bluetooth Smart” and “Bluetooth Smart Ready” devices.

Bluetooth is a connection-oriented protocol designed for continuous, relatively high-speed data streaming—making it well suited to wireless headphones, its original and still main market. Operating at its Basic Rate (BR) Bluetooth enables connections at up to 720 kbps. Bluetooth 2.0 (2004) added an extended data rate (EDR) of 3 Mbps; and Bluetooth 3.0 (2009) added a high-speed (HS) rate of 24 Mbps. The focus has long been on higher and higher speed, which works against low power consumption.

Bluetooth Low Energy was designed from the beginning to be an ultra-low power (ULP) protocol to service short range wireless devices that may need to run for months or even years on a single coin cell battery. Introduced in Bluetooth Version 4.0 (2010), Bluetooth Low Energy uses a simple stack that enables asynchronous communication with low power devices such as wireless sensors that send low volumes of data at infrequent intervals. Connections can be established quickly and released as soon as the data exchange is complete, minimizing PA on time and thus power consumption.

Like Classic Bluetooth, Bluetooth Low Energy utilizes adaptive frequency hopping to minimize interference with co-located radios. However Bluetooth Low Energy uses three fixed advertising channels, only one of which (Channel 1) is subject to possible interference from neighboring Wi-Fi devices. When a connection is requested, these same advertising channels serve to connect the devices, which then proceed to use the data channels for communication. The initiating device becomes the master—initiating connection events and determining the timing, channels, and parameters of the data exchange—and the other the slave.

One of the ways Bluetooth Low Energy manages to minimize power consumption is by switching the radio on for only very brief periods of time. Bluetooth Low Energy radios only need to scan three advertising channels to search for other devices—which they can do in 0.6-1.2 ms—while Classic Bluetooth radios must constantly scan 32 channels, which requires 22.5 ms each time. This trick alone enables low-energy devices to consume 10 to 20 times less power than Classic Bluetooth ones.
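The arithmetic behind that claim is easy to sketch. In the Python below, only the scan times (1.2 ms worst case for Bluetooth Low Energy's 3 channels vs. 22.5 ms for Classic's 32) come from the figures above; the radio current and scan interval are assumptions for illustration.

```python
# Rough arithmetic behind the 10-20x figure: average current spent scanning.
# Scan durations come from the post; current and interval are assumptions.
BLE_SCAN_S = 1.2e-3       # worst case to cover 3 advertising channels
CLASSIC_SCAN_S = 22.5e-3  # to cover 32 channels
RADIO_MA = 15.0           # assumed active radio current, same for both
INTERVAL_S = 1.0          # assume one discovery scan per second

ble_avg_ua = RADIO_MA * 1000 * BLE_SCAN_S / INTERVAL_S          # ~18 uA
classic_avg_ua = RADIO_MA * 1000 * CLASSIC_SCAN_S / INTERVAL_S  # ~338 uA
print(f"Classic/BLE scan power ratio: {classic_avg_ua / ble_avg_ua:.0f}x")
# Classic/BLE scan power ratio: 19x
```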

In order to be backward compatible with billions of legacy Bluetooth devices Bluetooth 4.0 introduced two types of devices:

  • Single-mode chips running the compact Bluetooth Low Energy protocol stack
  • Dual-mode chips that can communicate with “Classic Bluetooth” devices as well as single-mode chips in ultra-low power devices

Dual-mode chips will enable new PCs and cell phones, for example, to communicate with a wide range of medical, industrial, and consumer devices. This added capability comes at very little extra cost but opens up a lot of possibilities for machine-to-machine (M2M) communications. In the medical field, for example, smart phones will be able to connect to a wide range of wireless sensors in blood pressure monitors, blood glucose meters, even shirt-pocket EKG machines, and alert your doctor should an abnormality occur.

Get Smart

In order to differentiate new Bluetooth 4.0 chips—at least in consumers’ minds—the Bluetooth SIG refers to single-mode Bluetooth Low Energy enabled products as “Bluetooth Smart” devices. Single-mode Bluetooth BR/EDR chips will continue to be called simply Bluetooth devices. New dual-mode chips—running both Bluetooth Low Energy and Classic Bluetooth BR/EDR protocol stacks—are now called “Bluetooth Smart Ready” devices.

Personally I think both labels were badly thought out. Bluetooth Smart—in contrast to what, the legacy Bluetooth Stupid? And Bluetooth Smart Ready—ready for what? Ready to be smart? What’s wrong with Bluetooth Low Energy and Dual Mode? OK, I see the problem with Bluetooth Low Energy, as it would highlight that Classic Bluetooth isn’t that low power; but everything is getting lower power, and it’s already officially called Bluetooth Low Energy. And Bluetooth Dual Mode couldn’t be any clearer. But the horse is already out of the barn, so best to disregard my grumbling and learn the appropriate terminology.

The rush is now on to Get Smart. The first Bluetooth Smart Ready smartphone was the Apple iPhone 4S, introduced in 2011. Since then all iPhones, iPads, and MacBooks have been Bluetooth Smart Ready hubs, able to connect to Classic Bluetooth as well as Bluetooth Low Energy—excuse me, Bluetooth Smart—devices. Windows 8 and Windows RT both support Bluetooth Smart Ready chips, as do Apple’s OS X Mountain Lion operating system for PCs and iOS for mobile devices.

Bluetooth devices are getting smarter and lower power, and the increasing operating system support for them all but guarantees that Bluetooth Low Energy devices will continue to proliferate. Moore’s law may be slowing down, but the wireless revolution is only speeding up.

Posted in Wireless

Kids FIRST

“To transform our culture by creating a world where science and technology are celebrated and where young people dream of becoming science and technology leaders.”–FIRST mission statement

At last month’s Renesas DevCon inventor Dean Kamen delivered an amusing and inspiring keynote—which is exactly what you’d expect from one of the world’s most enthusiastic and prolific inventors. His numerous projects are all designed to improve people’s lives through the creative use of technology. But it was his worldwide program to inspire kids about careers in science and technology that really got my attention.

First the projects. Kamen may be known as Mr. Segway, but the Segway was actually a spin-off of a stair-climbing electric wheelchair, the iBOT Mobility System. Kamen’s first invention—inspired by a comment from his brother, a doctor—was the AutoSyringe, the first portable infusion pump for diabetes patients requiring continuous low doses of insulin. Kamen next invented the Homechoice PD, a home dialysis machine that saves renal patients from having to make frequent and expensive trips to a dialysis center. He also invented an improved crown stent and a thin-prep Pap test. Currently under development is the DEKA Arm, a robotic arm for individuals with upper extremity amputations.

Kamen’s current big project is a cheap water purification device for use in poor villages where water sources are frequently polluted. According to Kamen 1.1 billion people lack access to clean water; and since these same people often lack access to electricity, his water purification system—basically a miniature compression distiller—is driven by a Stirling engine that can run on cheap fuels such as lamp oil; in addition to providing clean water it also generates electricity. Coca-Cola is funding these small water/power/communication kiosks and helping to deploy them worldwide. Trust an innovative engineer to update an early 19th century invention—the Stirling engine—into the solution to a serious 21st century problem.

Kids FIRST

Kamen’s enthusiasm about technology as a force for good is contagious, and he’s on a mission to communicate that enthusiasm to kids through his FIRST foundation—For Inspiration and Recognition of Science and Technology. FIRST’s mission is “igniting young minds, nurturing passions, [and] practicing gracious professionalism”—the latter on the part of the engineers, teachers, and parents who constitute the mentors, coaches, and volunteers who make FIRST’s activities possible.

According to Kamen, “America isn’t having an education crisis, we’re having a culture crisis. You have teenagers thinking they’re going to make millions as NBA stars when that’s not realistic for even one percent of them. Becoming a scientist or engineer is.” In school smart kids are often branded as geeks, which is not intended as a compliment. Kamen wants to turn that perception on its head. “Geeks are the coolest people there are. Everybody else dreams about doing things; we do it.”

Founded in 1989, FIRST conducts robotics competitions for kids from kindergarten through high school. Kamen chose robotics because it requires kids to learn about mechanics, electronics, software, computers, system-level design, math, and teamwork. According to Kamen, “Robotics is a thinly veiled excuse to get people together around technology.” The idea is to get kids excited about technology, gain self-confidence, learn some valuable skills, and have fun.

FIRST sponsors a number of programs for kids of all age levels:

  • Junior FIRST LEGO League (Jr. FLL) for grades K-3 (ages 6-9). Teams use LEGO bricks to build a model that moves; develop a ‘show me’ poster that illustrates their journey; and present their project at local events and celebrations.
  • FIRST LEGO League (FLL) for grades 4-8 (ages 9-16). Using LEGO MINDSTORMS technology teams develop an autonomous robot that performs a series of missions. Teams can participate in official tournaments and qualify for an invitation to the World Festival.
  • FIRST Tech Challenge (FTC) is for high school kids who want to compete using a sports model. Using a TETRIX® platform teams design, build, and program robots that compete on a 12’ x 12’ field against other teams. Winning teams can earn a place in the FIRST Championship.
  • FIRST Robotics Competition (FRC) is for high school kids who want to compete in “a Varsity Sport for the Mind™.” Working under strict rules, with limited resources, and with time limits, teams must raise funds, design a team ‘brand’, then design, build, and program a robot that competes with others to perform certain prescribed tasks. Winners at the local level can earn a place in the FIRST Championship and qualify for nearly $16 million in college scholarships granted by 148 different colleges and universities.

Not surprisingly the 22-year-old FIRST program has been wildly successful. It currently involves nearly 300,000 students in 60+ countries and has maintained a 55% CAGR over the last several years. It’s so successful that the 2013 FIRST Championship will be held in a 67,000-seat arena in St. Louis. That’s a lot of excited, smart kids, mentors, and parents.

What FIRST still needs is for more geeks like you to get involved. If you have a child click on one of the links above to see if there’s a team near you; if there is, see about having your child join it or volunteer to help out; if not, look into starting one at your kid’s school. Few things you can do will be more rewarding than passing along your enthusiasm for engineering and setting a child on the road to a challenging and rewarding career. If there was ever a win/win scenario, this is it.

 

Posted in FIRST, Robotics

The Power Wall: Are we scaling it or is it just getting higher?

Cadence hosted a Low-Power Summit this month at which Jan Rabaey was the keynote speaker. Jan is the Donald O. Pederson Distinguished Professor in the EECS department at U.C. Berkeley; Scientific Co-Director of the Berkeley Wireless Research Center (BWRC); and director of the Multi-Scale Systems Center (MuSyC). As someone who literally wrote the book on low-power design, he had a lot to say on the subject of the Power Wall.

We’re now in an era where electronic devices are quickly becoming the leading consumers of electricity. We have “millions of servers, billions of mobile devices, and trillions of sensors.” Sensors tend to be energy frugal; mobile devices are “energy bounded,” with a fixed amount of energy that must be carefully conserved; servers are “energy hungry”—and there are lots of them. According to Rabaey, “The Cloud is where 99% of processing is or will be done.” A typical server in a server farm can consume up to 20 kW; 100 servers in a small- to mid-sized server farm can easily consume 2 MW. Low-power design has some high-power implications.

In his book Rabaey gives a tongue-in-cheek answer to “Why worry about power?” Assume that Moore’s Law continues unabated and that computational requirements keep doubling every year. Then:

  • The total energy of the Milky Way galaxy is 10⁵⁹ J;
  • The minimum switching energy for a digital gate (1 electron @ 100 mV) is 1.6 × 10⁻²⁰ J (limited by thermal noise);
  • The upper bound on the number of digital operations is therefore about 6 × 10⁷⁸;
  • The number of operations/year performed by 1 billion 100-MOPS computers is 3 × 10²⁴;
  • Then the entire energy of the Milky Way would be consumed in about 180 years. (The arithmetic is spelled out below.)
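A quick Python check of that back-of-the-envelope math, using only the numbers in the list:

```python
# The arithmetic behind the 180-year figure, from the numbers listed above.
import math

E_GALAXY = 1e59     # J, total energy of the Milky Way
E_PER_OP = 1.6e-20  # J, thermal-noise-limited switching energy
OPS_YEAR_0 = 3e24   # ops/year from 1e9 computers at 100 MOPS

total_ops = E_GALAXY / E_PER_OP          # ~6e78 operations, the upper bound
# With demand doubling every year, cumulative operations grow as ~2**n,
# so the galaxy's energy budget runs out after roughly:
years = math.log2(total_ops / OPS_YEAR_0)
print(f"{total_ops:.1e} ops available -> exhausted in ~{years:.0f} years")
# 6.2e+78 ops available -> exhausted in ~180 years
```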

Not entirely convinced that computers will lead to cosmic catastrophe, Rabaey quotes Gordon Moore as saying, “No exponential is forever…but forever can be delayed.” Just how to delay it was the subject of his talk.

“Where is the next factor of 10 in energy reduction coming from?”

Over the last decade chip engineers have come up with a large number of techniques to reduce power consumption: clock gating; power gating; multi-VDD; dynamic, even adaptive voltage and frequency scaling; multiple power-down modes; and of course scaling to ever smaller geometries. However, according to Rabaey “technology scaling is slowing down, leakage has made our lives miserable, and the architectural tricks are all being used.”

If all of the tricks have already been applied, then where is the next factor of 10 in energy reduction coming from? Basically it’s a system-level problem with a number of components:

  1. Continue voltage scaling. As processor geometries keep shrinking, so too do core voltages—to a point. Sub-threshold bias voltages have been the subject of a great deal of research, and the results are promising. Sub-threshold operation leads to minimum energy/operation; the problem is it’s slow. Leakage is an issue, as is variability. But you can operate at multiple MHz at sub-threshold voltages. Worst case, when you need speed you can always temporarily increase the voltage—but before that, look to parallelism.
  2. Use truly energy-proportional systems. It’s very rare that any system runs at maximum utilization all the time. If you don’t do anything you should not consume anything. This is mostly a software problem. Manage the components you have effectively, but make sure that the processor has the buttons you need to power down.
  3. Use always-optimal systems. Such system modules are adaptively biased to adjust to operating, manufacturing, and environmental conditions. Use sensors to adjust parameters for optimal operation. Employ closed-loop feedback. This is a design paradigm shift: always-optimal systems utilize sensors and a built-in controller.
  4. Focus on aggressive deployment. Design for “better than worst-case”—the worst case is rarely encountered. Operate circuits at lower voltages and deal with the consequences.
  5. Use self-timing when possible. This reduces overall power consumption by not burning cycles waiting for a clock edge.
  6. Think beyond Turing. Computation does NOT have to be deterministic. Design a probabilistic Turing machine. “If it’s close enough, it’s good enough.” In statistical computing, I/O consists of stochastic variables, so errors just add noise; this doesn’t change the results as long as you stay within boundaries. Software should incorporate Algorithmic Noise Tolerance (ANT). Processors can then consist of a main block designed for the average case plus a cheap estimator block that catches the occasions when the main block is in error (see the toy sketch below).
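Here is a toy Python sketch of that ANT idea: a fast main block that occasionally errs (say, from overly aggressive voltage scaling), paired with a cheap low-resolution estimator whose result is used whenever the two disagree by more than a bound. The error model, bound, and bit widths are all invented for illustration.

```python
# Toy algorithmic noise tolerance (ANT): main block + cheap estimator.
# Error rate, bound, and bit positions are invented for illustration.
import random

def main_block(x, y):
    s = x + y
    if random.random() < 0.05:            # rare soft error from aggressive
        s ^= 1 << random.randint(8, 15)   # voltage scaling flips a high bit
    return s

def estimator(x, y):
    return ((x >> 8) + (y >> 8)) << 8     # cheap: add only the high bits

def ant_add(x, y, bound=512):
    m, e = main_block(x, y), estimator(x, y)
    return m if abs(m - e) <= bound else e  # big disagreement -> use estimate

errs = [abs(ant_add(a, 3 * a) - 4 * a) for a in range(10_000)]
print(max(errs))  # error stays near the estimator's resolution, not 2**15
```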

Rabaey emphasized several points he wanted everyone to take away to their labs:

  • Major reductions in energy/operation are not evident in the near future;
  • Major reductions in design margins are an interesting proposition;
  • Computational platforms should be dynamically self-adapting and include self-regulating feedback systems;
  • Most applications do not need high resolution or deterministic outcomes;
  • The challenge is rethinking applications, algorithms, architectures, platforms, and metrics. This requires inspiration.

What does all of this mean for design methodology? For one thing, “The time of deterministic ‘design time’ optimization is long gone!” How do you specify, model, analyze and verify systems that dynamically adapt? You can’t expect to successfully take a static approach to a dynamic system.

So what can you do? You can start using probabilistic engines in your designs, using statistical models of components; input descriptions that capture intended statistical behavior; and outputs that are determined by inputs that fall within statistically meaningful parameters. Algorithmic optimization and software generation (aka compilers) need to be designed so that the intended behavior is obtained.

For a model of the computer of the future Rabaey pointed to the best-known “statistical engine”—the human brain. The brain has a memory capacity of 100K terabytes and consumes about 20 W—about 20% of total body dissipation and 2% of its weight. It has a power density of ~15 mW/cm³ and can perform 10¹⁵ computations/second using only 1-2 fJ per computation—orders of magnitude better than we can do in silicon today.

So if we use our brains to design computers that resemble our brains perhaps we can avoid the cosmic catastrophe alluded to earlier. Sounds like a good idea to me.

Posted in Cloud computing, Low-power design, semiconductors

Mixed-Signals: Tribulations of Combining Analog and Digital Design

“Oh, East is East and West is West, and never the twain shall meet.”—Rudyard Kipling

Kipling’s line goes back over 100 years but today it could just as well apply to digital and analog engineers. Engineering graduates for at least the last 40 years have been highly skilled in the latest digital design techniques but only passably literate in analog design, a black art practiced by unkempt, bearded über-bright guys like Bob Pease. System architects would partition designs into their digital and analog components and toss each over the wall to their respective design teams, who would meet again at the RTL stage only to discover that their work products didn’t mesh.

With most designs today involving mixed-signal design, those days are over. However converging the tools, techniques, and mindsets of digital and analog designers is still a work in progress. On September 20th Cadence put on a one-day Mixed-Signal Technology Summit to address those issues. Not coincidentally the Summit coincided with the publication of their Mixed Signal Methodology Guide.

Chi-Ping Hsu, Cadence’s senior VP of R&D for Silicon Realization, kicked off the day by pointing out the importance as well as the challenges of mixed-signal design. Over the last four years Cadence has spent over $50 million developing mixed-signal design tools, and it currently employs over 2,000 R&D engineers trying to stay ahead of the curve. Chi-Ping explained that Cadence had started working on low power in 2006 and has built its design flow around a matrix-driven verification methodology.

Professor Ali Niknejad of UC Berkeley’s wireless lab gave the Academic Keynote. Prof. Niknejad focused on designing 60 GHz chips. His lab has produced a 60-GHz transceiver that can transmit at 40 dBm ERP at a shorter range but far higher data rate than Wi-Fi. CMOS devices with fT up to 300 GHz are available; interconnects and passives are now the main constraints. PAs at these frequencies usually run Class D modulation, displaying up to 77% efficiency. PAs are acting as DACs at RF frequencies, oversampling and then filtering to remove spectral lines. According to Prof. Niknejad the main EDA challenge at these frequencies is simulating the designs. In addition, “We need better AMS tools and designer training.” In summary: “If you want to innovate with deep CMOS technologies you have to be a mixed-signal designer.”

Chris Collins, the director of TI’s analog division, gave the Industry Keynote. Collins couldn’t resist urging Chi-Ping to spend more on mixed-signal design tools, since TI has spent over $9 billion on analog and RF designs, a lot of that having gone toward getting their analog tools to play nicely with their digital and RF counterparts. Collins’ talk, Mixed-Signals: Tribulations of Combining Analog and Digital Design, drew on TI’s long history of wrestling with the problem. His key points are worth noting:

  • Pure analog designers need to stop doing digital
  • Signals crossing between the digital and analog partitions of designs are more tightly coupled than ever
  • Language and data-type connection challenges are increasing, making simulation performance improvements an uphill task
  • Setup and debug time are becoming critical measures in meeting verification deadlines
  • Having a configuration management plan and the discipline to follow it is key; if you don’t think it is, you need to wake up

Long story short: Collins suggested adding digital engineers to analog design teams and analog engineers to digital teams. If you have one combined team and the top layer of the SoC being designed is digital, put a digital engineer in charge; if it’s analog, put an analog engineer in charge.

Next up was Mladen Nizic, engineering director of Cadence’s Mixed Signal Solution division, who reassured the audience that Cadence was on the case. In particular he focused on the low-power challenge in mixed-signal design resulting from increasing digital content, increasingly complex LP techniques deployed in IP, and the burgeoning AMS verification issues that result. Cadence Virtuoso AMS simulation pays special attention to connect modules in analog-digital co-simulation, converting digital values—including corrupted ones—into analog values. Virtuoso Power Intent Export Architect generates CPF macro models from schematics and inherited connections; it then generates a structural netlist for use by Encounter Conformal Low Power (CLP). CLP performs structural and functional static checks; it is test-bench independent while being fast and comprehensive.
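To make the connect-module idea concrete, here is a toy Python sketch of what happens at a digital-to-analog boundary: a logic value is mapped to a voltage using the driving domain's supply, with unknown ('X') values resolved to something the analog side can consume. The voltage levels and the X policy are invented for illustration; real AMS connect modules are far more sophisticated.

```python
# Toy digital-to-analog connect module: map a logic value to a voltage.
# Voltage levels and the 'X' policy are invented for illustration only.
def connect_d2a(logic, vdd):
    levels = {"0": 0.0, "1": vdd}
    # Corrupted/unknown digital value: hand the analog side a mid-rail level
    return levels.get(logic, vdd / 2)

# Why level shifters matter: a 1.0 V domain driving a 1.8 V analog block
# produces a "1" that the block may not recognize as a valid high.
print(connect_d2a("1", vdd=1.0))  # 1.0 V into a 1.8 V block: ambiguous high
print(connect_d2a("X", vdd=1.8))  # 0.9 V mid-rail for an unknown value
```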


Panel: “Are We Closing the Gap in Mixed-Signal Design?”

After numerous other speakers attempted to demystify mixed-signal design, the afternoon wrap-up panel “Are We Closing the Gap in Mixed-Signal Design?” brought both optimism and skepticism to bear. Experts from Maxim (Neyaz Khan), Broadcom (Nishant Shah), IC Manage (Shiv Sikand), TI (Bill Meier), and Cadence (Bob Chizmadia) all agreed that mixed-signal design tools have come a long way, especially compared with what TI’s Chris Collins had earlier referred to as “the dark ages of mixed-signal design” from 1995 to 2005. However they also agreed that with the explosion of chip complexity at smaller geometries—to which digital can scale much better than analog—the EDA tool makers are going to have to keep innovating quickly to stay ahead of the curve.

IC Manage’s Sikand listed the main challenges for mixed-signal design as internal IP reuse; SoC assembly; network storage bottlenecks; and bug management. All the panelists agreed with him that “getting knowledge across the design team is very important.”

The interface between digital and analog is where the rubber meets the road; converting 1’s and 0’s into a continuously variable range of voltages isn’t an easy task. Cadence’s Nizic had explained earlier that Virtuoso creates connect modules that link analog blocks into an otherwise digital design. Broadcom’s Shah suggested that “AMS connect modules need more enhancements,” including support for multi-power domains in particular. Maxim’s Khan was more direct: “Somebody throws in a level shifter and all hell breaks loose.” You could wind up with back-to-back level shifters surrounding each analog block—not a pretty picture.

While TI’s Meier lamented the fact that “connect modules don’t have the ability to track analog voltages”—and therefore CPF has no way to do so—he did offer that “CPF is the way to go for connect modules.” The problem right now: how do you verify that you’re not in a power-down condition? You can do static checks but nothing for dynamic verification. Meier would like to see SystemVerilog connect-module assertions, which would address the problem that currently there isn’t an assertion that works throughout the entire simulation flow.

Someone from Cadence R&D assured the panelists that Cadence has made a lot of progress on the connect module problem and is addressing the other pain points they’d listed.

It fell to Cadence’s Bob Chizmadia to wrap it up. He pointed out that Real Number Modeling (RNM) has gone a long way toward addressing the analog-digital conversion problem, which he feels is well within reach at larger geometries. However, “We’re not there yet at advanced nodes—20 nm on down” where “the system implementation is daunting.” Variability at those line widths greatly restricts how you design, and parasitics have exploded. Bottom line: “We’re still trying to figure out how to do it.”

All told the conference was very data intensive and, judging by the large number of questions, very useful for AMS designers. Design complexity may be exploding, but if anyone at the conference felt daunted it wasn’t apparent. After all, engineers enjoy challenges, and AMS design offers plenty of those.

 

Posted in EDA

Solar is only part of the solution

I’m a big fan of solar energy but also a realist, so I’ve long taken a skeptical view of claims that renewable energy sources—solar in particular—will obviate the need for more (hopefully fairly clean) fossil fuel power plants. A new solar farm near Austin gave me a chance to play with some numbers.

Webberville, TX solar farm

Austin has long been a pioneer in solar energy—it’s the only city I know of that has a Solar Committee to review ways in which the city can promote the use of solar power. In 2010 the City Council approved the Austin Energy Resource & Climate Protection Plan, which calls for the city to derive 35% of its electricity from renewable energy sources by 2020, including 200 MW from solar and 1,000 MW from wind sources. That’s a pretty aggressive goal considering that right now they’re at 10%.

With that goal in mind city-owned Austin Energy recently brought a new $250 million 30 MW solar farm online, located in nearby Webberville. The project covers 380 acres and utilizes 127,728 polycrystalline solar modules mounted on single-axis trackers. The plant is expected to generate 63,000 MW-hours annually, enough to power approximately 5,000 homes.

Just for fun I was curious how much bigger (and more expensive) the Webberville solar farm would have to be to serve 35% of Austin homes, much less 100% of them. As of 2011 Austin had a population of 820,611 living in 276,611 houses/condos. So right off the bat the Webberville plant can only service 1.8% of Austin homes. Now if it takes 380 acres of solar panels to service 5,000 homes, to serve 35% of Austin homes (96,814) Webberville would need to grow by almost 20x—to about 7,360 acres. Considering that Austin itself only covers 177,920 acres (278.1 sq. mi.), that amounts to 4.1% of the area of the city. While that’s a lot of land maybe it’s doable, as long as the additional cost doesn’t also increase 20x to $5 billion, which seems likely.

Finally, how large would Webberville have to be to serve 100% of Austin homes? All things being equal it would cost about $13.8 billion—an increase of more than 55x—and cover 21,022 acres, about 11.8% of the area of the city—basically the entire downtown area including the capitol. This clearly isn’t going to happen. (The arithmetic is spelled out below.)
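The scaling arithmetic, spelled out in Python using only the numbers above (380 acres and $250 million serving 5,000 of Austin's 276,611 homes):

```python
# Linear scaling of the Webberville plant to 35% and 100% of Austin homes.
HOMES_TOTAL = 276_611
ACRES, HOMES, COST = 380, 5_000, 250e6   # Webberville today
AUSTIN_ACRES = 177_920

for frac in (0.35, 1.0):
    factor = frac * HOMES_TOTAL / HOMES
    acres = ACRES * factor
    print(f"{frac:.0%}: {factor:.1f}x -> {acres:,.0f} acres "
          f"({acres / AUSTIN_ACRES:.1%} of Austin), ${COST * factor / 1e9:.2f}B")
# 35%: 19.4x -> 7,358 acres (4.1% of Austin), $4.84B
# 100%: 55.3x -> 21,022 acres (11.8% of Austin), $13.83B
```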

Austin Energy is spreading its bets, having recently signed two 25-year wind power contracts designed to deliver 700 MW from West Texas wind farms by 2013. This is a much more cost-effective approach—relying on someone else to build the wind farms instead of the DIY approach taken toward solar. Wind power is also a nice complement to solar, since the wind blows primarily at night while solar sources lie dormant. However, given that (1) alternative energy sources are a lot more expensive than fossil fuel sources; (2) the Texas power grid is very near capacity and falling farther behind; and (3) Texas isn’t real big on EPA pollution controls; the need for fossil fuel power plants is only going to keep rising. And this is hardly just a Texas problem.

While alternative energy sources are an important part of the solution to our energy shortage, they’re going to remain just a part of it for as long as the generating sources (solar panels, wind turbines) are expensive and inefficient. It’s much like the drug problem—unless you can get demand under control, tinkering on the supply side won’t change things much. Energy efficiency is the real key here—actually on both the demand and supply sides—and it’s ultimately up to engineers to change the equation, not politicians.

 

Posted in Energy Efficiency, Solar, Wind power

Verifying the Cloud

So-called “cloud computing” is the presumptive wave of the future, and not just to deliver software-as-a-service (SaaS), which will certainly challenge Microsoft’s business model. The ‘cloud’ is of course all those servers that Amazon, Google, Rackspace and others use to deliver services over the Internet, not to mention to your iPhone and iPad. Susan Peterson, Cadence’s Group Director of VIP Product Marketing, got my attention this week when she remarked off-handedly, “For every 600 iPads a server is born.”

Low-Power Design hasn’t been paying much attention to servers, but we should. It takes a lot of energy to move data around. Two data points: (1) server farms account for over 3% of the total energy consumption in Europe; (2) using figures from Dataquest I calculate that just moving the cellular traffic in the U.S. and Europe in 2015 will require the equivalent of six new 100 kW power plants. The cloud is very energy intensive, to the point where you have to wonder whether the current course is sustainable—or at least at what price.

One way to save energy, of course, is to move data more rapidly—bursting it across the dataplane (whether wired or wireless) as quickly as possible—which with extremely data-intensive devices like the iPhone and iPad is a necessity. One group working hard on that issue is the PCI-SIG, which recently announced that PCI Express (PCIe) 4.0 will be able to handle 16 gigatransfers per second (GT/s), double the bandwidth of the PCIe 3.0 specification at approximately the same power levels. Another way to look at it: servers will be able to handle twice the data at the same power level. That’s only part of the solution, but it’s real progress.

Go with the Flow

But as Peterson pointed out, even with a well-defined specification, integrating PCIe IP into a complex SoC—not to mention a complex design flow—is a non-trivial task; then comes the far more tedious task of verifying that all the various pieces of IP play well together. That was the point of Cadence’s announcement at this week’s PCI-SIG Developer’s Conference of support for the pending PCIe PIPE4 (PHY Interface for PCI Express) specification; Accelerated PCIe VIP (AVIP); and its TripleCheck test suite for testing full PCIe compliance throughout the design process.

How do you take the PCIe specification smoothly through simulation, acceleration, and emulation, verifying each step of the way that all your IP is working smoothly together? Peterson walked us through how Cadence approaches the problem:

[Figure: AVIP interfaces]

Each design team’s development cycle varies, but they generally start from architectural exploration and conclude with a working prototype.

Software simulation is used predominantly in the earlier phases, from late architecture exploration and algorithmic verification through block- or IP-level design and verification. As users move to chip/system-level validation (often including software) they often need more performance than simulation can provide.

Using Cadence’s Palladium Verification Computing Platform (VCP), users can use a single compute resource to span all the way from signal-based acceleration to in-circuit emulation. Each use mode is described below, along with the Accelerated VIP (AVIP) use mode that supports it.

1. Signal-based acceleration increases speed over simulation while retaining the same UVM testbench. Only the DUT is accelerated; the interaction between software and hardware remains at the signal level. Cadence AVIP’s UVM use model supports signal-based acceleration.

2. Transaction-based acceleration (TBA) provides additional performance—in some cases even up to MHz speeds, but more likely hundreds of kHz. The design and portions of the testbench are accelerated, with the interaction between software and hardware raised a level of abstraction to the transaction level. TBA offers the unique benefits of maintaining interactive debugging capabilities and the look and feel of a simulator. This makes it easier for design and verification engineers to adopt transaction-based acceleration. The hardware can be viewed simply as an added, higher-speed compute resource. Cadence AVIP’s UVM and C interfaces support this Palladium use mode.

3. In synthesizable-testbench mode (STB) both the DUT and the testbench are accelerated. This, of course, requires that the testbench be compiled in the emulator. Cadence AVIP’s embedded use mode supports this Palladium mode.

4. In-circuit emulation offers the additional benefit of live stimuli and responses, increasing the verification accuracy. Cadence SpeedBridge rate adapters support this use mode.

While Cadence is focusing on PCIe at this week’s show, its Cloud Infrastructure VIP Suite supports not just PCIe but just about every other data transfer protocol you’re likely to find in the cloud food chain.

With not just more data but more processing moving to the cloud, verifying that complex, cloud-based servers work both correctly and efficiently is both increasingly challenging and increasingly critical. The EDA community is tackling the part of the problem it can address, which is a lot of it. With a little luck and a lot of ingenuity we’ll see more and more iPhones and iPads coming online without having to break ground on a batch of new power plants.

Posted in Cloud computing, EDA, Energy Efficiency

Low Energy ANTs?

As we noted elsewhere, Nordic Semi has just launched its nRF51 Series of multi-mode, ultra-low-power (ULP) wireless SoCs. The first two ICs to debut in the nRF51 Series are the nRF51822, a multi-protocol Bluetooth low energy/2.4GHz proprietary RF SoC, and the nRF51422, the world’s first ANT/ANT+ SoC. Both chips combine flexible RF front ends with ARM Cortex-M0 MCUs—no, not 8051s as you might assume. With 10x the processing power and half the power consumption of Nordic’s previous (2nd generation) chips, they’re indeed ultra-low-power. But the real story is in the software.

SoC vendors often, if not typically, require embedded developers to use their proprietary software frameworks in order to take advantage of all the hooks they’ve embedded in their hardware. You need to follow their APIs, compile and link your code with the stacks they provide, and then spend a lot of time trying to resolve unexpected dependencies. Nordic claims to have created a software architecture that cleanly separates application code from the protocol stacks, which they provide as linkable, verified, and qualified binaries. The idea is to enable developers who are familiar with the ARM architecture to develop application code using the Keil, IAR, or other tools with which they’re familiar and not have to wrestle with vendor-specific tools. Nordic relies on calls to ARM’s Cortex Microcontroller Software Interface Standard (CMSIS) library to handle the interface between application code and their chips. The stacks are 100% asynchronous, event driven, and run-time protected so you can’t accidentally blow them up.

Nordic has set out to make the nRF51 Series a real platform by making all the chips code compatible and each group pin compatible. This should enable embedded developers to reuse their code base across multiple hardware platforms—well, OK, Nordic’s, but with the major rollout of chips planned for this family that should eventually cover a lot of devices.

About the chips themselves: the nRF51822 is a flash-based SoC that combines Bluetooth Low Energy and Nordic’s widely used (if not widely touted) proprietary Gazell 2.4 GHz protocol. The chip includes 256 kB of on-chip flash and 16 kB of RAM. Nordic provides a Bluetooth Low Energy stack that requires less than 128 kB of code space and 6 kB of RAM, leaving more than 128 kB of flash and 10 kB of RAM for application code. Included are low-energy profiles, services, and example applications. The nRF51822 draws 13 mA on RX peak and 10.5/16 mA TX peak at +0/+4 dBm with on-chip LDO down to 1.8V.

The nRF51422 is the first single-chip ANT solution. The very small stack requires only 32 kB of code space and 2 kB of RAM, leaving 226 kB of flash and 14 kB of RAM for application code. ANT is primarily intended for ULP sensor and control applications in personal area networks (PANs). It does ad hoc networking and can handle six simultaneous master/slave pairings with a burst speed of 60 kbps. At 16 MHz the MCU draws 4.4 mA while executing from flash; the ANT protocol uses active mode sparingly and can go back to sleep in 2.5 µs.

There are numerous two-chip wireless sensor solutions out there, but Nordic has launched a couple of ULP single-chip implementations that are deserving of a close look. Samples are in limited circulation now, with full sampling planned for September and full production late this year.

The low-power wireless market continues to heat up as the chips power further down. It’s a wide-open market where even smaller fish like Nordic Semi can come up with a good product and expect to do well. Best of all we’re still in the early stages of the “wireless revolution.” There are a lot more useful devices, not to mention fun toys, coming along that we’ve yet to even imagine.

 

Posted in Uncategorized