SoC interfaces going 'soft'
By Ron Wilson, EE Times
July 31, 2003 (10:10 a.m. EST)
URL: http://www.eetimes.com/story/OEG20030731S0008
Somewhere lost in the shadows of computer prehistory lies a huge cognitive leap. We will never know whether it first came in the applications engineering department of an established computer company, in a cluttered research lab where open windows supplied just enough breeze to cool a refrigerator-sized minicomputer, or on the workbench of an engineer puzzling through the data sheet for a newfangled microprocessor chip.
But the eureka moment must have gone something like this: "I need to design an interface. I'll need a data path to buffer and carry the data, and a state machine to perform handshaking. I can use octal latches. . . . But wait! This computer thingy can emulate data paths and state machines. Maybe I could code up a software-based interface that would work with just an addressable latch."
By the time the first 8048 microcontrollers appeared, the idea was firmly established. Give a simple processor a bunch of bit-addressable, bidirectional I/O pins and software could use them to emulate a variety of I/O devices. Software-defined I/O pins could emulate a parallel bus to connect to another processor. They could emulate moderate-speed serial interfaces, with elaborate protocols, if the timing was sufficiently lax. Or they could emulate more-complex dedicated interfaces to meet the needs of another chip.
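The technique is easy to picture in code. Below is a minimal sketch of a bit-banged asynchronous serial transmitter of the sort such a part could manage; the port address, pin assignment and delay count are hypothetical stand-ins for whatever a particular microcontroller actually provides.

    /* Sketch of a bit-banged serial transmitter. The port address,
       pin mask and delay loop count are hypothetical; a real design
       would use the target MCU's registers and a calibrated bit
       period. */
    #include <stdint.h>

    #define TX_PIN (1u << 3)                         /* assumed output pin */
    #define GPIO_OUT (*(volatile uint8_t *)0x4000u)  /* assumed port addr  */

    static void delay_bit_time(void)
    {
        for (volatile int i = 0; i < 100; i++)  /* count is illustrative */
            ;
    }

    void uart_tx_byte(uint8_t b)
    {
        GPIO_OUT &= (uint8_t)~TX_PIN;          /* start bit: line low  */
        delay_bit_time();
        for (int i = 0; i < 8; i++) {          /* data bits, LSB first */
            if (b & (1u << i))
                GPIO_OUT |= TX_PIN;
            else
                GPIO_OUT &= (uint8_t)~TX_PIN;
            delay_bit_time();
        }
        GPIO_OUT |= TX_PIN;                    /* stop bit: line high  */
        delay_bit_time();
    }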
It didn't take long for the hardware designers to step in to deal with the caveats in that description. Microcontrollers quickly sprouted a range of hardware peripheral devices-counters, timers, memory bus interfaces and the like. Nevertheless, as often as not, some beleaguered designer would begin thinking, "I could code up a little interface here. . . . "
No one has made a formal announcement. But if you pull together a collection of scattered hints and sound bites, a pattern emerges. Consider the increase in shipments of 16-bit and 32-bit microcontrollers. Or the spread of ARM-based application-specific standard products into embedded, nontelephony applications. Or the growing momentum of Ubicom Inc. and Triscend Corp., both based in Mountain View, Calif.
High-end microcontrollers are beginning to position themselves as an alternative to midrange processor-based ASICs and ASSPs. And that shift is rekindling interest in an old technique: software-defined I/O. Except this time, instead of software emulation of a counter or a parallel port, software is replacing complex blocks of intellectual property (IP).
There are limitations, and there are implications for MCU architects. But it appears that a whole new dimension is gradually being added to the range of tools in the designer's kit.
Designers today still want to reduce the number of device types they buy, and they still find that nothing quite fits their current needs. That, coupled with the increasing pressure to do absolutely anything to cut costs, is leading to a renewed interest in software-defined I/O.
Today there are three main branches of the software-defined I/O family, each delineated by a particular attitude toward hardware architecture. The first branch-call it the purists-holds that the technique should be based on an existing CPU architecture, with no changes to accommodate I/O needs. "If I can't get it to run on my ARM, then I'll do it with a little PLD."
The second branch is actually an integrated, but more sophisticated, version of the first-call it the configurable-hardware bunch. These designers buy the premise of the purists, but want the programmable logic to be specialized for interface design and integrated into their microcontroller.
Finally, there is the radical branch. These designers maintain that software-defined I/O is sufficiently important to justify changing the microarchitecture of the CPU itself.
Processors are getting faster. The 8051, which would have been hard-pressed to emulate an RS-232 interface at any interesting bit rate, has given way to an ARM or MIPS core that can blast through millions of instructions per second and respond to an interrupt in less time than the 8051 took to fetch an instruction. Naturally, this vastly increases the range of interfaces that can be emulated entirely in software.
The critical factor in implementing the interface remains the same, however: whether the software can meet the hard deadline imposed by the fastest handshake in the interface protocol. Unfortunately, this is not simply a matter of instruction counting. Usually it is a question that involves maximum interrupt latency as well as execution time. And that, in turn, depends on the other interrupt-driven and uninterruptible routines in the system.
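The arithmetic is simple even when the analysis isn't. A back-of-the-envelope check, with purely illustrative numbers, might look like this:

    /* Back-of-the-envelope deadline check; every number here is
       illustrative, not measured. Worst-case software response is
       interrupt latency plus ISR work, and it must fit inside the
       protocol's fastest handshake window. */
    #include <assert.h>

    enum {
        CPU_HZ          = 50000000, /* assumed 50-MHz core                */
        WINDOW_NS       = 1000,     /* assumed 1-microsecond handshake    */
        IRQ_LATENCY_CYC = 20,       /* worst case, incl. blocked ISRs     */
        ISR_WORK_CYC    = 15        /* cycles to drive the handshake pin  */
    };

    static_assert((IRQ_LATENCY_CYC + ISR_WORK_CYC) * (1000000000LL / CPU_HZ)
                  <= WINDOW_NS,
                  "software cannot meet this handshake deadline");

Here 35 cycles at 20 ns apiece is 700 ns, inside the 1-microsecond window; make the interface ten times faster and no amount of instruction counting saves you.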
If interfaces had stayed at their old, slow rates, this would not be an issue: Processor speeds would have overwhelmed the problem. But interfaces have become faster right along with CPUs, so it is still often the case that the critical state transitions in an interface can't wait for software control-they must be handled in dedicated hardware.
However, as interfaces have grown faster, they have also acquired more layers. The protocol stack for an interface these days may include signal processing, as in modem applications; video encoding, as in digital cameras; or decryption, as in some smart-card applications. These tasks would until recently have been performed in dedicated hardware.
Steve Ikei, senior engineering manager at NEC Electronics, pointed out that many of these higher-level protocol tasks are now within the range of software modules running on a 32-bit microcontroller core. "In high-end digital cameras, a hardware JPEG engine is essential because of the very high pixel count," Ikei said. "But in midrange cameras, new microcontrollers can do this task in software." Ikei also pointed out that often when an application needs a modem-even for data rates as high as V.32bis-it is possible to run the necessary digital signal processing and control algorithms on the MCU core without added hardware support.
As Ubicom chief technology officer David Fotland observed, every I/O application has some dedicated hardware attached to the pins, and some kind of software driver. The question is how you partition the interface tasks between the two. That decision may be made as a matter of convenience-if the chip you want to use already has an Ethernet media-access controller (MAC), why fight it? Or it may be made out of necessity, if, for instance, certain handshake transitions happen too fast for the software to respond.
But in recent years an alternative has emerged: microcontrollers equipped with integrated programmable-logic fabrics. Triscend and Atmel Corp. (San Jose, Calif.) are two purveyors of this approach.
With an FPGA fabric intimately tied to the MCU core on one side and matrixed to the I/O pins on the other side, it becomes possible to implement anything from a simple addressable latch to a complex state machine with buffer memory in the programmable fabric. This naturally frees the MCU software for tasks that are less time critical.
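In practice the split might look something like the sketch below, where a FIFO and its handshake state machine live in the fabric and the MCU drains the buffer through memory-mapped registers. The addresses and bit layout are hypothetical, not those of any shipping part.

    /* Hypothetical memory-mapped view of a fabric-side FIFO. The
       fabric's state machine meets the hard real-time handshake; the
       MCU software empties the buffer whenever it gets around to it. */
    #include <stdint.h>
    #include <stddef.h>

    #define FIFO_DATA   (*(volatile uint32_t *)0x80000000u)
    #define FIFO_STATUS (*(volatile uint32_t *)0x80000004u)
    #define FIFO_EMPTY  (1u << 0)

    size_t fifo_drain(uint32_t *dst, size_t max_words)
    {
        size_t n = 0;
        while (n < max_words && !(FIFO_STATUS & FIFO_EMPTY))
            dst[n++] = FIFO_DATA;       /* each read pops one word */
        return n;
    }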
"Some customers implement interfaces this way in order to get just the mix of peripherals they need, or maybe to reduce the number of parts they have to inventory to cover a range of similar applications," said Larry Getman, senior director of business development at Triscend. "Others do it because they need an interface that isn't available commercially-often, a high-performance FIFO'd interface to a DSP chip or something like that. I'd say there's about a 50/50 split between the two camps."
Jim Panfil, director of MCU products at Atmel, gave a similar analysis, but with a 30/70 split. Often, he added, customers would develop an interface in the programmable fabric of the FPSLIC part, and later migrate to a hardwired interface in an ASIC implementation.
In any case, the portion of the interface that lives in the FPGA fabric does not have to be limited to delay-critical state machines. Designs can be quite sophisticated. Triscend has implemented LCD controllers in its fabric, for instance. By sharing resources between interfaces that do not operate concurrently, it is also possible to implement a number of demanding interfaces-USB and 1394, for example-from a single part.
The third branch of the software-defined I/O tree is the most architecturally radical, but perhaps the most conceptually elegant. This school holds that if there are hard deadlines preventing you from moving a layer of I/O protocol into software, you ought to fix the CPU microarchitecture before you go about adding external hardware. This camp is dominated by Ubicom, which makes its living selling processors architected for software-defined I/O.
Pumping data
But the idea is not unique to Ubicom. Some of the earliest applications of configurable processor cores from ARC and Tensilica followed this line of thinking. In these cases, the application was usually in networking or communications. Analysis of the interface indicated that the bottleneck would be not in a critical handshake, which could be dealt with in ASIC logic, but in the raw bandwidth the processor could devote to pumping data. Designers used instruction-set extensions to create, in effect, stream-move instructions that could pump data at nearly the memory bandwidth of the system.
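In C terms, the difference looks something like the sketch below. The __stream_move() intrinsic is a hypothetical stand-in, since real ARC and Tensilica extensions are vendor- and design-specific.

    #include <stdint.h>
    #include <stddef.h>

    /* Generic core: one load and one store per word, plus loop overhead. */
    void copy_generic(uint32_t *dst, const uint32_t *src, size_t words)
    {
        for (size_t i = 0; i < words; i++)
            dst[i] = src[i];
    }

    /* Extended core: the whole transfer collapses into one streaming
       instruction that runs at close to memory bandwidth. The intrinsic
       below is hypothetical. */
    void copy_streamed(uint32_t *dst, const uint32_t *src, size_t words)
    {
    #ifdef HAVE_STREAM_MOVE
        __stream_move(dst, src, words);
    #else
        copy_generic(dst, src, words);  /* portable fallback */
    #endif
    }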
In more-general applications, however, the critical path is often in the lowest levels of the protocol stack-bus handshaking, media-access control or collision detection and resolution. In these cases it is the processor's ability to respond quickly and deterministically to an external interrupt that is critical.
This problem has been attacked by CPU designers since very early in the microprocessor age. The Zilog Z80 had a duplicate register file and the ability to switch files in a few clock cycles. The modern, more-elaborate version of this concept is multithreading.
The idea is that there should be enough duplicate hardware in the CPU to store a complete context for each active thread. Either in response to an exception or based on some sort of time-slicing algorithm, a thread is given control, its context is switched in nearly instantaneously and the context for the previous thread is frozen for later resumption.
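A conceptual model, in software rather than silicon, shows what the duplicated hardware buys: switching threads means repointing at another register bank, not saving and restoring state. The two-thread count and field names below are illustrative.

    #include <stdint.h>

    #define NUM_THREADS 2              /* illustrative */

    struct hw_context {                /* one full register set per thread */
        uint32_t regs[16];
        uint32_t pc;
        uint32_t status;
    };

    static struct hw_context bank[NUM_THREADS];
    static int active;                 /* which bank the core is using */

    /* On an interrupt or a time slice, the core repoints at another bank.
       The old context freezes in place; the new one is live at once. */
    static struct hw_context *switch_to(int thread)
    {
        active = thread;
        return &bank[active];
    }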
This is a startlingly powerful tool for software-defined I/O. Ubicom claims that its fast multithreading processors can support such interfaces as a PCI bus and 100Base-T Ethernet in software, in some cases concurrently.
What Ubicom means by software support is pretty comprehensive. "For the PCI interface, we have only one hardware counter driving the clock," Fotland said. "The rest of the bus logic is handled by the software." In the case of 10- and 100-Mbit Ethernet, the transceiver and serializer/deserializer are hardware, and everything from there on back-the MAC, in other words-is software. "For Gigabit Ethernet I'm pretty sure we'd need a hardware MAC, though," Fotland said.
The concept has substantial leverage for reducing the amount of dedicated hardware in an SoC. That, in turn, means substantial savings in negotiating, implementing and verifying IP blocks-often quite fussy ones, particularly if a nonstandard interface is involved. And it may mean that a platform chip can span more applications from a single mask set.
It may also offer a substantial reduction in power consumption. Software-defined interfaces will consume power only when in use and may, given the lower voltages and radical power management techniques lavished on CPUs today, be more efficient than hardware even when operating. Interface IP rarely gets the attention needed for voltage or clock throttling, and generally isn't designed to run at the lowest voltages available in the process, Fotland pointed out.
Given these issues, it seems likely that the software-defined I/O concept is about to come into its own as an SoC design tool, a key ingredient of ASSPs and-as in the good old days-a key weapon for the savvy microcontroller user.