The rising cost of wireless handset and basestation IC development, coupled with growing complexity and rapidly evolving standards, has turned wireless system design into an increasingly expensive, high-stakes endeavor. But with the advent of software-defined radio (SDR) platforms, designers now have the opportunity to create the ultimate multimode handset at lower cost and with longer battery life, while eliminating the need for "forklift upgrades" to basestations and access equipment.

First, however, there is the need for an underlying SDR-processing platform that offers the versatility of general-purpose systems without their poor performance and software-development burden, combined with the horsepower of application-specific systems without their high cost and inflexibility. A new generation of DSP technology, a hybrid between general-purpose and application-specific systems, is one route to meeting these requirements.

SDR raises processing bar

SDR promises to revolutionize how advanced wireless systems are implemented by eliminating the separate radio chains otherwise required for each mode and band needed to support 3G data and video services and multiple air interfaces. Some or all of SDR's dynamic reconfiguration happens in software, but circuitry is still required, and the challenge is to determine which capabilities go into cheaper fixed silicon and which into more expensive, but flexible, software.

Relegating more functions to programmable DSP or DSP-like hardware enables high-performance, ubiquitous wireless terminals, infrastructure equipment and services that work seamlessly across diverse networks. It also allows integration of capabilities that span multiple standards and operational paradigms.

A big SDR challenge has been that the underlying processing hardware must support changing, multiplying and ever-higher-rate wireless interfaces. For instance, High-Speed Downlink Packet Access, introduced in Release 5 of the Third Generation Partnership Project specifications, is exceptionally difficult to implement. The task is even more daunting with the rise of WiMax, the need to support converged Wi-Fi/cellular access with seamless mobility, digital TV reception (DVB-H) and other standards.

Part of the problem is sheer complexity. Tomorrow's cell phone may deliver up to 10 different, previously standalone functions: voice, PDA, portable e-mail, still camera, video camera, MP3 player, video player, GPS-enabled street guide, TV and game console. The challenge is to combine hardware programmability with good performance, a combination that does not lend itself to antiquated design approaches.

Prevailing baseband-processing solutions combine ASIC blocks optimized for key kernels of a wireless modem with a general-purpose DSP. Parallel DSP architectures, meanwhile, have emerged as a key enabling technology for SDR-based handsets and future wireless modem platforms. However, current SDR technology cannot accommodate ubiquitous multimode baseband-processing requirements without severe cost, power and space trade-offs. So, over the past few years, there has been growing debate over the optimum mix of ASICs, FPGAs, DSPs and reconfigurable processors for advanced wireless systems.

ASICs remain the most efficient option for intensive chip-rate processing, but they are also the least flexible and most expensive solution. ASICs typically perform chip-rate baseband processing (e.g., the RAKE receiver) as well as encoding/decoding of source information such as video.
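To give a sense of the chip-rate workload committed to those ASIC blocks, the listing below is a minimal, illustrative C sketch of RAKE combining. The spreading factor, finger count, delays and channel estimates are hypothetical placeholders, not a model of any particular chipset.

#include <complex.h>
#include <stddef.h>

/* Illustrative RAKE combiner: despread each multipath "finger" at its
 * own delay, weight it by the conjugate of its channel estimate
 * (maximal-ratio combining) and sum. All parameters are placeholders. */
#define SF       16   /* spreading factor (chips per symbol) */
#define FINGERS   4   /* resolvable multipath components */

float complex rake_symbol(const float complex *rx,        /* chip-rate samples */
                          const float complex *code,      /* SF spreading chips */
                          const size_t delay[FINGERS],    /* finger delays, in chips */
                          const float complex h[FINGERS]) /* channel estimates */
{
    float complex symbol = 0.0f;

    for (size_t f = 0; f < FINGERS; f++) {
        /* Despread: correlate the delayed signal with the spreading code. */
        float complex acc = 0.0f;
        for (size_t c = 0; c < SF; c++)
            acc += rx[delay[f] + c] * conjf(code[c]);

        /* Maximal-ratio combining: weight by the conjugate channel estimate. */
        symbol += conjf(h[f]) * acc;
    }
    return symbol;
}

Every received symbol costs FINGERS * SF complex multiply-accumulates at the chip rate, which is why this kernel has traditionally been committed to fixed logic rather than to a general-purpose DSP.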
Conventional DSPs are usually employed for the vocoder and forward error correction, except in the case of Turbo decoding, which is implemented in ASIC logic because of its performance requirements. The medium-access control layer typically runs on an off-the-shelf microcontroller core.

ASICs present several disadvantages in this context: They must be designed for the worst-case processing scenario; their long design cycle increases time-to-market; they raise cost, since several ASIC cores are required in multimode applications; and they cannot adapt to new standards, since they cannot be modified after silicon tapeout. Spinning a new ASIC to support new standards or features becomes prohibitively expensive, costing as much as $40 million to $50 million and one to two years of design time.

Another option is to pair a DSP with an FPGA. With this approach, the processor handles system control and configuration functions while the FPGA implements the computationally intensive signal-processing data path and control, minimizing system latency. To switch from one standard to another, the processor swaps dynamically between major software sections and the FPGA is completely reconfigured, in hundreds of cycles as necessary, to implement the data path for the particular standard. Unfortunately, FPGAs' power consumption has never suited them to battery-powered applications.

In short, general-purpose DSP architectures, combined either with ASIC blocks or with FPGAs, have not been able to accommodate ubiquitous multimode baseband-processing requirements without severe trade-offs. And while the industry has explored the flexibility, scalability and full programmability of traditional DSPs, those solutions have not been able to run simultaneously all of the algorithms a 3G SDR design demands while staying within its power, area and performance budgets.

A new approach

The highly parallelized DSP now being considered for SDR handset and basestation design tackles these requirements by integrating a two-dimensional array of 16-bit DSPs with on-the-fly software definition, so the chip can be modified quickly as feature and standards requirements change. Performance is further enhanced by specialized blocks for a variety of important baseband-processing functions. By enabling a lower clock rate, the architecture saves on system cost and power.

Although the architecture is broadly applicable to signal processing, its parallelized two-dimensional array structure and specialized integrated blocks make it particularly well suited to wireless applications. Using this architecture, a single highly parallelized DSP core has been proven to perform all the chip- and symbol-level baseband processing for wireless systems, as well as source encoding/decoding of both voice and video, without the need for ASIC blocks. This approach requires no hardware/software partitioning of the baseband algorithms, which simplifies the design and lowers its cost. A simplified sketch of how a kernel maps onto such an array closes this article.

Nader Bagherzadeh (naderb@morphotech.com) is professor and former department chair of electrical engineering and computer science at the University of California, Irvine. He is also an advisor to and co-founder of Morpho Technologies (Irvine, Calif.).
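As that closing illustration, the following conceptual C sketch distributes a symbol-rate dot-product kernel across a hypothetical 4 x 4 array of 16-bit processing elements. The array dimensions, data types and partitioning scheme are illustrative assumptions, not a description of any particular product; the "array" is emulated serially here, where real hardware would run the elements in parallel.

#include <stdint.h>
#include <stddef.h>

/* Hypothetical 4x4 array of 16-bit processing elements (PEs). Each PE
 * multiply-accumulates its own slice of the data; partial sums are then
 * combined. Dimensions and types are illustrative only. */
#define ROWS 4
#define COLS 4
#define PES  (ROWS * COLS)

/* One PE's work: a 16-bit multiply-accumulate, widened to 32 bits. */
static int32_t pe_mac(const int16_t *x, const int16_t *h, size_t n)
{
    int32_t acc = 0;
    for (size_t i = 0; i < n; i++)
        acc += (int32_t)x[i] * h[i];
    return acc;
}

/* Dot product of length `len` (assumed a multiple of PES), partitioned
 * as it might be across the 2-D array: each PE handles one contiguous
 * slice. The loop below stands in for PES elements working in parallel. */
int32_t array_dot(const int16_t *x, const int16_t *h, size_t len)
{
    size_t slice = len / PES;
    int32_t total = 0;
    for (size_t pe = 0; pe < PES; pe++)   /* conceptually concurrent */
        total += pe_mac(x + pe * slice, h + pe * slice, slice);
    return total;
}

Because the sixteen slices are independent, the same throughput can be sustained at roughly one-sixteenth the per-element clock rate, which is where the architecture's claimed cost and power savings come from.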