Signal-processing system design tackles tough wireless apps
By Ken Karnofsky, EE Times
August 8, 2003 (3:02 p.m. EST)
URL: http://www.eetimes.com/story/OEG20030808S0031
A common belief among designers is that Moore's Law, which says that a chip's transistor density will double every 18 months, sets an upper limit on the rate at which system performance can improve. However, chip density is simply a springboard for the architectural and algorithmic improvements that create breakthrough performance advances, and each of those advances was made possible by new development tools that addressed the critical problems of the day. These advances also create a new problem that is more relevant to wireless engineers: algorithmic complexity is growing at a faster rate than chip density. This phenomenon has been labeled Shannon's Law by Jan Rabaey, a professor at the University of California, Berkeley, and a well-known EDA guru, after Claude Shannon, who defined the theoretical limit of information transmission in the presence of noise.

Much of the complexity in next-generation wireless technology results from sophisticated signal-processing requirements. For example, IEEE 802.11a/g uses a variety of computationally intensive algorithms to mitigate channel impairments and achieve the highest data rate. The physical layer concurrently uses adaptive modulation, orthogonal frequency division multiplexing employing the Fourier transform, forward error correction and other techniques.

Designing these complex systems requires corresponding advances in development environments. Mastering the design of computationally challenging wireless systems calls for a development environment that lets designers accurately model an entire system, including the behavior and interaction of hardware and software subsystems. Traditional procedural programming and hardware description languages, and incremental extensions to those languages, are not appropriate for modeling this level of algorithmic complexity. For decades, ITU and IEEE specifications, communications textbooks and engineers' whiteboards have used block diagrams to specify signal flow, timing and system architecture.
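To make the Fourier-transform core of OFDM concrete, here is a minimal sketch (not from the article; sizes are illustrative, loosely modeled on 802.11a's 64-point FFT, and the channel is assumed ideal):

```python
import numpy as np

rng = np.random.default_rng(0)

# 64 subcarriers carrying QPSK symbols (illustrative sizes)
n_sc = 64
bits = rng.integers(0, 2, size=(n_sc, 2))
qpsk = ((2 * bits[:, 0] - 1) + 1j * (2 * bits[:, 1] - 1)) / np.sqrt(2)

# Transmitter: one inverse FFT maps the subcarrier symbols onto
# orthogonal carriers, producing a single time-domain OFDM symbol
tx = np.fft.ifft(qpsk)

# A cyclic prefix guards against multipath delay spread
cp = 16
tx_cp = np.concatenate([tx[-cp:], tx])

# Receiver (ideal channel): strip the prefix; a forward FFT
# recovers the subcarrier symbols exactly
rx = np.fft.fft(tx_cp[cp:])
assert np.allclose(rx, qpsk)
```

In a real 802.11a/g receiver the FFT output would then feed equalization, demapping and forward error correction; channel impairments and timing recovery are omitted here.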
It is not surprising, therefore, that graphical tools are the natural way to specify, design and verify such wireless systems. A good wireless system development environment must be able to handle DSP algorithms and architectures at the right level of abstraction: hierarchical block diagrams that accurately model time and mathematical operations, clearly convey the system architecture, and map naturally to real hardware and software components and algorithms. In addition, the designer should be able to model other elements of the system and environment that affect baseband performance, such as RF impairments, channel effects and timing recovery.

The system modeling abstraction should make it easy to represent sample sequences, the grouping of sections of a sample sequence into frames, and the concurrent operation of multiple sample rates that are inherent in modern communications systems. The design environment must also allow the developer to add implementation detail when, and only when, it is appropriate. This provides the flexibility to explore design trade-offs, optimize system partitioning and adapt to new technologies as they become available.

However, raising the design-abstraction level is not enough. The environment should also provide a design and verification flow for the programmable devices that exist in most wireless systems, including general-purpose microprocessors, DSPs and FPGAs. The key elements of this flow are automatic code generation from the graphical system model and verification interfaces to lower-level hardware and software development tools.

Signal-processing-system designers have to continually deliver systems with increasing complexity and performance. Some industry watchers have pointed to an upcoming "software crisis" or "productivity crisis" for communications and other embedded systems.
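Before taking up that productivity question, the frame-based, multirate abstraction described above can be illustrated with a short sketch (the rates, frame size and helper function are invented for illustration, not from the article): a stream is processed in fixed-size frames and decimated to a lower sample rate, with filter state carried across frame boundaries so the result matches sample-by-sample processing.

```python
import numpy as np

fs_in = 8000          # input rate, samples/s (hypothetical)
decim = 4             # decimation factor -> 2000 samples/s out
frame = 256           # samples per frame

# Simple moving-average lowpass before decimation (illustrative)
taps = np.ones(decim) / decim

def process_frame(x, state):
    """Filter then decimate one frame; `state` carries history."""
    padded = np.concatenate([state, x])
    y = np.convolve(padded, taps, mode="valid")
    return y[::decim], x[-(len(taps) - 1):]

state = np.zeros(len(taps) - 1)
stream = np.random.default_rng(1).standard_normal(4 * frame)
out = []
for i in range(0, len(stream), frame):
    y, state = process_frame(stream[i:i + frame], state)
    out.append(y)
out = np.concatenate(out)

# Frame-wise output matches one continuous filter-and-decimate pass
assert len(out) == len(stream) // decim
```

Carrying the last few input samples forward as state is the overlap-save idea; it is what lets a block-diagram tool execute frame-at-a-time for efficiency while remaining sample-accurate.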
However, such crisis predictions are often based on an incorrect measurement: the number of lines of code that an engineer can write in a day. This measure does not account for the level of programming abstraction; comparing a line of assembly code to a line of C code is comparing an apple to an orange. The C language represents more executable instructions per line of code than assembly does. Similarly, VHDL and Verilog represent digital hardware more concisely than previous logic-level tools. Yet these abstractions became practical for mainstream embedded design only through efficient code generation technology: C compilers and logic synthesis, respectively. And they did not become widely accepted until the complexity of design problems became great enough for engineers to accept automation of this process.

Advances in abstraction and code generation inevitably lead to orders-of-magnitude productivity and performance improvements. Consider the following software-development example. In the 1960s, the digital autopilot for the Apollo lunar lander required 2,000 lines of assembly code, written at a rate of 0.3 line of code per person-day; it took four people four years to develop. Just 30 years later, similar software for the Deep Space probe took 40 person-months to develop. Its 230,000 lines of C code were automatically generated by The MathWorks' Simulink and Stateflow products at a rate of 180 lines of code per person-day, a 600x improvement. In addition, the C code is far richer than assembly code, running a system that is much more complex than the original lunar lander's.

Like yesterday's spacecraft, next-generation wireless designs will demand order-of-magnitude productivity improvements, enabled by the introduction of a new programming abstraction that fits more powerful hardware architectures. The result is a new type of hardware/software development platform.
It can take time for a new development platform to emerge that can accelerate system development without diminishing the performance advantages of the new hardware architecture. It is during this time that the trade press announces a software-development "crisis." However, when the right model appears, the hardware/software platform and accompanying design flow are embraced as a new standard. Wireless system designers will benefit from the accelerated performance of innovative hardware architectures, more efficient design and development software, and development systems that can successfully take advantage of these hardware and software advancements.

The introduction of FPGAs for signal processing is particularly significant because they provide increased parallelism, allowing designers to program DSP algorithms directly into hardware for faster algorithm execution. It is now possible to design fully programmable, high-performance systems that in the past would have required ASIC solutions. Yet these performance advantages come at a price. Programming algorithms into hardware brings signal-processing engineers into unfamiliar territory: understanding the nuances of hardware design. As a result, FPGAs are often used to perform high-speed, low-complexity processing in hybrid systems, where lower-speed, higher-complexity processing continues to be performed on DSPs and microprocessors.

A similar phenomenon occurs when traditional embedded programmers attempt to incorporate DSPs into their designs. Without being able to model the execution of the algorithm at the appropriate level of abstraction, they can get bogged down in the details of programming and debugging code on a processor with an unfamiliar architecture. Traditional hardware- and software-oriented languages do not provide the abstractions needed to efficiently develop these algorithmically intensive hybrid designs.
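The parallelism argument above can be quantified with a back-of-envelope model (the cycle counts below are idealized assumptions for illustration, not benchmarks): a DSP executing one multiply-accumulate per cycle needs taps-times-samples cycles for an FIR filter, while a fully parallel FPGA datapath evaluates every tap on each clock and sustains one output sample per cycle once its pipeline fills.

```python
# Idealized cycle-count comparison for an N-tap FIR filter.
n_taps = 64
n_samples = 10_000

# Sequential DSP: one multiply-accumulate per instruction cycle
dsp_cycles = n_samples * n_taps    # 640,000 cycles

# Fully parallel FPGA datapath: all taps evaluated each clock;
# only cost beyond one cycle per sample is pipeline-fill latency
fpga_cycles = n_samples + n_taps   # 10,064 cycles

speedup = dsp_cycles / fpga_cycles
assert speedup > 60                # roughly the tap count
```

The model ignores clock-rate differences and memory bandwidth, but it captures why the speedup scales with filter length, and why the gain is largest for exactly the high-speed, low-complexity kernels the article says get mapped to FPGAs.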
System design environments must let the designer accurately model both algorithmic and architectural complexity to obtain the order-of-magnitude performance benefits that today's programmable hardware platforms offer. This is the foundation for a model-based design approach to signal-processing system development: continuous simulation, refinement and testing of the system model, from an idealized specification to a fully bit-true and timing-accurate representation, together with automatic code generation and the ability to incorporate legacy designs. Code generation also enables real-time prototyping of the design on target hardware, as well as hardware-in-the-loop testing to verify the implementation under real-world conditions. Equally important, the generated code allows the signal-processing system design environment to be integrated into mainstream software and hardware design flows.

This approach catches errors earlier in the development process, reducing the risk of design flaws and schedule slips. It also avoids locking the design into a particular implementation: when new hardware platforms appear, the designer can make the necessary system modifications to take advantage of them.

Ken Karnofsky is marketing director, DSP and Communications, at The MathWorks (Natick, Mass.).