DSP app? 80-20 rule still works
By EE Times
December 9, 2002 (3:50 p.m. EST)
URL: http://www.eetimes.com/story/OEG20021204S0039
Victor Berrios, DSP Marketing Account Manager, 8/16-Bit Products Division, Transportation and Standard Products Group, Motorola Semiconductor Products Sector, Motorola Inc., Tempe, Ariz.

If you have never developed a digital signal processor application, perhaps you've wondered how different it would be from the non-DSP embedded applications you have developed through the years. If so, it is critical to understand the choices and decisions involved in developing a DSP application.

Assuming that a DSP was selected as the proper processing platform for the application, various development options are available. Here are some of the major decisions you will have to make:

- Should we code in a high-level language or in assembly?
- Which development tool suite should we use?
- Should we develop our signal processing algorithms from scratch or use available libraries?
- What are our debugging alternatives?
- How do we perform code analysis and optimization?

If you have done embedded software development, you will recognize that these decisions are not very different from those of non-DSP applications.

DSPs traditionally have been viewed as processors that must be programmed in assembly language in order to take full advantage of their performance. This might have been true five to 10 years ago, but it is no longer the case. C compilers for DSP processors have become much better at delivering functionally correct and compact code. Further, with the availability of hybrid DSP/MCU architectures in the marketplace, efficient compiler technology is reaching new levels and should continue to improve for the foreseeable future.

Ultimately, you will find that your signal processing application is no different from any other embedded application. Specifically, the 80-20 rule comes to mind: you will spend 80 percent of your execution time in 20 percent of your code.
Considering the much-improved compiler offerings available for DSPs today, optimizing that 20 percent of your code will be a more efficient use of your development time, and that optimization might require the use of assembly language. Just let the compiler take care of the other 80 percent.

When it comes to software development environments, the choices can be overwhelming. Most DSP suppliers have their own development environment that they will want you to use. Then there are the many third parties that want you to understand their offerings before you make any decision. The following guidelines should assist in making the tools decision:

- Ease of use and platform support: Are the tools intuitive, or did you spend 15 minutes perusing the documentation just to figure out how to create a project? Is the tool support consistent across environments? You shouldn't need to invest in a new computing platform just to use a new development system.

- Completeness of the offering: Is there a full set of tools (for example, editor, compiler, assembler and linker) or just certain components? The more functionality the package includes, the better the investment. "A la carte" ordering might result in a superior environment, but the cost will be higher.

- Support and maintenance: No matter how intuitive and easy to use a tool is, users will need help to master it. What are the vendor's support policies? Is there a cost associated with support and/or maintenance releases? All these factors, and their associated costs, merit attention.

Signal processing is math-intensive work, and its algorithms and applications are an exercise in applying a set of numerical principles. A question arises, though: will the application use standard, established mathematical techniques such as Fourier transforms, convolution and filtering, or will it use a custom mathematical recipe? If it is based on standard principles, using readily available implementations is a good idea.
These are usually available from the DSP maker as well as a number of other sources, including academia, third-party software vendors and the Web. Having a better Fourier transform implementation will not necessarily improve a product; rather, how the product is used will make the difference. The goal is to build only where absolutely necessary, and buy whenever possible.

A vote against emulators

Embedded systems have traditionally employed hardware device emulators to aid in the design and coding of software prior to system hardware availability. When it comes to signal processing, emulators are not the option I recommend. The high-performance, real-time requirements of signal processing applications demand more timing-accurate debugging solutions. Look for on-chip emulation capabilities when evaluating DSP devices. Most of these are developed around the Joint Test Action Group (JTAG) IEEE standard. These modules offer the traditional debug functions while letting the developer execute an application on the device itself, at speed. Nothing can replace the process of developing software on the actual platform on which it was designed to run.

Earlier I recommended that designers concentrate on optimizing the 20 percent of software that consumes 80 percent of the execution time. But how can that 20 percent be identified? This is where code analysis and coverage tools come in. These tools profile the application and help identify the portions of the code that are consuming the most processing time. They also let developers identify any portion of the code that is seldom, if ever, used, which in turn provides memory-usage optimization options. Some of these tools are available with the standard development tools package; others will have to be purchased separately. As mentioned earlier, it is important to understand what is included in the tools packages and what the actual development needs are.
DSP software development is not that different from traditional, control-oriented embedded software development. If developers understand the few differences, and how they affect choices, they will be able to design a better product.