Tutorial: Programming High-Performance DSPs, Part 1
By Rob Oshana, Texas Instruments
November 27, 2006 -- dspdesignline.com
INTRODUCTION
Many of today's digital signal processing (DSP) applications are subject to real-time constraints, and many eventually grow to the point where they stress the available CPU and memory resources. Fitting such an application onto a DSP can feel like trying to fit ten pounds of algorithms into a five-pound sack. Understanding the architecture of the DSP, as well as its compiler, can speed up applications, sometimes by an order of magnitude. This article summarizes some of the techniques used in practice to gain order-of-magnitude speed increases from high-performance DSPs.
Make the common case fast
The fundamental rule in computer design, as well as in programming real-time systems, is "make the common case fast, and favor the frequent case." This is really just Amdahl's Law, which says that the performance improvement to be gained from a faster mode of execution is limited by how often that faster mode is used. So don't spend time trying to optimize a piece of code that will hardly ever run; you won't get much out of it, no matter how clever you are. Instead, if you can eliminate just one cycle from a loop that executes thousands of times, you will see a much bigger impact on the bottom line.