Compression/decompression tradeoffs for data networking and storage
Update: Exar Corporation Acquires Altior Inc. to Provide Additional Growth in Data Compression (February 19, 2013)
By Chad Spackman, CebaTech, Inc. | May 09, 2007 -- networksystemsdesignline.com
Hardware vs. software compression
Compression, as the name implies, squeezes or "compresses" the size of a file or data set. Compression techniques are used for voice, video, audio, text, and program data in hundreds of different applications and product types. Much of the compression and subsequent decompression processing performed in products today is accomplished by a CPU running any number of different algorithms that have been specifically designed to efficiently reduce the size of a file or data set.
Unfortunately, all compression algorithms require the entire file to be examined and processed. Scanning through a file, especially a large one, is something that general-purpose CPUs are not particularly efficient at. For applications that do not require compression and decompression to run especially fast, a CPU is sufficient. For higher-performance applications such as Internet file transfers or large data backups, however, CPU-based compression falls short.
Hardware compression offers a significant improvement in the rate at which data can be compressed and decompressed. For example, benchmarks of the popular GZIP data compression routine on a 3 GHz Pentium-class CPU show maximum data rates of approximately 200 Mb/s. By comparison, hardware compression can achieve data rates of 2 Gb/s or greater. With a 10X or more speedup in compression processing, it's clear that dedicating a hardware engine or co-processor to this function yields much greater performance.
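For readers who want to reproduce a software-side number of their own, the rough throughput of CPU-based DEFLATE (the algorithm behind GZIP) can be measured with a short zlib program such as the sketch below. The 64 MB buffer, the synthetic data pattern, and compression level 6 are illustrative assumptions, not the configuration behind the figures quoted above; real files will compress faster or slower depending on their content.

```c
/* Minimal sketch: measure software DEFLATE throughput with zlib.
 * Assumes zlib is available (compile with: cc bench.c -lz). */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <zlib.h>

int main(void)
{
    const size_t in_len = 64UL * 1024 * 1024;         /* 64 MB test buffer (arbitrary) */
    unsigned char *in   = malloc(in_len);
    uLongf out_len      = compressBound(in_len);       /* worst-case output size */
    unsigned char *out  = malloc(out_len);
    if (!in || !out)
        return 1;

    /* Fill the buffer with mildly compressible synthetic data. */
    for (size_t i = 0; i < in_len; i++)
        in[i] = (unsigned char)(i % 251);

    clock_t start = clock();
    int rc = compress2(out, &out_len, in, in_len, 6);  /* level 6 ~ gzip default */
    clock_t end = clock();
    if (rc != Z_OK)
        return 1;

    double secs     = (double)(end - start) / CLOCKS_PER_SEC;
    double mbit_s   = (in_len * 8.0) / (secs * 1e6);
    printf("compressed %zu -> %lu bytes in %.2f s (~%.0f Mb/s)\n",
           in_len, (unsigned long)out_len, secs, mbit_s);

    free(in);
    free(out);
    return 0;
}
```

Running a measurement like this on the target CPU, and comparing it against the rated throughput of a candidate hardware engine, gives a quick first-order estimate of the speedup a dedicated compression block would buy for a particular workload.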
There is typically an overhead associated with deploying dedicated processing hardware, however. This overhead is most often observed in the form of additional power and area. There are a number of trade-offs that designers can make when deploying dedicated compression hardware that will help minimize the overhead while achieving the desired improvements in compression processing performance. This paper examines some of the specific trade-offs that a designer can make when deploying hardware for lossless data compression.