Video architectures vie for mainstream
Ron Wilson, EE Times
(12/19/2005 9:00 AM EST) Digital video processing is rapidly becoming the premier signal-processing challenge. Combining the need for at least some level of programmability, a many-faceted computational load and extreme environmental constraints, ranging from the relative comfort of small set-top boxes to the confines of handheld devices, digital video is assembling a set of nearly unmeetable requirements. In response, champions of several competing architectural concepts are pressing forward with potential solutions.

The migration of digital video into the consumer mainstream may finally establish the ascendancy of one of these approaches, or it may see them all eclipsed by an architecture that has yet to meet with commercial acceptance. Whatever the outcome, the profusion of video codecs and formats is creating a level playing field on which much can be learned about the relative strengths and shortcomings of the rival architectures.

The default architecture for video processing is a single RISC microprocessor, programmed in C. Not only is this about the simplest hardware arrangement to implement, but it is, in a way, native: most signal-processing algorithms get their first executable expression in some dialect of C as they are being developed and explored. If video compression and decompression were a simple process, and if video bandwidths were moderate, architectural development could stop right there.

But video codecs are notoriously demanding of CPU resources. There are many operations to be executed between the input stream and the output, even for the comparatively simpler job of decoding. Then there is the matter of bandwidth. These days, only specialized applications such as surveillance use low-resolution images. Handsets still get by with low resolution, but that's set to change. Any time the screen is large enough for it to make sense at all, consumers demand at least standard-definition resolution, and even handheld-device designers are thinking about high-definition displays. Add to that the peripatetic attention span of viewers, who demand not just a preview picture within the main image but multiple simultaneous images, and you're talking serious bandwidth. The combination of computationally intensive algorithms and high data rates quickly overwhelms the single-CPU solution.

There are three basic strategies for attacking the video-processing problem. One is to use dedicated hardware in place of programmable hardware, eliminating the overhead that can slow down a general-purpose engine. "If you have the option, dedicated hardware, done right, should always be more efficient than a programmable solution," observed Chris Day, general manager at Philips Semiconductors. So this should be an open-and-shut case: Analyze the video-processing algorithm, design dedicated hardware to execute it, and you can go as fast as necessary with a minimum of silicon area and power.

Except the market hasn't made it that easy. "As we move toward high definition, we are seeing the devices that handle video becoming increasingly connected, multifunction and multiformat," said Analog Devices Inc. fellow Josh Kablotsky. "You can't solve the customer's problem by developing just one codec." Making matters worse, he said, the cost of hardware development is getting so high that chip vendors have to be able to hit more than one application segment with a single chip, again requiring different combinations of functions and codecs.
This makes the dedicated-hardware approach increasingly untenable, in the view of many in the industry. Unless you can develop a single hardwired engine that can efficiently handle several codecs at different levels and resolutions, you may need to put several hardware engines on the chip. And somewhere between one and three such engines, the approach stops being competitive.
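The scale of the single-CPU problem is easy to sketch in the same language the algorithms start out in. The short C program below is purely illustrative and is not drawn from any vendor's codec: sad_8x8 is the kind of tight per-block kernel (a sum of absolute differences, as used in motion estimation and compensation) that video code runs millions of times per second, and the arithmetic in main assumes a placeholder figure of 100 operations per pixel for a full decode path, chosen only to show how fast the workload grows from QCIF to standard and high definition.

/*
 * Illustrative sketch only: the 100 ops/pixel figure, the frame rate and
 * the format list are assumptions for back-of-envelope arithmetic, not
 * measurements of any particular codec or chip.
 */
#include <stdio.h>
#include <stdlib.h>

/* Sum of absolute differences over one 8x8 block: a typical tight
 * per-block kernel in motion estimation and compensation. */
static unsigned sad_8x8(const unsigned char *cur, const unsigned char *ref,
                        int stride)
{
    unsigned sad = 0;
    for (int y = 0; y < 8; y++) {
        for (int x = 0; x < 8; x++)
            sad += (unsigned)abs(cur[x] - ref[x]);
        cur += stride;
        ref += stride;
    }
    return sad;
}

int main(void)
{
    const double ops_per_pixel = 100.0;  /* assumed decode cost per pixel */
    const double fps = 30.0;             /* assumed frame rate */

    struct { const char *name; int w, h; } formats[] = {
        { "QCIF (176 x 144)",  176, 144 },
        { "SD   (720 x 480)",  720, 480 },
        { "HD  (1280 x 720)", 1280, 720 },
    };

    /* Pixel rate and resulting operation rate for each resolution. */
    for (size_t i = 0; i < sizeof formats / sizeof formats[0]; i++) {
        double pixels_per_sec = (double)formats[i].w * formats[i].h * fps;
        double mops = pixels_per_sec * ops_per_pixel / 1e6;
        printf("%-18s %6.1f Mpixel/s -> roughly %6.0f Mops/s to decode\n",
               formats[i].name, pixels_per_sec / 1e6, mops);
    }

    /* Exercise the kernel once on two dummy 8x8 blocks. */
    unsigned char cur[64], ref[64];
    for (int i = 0; i < 64; i++) {
        cur[i] = (unsigned char)i;
        ref[i] = (unsigned char)(i / 2);
    }
    printf("SAD of two dummy 8x8 blocks: %u\n", sad_8x8(cur, ref, 8));
    return 0;
}

Under those deliberately rough assumptions, standard definition at 30 frames per second already implies on the order of a billion operations per second, and 720p high definition roughly triples that, before any operating-system, display or audio work is counted. That is the arithmetic pushing designers away from a lone RISC core and toward the competing architectures described above.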