Reconfigurable arrays of processors needed for wireless multimedia
By Ralph Weir, Marketing Manager, Elixent Ltd., Bristol, United Kingdom, EE Times
December 2, 2002 (2:31 p.m. EST)
URL: http://www.eetimes.com/story/OEG20021127S0036
Like a lifeline, networked multimedia is being pitched as the saviour of the electronics business. Just as the downturn is at its worst, demand for new, powerful mobile devices, equipped with video, audio, telephony, PDA facilities and networking, will pull us into a new "golden age". With a very high semiconductor content, these devices will fill all the under-utilised wafer fabs, and profitability will return to the industry. However, there is one small problem with this rose-tinted vision - that "high semiconductor content".
We're at the stage when networked multimedia devices are technically feasible. There's some debate as to how you package these things - do you build a phone with go-faster bits, a PDA-format widget, or some new shape entirely? - but the technology is all there; the challenge is implementation. Traditionally, systems like this were implemented as far as possible in software, on as generic a hardware platform as possible. It's a fast, flexible way of developing functionality. If a new feature is required, it can be added with a software patch. This approach serves current PDA and phone devices well.
So what's all this about "one small problem"? For an example, look at the current 3G phones. Limited battery life. Expensive. Compared with a 2G terminal, most are very large. Why is that?
Well, all this whiz-bang functionality is simply too demanding for processors. You can't process Bluetooth, 802.11 or MPEG-4 video on a processor alone; you need hardware acceleration, or dedicated functional blocks. Each functional block could be a chip (a WLAN chipset, say) or a section of a large SoC, but it's physically there, costing money and drawing power.
Worse still, a hardware implementation is inflexible. It can't be re-used for other purposes, or adapted when the standards change.
If consumers are going to buy devices with these capabilities, they need them to be sleek, with battery life at least as long as the products they bought last year. It's no good adding extra features if you lose sight of what the user wanted. Battery life is important, making the difference between a usable device and a brick. Device cost is key too - it represents the difference between upgrading now and upgrading next year. The industry needs these products to be successful now.
For the first time, evolutionary enhancement is not providing the answers. Integrating more hardware blocks at smaller geometries helps, but doesn't stop the silicon area from growing. It also creates a specialist chip, meaning fewer units per mask set; and with mask costs increasing exponentially as we move to smaller geometries, that is an unattractive option.
Cost effective software implementation will not be viable for years. This stuff is efficient in hardware, but requires far more computation than a traditional Von Neumann machine can offer.
This is where reconfigurable computing architectures can help. Several of these, such as D-Fabrix, compute algorithms at hardware speeds, but on a platform that is software programmable.
The D-Fabrix array is made up of 4-bit ALUs. Suppose your algorithm has to do an add-compare-select on 8-bit numbers, as in Viterbi processing. Combine two ALUs for the add operator, two more for the compare, and another two as a mux to implement the select. A processor can do one of these operations per clock cycle; a D-Fabrix array, with perhaps thousands of ALUs, could implement many such operators per clock, or combine them with other functions. Massive performance, achieved without losing the processor's flexibility.
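In software terms, the add-compare-select step mapped onto those six ALUs looks like the following minimal Python sketch (function and variable names are illustrative, not part of any D-Fabrix API):

```python
def add_compare_select(pm0, pm1, bm0, bm1):
    """One Viterbi add-compare-select (ACS) step.

    pm0, pm1: path metrics of the two predecessor states
    bm0, bm1: branch metrics for the transitions into the new state
    Returns (survivor_metric, decision_bit).
    """
    # Add: extend each candidate path by its branch metric (two ALU pairs).
    cand0 = pm0 + bm0
    cand1 = pm1 + bm1
    # Compare: pick the smaller (better) metric; Select: a 2:1 mux
    # keyed by the comparison result (the remaining ALU pair).
    if cand0 <= cand1:
        return cand0, 0
    return cand1, 1

# Example: 3+2=5 beats 5+1=6, so path 0 survives with metric 5.
metric, decision = add_compare_select(3, 5, 2, 1)
```

A processor serialises the add, compare and select into separate instructions; the array computes all three concurrently, and many such ACS units can run side by side in one clock.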
Now, suppose you build a next-generation hardware platform. Equip it with some basic hardware blocks - RF analog stages, user interfaces and so on. Add a D-Fabrix computing array and a RISC processor. What you have is a system where only the basic interfaces are fixed-function hardware - like today's 2G phone or basic PDA. Everything else is software - either for the RISC, or for the D-Fabrix array.
Complex decisions
So, why not get rid of the RISC? If reconfigurable computing is so clever, why not have it run everything? In fact, it's not so simple. A processing array is awesome at what it's good at: algorithm processing. It can support sampling rates of greater than 100 MHz if required, or time-share its resources effectively to implement complex algorithms at much lower sampling rates. But it isn't great at complex decision-making code.
In contrast, RISC processors are great at decision making and branching. For control and decision making code, a RISC is king; it just doesn't scale for algorithmic work.
Parenthetically, there is a compromise: the DSP processor. This isn't as good as a RISC at control code, as it tends to have a lower instruction rate and be poorer at branching, cache management and so forth. It betters the RISC on algorithms, where it employs more parallelism to enable single-cycle MAC operations; but it doesn't have the performance or area efficiency of the RAP products.
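The single-cycle MAC mentioned above is, in software terms, the inner loop of a dot product - the core of FIR filtering and similar DSP workloads. A minimal illustrative sketch in Python (the function name is hypothetical):

```python
def dot_product(coeffs, samples):
    """FIR-style dot product: the workload a DSP's MAC unit accelerates.

    On a DSP, each multiply-plus-accumulate pair is a single instruction;
    a plain RISC typically needs separate multiply and add instructions,
    while a reconfigurable array can run many taps in parallel.
    """
    acc = 0
    for c, x in zip(coeffs, samples):
        acc += c * x  # one MAC per filter tap
    return acc

# 3-tap example: 1*4 + 2*5 + 3*6 = 32
result = dot_product([1, 2, 3], [4, 5, 6])
```

The DSP's advantage is that the loop body collapses to one instruction per tap; the array's advantage is that the taps need not be sequential at all.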
This combination of RISC for control and reconfigurable computing for complex algorithmic processing is a powerful one. Each technology is playing to its strengths in an extremely complementary manner. Most applications are a combination of computing and control - such a platform approach provides a perfect balance.
D-Fabrix allows the hardware required for networked multimedia to be implemented on a soft platform. This "virtual hardware" approach has many benefits. Not only is it the only software-programmable approach to tackling such high throughput data streams, it offers extremely efficient use of the available silicon area.
No dedicated hardware resources exist; instead, the D-Fabrix platform is shared between each task to be performed, minimising the silicon area used. It also minimises power, often showing savings of up to 90% over processor solutions.
This revolutionary approach offers the networked multimedia market - wired and wireless - more than just a new technique; it offers it a lifeline. Without some new approach to implementation, networked multimedia terminals will always be too expensive and power hungry. With reconfigurable computing, these problems disappear; and the resulting SoC devices are far more generic processing platforms, applicable to many applications.
By enabling this re-use, the number of devices manufactured per SoC project (and hence mask set) can be increased, bringing savings in development time and cost - on top of the unit cost savings already achieved. With these benefits in place, we should see some compelling new products coming to market and perhaps one of these will be the "kick-start" our industry so badly needs.