Coming back to software

When an embedded processing architecture evolves in the presence of increasing performance demands and Moore's Law, it tends to follow a fairly well-defined path. Examples can be found in such applications as dial-up data modems, Ethernet adapters and, lately, network processors. In general these tasks all start out with relatively modest throughput demands and with access to less than adequate levels of integration. In addition, early in the life of the application, the details of the tasks required tend to be a little fluid.

The first design teams to approach the problem usually resort to software-only solutions. This gives them the freedom to explore the solution space without too much penalty for engineering changes. It also saves the time and expense of hardware development in an unsure market. But as the application catches on, throughput requirements increase. The software solution begins to show definite code hot spots that become critical to system performance. And naturally, design teams look at hardware approaches to chilling those hot spots.

Just how this is done depends, ideally, on the amount of dataflow common to both the hot spot and the rest of the application. If all the data used by the hot spot is common to the rest of the application anyway, it makes sense to place hardware acceleration either in, or tightly coupled to, the CPU executing the rest of the code. If the hot spot consumes a lot of data that isn't used elsewhere, it doesn't make much sense to burn load/store cycles pumping it through the CPU, so the accelerator should be loosely coupled to the main code execution units. (A rough sketch of this rule of thumb follows below.)

That's in the ideal, of course. In practice, selection of an accelerator architecture is a very juicy political plum, and the decision is likely to contain strong elements of influence, favoritism and relationship leverage. It is not all that unusual to see a system architecture with an obvious non sequitur sitting right in the middle of it, a permanent monument to some manager's ability to swing what should have been a technical decision.

But the CPU-plus-accelerator approach is not the ultimate expression of the solution. As throughput needs continue to increase, hardware capacity improves and the application becomes more defined, more and more of the task will migrate into hardware. Eventually, what had been a highly specialized VLIW processor, an adaptable CPU with instruction-set extensions, or a CPU core with a coprocessor will begin to look like a vestigial generic CPU core surrounded by really interesting hardware. At this point the choice of CPU is purely a matter of convenience for the legacy software, and the core has become so small that it may not even be mentioned in design descriptions.

In the end, irony claims its due. By the time the entire application has become mature, CPU cores may have become powerful enough that a software-only solution can handle the majority of applications, leaving the hardware solution for high-end work. So much of the market ends up right back where it started, with software running on an industry-standard embedded processor.

It's not that any one design team would intentionally pursue this whole trajectory. The evolution is the result of competition, with each step brought on by a competitor who sees a cost/performance advantage in moving to the next approach. There is reason to believe that we are watching this cycle instantiate itself in the network processor space.
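To make the dataflow rule of thumb above concrete, here is a minimal sketch in C. The profile structure, the example hot spots and the 50 percent threshold are illustrative assumptions made for this sketch, not anything prescribed here; a real decision would also weigh latency, bus bandwidth and, as noted above, politics.

/*
 * Illustrative sketch only: a crude heuristic for the accelerator-coupling
 * decision described above. The field names, example numbers and the 50%
 * threshold are assumptions for illustration, not a prescribed method.
 */
#include <stdio.h>

enum coupling { TIGHT_COUPLED, LOOSE_COUPLED };

struct hotspot_profile {
    double bytes_consumed;   /* data the hot spot touches per unit of work */
    double bytes_shared;     /* portion of that data also used by the rest
                                of the application (<= bytes_consumed)     */
};

/* If most of the hot spot's data is shared with the surrounding code, keep
 * the accelerator in or next to the CPU; if most of it is private to the
 * hot spot, avoid pumping it through load/store cycles and couple the
 * accelerator loosely instead. */
static enum coupling choose_coupling(const struct hotspot_profile *p)
{
    double shared_fraction = (p->bytes_consumed > 0.0)
                                 ? p->bytes_shared / p->bytes_consumed
                                 : 1.0;
    return (shared_fraction >= 0.5) ? TIGHT_COUPLED : LOOSE_COUPLED;
}

int main(void)
{
    /* Hypothetical hot spots: a cipher kernel that reuses packet data the
     * CPU also touches, and a framing engine that streams bulk data the
     * CPU never looks at. */
    struct hotspot_profile crypto  = { .bytes_consumed = 64.0,   .bytes_shared = 60.0 };
    struct hotspot_profile framing = { .bytes_consumed = 1500.0, .bytes_shared = 40.0 };

    printf("crypto hot spot:  %s\n",
           choose_coupling(&crypto) == TIGHT_COUPLED ? "tightly coupled" : "loosely coupled");
    printf("framing hot spot: %s\n",
           choose_coupling(&framing) == TIGHT_COUPLED ? "tightly coupled" : "loosely coupled");
    return 0;
}

The point of the sketch is only the shape of the decision: the ratio of shared to private dataflow, however it is actually measured, is what pushes an accelerator toward or away from the CPU.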
Today there is something of a surplus of network processors, most of which are simply CPUs, or even state machines, controlling a handful of specialized accelerators. But there are indications that as designers look harder at the needs of multi-protocol traffic loads at OC-192 speeds, those accelerators are going to grow into the pillars of the solution, and the control element, whether state machine or CPU core, will become decidedly less important. Hence it will make sense to simply implement it with an off-the-shelf CPU core from ARM, MIPS or wherever. This is not exactly the roadmap the NPU vendors had in mind.

Ron Wilson is editorial director of ISD Magazine and a contributor to EE Times. He has covered chip-related matters for 15 years for various industry publications, and was once, in the distant past, a designer himself.