Overcoming the embedded CPU performance wall
Julio Diez Ruiz
EETimes (January 21, 2013)
The physical limitations of current semiconductor technology have made it increasingly difficult to achieve frequency improvements in embedded processors, and so designers are turning to parallelism in multicore architectures to achieve the high performance required for current designs. This article explains these silicon limitations and how they affect CPU performance, and indicates how engineers are overcoming this situation with multicore design.
Current status of multicore SoC design and use
Over the last few years there has been an increase in microprocessor architectures featuring multi-threading or multicore CPUs. They are now the rule for desktop computers and are becoming common even for CPUs in the high-end embedded market. This trend stems from processor designers' desire for ever higher performance. But silicon technology has hit a wall: clock frequencies can no longer be raised enough to deliver the needed gains. The answer to the demand for ever-increasing processing power therefore lies in architectural solutions such as replicating processor cores inside microprocessor-based systems-on-chip (SoCs).
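As a minimal illustration of this approach (not taken from the original article), the following C sketch splits a simple computation across POSIX threads, which an SMP operating system can then schedule onto the separate cores of a multicore SoC. The thread count and the workload are illustrative placeholders, not values discussed in the article.

/* Hypothetical sketch: divide work among threads so an SMP scheduler
 * can run them on separate cores of a multicore SoC.
 * NUM_THREADS and the workload are illustrative placeholders. */
#include <pthread.h>
#include <stdio.h>

#define NUM_THREADS 4           /* assume a quad-core part */
#define N           (1 << 20)   /* size of the data set */

static double data[N];
static double partial[NUM_THREADS];

/* Each worker thread sums its own contiguous slice of the array. */
static void *worker(void *arg)
{
    long id = (long)arg;
    long chunk = N / NUM_THREADS;
    long start = id * chunk;
    long end = (id == NUM_THREADS - 1) ? N : start + chunk;
    double sum = 0.0;

    for (long i = start; i < end; i++)
        sum += data[i];

    partial[id] = sum;
    return NULL;
}

int main(void)
{
    pthread_t threads[NUM_THREADS];
    double total = 0.0;

    for (long i = 0; i < N; i++)
        data[i] = 1.0;

    /* Fork one worker per core; the OS maps the threads onto available cores. */
    for (long t = 0; t < NUM_THREADS; t++)
        pthread_create(&threads[t], NULL, worker, (void *)t);

    /* Join the workers and combine the per-core partial results. */
    for (long t = 0; t < NUM_THREADS; t++) {
        pthread_join(threads[t], NULL);
        total += partial[t];
    }

    printf("total = %f\n", total);
    return 0;
}

The speedup of such a program is bounded by its serial fraction (Amdahl's law), which is why simply replicating cores does not automatically multiply application performance.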