Putting Multicore Processing in Context: Part 2
Mar 7 2006 (10:00 AM)
Is it a given that using multiple cores will speed up an application? No, and Amdahl's Law is not the only factor that determines how much speedup can be achieved.
In general, if speedup is the sole objective when adding processors, the following must hold true: (1) the processor is overloaded and cannot process the available work in a satisfactory time frame; (2) the workload contains elements that can be divided and worked on in parallel; and (3) a suitably faster single processor cannot provide the processing power needed to handle the workload in a satisfactory time.
Part 1 in this series examined the "classic" reasons why adding processors to a computing machine does not produce a proportional increase in performance. Most, if not all, of those reasons trace back in one form or another to Amdahl's Law.
Basically, Amdahl's Law states that the upper limit on the speedup gained by adding processors is determined by the fraction of the application that must execute serially. Code may be serial because it is explicitly written that way, or because it shares resources. Shared data is a common example: only one processor or core can access a piece of shared data at a time.
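For reference, Amdahl's Law is usually written as follows, where P is the fraction of the workload that can run in parallel and N is the number of processors (the notation here is the conventional one, not taken from Part 1):

\[ S(N) = \frac{1}{(1 - P) + \dfrac{P}{N}} \]

For example, if 90 percent of the workload is parallelizable (P = 0.9), four cores give a speedup of only about 3.1, and no number of cores can push the speedup past 1/(1 - P) = 10.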
The next step in deciding whether multicore processing will benefit your application is to look at the hardware. Most embedded designs use shared memory (all cores can access some or all of the on-chip memory), and the cores can communicate with one another in some fashion. Even so, for most applications adding more cores does not yield a proportional increase in performance.
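To make the shared-data point concrete, the sketch below shows one common way serialization creeps in: several threads updating a shared counter under a lock. The names and the pthreads-based structure are illustrative assumptions, not code from this series; the point is simply that the locked update runs on one core at a time regardless of how many cores the chip provides.

#include <pthread.h>
#include <stdio.h>

/* Shared data: only one core can update it at a time. */
static long shared_counter = 0;
static pthread_mutex_t counter_lock = PTHREAD_MUTEX_INITIALIZER;

#define NUM_THREADS 4
#define UPDATES_PER_THREAD 1000000L

static void *worker(void *arg)
{
    (void)arg;
    for (long i = 0; i < UPDATES_PER_THREAD; i++) {
        /* Critical section: this is the serialized part of the workload. */
        pthread_mutex_lock(&counter_lock);
        shared_counter++;
        pthread_mutex_unlock(&counter_lock);
    }
    return NULL;
}

int main(void)   /* build with: cc -pthread example.c */
{
    pthread_t threads[NUM_THREADS];

    for (int i = 0; i < NUM_THREADS; i++)
        pthread_create(&threads[i], NULL, worker, NULL);
    for (int i = 0; i < NUM_THREADS; i++)
        pthread_join(threads[i], NULL);

    /* Adding cores does not speed up the locked increments;
       they still happen one at a time. */
    printf("counter = %ld\n", shared_counter);
    return 0;
}

The more time each thread spends inside the critical section relative to its independent work, the larger the serial fraction in Amdahl's terms, and the smaller the benefit of additional cores.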