Optimizing High Performance CPUs, GPUs and DSPs? Use Logic and Memory IP - Part II

Ken Brock, Synopsys

In Part I of this two-article series we described how the combination of logic libraries and embedded memories within an EDA design flow can be used to optimize area in CPU, GPU or DSP cores. In Part II we explore methods by which logic libraries and embedded memories can be used to optimize performance and power consumption in these processor cores.

Maximizing Performance in CPU, GPU and DSP Cores

Further complicating matters for consumers of processor IP, real-world applications have critical product goals beyond raw performance. Practical tradeoffs among performance, power consumption and die area (referred to collectively as "PPA") must be made in virtually every SoC implementation; rarely does a design team pursue frequency at all costs. Schedule, total cost, and other configuration and integration factors are also significant criteria when selecting processor IP for an SoC design. Understanding the effect that common processor implementation parameters have on a core's PPA, as well as on other important criteria such as cost and yield, is key to putting IP vendors' claims in perspective. Table 3 summarizes the effects that a CPU core's common processor implementation parameters may have on its performance and other key product metrics.
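To make the idea of a combined PPA tradeoff concrete, the short Python sketch below ranks candidate core implementations with a weighted figure of merit. It is not from the article: the implementation names, metric values and weights are all hypothetical, chosen only to illustrate how a team might compare implementations once performance, power and area are normalized against a baseline rather than pursuing frequency alone.

    # Illustrative only: hypothetical candidates and weights, not from the article.
    candidates = {
        # name: (frequency_mhz, power_mw, area_mm2) -- made-up values
        "speed_optimized": (2000, 450, 1.40),
        "balanced":        (1700, 300, 1.10),
        "area_optimized":  (1400, 220, 0.85),
    }

    # Relative importance of each metric for this (hypothetical) product.
    weights = {"performance": 0.4, "power": 0.3, "area": 0.3}

    def ppa_score(freq_mhz, power_mw, area_mm2, baseline):
        """Normalize each metric against a baseline implementation and
        combine them into a single figure of merit (higher is better)."""
        base_freq, base_power, base_area = baseline
        perf = freq_mhz / base_freq      # higher frequency is better
        power = base_power / power_mw    # lower power is better
        area = base_area / area_mm2      # smaller area is better
        return (weights["performance"] * perf
                + weights["power"] * power
                + weights["area"] * area)

    baseline = candidates["balanced"]
    for name, metrics in candidates.items():
        print(f"{name:16s} PPA score: {ppa_score(*metrics, baseline):.3f}")

Changing the weights shifts the ranking, which is the point: a mobile product that weights power heavily will favor a different implementation than a compute-bound design that weights frequency, even when the underlying IP is identical.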