Architectures Battle for Deep Learning

Linley Gwennap, The Linley Group

Chip vendors implement new applications in CPUs first. If an application is suitable for GPUs and DSPs, it may move to them next. Over time, companies develop ASICs and ASSPs. Is deep learning moving through the same sequence?

In the brief history of deep neural networks (DNNs), users have tried several hardware architectures to increase their performance. General-purpose CPUs are the easiest to program but are the least efficient in performance per watt. GPUs are optimized for parallel floating-point computation and provide several times better performance than CPUs. As GPU vendors discovered a sizable new customer base, they began to enhance their designs to further improve DNN throughput. For example, Nvidia's new Volta architecture adds dedicated matrix-multiply units, accelerating a common DNN operation.

Even these enhanced GPUs remain burdened by their graphics-specific logic. Furthermore, the recent trend is to use integer math for DNN inference, although most training continues to use floating-point computation. Nvidia has also enhanced Volta's integer performance, but it still recommends floating point for inference. Chip designers, however, are well aware that integer units are considerably smaller and more power efficient than floating-point units, a benefit that grows when 8-bit (or smaller) integers replace 16-bit or 32-bit floating-point values.
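To illustrate why integer inference appeals to chip designers, the following is a minimal sketch of symmetric 8-bit quantization applied to an inference-time matrix multiply. It uses NumPy; the layer sizes, scale handling, and function names are illustrative assumptions, not taken from any vendor's toolchain, and real deployments use calibrated, often per-channel, quantization schemes.

```python
import numpy as np

def quantize_int8(x):
    """Symmetric per-tensor quantization: map float values to int8 with one scale factor."""
    scale = np.max(np.abs(x)) / 127.0
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

# Illustrative layer: float32 weights and activations (values are arbitrary).
rng = np.random.default_rng(0)
weights = rng.standard_normal((256, 128)).astype(np.float32)
activations = rng.standard_normal((128, 1)).astype(np.float32)

# Quantize both operands to int8.
q_w, s_w = quantize_int8(weights)
q_a, s_a = quantize_int8(activations)

# Integer matrix multiply (accumulate in int32 to avoid overflow),
# then rescale the result back to floating point.
int_result = q_w.astype(np.int32) @ q_a.astype(np.int32)
approx = int_result.astype(np.float32) * (s_w * s_a)

# Compare against the full float32 reference.
reference = weights @ activations
print("max abs error:", np.max(np.abs(approx - reference)))
```

The bulk of the arithmetic here is the int8-by-int8 multiply with int32 accumulation; only the final rescaling touches floating point, which is why hardware built around narrow integer multiply-accumulate units can be so much smaller and more power efficient for inference.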