Architectures Battle for Deep Learning
Linley Gwennap, The Linley Group
10/31/2017 03:41 PM EDT
Chip vendors typically implement new applications first on CPUs. If the application suits GPUs or DSPs, it may move to those next. Over time, companies develop ASICs and ASSPs. Is deep learning moving through the same sequence?
In the brief history of deep neural networks (DNNs), users have tried several hardware architectures to increase their performance. General-purpose CPUs are the easiest to program but are the least efficient in performance per watt. GPUs are optimized for parallel floating-point computation and provide several times better performance than CPUs. As GPU vendors discovered a sizable new customer base, they began to enhance their designs to further improve DNN throughput. For example, Nvidia’s new Volta architecture adds dedicated matrix-multiply units, accelerating a common DNN operation.
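To see why dedicated matrix-multiply units matter, recall that the core computation of a fully connected DNN layer is one large matrix multiply plus a bias and activation. The sketch below is a minimal NumPy illustration of that math only, not of Volta's hardware; the layer sizes and function names are chosen purely for the example.

```python
import numpy as np

# A fully connected DNN layer is, at its core, a matrix multiply:
# activations (batch x inputs) times weights (inputs x outputs), plus a bias.
# Dedicated matrix-multiply hardware accelerates exactly this pattern;
# this NumPy version only illustrates the math, not any specific chip.

def dense_layer(x, weights, bias):
    """x: (batch, in_features); weights: (in_features, out_features)."""
    return np.maximum(x @ weights + bias, 0.0)   # matmul, bias add, ReLU

# Hypothetical example: a batch of 32 vectors through a 256-to-128 layer.
rng = np.random.default_rng(0)
x = rng.standard_normal((32, 256)).astype(np.float32)
w = (rng.standard_normal((256, 128)) * 0.05).astype(np.float32)
b = np.zeros(128, dtype=np.float32)
print(dense_layer(x, w, b).shape)   # (32, 128)
```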
Even these enhanced GPUs remain burdened by their graphics-specific logic. Furthermore, the recent trend is to use integer math for DNN inference, although most training continues to use floating-point computations. Nvidia also enhanced Volta’s integer performance, but it still recommends using floating point for inference. Chip designers, however, are well aware that integer units are considerably smaller and more power efficient than floating-point units, a benefit that increases when using 8-bit (or smaller) integers instead of 16-bit or 32-bit floating-point values.
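The integer-versus-floating-point tradeoff comes down to quantization: mapping 32-bit floating-point weights and activations onto 8-bit integers and accumulating in wider integers. The sketch below is a minimal, framework-free illustration of per-tensor symmetric int8 quantization, assuming roughly zero-centered values; production inference stacks add calibration, per-channel scales, and zero points.

```python
import numpy as np

# Minimal per-tensor symmetric int8 quantization. Real inference stacks
# add calibration, per-channel scales, and zero points; this only shows
# why 8-bit integer math can stand in for 32-bit floating point.

def quantize(x):
    scale = np.max(np.abs(x)) / 127.0            # map the float range onto int8
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

def int8_matmul(xq, x_scale, wq, w_scale):
    # Multiply 8-bit operands but accumulate in 32-bit integers to avoid
    # overflow, then rescale the product back to floating point.
    acc = xq.astype(np.int32) @ wq.astype(np.int32)
    return acc.astype(np.float32) * (x_scale * w_scale)

rng = np.random.default_rng(1)
x = rng.standard_normal((32, 256)).astype(np.float32)
w = (rng.standard_normal((256, 128)) * 0.05).astype(np.float32)

xq, xs = quantize(x)
wq, ws = quantize(w)
y_int8 = int8_matmul(xq, xs, wq, ws)
y_fp32 = x @ w
print(np.max(np.abs(y_int8 - y_fp32)))   # maximum deviation from the float32 result
```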