Scalable UHD H.264 Encoder - Ultra-High Throughput, Full Motion Estimation engine
Industry Expert Blogs
Imagination Technologies' Upgraded GPUs, New Neural Network Core Provide Deep Learning Processing Options
Inside DSP - BDTI, Oct. 02, 2017
Graphics IP supplier Imagination Technologies has long advocated accelerating edge-based deep learning inference by combining the company's GPU and ISP cores. Its latest-generation graphics architectures continue this trend, enhancing performance while reducing memory bandwidth and capacity requirements in entry-level and mainstream SoCs and the systems built around them. And for more demanding deep learning applications, the company has introduced its first family of neural network coprocessor cores.
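One common way GPU-based inference trims memory bandwidth and capacity is reduced-precision arithmetic such as FP16 (an illustrative assumption here, not a detail taken from the article). The minimal sketch below, written in plain PyTorch with a hypothetical stand-in network rather than Imagination's SDK or tooling, shows how casting a model to FP16 halves the storage, and hence the bus traffic, per weight:

```python
import torch
import torch.nn as nn

# Hypothetical stand-in CNN; any vision model would illustrate the same point.
model = nn.Sequential(
    nn.Conv2d(3, 32, 3, stride=2, padding=1),
    nn.ReLU(),
    nn.Conv2d(32, 64, 3, stride=2, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(64, 10),
).eval()

# Bytes occupied by the weights at FP32.
fp32_bytes = sum(p.numel() * p.element_size() for p in model.parameters())

# Cast weights to FP16: half the storage and half the memory traffic per parameter.
model.half()
fp16_bytes = sum(p.numel() * p.element_size() for p in model.parameters())
print(f"weights: {fp32_bytes} bytes in FP32 vs {fp16_bytes} bytes in FP16")

# Run FP16 inference only where it is well supported (typically on a GPU).
if torch.cuda.is_available():
    model.cuda()
    x = torch.randn(1, 3, 224, 224, dtype=torch.float16, device="cuda")
    with torch.no_grad():
        print(model(x).shape)
```

Dedicated neural network accelerators push the same idea further with even narrower integer formats, but the precision and quantization schemes of Imagination's coprocessor cores are not specified in this excerpt.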
Related Blogs
- NVIDIA Previews Open-source Processor Core for Deep Neural Network Inference
- Ecosystem Collaboration Drives New AMBA Specification for Chiplets
- Shattering the neural network memory wall with Checkmate
- Optimization of AI Performance of SoCs for AD/ADAS
- Making the most of Arm NN for GPU inference: FP16 and FastMath