Optimizing AI models for Arm Ethos-U NPUs using the NVIDIA TAO Toolkit
Arm Blogs - Amogh Dabholkar, Arm | Oct. 23, 2023
Optimizations achieve up to 4X increase in inference throughput with 3X memory reduction
The proliferation of AI at the edge offers several advantages including decreased latency, enhanced privacy, and cost-efficiency. Arm has been at the forefront of this development, with a focus on delivering advanced AI capabilities at the edge across its Cortex-A and Cortex-M CPUs and Ethos-U NPUs. However, this space continues to expand rapidly, presenting challenges for developers looking to enable easy deployment on billions of edge devices.
One such challenge is developing deep learning models for edge devices: developers must work within limited storage, memory, and compute budgets while still balancing model accuracy against run-time metrics such as latency or frame rate. An off-the-shelf model designed for a more powerful platform may run slowly, or not at all, when deployed on a more resource-constrained platform.
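To make the resource trade-off concrete, here is a minimal sketch of one common optimization step for Ethos-U class NPUs: post-training int8 quantization with TensorFlow Lite. This is not the NVIDIA TAO Toolkit workflow described in the full post; the SavedModel path, input shape, and calibration data below are placeholders chosen for illustration.

```python
# Minimal sketch: post-training int8 quantization with TensorFlow Lite,
# a common step when preparing a model for Ethos-U class NPUs.
# "saved_model_dir" and the calibration generator are placeholders.
import numpy as np
import tensorflow as tf

saved_model_dir = "my_model/"  # placeholder: path to a trained SavedModel

def representative_data_gen():
    # Placeholder calibration samples; use real input data in practice.
    for _ in range(100):
        yield [np.random.rand(1, 224, 224, 3).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_data_gen
# Restrict the graph to int8 ops so it can be fully offloaded to the NPU.
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8

tflite_model = converter.convert()
with open("model_int8.tflite", "wb") as f:
    f.write(tflite_model)

# The quantized .tflite can then be compiled for an Ethos-U target with the
# Vela compiler, e.g.: vela model_int8.tflite --accelerator-config ethos-u55-128
```

Quantizing to int8 shrinks weights roughly 4x relative to float32 and produces an operator set the NPU can execute directly, which is why it is typically the first step before compiling for Ethos-U.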
Related Blogs
- Extending Arm Total Design Ecosystem to Accelerate Infrastructure Innovation
- Ecosystem Collaboration Drives New AMBA Specification for Chiplets
- Intel Embraces the RISC-V Ecosystem: Implications as the Other Shoe Drops
- intoPIX TicoRAW improves RAW image workflows and camera designs
- ARM vs RISC-V: Beginning of a new era