NPU IP for Embedded AI
Ceva-NeuPro-Nano is not an AI accelerator and does not require a host CPU/DSP to operate. Rather, it includes all the processing elements of a standalone NPU, including code execution and memory management. The Ceva-NeuPro-Nano embedded AI NPU architecture is fully programmable and efficiently executes neural networks, feature extraction, control code, and DSP code, and it supports the most advanced machine learning data types and operators, including native transformer computation, sparsity acceleration, and fast quantization. This optimized, self-sufficient architecture enables Ceva-NeuPro-Nano NPUs to deliver superior power efficiency, a smaller silicon footprint, and better performance than the existing processor solutions used for TinyML workloads, which combine a CPU or DSP with an AI accelerator.
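The "fast quantization" mentioned above refers to the low-precision integer arithmetic that dominates TinyML inference. As a rough, vendor-neutral illustration (not Ceva's SDK or API; the helper names and scale values below are assumptions for the example), here is the standard affine int8 quantization scheme that NPUs of this class accelerate natively:

```c
#include <stdint.h>
#include <math.h>
#include <stdio.h>

/* Standard affine (asymmetric) int8 quantization, as used by TinyML
 * frameworks such as TensorFlow Lite Micro:
 *   q = clamp(round(x / scale) + zero_point, -128, 127)
 *   x ~= (q - zero_point) * scale
 * Illustrative sketch only; not Ceva's implementation.
 */

static int8_t quantize(float x, float scale, int32_t zero_point) {
    int32_t q = (int32_t)lrintf(x / scale) + zero_point;
    if (q < -128) q = -128;   /* clamp to int8 range */
    if (q > 127)  q = 127;
    return (int8_t)q;
}

static float dequantize(int8_t q, float scale, int32_t zero_point) {
    return ((int32_t)q - zero_point) * scale;
}

int main(void) {
    /* Example: map the float range [-1.0, 1.0] onto int8. */
    const float scale = 2.0f / 255.0f;   /* (max - min) / (qmax - qmin) */
    const int32_t zero_point = 0;

    float x = 0.5f;
    int8_t q = quantize(x, scale, zero_point);
    printf("x=%f -> q=%d -> x'=%f\n",
           (double)x, q, (double)dequantize(q, scale, zero_point));
    return 0;
}
```

Executing networks in int8 (or narrower) form is what lets a self-sufficient NPU keep both the memory footprint and the energy per inference small enough for always-on embedded use.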
Block Diagram of the NPU IP for Embedded AI IP Core
NPU IP
- General Purpose Neural Processing Unit (NPU)
- NPU IP family for generative and classic AI with the highest power efficiency; scalable and future-proof
- AI accelerator (NPU) IP - 16 to 32 TOPS
- AI accelerator (NPU) IP - 1 to 20 TOPS
- AI accelerator (NPU) IP - 32 to 128 TOPS
- ARC NPX Neural Processing Unit (NPU) IP supports the latest, most complex neural network models and addresses demands for real-time compute with ultra-low power consumption for AI applications