AI Accelerator: Neural Network-specific Optimized 1 TOPS
The Origin E1 neural engines use Expedera’s unique packet-based architecture, which enables parallel execution across multiple layers, achieving better resource utilization and deterministic performance. This innovative approach significantly increases performance while lowering power, area, and latency.
The Origin E1 family supports many common neural networks, and combinations of them, including ResNet-50 V1, EfficientNet, NanoDet, Tiny YOLOv3, MobileNet V1, MobileNet SSD, BERT, CenterNet, U-Net, and many others.
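The 1 TOPS headline figure can be turned into a rough throughput ceiling for one of the listed networks. The sketch below is a generic back-of-the-envelope calculation, not an Expedera benchmark: the ResNet-50 per-inference operation count (~7.7 GOPs, counting a multiply-accumulate as two ops) and the 80% sustained-utilization figure are public approximations and assumptions, not vendor data.

```python
# Hypothetical back-of-the-envelope throughput estimate for a 1 TOPS NPU.
# Op counts and utilization are assumptions, not Expedera figures.

def theoretical_fps(tops: float, ops_per_inference: float,
                    utilization: float = 1.0) -> float:
    """Upper-bound inferences/second for an accelerator rated at `tops` TOPS."""
    return tops * 1e12 * utilization / ops_per_inference

# ResNet-50 V1 is commonly quoted at ~3.86 GMACs, i.e. ~7.7 GOPs per 224x224 image.
RESNET50_OPS = 7.7e9

fps = theoretical_fps(tops=1.0, ops_per_inference=RESNET50_OPS, utilization=0.8)
print(f"~{fps:.0f} inferences/s")  # → ~104 inferences/s at 80% utilization
```

Real sustained throughput depends on quantization, batch size, and memory bandwidth, so a figure like this only bounds what the rated TOPS could deliver.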
Related AI accelerator IP
- AI accelerator (NPU) IP - 16 to 32 TOPS
- AI accelerator (NPU) IP - 1 to 20 TOPS
- AI accelerator (NPU) IP - 32 to 128 TOPS
- Deeply Embedded AI Accelerator for Microcontrollers and End-Point IoT Devices
- Performance Efficiency Leading AI Accelerator for Mobile and Edge Devices
- High-Performance Edge AI Accelerator