AI processing engine for Wake Word, Voice Commands, Acoustic Event Detection, Speaker ID and Sensors
The AON1000™ IP is part of the AONVoice™ processor family, AON's application-specific processors for deep neural network inferencing at the edge. Unlike general-purpose processors, DSPs, and dedicated processors that rely on third-party AI algorithms, AON's processors optimize accuracy at ultra-low power by embedding proprietary, use-case-specific neural network architectures and integrating tuned inference algorithms. AON processors also support training with a unique data augmentation tool.
The AON1000™ compact AI processing engine delivers the industry's highest hit-rate accuracy per microwatt under real-world, noisy conditions.
The AON1000™ hardware IP can be integrated into a standalone chip or into a sensor such as a microphone, allowing the application processor to remain idle during always-on listening.
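As a rough illustration of that split, the sketch below shows how application-processor firmware might stay in deep sleep while the in-sensor engine listens, waking only on a detection interrupt. All function names, the interrupt line, and the event-reporting call are hypothetical placeholders, not AONDevices' actual API.

```c
#include <stdbool.h>
#include <stdint.h>

/* Platform hooks -- hypothetical placeholders, not the AONDevices API. */
void     platform_irq_attach(uint32_t irq, void (*handler)(void));
void     platform_enter_deep_sleep(void);
uint32_t aon1000_read_detection_class(void);   /* which wake word / command / event fired */
void     app_handle_detection(uint32_t event_id);

#define AON1000_WAKE_IRQ 17u   /* assumed interrupt line from the in-sensor engine */

static volatile bool wake_event_pending = false;

/* ISR raised when the always-on engine detects a wake word or acoustic event. */
static void aon1000_wake_isr(void)
{
    wake_event_pending = true;
}

int main(void)
{
    platform_irq_attach(AON1000_WAKE_IRQ, aon1000_wake_isr);

    for (;;) {
        /* The application processor idles in deep sleep; only the
         * always-on engine inside the sensor keeps listening. */
        platform_enter_deep_sleep();

        if (wake_event_pending) {
            wake_event_pending = false;
            app_handle_detection(aon1000_read_detection_class());
        }
    }
}
```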
AONDevices also offers the AON1000 algorithm as software for porting to a third-party DSP in less power-sensitive applications.