CEVA Adds ONNX Support to CDNN Neural Network Compiler
MOUNTAIN VIEW, Calif., Oct. 24, 2018 -- CEVA, Inc. (NASDAQ: CEVA), the leading licensor of signal processing platforms and artificial intelligence processors for smarter, connected devices, today announced that the latest release of its award-winning CEVA Deep Neural Network (CDNN) compiler supports the Open Neural Network Exchange (ONNX) format.
"CEVA is fully committed to ensuring an open, interoperable AI ecosystem, where AI application developers can take advantage of the features and ease-of-use of the various deep learning frameworks most suitable to their specific use case," said Ilan Yona, vice president and general manager of CEVA's Vision Business Unit. "By adding ONNX support to our CDNN compiler technology, we provide our CEVA-XM and NeuPro customers and ecosystem partners with much broader capabilities to train and enrich their neural network-based applications."
ONNX is an open format created by Facebook, Microsoft and AWS to enable interoperability and portability within the AI community, allowing developers to use the right combination of tools for their project without being 'locked in' to any one framework or ecosystem. The ONNX standard ensures interoperability between different deep learning frameworks, giving developers complete freedom to train their neural networks in one machine learning framework and then deploy them using another. With support for ONNX, CDNN now enables developers to import models generated with any ONNX-compatible framework and deploy them on CEVA-XM vision DSPs and NeuPro AI processors.
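As a hedged illustration of this framework-to-framework workflow (not part of the CDNN toolchain itself), the short Python sketch below shows how a model trained in PyTorch might be exported to the ONNX format; the network and file name are placeholders, and the resulting .onnx file is what an ONNX-compatible compiler such as CDNN would then import.

    # Hypothetical sketch: export a PyTorch-trained network to the framework-neutral
    # ONNX format so it can be imported by any ONNX-compatible toolchain.
    import torch
    import torchvision

    # Placeholder network -- in practice this would be the developer's own trained model.
    model = torchvision.models.mobilenet_v2(pretrained=True)
    model.eval()

    # Dummy input matching the expected input shape (batch, channels, height, width).
    dummy_input = torch.randn(1, 3, 224, 224)

    # Write the graph and weights to an ONNX file.
    torch.onnx.export(model, dummy_input, "mobilenet_v2.onnx", opset_version=11)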
About CDNN
The CEVA Deep Neural Network (CDNN) is a comprehensive compiler technology that creates fully optimized runtime software for CEVA-XM Vision DSPs and NeuPro AI processors. Targeted at mass-market embedded devices, CDNN combines a broad range of network optimizations, advanced quantization algorithms, data flow management and fully optimized CNN and RNN compute libraries into a holistic solution that enables cloud-trained AI models to be deployed on edge devices for inference processing.
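CDNN's own import and offline-compilation steps are vendor-specific and not documented here; as a minimal sketch of the kind of pre-flight check a developer might run on a cloud-trained model before handing it to such a compiler, the Python snippet below uses the open-source onnx package to load a model and verify that its graph conforms to the ONNX specification (the file name is a placeholder).

    # Hypothetical pre-flight check on an exported model before passing it to an
    # ONNX-compatible compiler toolchain.
    import onnx

    # Load the serialized model (placeholder file name).
    model = onnx.load("mobilenet_v2.onnx")

    # Verify the graph is well-formed against the ONNX specification.
    onnx.checker.check_model(model)

    # Print a human-readable summary of the graph's operators and tensors.
    print(onnx.helper.printable_graph(model.graph))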
Based on the CEVA-XM and NeuPro architectures, CEVA supplies a full development platform that enables partners and developers to build deep learning applications with CDNN, targeting any advanced network. For more information, please visit https://www.ceva-dsp.com/product/ceva-deep-neural-network-cdnn/.
About CEVA, Inc.
CEVA is the leading licensor of signal processing platforms and artificial intelligence processors for a smarter, connected world. We partner with semiconductor companies and OEMs worldwide to create power-efficient, intelligent and connected devices for a range of end markets, including mobile, consumer, automotive, industrial and IoT. Our ultra-low-power IPs for vision, audio, communications and connectivity include comprehensive DSP-based platforms for LTE/LTE-A/5G baseband processing in handsets, infrastructure and cellular IoT-enabled devices; advanced imaging and computer vision for any camera-enabled device; and audio/voice/speech and ultra-low-power always-on/sensing applications for multiple IoT markets. For artificial intelligence, we offer a family of AI processors capable of handling the complete gamut of neural network workloads, on-device. For connectivity, we offer the industry's most widely adopted IPs for Bluetooth (low energy and dual mode) and Wi-Fi (Wi-Fi 4 (802.11n), Wi-Fi 5 (802.11ac) and Wi-Fi 6 (802.11ax) up to 4x4). Visit us at www.ceva-dsp.com.