OPENEDGES Announces the Industry's First 4-/8-bit Mixed-Precision Neural Network Processing Unit IP
Seoul, South Korea, January 12, 2022 --- OPENEDGES Technology, Inc., the world's leading memory system and AI platform IP provider, today announced the first commercial release of its mixed-precision (4-/8-bit) NPU IP, ENLIGHT™. When used with other OPENEDGES IP solutions, ENLIGHT™ delivers unparalleled efficiency in power consumption, area, and DRAM bandwidth.
ENLIGHT™, a high-performance neural network processor IP, features a highly optimized network model compiler that reduces the DRAM traffic generated by intermediate activation data through grouped layer partitioning and scheduling. It also supports load-balancing partitioning for multi-core NPU configurations. Single-core performance ranges from 0.5 to 5 TOPS in 8-bit mode and from 1 to 16 TOPS in 4-bit mode, and the core can be scaled across multiple cores to meet higher performance requirements, making it well suited to AI inference applications.
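The scaling arithmetic implied by these figures can be sketched in a few lines. This is an illustrative model only: the function name, the table, and the assumption of perfectly linear multi-core scaling are ours, not OPENEDGES specifications.

```python
# Per-core throughput ranges quoted in the release, keyed by precision mode.
SINGLE_CORE_TOPS = {8: (0.5, 5.0), 4: (1.0, 16.0)}  # bits -> (min, max) TOPS

def aggregate_tops(cores: int, bits: int) -> tuple:
    """Best-case multi-core throughput range, assuming linear scaling
    across cores (an illustrative assumption, not a vendor claim)."""
    lo, hi = SINGLE_CORE_TOPS[bits]
    return cores * lo, cores * hi

# Under this idealized model, a four-core configuration in 4-bit mode
# would range from 4 to 64 TOPS.
```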
With the industry's first adoption of 4-/8-bit mixed quantization, customers can configure ENLIGHT™ at different core sizes and performance points for their target market applications and achieve significant gains in area, power, performance, and DRAM bandwidth. In a mixed-quantization test on YOLO v3, 4-bit quantization could be applied to up to 65% of the convolution layers without any loss of accuracy.
4-/8-bit Mixed Quantization Test by YOLO v3
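The per-layer selection behind such mixed quantization can be sketched as follows. This is a generic reconstruction of the technique, not OPENEDGES' compiler; the function names, the error metric, and the threshold are all hypothetical.

```python
# Illustrative per-layer 4-/8-bit precision assignment: quantize each
# layer's weights to 4 bits, keep 4-bit where the error stays small,
# and fall back to 8-bit elsewhere. All names/values are hypothetical.

def quantize(values, bits):
    """Symmetric uniform quantization: round each value to the nearest
    level of a signed `bits`-bit integer grid, then dequantize."""
    qmax = 2 ** (bits - 1) - 1              # 7 for 4-bit, 127 for 8-bit
    scale = max(abs(v) for v in values) / qmax or 1.0
    return [round(v / scale) * scale for v in values]

def mean_abs_error(a, b):
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

def choose_precision(layer_weights, threshold=0.05):
    """Assign 4 bits to layers whose quantization error stays under the
    (hypothetical) threshold, and 8 bits to the rest."""
    return {name: 4 if mean_abs_error(w, quantize(w, 4)) < threshold else 8
            for name, w in layer_weights.items()}
```

In a real flow the decision would be driven by end-to-end accuracy on a validation set rather than per-layer weight error, but the shape of the search is the same.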
ENLIGHT™ is a production-proven IP that has been licensed across a number of verticals, including IP cameras, IoT (Internet of Things), and ADAS. Customers license ENLIGHT™ together with the memory system IP, ORBIT™; the combined solution makes highly efficient use of DRAM bandwidth. OPENEDGES' memory subsystem IP, ORBIT™, consists of a NoC (network-on-chip), a DDR controller, and a DDR PHY. The DDR controller (OMC™) supports DDR3/4, LPDDR3/4/4x/5/5x, and GDDR6. The DDR PHY (OPHY) supports LPDDR4/4x/5/5x and GDDR6. Because both are built to JEDEC standards, they integrate easily with each other on an SoC (system-on-chip).
OPENEDGES AI Computing Platform Architecture
"We are incredibly excited to release the updated NPU in the market," said Sean Lee, CEO of OPENEDGES. "The requirements for NPU rapidly change and are complicated. However, we firmly believe that our industry-leading 4-/8-bit mixed-precision scalable architecture can meet all the dynamic market requirements. ENLIGHT™ is already production-proven, and OPENEDGES has supported a complete stack of SW and SDK with memory subsystem IP, ORBIT™. Consequently, our customers’ efforts to build high-performance and highly optimized SoC exactly fit with what we provide.”
About OPENEDGES
OPENEDGES Technology is the world's only company offering a total memory subsystem and AI platform IP solution, delivering NPU, memory controller, DDR PHY, and on-chip interconnect IPs.
OPENEDGES is recognized for world-class IPs with the highest levels of efficiency in power consumption, area, and DRAM utilization. These IPs and the proprietary technology behind them shorten the customer's design and verification process by delivering the only market- and silicon-proven integrated IP solutions.
The two key technologies of OPENEDGES are memory systems and AI computing, which together provide a sorely needed boost in performance, efficiency, and reliability for IoT.
- ORBIT™:
- DDR memory controller IP: supports DDR3/4, LPDDR3/4/4x/5/5x, and GDDR6
- DDR PHY: supports LPDDR4/4x/5/5x and GDDR6
- NoC bus interconnect IP: non-coherent NoC available
Learn more about OPENEDGES at www.openedges.com or contact sales@openedges.com directly.