The benefit of non-volatile memory (NVM) for edge AI
Eran Briman, Weebit Nano
embedded.com (September 15, 2023)
In low-power IoT and edge AI applications, AI models can be small enough to fit into the internal NVM of an SoC. The on-chip NVM can then hold both the CPU firmware and the AI model weights.
Ongoing innovation in semiconductor technologies, algorithms and data science is making it possible to incorporate some degree of AI inferencing capability in an increasing number of edge devices. Today we see it in computer vision applications like object recognition, facial recognition, and image classification on products from phones and laptops to security cameras. In industrial systems, inferencing enables predictive equipment maintenance and lets robots perform tasks independently. For IoT and smart home products, AI inference makes it possible to monitor and respond in real time to various sensor inputs.
The lowest-cost processing solutions that support AI inferencing today are off-the-shelf single-chip microcontrollers used for IoT systems. Such chips combine a general-purpose CPU, SRAM and I/O functions with non-volatile memory (NVM). However, these chips implement the AI algorithms in software running on the CPU, which delivers only modest performance and is practical only for basic inference. Scaling a single-chip solution to provide higher-performance inference presents a challenge to designers.
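To make the performance limitation concrete, here is a hedged sketch of what software inference on a general-purpose MCU CPU looks like: one small fully connected layer with int8 weights, where a right shift stands in for real requantization scaling. The layer sizes, weights, and shift amount are illustrative assumptions. Every output costs a full row of multiply-accumulates executed one at a time on the CPU, which is why this approach stays practical only for small models.

```c
#include <stdint.h>

#define IN  4
#define OUT 2

/* Illustrative int8 weights and int32 biases for a 4-in, 2-out layer. */
static const int8_t  W[OUT][IN] = {
    { 10, -3,  7,  1 },
    { -2,  8, -5,  4 },
};
static const int32_t B[OUT] = { 16, -8 };

/* One dense layer + ReLU in plain software: with no MAC hardware or
 * accelerator, this inner loop IS the inference engine. */
void dense_relu(const int8_t *x, int8_t *y)
{
    for (int o = 0; o < OUT; o++) {
        int32_t acc = B[o];
        for (int i = 0; i < IN; i++)
            acc += (int32_t)W[o][i] * x[i];   /* IN MACs per output */
        if (acc < 0) acc = 0;                 /* ReLU */
        acc >>= 4;                            /* crude requantization */
        y[o] = (int8_t)(acc > 127 ? 127 : acc);
    }
}
```

A model with thousands of such outputs multiplies this serial MAC count accordingly, which is the scaling wall the article refers to.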