The benefit of non-volatile memory (NVM) for edge AI

Eran Briman, Weebit Nano

In low-power IoT and edge AI applications, AI models can be small enough to fit into the internal NVM of an SoC. The on-chip NVM can then serve double duty, holding both the CPU firmware and the AI model weights.
Ongoing innovation in semiconductor technologies, algorithms and data science is making it possible to incorporate some degree of AI inferencing capability in a growing number of edge devices. Today we see it in computer vision applications such as object recognition, facial recognition and image classification, on products ranging from phones and laptops to security cameras. In industrial systems, inferencing enables predictive equipment maintenance and lets robots perform tasks independently. For IoT and smart home products, AI inference makes it possible to monitor and respond in real time to a variety of sensor inputs.

The lowest-cost processing solutions that support AI inferencing today are off-the-shelf single-chip microcontrollers used for IoT systems. Such chips combine a general-purpose CPU, SRAM and I/O functions with non-volatile memory (NVM). However, these chips implement the AI algorithms in software running on the CPU, which delivers only modest performance and is practical only for basic inference. Scaling a single-chip solution to provide higher-performance inference presents a challenge to designers.
All material on this site Copyright © 2017 Design And Reuse S.A. All rights reserved.