ReRAM-Powered Edge AI: A Game-Changer for Energy Efficiency, Cost, and Security
Weebit Nano Blog - Eran Briman, Weebit Nano, Mar. 28, 2025
In AI inference, trained models apply what they have learned to make predictions and decisions. To achieve lower latency and better security, the industry is steadily shifting AI inference to the edge – processing data on the device rather than sending it back and forth to the cloud – for a wide range of applications.
Because edge devices are often small, battery-powered, and resource-constrained, the compute resources that enable on-device inference, along with the associated memories, must be ultra-low-power and low-cost. This is a challenge for AI workloads, which are notoriously power-hungry.
The industry has been making progress towards lower-power computation largely by moving to more advanced process nodes, which deliver more performance with greater energy efficiency in a smaller silicon area. However, non-volatile memories (NVMs) haven't been able to scale to advanced nodes along with logic: today's most advanced chips are manufactured at 3nm, while embedded flash memory cannot scale below 28nm. As a result, NVM and AI engines are often manufactured at very different process nodes and can't be integrated on the same silicon die.
This is one of many reasons why the industry is exploring new memory technologies like Weebit ReRAM (RRAM).