A Complete No-Brainer: ReRAM for Neuromorphic Computing
Weebit Nano Blog - Giuseppe Piccolboni, Weebit Nano
Jun. 05, 2024
Over the last 60 years, technology has evolved at such an exponential rate that we now regularly converse with AI-based chatbots, and that same OpenAI technology has been put into a humanoid robot. It’s truly amazing to see this rapid development.
Continued advancement of AI faces numerous challenges. One of these is computing architecture. Since it was first described in 1945, the von Neumann architecture has been the foundation of most computing. In this architecture, instructions and data are stored together in memory and transferred to the CPU over a shared bus. This has enabled many decades of continuous technological advancement.
However, such an architecture creates bottlenecks in bandwidth, latency, power consumption, and security, to name a few. For continued AI development, we can’t just make brute-force adjustments to this architecture. What’s needed is an evolution to a new computing paradigm that bypasses the bottlenecks inherent in the traditional von Neumann architecture and more precisely mimics the system AI is trying to imitate: the human brain.
To achieve this, memory must be closer to the compute engine for improved efficiency and lower power consumption. Even better, computation should be done directly within the memory itself. This paradigm change requires new technology, and ReRAM (or RRAM) is among the most promising candidates for future in-memory computing architectures.
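To make the in-memory computing idea concrete, here is a minimal, idealized sketch (not Weebit Nano’s implementation, and deliberately ignoring device non-idealities): a ReRAM crossbar stores a weight matrix as cell conductances, and applying input voltages produces row currents that are exactly the matrix-vector product, so the multiply-accumulate happens where the data lives instead of shuttling weights over a bus. All sizes and values below are hypothetical.

```python
import numpy as np

# Idealized ReRAM crossbar: each cell stores a weight as a conductance G[i][j].
# Applying read voltages V[j] to the columns yields row currents
# I[i] = sum_j G[i][j] * V[j] (Ohm's law plus Kirchhoff's current law),
# i.e. an analog matrix-vector multiply performed inside the memory array.

rng = np.random.default_rng(0)

# Hypothetical 4x8 crossbar: conductances quantized to a few levels to
# reflect the limited number of stable resistance states in a real cell.
levels = np.linspace(10e-6, 100e-6, num=8)   # conductance levels, in siemens
G = rng.choice(levels, size=(4, 8))          # programmed weight matrix

# Input activations encoded as small read voltages (kept low so the stored
# states would not be disturbed in a real device).
V = rng.uniform(0.0, 0.2, size=8)            # volts

# Analog accumulation along each row: one multiply-accumulate per cell,
# computed in place rather than after moving weights to a separate CPU.
I = G @ V                                    # resulting row currents, in amps

print("Row currents (A):", I)
```

In a physical array the same operation completes in a single read step for the whole matrix, which is why this style of in-memory computing is attractive for the dense matrix math at the heart of neural networks.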