RaiderChip unveils its fully Hardware-Based Generative AI Accelerator: The GenAI NPU
The new embedded accelerator boosts inference speed by 2.4x, combining complete privacy and autonomy with a groundbreaking innovation: it eliminates the need for CPUs.
Spain, January 27, 2025 -- RaiderChip has officially launched the GenAI NPU, a fully hardware-based accelerator that sets new standards for efficiency and scalability in Generative AI. The GenAI NPU retains the key features of its predecessor, the GenAI v1: offline operation and autonomous functionality.
Additionally, it becomes fully stand-alone by embedding all Large Language Model (LLM) operations directly in hardware, thereby eliminating the need for a CPU.
RaiderChip GenAI NPU running the Llama 3.2 1B LLM model and streaming its output to a terminal
Thanks to its fully hardware-based design, the GenAI NPU achieves unprecedented levels of efficiency, unattainable by hybrid designs. According to RaiderChip CTO Victor Lopez: “By eliminating latency caused by hardware-software communication, we achieve superior performance while removing external dependencies, such as CPUs. The performance that you see is what you will get, regardless of the target electronic system where the accelerator is integrated. This improves energy efficiency and ensures fully predictable performance—advantages which make the GenAI NPU the ideal solution for embedded systems.”
Furthermore, the new design increases token generation speed per unit of available memory bandwidth by 2.4x, enabling the use of more cost-efficient memories such as DDR or LPDDR instead of expensive options like HBM while still achieving excellent performance. It also delivers equivalent results with fewer components, reducing size, cost, and energy consumption. These features enable more affordable and sustainable generative AI solutions, with a faster return on investment and seamless integration into a variety of products tailored to different needs.
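To see why tokens per unit of memory bandwidth is the relevant metric, note that LLM decoding is typically memory-bandwidth bound: generating each token requires streaming roughly the full set of model weights from memory. The sketch below is purely illustrative and is not RaiderChip's method or data; the bandwidth, model size, and efficiency figures are hypothetical assumptions chosen only to show how better bandwidth utilization translates into higher token throughput on the same memory.

```python
# Illustrative, memory-bound throughput model for LLM decoding.
# All numbers are hypothetical assumptions, not vendor figures.

def tokens_per_second(bandwidth_gb_s: float, model_bytes: float, efficiency: float) -> float:
    """Upper-bound decode throughput when weight streaming dominates.

    bandwidth_gb_s: sustained memory bandwidth in GB/s
    model_bytes:    bytes read per generated token (roughly the weight footprint)
    efficiency:     fraction of peak bandwidth actually used (0..1)
    """
    return (bandwidth_gb_s * 1e9 * efficiency) / model_bytes

# Hypothetical example: a ~1B-parameter model quantized to ~1.1 GB of weights
# on a single LPDDR channel delivering ~25 GB/s.
model_bytes = 1.1e9
baseline = tokens_per_second(25, model_bytes, efficiency=0.30)
improved = tokens_per_second(25, model_bytes, efficiency=0.30 * 2.4)  # 2.4x better use of the same bandwidth

print(f"baseline : {baseline:5.1f} tokens/s")
print(f"improved : {improved:5.1f} tokens/s")
```

Under these assumed numbers, a 2.4x gain in bandwidth utilization lifts throughput from roughly 7 to about 16 tokens per second without changing the memory itself, which is why the same optimization also permits cheaper DDR or LPDDR parts in place of HBM.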
With this innovation, RaiderChip strengthens its strategy of offering optimized solutions based on affordable hardware, designed to bring generative AI to the Edge. These solutions ensure complete privacy and security for applications thanks to their ability to operate entirely offline and on-premises, while eliminating dependence on the cloud and recurring monthly subscriptions.