RaiderChip NPU for LLM at the Edge supports DeepSeek-R1 reasoning models
The rise of optimized reasoning models, capable of matching the performance of massive solutions like ChatGPT, strengthens RaiderChip’s commitment to AI acceleration through its affordable and high-performance edge devices.
Spain, February 17th, 2025 -- RaiderChip, a fabless semiconductor company specializing in hardware acceleration for Generative Artificial Intelligence, has added the DeepSeek-R1 family of reasoning LLMs to the growing list of models supported on its GenAI NPU accelerator. Thanks to the flexibility of its hardware design, users can swap LLM models on the fly. This integration marks a significant step forward in local Generative AI inference, combining RaiderChip’s architecture, optimized for affordable devices, with the outstanding computational efficiency of DeepSeek-R1.
The DeepSeek-R1 LLM family, developed in China, has recently revolutionized the industry and stands out for its exceptional balance between operational cost and cognitive performance. Despite its compact design, it outperforms larger models in efficiency and capability, challenging the traditional strategy of massive proprietary LLMs that rely on cloud-based infrastructure.
The future of Artificial Intelligence is moving toward more compact, optimized, and specialized models that can run at the Edge, reducing the high costs of inference. Víctor López, CTO of RaiderChip, highlights: “By combining our stand-alone hardware NPU semiconductors with all of DeepSeek-R1’s distilled models, we provide our customers with exceptional performance without relying on costly cloud infrastructure. Additionally, we offer greater independence, security and privacy for their solutions, guaranteeing AI-service availability and low-latency, supporting the customization of extraordinarily intelligent models, ultimately enabling the highest-performing AI Agents at the Edge”.