RaiderChip launches its Generative AI hardware accelerator for LLM models on low-cost FPGAs
The startup pioneers Edge Generative AI inference on small devices, thanks to the efficiency of its AI accelerator IP core: the GenAI v1
Spain, June 4th, 2024 -- The company, which recently announced its first Generative AI Hardware accelerator, goes one step further, offering a turn-key solution for LLM inference now available on a wide range of low-cost FPGA devices.
RaiderChip GenAI v1 running the Phi-2 LLM model on a Versal FPGA with a single Memory Controller
RaiderChip’s GenAI v1 design leverages 32-bit floating-point arithmetic, which provides full precision and allows direct use of the original LLM model weights, without any modification or quantization. This preserves the full intelligence and reasoning capabilities of the raw LLM models, as their creators intended.
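To see what skipping quantization buys, the sketch below (illustrative only, not RaiderChip code) round-trips a random FP32 weight tensor through a typical 8-bit symmetric quantization and measures the error introduced — the kind of information loss the GenAI v1 avoids by computing directly on the original FP32 weights:

```python
import numpy as np

# Illustrative sketch: original FP32 weights vs. an int8 round-trip.
rng = np.random.default_rng(0)
weights = rng.standard_normal(1024).astype(np.float32)

scale = np.abs(weights).max() / 127.0            # symmetric int8 scale
quantized = np.round(weights / scale).astype(np.int8)
dequantized = quantized.astype(np.float32) * scale

error = np.abs(weights - dequantized).max()
print(f"max round-trip error: {error:.6f}")      # non-zero: precision lost
```

Per-tensor int8 keeps the model small and fast, but the reconstruction error above is exactly what full-precision inference sidesteps.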
This full precision is coupled with real-time AI LLM inference speeds: “Our design’s efficiency edge allows customers to run unquantized LLM models at full interactive speed, on limited memory bandwidths where competitors are more than 20% slower, and especially faster than CPU-based inference solutions”, explains RaiderChip’s team.
The GenAI v1 IP core is already available for FPGAs of every sub-family in the AMD Versal FPGA line-up, as well as earlier UltraScale series devices, and more: “Our IP cores are target-agnostic, and can also be implemented on devices from different FPGA vendors, following the customer’s requirements for logic resources and inference speed,” the team highlights.
A standout feature of RaiderChip’s solutions is the plug’n’play nature of its IP cores, which use only a minimal number of industry-standard AXI interfaces. With the provided IP blocks, the GenAI v1 becomes a simple peripheral, fully controllable from the customer’s software.
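From software, a peripheral behind an AXI-Lite port typically appears as a small block of memory-mapped registers. The sketch below is purely hypothetical — the register offsets, names, and base address are invented for illustration and are not RaiderChip’s actual register map; a list stands in for the mapped FPGA address window so the sketch runs on a host PC:

```python
# Hypothetical sketch of driving an AXI-Lite-controlled accelerator.
# Offsets and semantics are invented for illustration only.
REG_CTRL = 0      # bit 0: start inference
REG_STATUS = 1    # bit 0: busy
REG_PROMPT = 2    # DDR address of the tokenized prompt
REG_OUTPUT = 3    # DDR address for generated tokens

regs = [0, 0, 0, 0]   # stand-in for the memory-mapped AXI window

def start_inference(prompt_addr: int, output_addr: int) -> None:
    """Program the (hypothetical) registers and kick off the accelerator."""
    regs[REG_PROMPT] = prompt_addr
    regs[REG_OUTPUT] = output_addr
    regs[REG_CTRL] |= 1          # set the start bit

start_inference(0x8000_0000, 0x9000_0000)
print(f"ctrl={regs[REG_CTRL]} prompt=0x{regs[REG_PROMPT]:08x}")
```

On real hardware, the list would be replaced by volatile reads/writes to the AXI window (e.g. via `mmap` on `/dev/mem` under Linux), but the control flow — program addresses, set a start bit, poll a status bit — is the same.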
The introduction of FPGAs for Generative AI acceleration expands the available options for local AI inference of LLM models. Furthermore, their reprogrammable nature makes them ideal amid the explosive innovation in the AI field, where new models and algorithmic upgrades appear on a weekly basis: FPGAs allow field updates of already deployed systems.
More information at https://raiderchip.ai/technology/hardware-ai-accelerators