NEUCHIPS Announces World's First Deep Learning Recommendation Model (DLRM) Accelerator: RecAccel
SAN JOSE, Calif., June 2, 2020 -- Today, NEUCHIPS Corp., an AI compute company specializing in domain-specific accelerator solutions, announced the world's first recommendation engine - RecAccel™ - that can perform 500,000 inferences per second. Running the open-source PyTorch DLRM, RecAccel™ outperforms a server-class CPU and an inference GPU by 28X and 65X, respectively. It is equipped with an ultra-high-capacity, high-bandwidth memory subsystem for embedding table lookup and a massively parallel compute FPGA for neural network inference. Via a PCIe Gen3 host interface, RecAccel™ is ready for data center adoption.
RecAccel™ boosts DLRM inference performance through the following innovations:
- Embedding-specific memory architecture, allocation and access scheme.
- Application-specific processing pipeline.
- Scalable Multiply-And-Accumulator (MAC) array.
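To see why these innovations target embedding memory and a MAC array specifically, consider the two-phase shape of a DLRM-style inference: an irregular, memory-bandwidth-bound gather from large embedding tables, followed by a compute-bound dense layer. The sketch below (plain NumPy, with tiny hypothetical table sizes and a single hypothetical dense layer standing in for DLRM's MLPs) is illustrative only, not the RecAccel™ implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical, tiny sizes for illustration; production DLRM embedding
# tables can hold millions of rows, which is why the accelerator pairs a
# high-capacity memory subsystem (for lookups) with a MAC array (for the MLP).
NUM_TABLES, ROWS, DIM = 3, 100, 4

tables = [rng.standard_normal((ROWS, DIM)) for _ in range(NUM_TABLES)]
mlp_w = rng.standard_normal((DIM * (NUM_TABLES + 1), 1))  # one dense layer

def dlrm_like_inference(dense_x, sparse_ids):
    """One memory-bound gather phase, then one MAC-bound dense phase."""
    # Phase 1: embedding lookups -- irregular, memory-bandwidth bound.
    emb = [tables[t][i] for t, i in enumerate(sparse_ids)]
    # Phase 2: concatenate features and apply a dense layer -- MAC-array bound.
    features = np.concatenate([dense_x] + emb)
    return float(features @ mlp_w)

score = dlrm_like_inference(rng.standard_normal(DIM), [7, 42, 99])
```

An embedding-specific memory architecture accelerates phase 1, while a scalable MAC array accelerates phase 2, matching the bullet list above.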
Most e-commerce, online advertisement, and internet service providers employ recommendation systems to drive user engagement and deliver services such as search result ranking, friend suggestions, movie recommendations, and purchase suggestions. Recommendations usually account for most of the AI inference workload in data centers.
"Fast and accurate recommendation inference is the key to e-commerce business success," said Dr. Youn-Long Lin, CEO of NEUCHIPS. "RecAccelTM powers your business with the lowest latency, highest throughput, and best TCO."
About NEUCHIPS:
NEUCHIPS Corp. is an application-specific compute solution provider based in Hsinchu, Taiwan. Founded by a team of veteran IC design experts in 2019, NEUCHIPS pursues the mission of "Smarten AI computing through innovative IC design to make Intelligence Everywhere." The NEUCHIPS management and R&D teams have decades of experience at top IC design houses and hold 22 patents in signal processing, neural networks, and circuit design. As an OCP community member, NEUCHIPS devotes itself to providing the most cost-effective AI inference accelerators for the best TCO (Total Cost of Ownership).
For more information, please visit us at www.neuchips.ai.