Neurxcore Introduces Innovative NPU Product Line for AI Inference Applications, Powered by NVIDIA Deep Learning Accelerator Technology
Grenoble, France, October 20, 2023 – Neurxcore, a leading provider of cutting-edge Artificial Intelligence (AI) solutions, today announced the launch of its groundbreaking Neural Processor Unit (NPU) product line for AI inference applications. The product line is built on an enhanced and extended version of NVIDIA’s open-source Deep Learning Accelerator (Open NVDLA) technology, combined with patented in-house architectures. The SNVDLA IP series from Neurxcore sets a new standard for energy efficiency, performance, and capability, with a primary focus on image processing, including classification and object detection. SNVDLA also offers versatility for generative AI applications and has already been silicon-proven on a 22nm TSMC process, showcased on a demonstration board running a variety of applications.
The IP package also includes the Heracium SDK (Software Development Kit), built by Neurxcore upon the open-source Apache TVM (Tensor Virtual Machine) framework, to configure, optimize and compile neural network applications on SNVDLA products. Neurxcore’s product line caters to a wide range of industries and applications, spanning from ultra-low power to high-performance scenarios, including sensors and IoT, wearables, smartphones, smart homes, surveillance, Set-Top Box and Digital TV (STB/DTV), smart TV, robotics, edge computing, AR/VR, ADAS, servers and more.
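For context, the sketch below illustrates the generic upstream Apache TVM flow that an SDK of this kind builds on: importing a pretrained image-classification model, compiling it with graph-level optimizations, and running inference. The Heracium-specific target and any SNVDLA code-generation backend are not publicly documented, so a generic LLVM CPU target and a hypothetical model file name stand in for them here.

```python
import numpy as np
import onnx
import tvm
from tvm import relay
from tvm.contrib import graph_executor

# Load a pretrained image-classification model (file name is hypothetical).
onnx_model = onnx.load("mobilenet_v2.onnx")
input_name = "input"
shape_dict = {input_name: (1, 3, 224, 224)}

# Import the ONNX graph into TVM's Relay intermediate representation.
mod, params = relay.frontend.from_onnx(onnx_model, shape_dict)

# Compile with graph-level optimizations; a vendor flow would substitute its
# own NPU target/codegen here instead of the generic LLVM CPU target.
target = tvm.target.Target("llvm")
with tvm.transform.PassContext(opt_level=3):
    lib = relay.build(mod, target=target, params=params)

# Run one inference on the compiled module with random input data.
dev = tvm.device("llvm", 0)
module = graph_executor.GraphModule(lib["default"](dev))
module.set_input(input_name, np.random.rand(1, 3, 224, 224).astype("float32"))
module.run()
print("Top-1 class index:", int(np.argmax(module.get_output(0).numpy())))
```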
In addition to this groundbreaking product, Neurxcore offers a complete package for developing customized NPU solutions, including new operators, optimized AI-enabled subsystem design, and optimized model development covering training and quantization.
Virgile Javerliac, founder and CEO of Neurxcore, commented, "80% of AI computational tasks involve inference. Achieving energy and cost reduction while maintaining performance is crucial." He expressed gratitude to the dedicated team that developed this groundbreaking product and emphasized Neurxcore’s commitment to serving customers and exploring collaborative opportunities.
The inference stage, which involves using AI models to make predictions or generate content, is a pivotal aspect of AI. Neurxcore’s solutions address this phase efficiently, making them well suited to a wide range of applications, even when serving multiple users simultaneously.
The SNVDLA product line exhibits substantial improvements in energy efficiency, performance, and feature set compared to the original NVIDIA version, while also benefiting from NVIDIA’s industrial-grade development. The product line’s fine-grain tunable capabilities, such as the number of cores and multiply-accumulate (MAC) operations per core, allow for versatile applications across diverse markets. It stands out for its exceptional energy and cost efficiency, making it one of the best in its class. Furthermore, competitive pricing, combined with an open-source software environment thanks to Apache TVM, ensures accessible and adaptable AI solutions.
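As a rough, illustrative way to read those tuning knobs, the sketch below estimates nominal peak throughput from core count, MACs per core, and clock frequency. The configuration values are hypothetical examples for illustration only, not published SNVDLA figures.

```python
# Back-of-envelope sketch (not vendor data): nominal peak throughput of a
# configurable NPU as a function of core count, MACs per core, and clock.
def peak_tops(num_cores: int, macs_per_core: int, clock_ghz: float) -> float:
    """Peak throughput in TOPS, counting each MAC as 2 operations."""
    ops_per_cycle = num_cores * macs_per_core * 2
    return ops_per_cycle * clock_ghz / 1_000  # giga-ops/s -> tera-ops/s

# Hypothetical example: 4 cores x 1024 MACs at 1 GHz -> ~8.2 TOPS nominal peak.
print(f"{peak_tops(4, 1024, 1.0):.1f} TOPS")
```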
According to Gartner’s 2023 AI Semiconductors report, titled Forecast: AI Semiconductors, Worldwide, 2021-2027, the use of artificial intelligence techniques in data centers, edge computing and endpoint devices requires the deployment of optimized semiconductor devices. Revenue from these AI semiconductors is forecast to reach $111.6 billion by 2027, growing at a five-year CAGR of 20%.
About Neurxcore
Neurxcore is a fabless semiconductor company headquartered in Grenoble, France. The company specializes in neural processors built upon a custom implementation and enriched version of the Open NVDLA microarchitecture from NVIDIA. Neurxcore’s solutions optimize AI processing across the spectrum of performance, power, accuracy, and cost, addressing critical challenges in a wide range of applications, including sensors, wearables, IoT, edge computing, ADAS, datacenters, computer vision and generative AI. With a rich feature set and high configurability, Neurxcore provides the perfect fit for your AI-enabled system.