Flex Logix Announces EFLX eFPGA And nnMAX AI Inference IP Model Support For The Veloce Strato Emulation Platform From Mentor
Veloce Strato models used to verify InferX X1 AI Inference Accelerator designs now in fabrication
MOUNTAIN VIEW, Calif., July 21, 2020 -- Flex Logix® Technologies, Inc., a leading supplier of embedded FPGA (eFPGA) and AI Inference IP, architecture and software, today announced support for EFLX eFPGA IP and nnMAX™ AI Inference IP emulation models for use on Mentor's Veloce® Strato™ emulation platform. These models are designed to enable customers to significantly lower development costs, speed time to market, and lower overall risk by enabling real-time testing well ahead of silicon availability.
"Customers want first time success for their SoCs because anything less leads to unnecessary development cost and delays in product availability," said Geoff Tate, CEO and co-founder of Flex Logix. "Compared to software simulation, emulation models, such as those for Veloce Strato, allow SoC architectural exploration and final verification to be done more rapidly and thoroughly while providing software developers a platform to debug their software well in advance of silicon.
The Flex Logix models developed for use with the Veloce emulation platform have been proven in the verification of Flex Logix's own SoC, InferX™ X1, which is now in fabrication. InferX X1 has a 2x2 nnMAX array and a 1x1 EFLX eFPGA.
"Next-generation SoCs for edge computing, such as Flex Logix's InferX X1 edge inference co-processor, require a high-degree of hardware and software programmability and power efficiency tailored to targeted workloads," said Ravi Subramanian, senior vice president, IC Verification, Mentor, a Siemens business. "By using the Veloce emulation platform, our mutual customers can confidently leverage Flex Logix's AI inferencing and eFPGA models designed for Veloce to rapidly verify and tapeout their next-generation SoCs. We are delighted that FlexLogix themselves have proven-in these capabilities with their own InferX SoC built to efficiently handle highly-intensive neural-network inferencing workloads."
About Flex Logix
Flex Logix provides solutions for making flexible chips and accelerating neural network inferencing. Its eFPGA platform enables chips to flexibly handle changing protocols, standards, algorithms and customer needs, and to implement reconfigurable accelerators that speed key workloads 30-100x compared to processors. Flex Logix's second product line, nnMAX, utilizes its eFPGA and interconnect technology to provide modular, scalable neural inferencing from 1 to >100 TOPS with higher throughput/$ and throughput/watt compared to other architectures. Flex Logix is headquartered in Mountain View, California.
Related News
- Mentor's Veloce Strato emulation platform selected by Iluvatar CoreX for verification of AI chips and software
- Flex Logix Announces Upgraded Emulation Models For EFLX™ eFPGA
- Flex Logix Announces nnMAX AI Inference IP In Development On GLOBALFOUNDRIES 12LP Platform
- Flex Logix Announces EFLX eFPGA Emulation Models For The Cadence Palladium Z1 Platform
- Flex Logix Announces InferX™ High Performance IP for DSP and AI Inference