Flex Logix Announces Availability and Roadmap of InferX X1 Boards and Software Tools
InferX X1P1 PCIe board at $399-$499 brings datacenter-class throughput to lower-price-point edge servers; InferX X1P4 in 2021 will increase throughput 4x
MOUNTAIN VIEW, Calif. – October 28, 2020 – Flex Logix Technologies, Inc. today announced the availability and roadmap of PCIe boards powered by the Flex Logix InferX™ X1 accelerator – the industry’s fastest and most efficient AI inference chip for edge systems.
The InferX X1P1 uses a single InferX X1 chip and a single LPDDR4x DRAM on a half-height, half-length PCIe board. Samples are available now to lead customers; general sampling will begin in Q1 2021, with production availability in Q2 2021. The InferX X1P4 will have four InferX X1 chips on the same half-height, half-length board; it will sample in mid-2021, with production by the end of 2021. Finally, an InferX M.2 board will be available in the same time frame as the X1P4.
The InferX X1P4 will deliver throughput on YOLOv3, for object detection and recognition, similar to the Nvidia Tesla T4 at volume pricing of $649-$999. On several real customer models, the X1P4 runs faster than the T4. The InferX X1P1 delivers about 1/3 the performance of the Tesla T4 on YOLOv3 but sells for $399-$499; for some customer models, the X1P1 outperforms the T4. InferX M.2 pricing will be similar to the X1P1, with the same performance.
“The Nvidia Tesla T4 inference accelerator for edge servers is extremely popular, but customers want to deploy inference in lower-price-point servers for higher-volume applications,” said Geoff Tate, CEO of Flex Logix. “The InferX X1P1 and X1M will enable much lower-price-point edge inference server-based systems at good performance, while the InferX X1P4 will enable cost reduction of existing edge inference server-based systems.”
The new InferX boards deliver higher throughput per dollar for lower-price-point servers. Boards announced today include:
- InferX X1P1 PCIe board – x4 PCIe Gen3/4, 19W TDP
- InferX X1P4 PCIe board – x8 PCIe Gen3/4, <75W TDP
- InferX X1M M.2 board – x4 PCIe Gen3/4, 19W TDP, M.2 22x80mm form factor
Flex Logix is also unveiling a suite of software tools to accompany the boards. The suite includes a compiler flow for TensorFlow Lite/ONNX models and an nnMAX Runtime Application. It also includes an InferX X1 driver with external APIs that let applications easily configure and deploy models, plus internal APIs for the low-level functions that control and monitor the X1 board.
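To make the described deployment flow concrete, here is a minimal sketch of what an application-facing driver API of this kind might look like. Flex Logix has not published the InferX X1 API, so every name below (`InferXDevice`, `load_model`, `run`, `stats`) is a hypothetical stand-in for the "external APIs designed for applications to easily configure & deploy models" mentioned in the announcement; a real deployment would use the vendor's SDK.

```python
# Illustrative mock only -- all class and method names are assumptions,
# not the actual Flex Logix driver API.
from dataclasses import dataclass, field
from typing import Optional


@dataclass
class InferXDevice:
    """Hypothetical handle to one InferX X1 board."""
    board: str = "X1P1"
    _model: Optional[str] = None
    stats: dict = field(default_factory=dict)

    def load_model(self, compiled_model_path: str) -> None:
        # Per the announced flow, a TensorFlow Lite/ONNX model is first
        # compiled for the nnMAX runtime, then deployed to the board.
        self._model = compiled_model_path

    def run(self, frame_id: int) -> dict:
        # Submit one inference request; the internal (low-level) APIs
        # would handle board control and monitoring underneath.
        if self._model is None:
            raise RuntimeError("no model loaded")
        self.stats["frames"] = self.stats.get("frames", 0) + 1
        return {"frame": frame_id, "model": self._model}


# Example application flow: configure, deploy, infer.
dev = InferXDevice(board="X1P1")
dev.load_model("yolov3.nnmax")
result = dev.run(frame_id=0)
print(result["model"])
```

The point of the sketch is the shape of the flow (compile offline, load the compiled artifact, run per frame), not the specific names, which will differ in the shipping software tools.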
Pricing
       | 800MHz | 667MHz | 533MHz
X1P1   | $499   | $449   | $399
X1P4   | $999   | $829   | $649
A 933MHz speed grade will be available in the second half of 2021.
About the InferX X1 Accelerator
The Flex Logix InferX X1 accelerates neural network models such as object detection and recognition for robotics, industrial automation, medical imaging, gene sequencing, bank security, retail analytics, autonomous vehicles, aerospace and more. The InferX X1 delivers a 10 to 100 times improvement in inference price/performance versus the current industry leader.
More information is also available at www.flex-logix.com.
About Flex Logix
Flex Logix provides industry-leading solutions for making flexible chips and accelerating neural network inferencing. Its InferX X1 is the industry’s fastest and most efficient AI edge inference accelerator and will bring AI to the masses in high-volume applications, surpassing competitors’ performance at 1/7th the size and 10x lower price. Flex Logix’s eFPGA platform enables chips to flexibly handle changing protocols, standards, algorithms, and customer needs, and to implement reconfigurable accelerators that speed key workloads 30-100x compared to processors. Flex Logix is headquartered in Mountain View, California. For more information, visit https://flex-logix.com.