eFPGA Saved Us Millions of Dollars. It Can Do the Same for You

By Andy Jaros, Flex Logix

For those of you who follow Flex Logix®, you already know that we have an IP business, EFLX® eFPGA, and an edge inferencing co-processor chip and board business, InferX®. InferX came about because many customers asked whether they could run AI/ML algorithms in EFLX. The answer was, and still is, of course you can: EFLX is an FPGA fabric similar to what FPGA chips use. Our co-founder, Cheng Wang, studied the challenges of AI processing in detail and came up with a highly efficient edge inferencing processor leveraging Flex Logix's proprietary eFPGA technology. When the performance, power, and area results were shared with our board of directors, they found them so compelling that they told us to build a chip. Hence, InferX X1 was born.

The X1 was specified to be a lean, high-performance edge accelerator for AI inference processing, incorporating Flex Logix's proprietary tensor processor, PCIe, DDR, memory, and a NoC. When it came time to architect the chip, there was an internal debate about adding EFLX to the X1, mainly because it takes up area and our use case was fairly basic: support a GPIO interface and help with chip debug. That is not a strong reason to add one square millimeter in 16nm. We proceeded anyway, to demonstrate "eating our own dog food," by connecting the eFPGA to both the NoC bus and the GPIO to maximize flexibility; a sketch of how bring-up software might reach such a block follows below. Fast forward to chip bring-up.
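To make the debug use case concrete, here is a minimal sketch of how bring-up firmware might drive an eFPGA block that sits on a chip's NoC as a memory-mapped peripheral, with GPIO pins routed through the fabric. The base address, register offsets, and helper names are hypothetical placeholders for illustration only; the actual InferX X1 memory map and register interface are not described in this article.

```c
#include <stdint.h>

/* Hypothetical base address and register layout for an eFPGA block
 * reachable over the NoC. Placeholder values, not the real X1 map. */
#define EFPGA_BASE      0x40010000u
#define EFPGA_CTRL      (EFPGA_BASE + 0x00u)  /* fabric enable/control */
#define EFPGA_GPIO_OUT  (EFPGA_BASE + 0x08u)  /* drives GPIO pins      */
#define EFPGA_GPIO_IN   (EFPGA_BASE + 0x0Cu)  /* samples GPIO pins     */

static inline void reg_write(uintptr_t addr, uint32_t val)
{
    *(volatile uint32_t *)addr = val;   /* NoC write to the eFPGA */
}

static inline uint32_t reg_read(uintptr_t addr)
{
    return *(volatile uint32_t *)addr;  /* NoC read from the eFPGA */
}

/* Bring-up helper: pulse a debug signal routed through the eFPGA
 * fabric onto GPIO pins, then read back what the fabric observes. */
uint32_t efpga_debug_pulse(uint32_t pin_mask)
{
    reg_write(EFPGA_CTRL, 1u);            /* enable the fabric design */
    reg_write(EFPGA_GPIO_OUT, pin_mask);  /* drive the chosen pins    */
    reg_write(EFPGA_GPIO_OUT, 0u);        /* release them             */
    return reg_read(EFPGA_GPIO_IN);       /* sample pins via the NoC  */
}
```

The value of the dual connection is visible even in this toy sketch: the same fabric logic can be exercised from software over the NoC or observed externally on the GPIO pins, which is exactly the kind of flexibility that pays off during bring-up.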