LeapMind Announces Efficiera v2 Ultra-Low Power AI Inference Accelerator IP

Product enhanced for a wider range of applications based on results and evaluation from the initial market introduction

November 30, 2021 – Tokyo, Japan – LeapMind Co., Ltd., a leading creator of the standard in edge artificial intelligence (AI), today announced version 2 (hereinafter: v2) of its ultra-low power AI inference accelerator IP Efficiera, scheduled to be available in December 2021.

Efficiera is highly valued for its power saving, high performance, space saving, and performance scalability. Efficiera v2 expands the range of target applications by covering a wider span of performance with broader product availability, while maintaining the circuit scale of the minimum configuration. These enhancements are based on learnings from the initial product introduction and market evaluation.

"Last year we officially launched the commercial version, v1. Many companies have evaluated Efficiera, and by the end of September 2021 we had signed license agreements with eight domestic companies. Through the provision of v1, we feel concretely that our corporate philosophy of 'spreading new devices that use machine learning to the world' is steadily being realized. We will continue to strive to popularize AI through further technological innovation and expansion of our product lineup," said Soichi Matsuda, CEO of LeapMind.

Efficiera is an ultra-low power AI inference accelerator IP specialized for convolutional neural network (CNN) inference processing that runs as a circuit on FPGA or application-specific integrated circuit (ASIC) devices. Its ultra-small quantization technology reduces the number of quantization bits to 1–2 bits, maximizing the power and area efficiency of the convolution operations that account for most of inference processing, without requiring advanced semiconductor manufacturing processes or special cell libraries (a minimal illustration of such low-bit quantization is sketched below). With this product, deep learning functions can be incorporated into a variety of edge devices that have been technically difficult to address in the past, including consumer electronics such as home appliances, industrial equipment such as construction machinery, surveillance cameras, broadcasting equipment, and small machines and robots constrained by power, cost, and thermals.

"Since the official release of v1, we have aimed to develop the world's most power-efficient DNN accelerator. We have strengthened our design and verification methodology and our development process, and developed v2 so that it can be adopted in ASICs and application-specific standard products (ASSPs). On the deep learning side, we are also developing inference models that maximize the benefits of ultra-small quantization technology. LeapMind's greatest strength is that we can provide both, working together like two wheels of a cart," said Dr. Hiroyuki Tokunaga, Director and CTO of LeapMind.

By improving its design and verification methodology and reviewing its development process, LeapMind has brought product quality to a level suitable not only for FPGAs but also for ASICs and ASSPs. LeapMind is also beginning to provide a model development environment, the Network Development Kit (NDK), which enables users to develop their own deep learning models for Efficiera, something that was not possible previously.
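To give a rough sense of the 1–2 bit quantization described above, the sketch below binarizes weights and maps activations to 2-bit integer levels so that the convolution's multiply-accumulate runs on small integers. This is an illustrative assumption only: the function names, scaling scheme, and bit assignments are hypothetical and do not describe LeapMind's actual implementation.

    import numpy as np

    # Hypothetical sketch of 1-bit weight / 2-bit activation quantization.
    # Illustrative only; not LeapMind's actual quantization scheme.

    def quantize_weights_1bit(w):
        """Binarize weights to {-1, +1} with one per-tensor float scale."""
        scale = float(np.mean(np.abs(w)))
        return np.sign(w), scale

    def quantize_activations_2bit(x):
        """Clip activations to [0, 1] and map them to the 2-bit levels {0, 1, 2, 3}."""
        x = np.clip(x, 0.0, 1.0)
        return np.round(x * 3).astype(np.int8)

    rng = np.random.default_rng(0)
    w = rng.standard_normal((3, 3))   # one 3x3 convolution kernel
    x = rng.random((3, 3))            # one 3x3 input patch

    w_q, w_scale = quantize_weights_1bit(w)
    x_q = quantize_activations_2bit(x)

    # The multiply-accumulate now uses only {-1, +1} x {0..3} integers;
    # the float scale is applied once at the end, which is what makes
    # low-bit convolution cheap in silicon area and power.
    acc = int(np.sum(w_q * x_q))
    approx_dot = acc * w_scale / 3.0
    print("integer accumulate:", acc)
    print("approximate dot product:", approx_dot, "vs. float:", float(np.sum(w * x)))

In a hardware accelerator, the integer accumulate in this sketch corresponds to the part of the convolution that dominates area and power, which is why shrinking operands to 1–2 bits pays off.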
Other Notable Specifications and Features of Efficiera v2
- Concept of Efficiera v2
- Hardware features
- Integration into SoC
- Target frequency in FPGA
- Network Development Kit (NDK)

LeapMind welcomes trials and feedback from all interested parties, including system-on-chip (SoC) vendors and end-user product designers. To obtain Efficiera v2, please contact us at business@leapmind.io. For more product information, please visit: https://leapmind.io/en/business/ip/.

About LeapMind