NSITEXE Launches Product Brand "Akaria", Expanding Its Portfolio

Full lineup of processors for edge-AI applications, including autonomous driving

July 28, 2022 -- NSITEXE, Inc. (Head Office: Minato-ku, Tokyo; CEO: Yukihide Niimi; hereinafter “NSITEXE”) is pleased to announce the release of “Akaria”, a new product brand for next-generation embedded systems. Akaria provides processor IPs with optimal configurations for each customer’s domain and application, as well as software and solutions that utilize those processor IPs, and thereby contributes to a wide range of embedded systems.

Efficient execution of AI and other computation on edge devices, which are subject to severe heat and cost constraints, has become an important issue for embedded systems that aim to realize a mobility society connecting people with cars, smart cities connecting people with cities, and CPS (Cyber-Physical Systems) linking virtual spaces more closely with the real world. NSITEXE addresses this challenge with four key strengths.
By further enhancing these four strengths, NSITEXE provides processors that support a wide range of embedded systems, as well as Domain Specific Accelerators that are optimized for each application by combining RISC-V-based Standard Processors with Extension Units.

Figure 1: Akaria Overview

NSITEXE deploys products built on these processor IPs under the Akaria brand name. Akaria was named after the concept that “we want to be a light source that opens up a new era of embedded systems.” The shape of the logo represents that light source, and its color, the blue of the hottest part of a flame, expresses NSITEXE’s passion for bringing these new products into the world.

Figure 2: Akaria logo

Akaria processors are available in two lines: the NS family of Standard Processors and the DR family of Domain Specific Accelerators.

Figure 3: Akaria roadmap

The DR family provides scalable solutions optimized for each application, including next-generation complex AI applications, by combining versatile MIMD-based accelerators with dedicated AI accelerators (Figure 4). The MIMD-based accelerators exploit task-level parallelism through multi-core Standard Processors and data-level parallelism through a Vector Extension that complies with RISC-V Vector Extension version 1.0. The AI accelerators, named the ML series, are designed to realize the industry’s most power-efficient neural-network execution.

Figure 4: DR Family
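To illustrate the kind of data-level parallelism that a RISC-V Vector Extension 1.0 implementation enables, the sketch below shows a simple element-wise vector addition written with the RVV 1.0 C intrinsics. This is not NSITEXE code; it assumes a generic RISC-V toolchain with v1.0 intrinsics support (for example, recent GCC or Clang targeting -march=rv64gcv), and the function name vec_add is purely illustrative.

/* Minimal sketch of RVV 1.0 data-level parallelism: c[i] = a[i] + b[i].
 * Illustrative only, not an NSITEXE implementation; requires a compiler
 * with RISC-V Vector intrinsics v1.0 support (e.g. -march=rv64gcv). */
#include <riscv_vector.h>
#include <stddef.h>

void vec_add(const float *a, const float *b, float *c, size_t n) {
    while (n > 0) {
        size_t vl = __riscv_vsetvl_e32m1(n);                   /* elements handled this pass */
        vfloat32m1_t va = __riscv_vle32_v_f32m1(a, vl);        /* load a chunk of a */
        vfloat32m1_t vb = __riscv_vle32_v_f32m1(b, vl);        /* load a chunk of b */
        vfloat32m1_t vc = __riscv_vfadd_vv_f32m1(va, vb, vl);  /* vector add */
        __riscv_vse32_v_f32m1(c, vc, vl);                      /* store the result */
        a += vl; b += vl; c += vl; n -= vl;
    }
}

Because the available vector length is queried at run time via vsetvl, the same binary scales across implementations with different vector register widths, which is the portability model the ratified 1.0 specification targets.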
Figure 5: Neural network execution by the DR family

Hideki Sugimoto, CTO, NSITEXE, Inc.