Does Your AI Chip Have Its Own DNN?

By Junko Yoshida, EETimes

For AI accelerators in the race to achieve optimum accuracy at minimum latency, especially in autonomous vehicles (AVs), teraflops have become the key element in many so-called brain chips. The contenders include Nvidia's Xavier SoC, Mobileye's EyeQ5, Tesla's Full Self-Driving computer chip, and NXP-Kalray chips.

In an exclusive interview with EE Times last week, Forrest Iandola, CEO of DeepScale, explained why this sort of brute-force processing approach is unsustainable, and said many of the assumptions common among AI hardware designers are outdated. As AI vendors gain more experience with more AI applications, it is becoming evident to him that different AI tasks are starting to require different technological approaches. If that is true, the way that AI users buy AI technology is going to change, and vendors are going to have to respond.

Rapid advancements in neural architecture search (NAS), for example, can make the search for optimized deep neural networks (DNNs) faster and much cheaper, Iandola argued. Instead of relying on bigger chips to process all AI tasks, he believes there is a way "to produce the lowest-latency, highest-accuracy DNN on a target task and a target computing platform."
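To make that idea concrete, here is a minimal Python sketch of what latency-aware architecture search can look like in its simplest form (random search over a candidate space, keeping the most accurate model that fits a latency budget). This is not DeepScale's algorithm: the search space, the latency model, and the accuracy proxy below are all invented for illustration; a real NAS system would train each candidate and profile it on the actual target hardware.

# Toy latency-constrained NAS via random search.
# All names and formulas here are hypothetical stand-ins for illustration.
import random

# Hypothetical search space: depth, width, and kernel size of a small CNN.
SEARCH_SPACE = {
    "depth": [4, 8, 12, 16],
    "width": [16, 32, 64, 128],
    "kernel": [3, 5, 7],
}

def sample_architecture(rng):
    """Draw one candidate architecture at random from the search space."""
    return {key: rng.choice(options) for key, options in SEARCH_SPACE.items()}

def estimated_latency_ms(arch):
    """Stand-in latency model for the target platform.
    A real system would measure latency on the device itself."""
    return 0.05 * arch["depth"] * arch["width"] * arch["kernel"] / 3.0

def estimated_accuracy(arch, rng):
    """Stand-in accuracy proxy. A real system would train and
    evaluate the candidate network on the target task."""
    capacity = arch["depth"] * arch["width"]
    return min(0.99, 0.5 + 0.005 * capacity ** 0.5) + rng.uniform(-0.01, 0.01)

def search(latency_budget_ms, trials=200, seed=0):
    """Keep the most accurate candidate that meets the latency budget."""
    rng = random.Random(seed)
    best = None
    for _ in range(trials):
        arch = sample_architecture(rng)
        if estimated_latency_ms(arch) > latency_budget_ms:
            continue  # reject candidates that miss the latency target
        accuracy = estimated_accuracy(arch, rng)
        if best is None or accuracy > best[0]:
            best = (accuracy, arch)
    return best

if __name__ == "__main__":
    print(search(latency_budget_ms=20.0))

The key design point the sketch captures is that the latency constraint is tied to a specific platform: change the latency model (i.e., the hardware) and the search returns a different "best" network, which is exactly why one DNN per target platform, rather than one giant chip for all tasks, is the approach Iandola describes.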