Does Your AI Chip Have Its Own DNN?
By Junko Yoshida, EETimes
August 25, 2019
In the race among AI accelerators to deliver maximum accuracy at minimum latency, especially in autonomous vehicles (AVs), raw teraflops have become the headline metric for many so-called brain chips. The contenders include Nvidia’s Xavier SoC, Mobileye’s EyeQ5, Tesla’s Full Self-Driving computer chip and NXP-Kalray chips.
In an exclusive interview with EE Times last week, Forrest Iandola, CEO of DeepScale, explained why this sort of brute-force processing approach is unsustainable, and said many of the assumptions common among AI hardware designers are outdated. As AI vendors gain experience with a wider range of AI applications, it is becoming evident to him that different AI tasks are starting to require different technological approaches. If that's true, the way that AI users buy AI technology is going to change, and vendors are going to have to respond.
Rapid advancements in neural architecture search (NAS), for example, can make the search for optimized deep neural networks (DNNs) faster and much cheaper, Iandola argued. Instead of relying on bigger chips to process all AI tasks, he believes there is a way “to produce the lowest-latency, highest-accuracy DNN on a target task and a target computing platform.”
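The idea is easier to see in a simplified loop. Below is a minimal Python sketch of hardware-aware NAS under stated assumptions: a toy three-parameter search space, a hypothetical 8 ms latency budget, and placeholder evaluators estimate_accuracy() and measure_latency_ms() standing in for real training runs and on-device timing. It illustrates the objective Iandola describes, not DeepScale's actual method.

```python
import random

# Toy search space: each candidate DNN is a (depth, width, kernel) choice.
SEARCH_SPACE = {
    "depth": [8, 12, 16],
    "width": [32, 64, 128],
    "kernel": [3, 5],
}

LATENCY_BUDGET_MS = 8.0  # assumed per-frame budget on the target chip


def estimate_accuracy(arch):
    # Hypothetical stand-in: real NAS would train or fine-tune the
    # candidate and validate it on the target task.
    score = 0.5 + 0.01 * arch["depth"] + 0.001 * arch["width"]
    return min(score + random.gauss(0, 0.01), 0.99)


def measure_latency_ms(arch):
    # Hypothetical stand-in: real NAS would compile the candidate and
    # time it on the actual target hardware.
    return 0.002 * arch["depth"] * arch["width"] * arch["kernel"] ** 2 / 9


def search(n_samples=50):
    # Keep the most accurate sampled architecture that meets the budget.
    best = None
    for _ in range(n_samples):
        arch = {k: random.choice(v) for k, v in SEARCH_SPACE.items()}
        latency = measure_latency_ms(arch)
        if latency > LATENCY_BUDGET_MS:
            continue  # reject candidates that miss the latency target
        accuracy = estimate_accuracy(arch)
        if best is None or accuracy > best[1]:
            best = (arch, accuracy, latency)
    return best


if __name__ == "__main__":
    result = search()
    if result:
        arch, accuracy, latency = result
        print(f"best: {arch}  acc={accuracy:.3f}  latency={latency:.2f} ms")
```

Random sampling is only the simplest search strategy; production NAS systems typically use evolutionary, reinforcement-learning, or differentiable methods, but the latency-constrained objective is the same.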