Machine Learning on DSPs: Enabling Audio AI at the Edge

By Jim Steele, Knowles Corp.

Once confined to cloud servers with practically infinite resources, machine learning is moving into edge devices for several reasons, including lower latency, reduced cost, energy efficiency, and enhanced privacy. The time needed to send data to the cloud for interpretation can be prohibitive, as in pedestrian recognition for a self-driving car. The bandwidth needed to send data to the cloud can be costly, not to mention the cost of the cloud service itself, as in speech recognition for voice commands. Energy consumption is a trade-off between sending data back and forth to a server and processing it locally; machine learning computations are complex and could easily drain the battery of an edge device if not executed efficiently. Edge decisions also keep data on-device, which is important for user privacy, for example with sensitive emails dictated by voice on a smartphone. Audio AI is a rich example of inference at the edge, and a new type of digital signal processor (DSP) specialized for audio machine learning use cases can enable better performance and new features at the edge of the network.
All material on this site Copyright © 2017 Design And Reuse S.A. All rights reserved.