NVIDIA Previews Open-source Processor Core for Deep Neural Network Inference
Inside DSP - BDTi | Oct. 31, 2017
With the proliferation of deep learning, NVIDIA has realized its longstanding aspiration to make general-purpose graphics processing units (GPGPUs) a mainstream technology. The company's GPUs are commonly used to accelerate neural network training, and are also being adopted to accelerate neural network inference in self-driving cars, robots, and other high-end autonomous platforms. NVIDIA also sees plenty of opportunities for inference acceleration in IoT and other "edge" platforms, although it doesn't intend to supply them with chips. Instead, it has decided to open-source NVDLA, the deep learning processor core found in its "Xavier" SoC introduced last fall.