How Will Deep Learning Change SoCs?
Junko Yoshida, EETimes
3/30/2015 00:00 AM EDT
MADISON, Wis. – Deep Learning is already changing the way computers see, hear and identify objects in the real world.
However, the bigger -- and perhaps more pertinent -- questions for the semiconductor industry are: Will "deep learning" ever migrate into smartphones, wearable devices, or the tiny computer-vision SoCs used in highly automated cars? Has anybody come up with an SoC architecture optimized for neural networks? If so, what does it look like?