Startup Preps Big Data Processor
Rick Merritt, EETimes
12/18/2015 10:21 AM EST
SAN JOSE, Calif. — Nervana Systems is on the cusp of rolling out a microprocessor designed for big data analytics. The startup’s work is one of a handful of efforts aiming to accelerate deep neural networks in hardware for a variety of recognition tasks.
Engineers are racing to develop and accelerate algorithms that find patterns in today’s flood of digital data. Nervana believes it has an edge with a novel processor it hopes to have up and running in its own cloud service late next year.
Nervana competes with giants such as Intel and Nvidia whose processors run most of today’s algorithms for training neural nets. Web giants are also in the hunt, snapping up the best researchers in machine learning. Among the leaders, Google is said to be working on an accelerator chip of its own.
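To give a sense of the workload these chips target, the following is a minimal, illustrative sketch (not Nervana's software or hardware design) of one training step for a toy two-layer neural network in Python with NumPy. The layer sizes, random data, and learning rate are arbitrary placeholders; the point is that the step is dominated by dense matrix multiplications, which is exactly what deep-learning accelerators are built to speed up.

```python
# Illustrative only: one training step for a tiny two-layer network,
# showing the dense matrix multiplies that dominate deep-learning training.
# All sizes, data, and hyperparameters here are arbitrary placeholders.
import numpy as np

rng = np.random.default_rng(0)

# Toy batch: 64 samples, 256 input features, 10 output classes
X = rng.standard_normal((64, 256))
y = rng.integers(0, 10, size=64)

# Two weight matrices -- multiplying these is the bulk of the compute
W1 = rng.standard_normal((256, 128)) * 0.01
W2 = rng.standard_normal((128, 10)) * 0.01
lr = 0.1

# Forward pass: two large matrix multiplies plus a ReLU nonlinearity
h = np.maximum(X @ W1, 0.0)
logits = h @ W2

# Softmax cross-entropy gradient with respect to the logits
p = np.exp(logits - logits.max(axis=1, keepdims=True))
p /= p.sum(axis=1, keepdims=True)
grad_logits = p.copy()
grad_logits[np.arange(len(y)), y] -= 1.0
grad_logits /= len(y)

# Backward pass: more matrix multiplies
grad_W2 = h.T @ grad_logits
grad_h = grad_logits @ W2.T
grad_h[h <= 0.0] = 0.0
grad_W1 = X.T @ grad_h

# Plain gradient-descent update of the weights
W1 -= lr * grad_W1
W2 -= lr * grad_W2
```

Training a production network repeats steps like this millions of times over far larger matrices, which is why startups and incumbents alike are designing silicon around this pattern.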