AI Startup Deep Vision Powers AI Innovation at the Edge
LOS ALTOS, Calif., November 19, 2020 – Deep Vision has exited stealth mode and launched its ARA-1 inference processor to enable a new generation of AI vision applications at the edge. The processor provides an optimal balance of compute, memory, energy efficiency (2 W typical), and ultra-low latency in a compact form factor, making it a compelling choice for endpoints such as cameras and sensors, as well as edge servers, where high compute, model flexibility, and energy efficiency are paramount.
“Today’s complex AI workloads require not only low power but also low latency to deliver real-time intelligence at the edge,” said Ravi Annavajjhala, CEO of Deep Vision. “No more making tradeoffs between performance and efficiency. Developers now have access to higher accuracy outcomes and rich data insights, all on one processor.”
Groundbreaking High-Efficiency Architecture
Deep learning models are growing in complexity, driving increased compute demand for AI at the edge. The Deep Vision ARA-1 processor is based on a patented Polymorphic Dataflow Architecture that handles varied dataflows to minimize on-chip data movement. The architecture embeds dataflow instructions within each neural network model, allowing any dataflow pattern in a deep learning model to be mapped optimally. Keeping data close to the compute engines minimizes data movement, ensuring high inference throughput, low latency, and greater power efficiency. The compiler automatically evaluates multiple dataflow patterns for each layer in a neural network and chooses the pattern with the highest performance and lowest power.
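The per-layer selection step described above can be sketched as a cost-model search. This is a hypothetical illustration only: the pattern names, cost proxies, and layer sizes below are invented for the example and do not reflect Deep Vision's actual compiler internals.

```python
# Illustrative sketch: pick a dataflow pattern per layer by minimizing a
# data-movement cost proxy. All names and numbers are hypothetical.

DATAFLOW_PATTERNS = ["weight_stationary", "output_stationary", "row_stationary"]

def estimate_cost(layer, pattern):
    """Toy cost model returning (latency, power) for a layer/pattern pair.
    A real compiler would model on-chip buffers and interconnect in detail."""
    moves = {
        "weight_stationary": layer["activations"],              # re-fetch activations
        "output_stationary": layer["weights"],                  # re-fetch weights
        "row_stationary": (layer["weights"] + layer["activations"]) // 2,
    }[pattern]
    latency = moves        # proxy: data moved dominates latency
    power = moves * 0.5    # proxy: energy scales with data movement
    return latency, power

def select_dataflow(layer):
    """Choose the pattern with the lowest (latency, power) for this layer."""
    return min(DATAFLOW_PATTERNS, key=lambda p: estimate_cost(layer, p))

# Hypothetical two-layer network: a conv layer (few weights, many activations)
# and a fully connected layer (many weights, few activations).
network = [
    {"name": "conv1", "weights": 9_408, "activations": 802_816},
    {"name": "fc", "weights": 2_048_000, "activations": 1_000},
]
plan = {layer["name"]: select_dataflow(layer) for layer in network}
print(plan)  # conv1 keeps weights resident; fc keeps activations resident
```

Even this toy model shows why a single fixed dataflow is suboptimal: the convolution and fully connected layers prefer opposite strategies.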
With simultaneous multi-model processing, the Deep Vision ARA-1 processor can also run multiple models effectively without a performance penalty, generating results faster and more accurately. It consumes less system power than the Edge TPU and the Movidius Myriad X, and runs deep learning models such as ResNet-50 with 6x lower latency than the Edge TPU and 4x lower latency than the Myriad X.
Software-Centric Approach Breaks Down Complexity Barriers
Deep Vision’s software development kit (SDK) and hardware are tightly integrated to work seamlessly together, ensuring optimal model accuracy at the lowest power consumption. With a built-in quantizer, simulator, and profiler, developers have all the tools needed to design and execute computationally complex inference applications. Migrating models to production without extensive code development has historically been challenging. Deep Vision’s SDK provides a frictionless, low-code, automated workflow that takes a trained model all the way to a production application. By dramatically increasing productivity, the SDK cuts expensive development time and reduces overall time to market.
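A quantize-simulate-profile flow of the kind described here typically looks like the sketch below. The class, method names, and numbers are stand-ins invented for illustration; they are not Deep Vision's actual SDK API.

```python
# Hypothetical sketch of a low-code model-migration pipeline.
# All names and metrics are illustrative, not the real Deep Vision SDK.

class ToolchainStub:
    """Stands in for an SDK exposing quantizer, simulator, and profiler."""

    def quantize(self, model):
        # Post-training quantization: float32 weights -> int8 artifact.
        return f"int8({model})"

    def simulate(self, artifact):
        # Bit-accurate simulation to check accuracy before touching hardware.
        return {"top1_accuracy": 0.76}

    def profile(self, artifact):
        # Per-model latency/power estimates against a deployment budget.
        return {"latency_ms": 3.2, "power_w": 2.0}

def migrate(trained_model, sdk):
    """Trained model -> deployable artifact in a few calls, no hand-written kernels."""
    artifact = sdk.quantize(trained_model)
    accuracy = sdk.simulate(artifact)
    budget = sdk.profile(artifact)
    return artifact, accuracy, budget

artifact, accuracy, budget = migrate("resnet50_trained", ToolchainStub())
print(artifact, accuracy, budget)
```

The point of such a flow is that each stage consumes the previous stage's output directly, so no manual conversion code sits between training and deployment.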
Paving the Path for New Markets
The Deep Vision ARA-1 processors are designed to accelerate the performance of neural network models for smart retail, robotics, industrial automation, smart cities, autonomous vehicles, and more. Deep Vision is currently running proofs of concept (POCs) with customers across these industries.
Pricing and Availability
The processor offers developers great flexibility in hardware integration, with three form factors including high-speed USB and PCIe interface options. The Deep Vision ARA-1 processors are now shipping. For pricing and availability, please contact sales@deepvision.io.
About Deep Vision:
Founded by Dr. Rehan Hameed and Dr. Wajahat Qadeer in 2015, Deep Vision enables rich data insights to better optimize real-time actions at the edge. Our AI inference solutions deliver the optimal balance of compute, memory, low latency, and energy efficiency for today’s latency-sensitive AI applications. Deep Vision has raised $19 million and is backed by multiple investors, including Silicon Motion, Western Digital, Stanford, Exfinity Ventures, and Sinovation Ventures. www.deepvision.io