Power-efficient, high-performance neural network hardware IP for automotive embedded solutions
The aiWare hardware IP core is highly customizable and developed by engineers working side by side with our automated driving teams. It can be deployed within an SoC or as a stand-alone NN accelerator. On-chip and external memory sizes are highly configurable to optimize performance for customer requirements. aiWare maximizes host CPU offload, using on-chip SRAM and external DRAM to keep execution and dataflow within the core. aiWare was designed for volume production in L2/L2+ and higher ADAS systems. The first version of this mature IP core was released over three years ago, and building on that expertise, the aiWare IP is more sophisticated than a leading automotive OEM's recently announced accelerator.
The aiWare IP core is fully synthesizable RTL requiring no special libraries, enabling neural network acceleration cores from 0.5 TOPS to 16 TOPS. The IP is layout-friendly thanks to its tile-based modular design. Optimized for efficiency at low clock speeds, the aiWare IP core can operate anywhere from 100 MHz to 1 GHz. The hardware IP core is also highly deterministic, increasing safety by removing the complexity of caches or programmable cores. aiWare delivers more than 2 TMAC/s per W (4 TOPS per W, estimated at 7 nm) while sustaining >95% efficiency under continuous operation. The IP core offers a range of ASIL-B to ASIL-D compliant implementation options, either on-chip with a host CPU SoC or as a dedicated NN accelerator.
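As a rough illustration of how the quoted throughput figures relate, the sketch below models sustained TOPS from a MAC array size, clock frequency, and utilization. Only the 1 MAC = 2 ops convention, the 100 MHz to 1 GHz clock range, the >95% sustained efficiency, and the 2 TMAC/s per W figure come from the description above; the 8192-MAC configuration used in the example is a hypothetical value chosen for illustration, not a published aiWare parameter.

```python
# Illustrative back-of-the-envelope model of the throughput figures quoted above.
# The MAC array size is a hypothetical assumption; only the 1 MAC = 2 ops
# convention, the clock range, and the >95% sustained efficiency are from the text.

def sustained_tops(mac_units: int, clock_hz: float, efficiency: float = 0.95) -> float:
    """Sustained trillions of operations per second (1 MAC counted as 2 ops)."""
    return mac_units * 2 * clock_hz * efficiency / 1e12

# Example: a hypothetical 8192-MAC configuration running at 1 GHz
print(f"{sustained_tops(8192, 1e9):.1f} TOPS sustained")  # ~15.6 TOPS

# Power-efficiency claim from the text: 2 TMAC/s per W equals 4 TOPS per W
tmac_per_watt = 2.0
print(f"{tmac_per_watt * 2:.0f} TOPS per W (7 nm estimate)")  # 4 TOPS per W
```

The same relation explains the scalability range: smaller tile counts at lower clocks land near the 0.5 TOPS end, while larger configurations at 1 GHz approach 16 TOPS.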