Imagination announces PowerVR Series3NX Neural Network Accelerator, bringing multi-core scalability to the embedded AI market
Offering single-core performance from 0.6 to 10 TOPS, with multi-core scalability beyond 160 TOPS, to reach unprecedented levels of compute performance
London, UK and Shenzhen, China – 4th December 2018 – Imagination Technologies announces PowerVR Series3NX, its latest neural network accelerator (NNA) architecture. Building on the success of its multi-award-winning predecessor, Series3NX provides an unrivalled level of scalability, enabling SoC manufacturers to optimise compute power and performance across a range of embedded markets such as automotive, mobile, smart surveillance and IoT edge devices.
A single Series3NX core scales from 0.6 to 10 tera operations per second (TOPS), while multi-core implementations can scale beyond 160 TOPS. Thanks to architectural enhancements, including lossless weight compression, the Series3NX architecture delivers a 40% performance boost over the previous generation in the same silicon area, giving SoC manufacturers nearly a 60% improvement in performance efficiency and a 35% reduction in bandwidth.
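The release does not describe the compression scheme itself; purely as a hedged illustration of the idea, the C sketch below implements a generic zero run-length encoder for 8-bit weights, a simple form of lossless weight compression that exploits the many zero-valued weights typical of pruned networks. Nothing in it is taken from, or specific to, the Series3NX hardware.

    /* Illustrative only: a generic zero run-length encoder for 8-bit weights.
     * This is NOT Imagination's compression scheme; it simply shows how
     * lossless weight compression can cut storage and bandwidth when many
     * weights are zero, as in pruned networks. */
    #include <stdint.h>
    #include <stddef.h>
    #include <stdio.h>

    /* Non-zero weights are copied through; each run of zeros becomes a
     * 0x00 marker followed by the run length (1-255). Fully reversible. */
    static size_t encode_weights(const int8_t *w, size_t n, int8_t *out)
    {
        size_t o = 0;
        for (size_t i = 0; i < n; ) {
            if (w[i] != 0) {
                out[o++] = w[i++];
            } else {
                uint8_t run = 0;
                while (i < n && w[i] == 0 && run < 255) { i++; run++; }
                out[o++] = 0;            /* zero marker            */
                out[o++] = (int8_t)run;  /* number of zeros elided */
            }
        }
        return o; /* compressed size in bytes */
    }

    int main(void)
    {
        const int8_t weights[16] = { 3, 0, 0, 0, 0, -7, 0, 0,
                                     12, 0, 0, 0, 0, 0, 0, 5 };
        int8_t packed[32]; /* worst case is 2 bytes per input weight */
        printf("16 weights packed into %zu bytes\n",
               encode_weights(weights, 16, packed));
        return 0;
    }

Decoding such a stream is the exact inverse, so no precision is lost; the hardware's actual scheme is unspecified in the release and will differ.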
As part of the Series3NX architecture, Imagination is also announcing the PowerVR Series3NX-F (Flexible) IP configuration, which provides an unprecedented balance of functionality and flexibility combined with industry-leading performance. Using Series3NX-F, customers can differentiate and add value to their offerings through the OpenCL framework.
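The release does not show what that OpenCL path looks like in practice; as a minimal sketch, assuming a developer wants an operator the fixed-function pipeline does not provide, the kernel below (standard OpenCL C, nothing Imagination-specific) implements a leaky-ReLU activation of the kind that could be slotted in as a custom layer.

    /* Minimal OpenCL C sketch of a custom activation layer (leaky ReLU).
     * Standard OpenCL C only; it illustrates the kind of operator a
     * developer might add via an OpenCL path and is not taken from
     * Imagination's SDK. */
    __kernel void leaky_relu(__global const float *in,
                             __global float *out,
                             const float alpha,   /* negative-slope factor */
                             const int n)         /* element count         */
    {
        int i = get_global_id(0);
        if (i < n)
            out[i] = in[i] > 0.0f ? in[i] : alpha * in[i];
    }

The host side would enqueue this kernel with clEnqueueNDRangeKernel like any other OpenCL workload; how such custom work interleaves with the NNA's fixed-function layers is a product detail the release does not cover.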
“There are tremendous opportunities to apply AI at the edge to create devices that are more capable, more autonomous and easier to use,” said Jeff Bier, founder of the Embedded Vision Alliance. “In many of these applications, a key challenge is achieving the right combination of processing performance, flexibility, cost and power consumption. I applaud Imagination Technologies’ ongoing investment in creating innovative processors to meet these needs.”
Russell James, vice president, Vision & AI, Imagination, said: “The Series3NX architecture and Series3NX-F are built without compromise. Together they bring flexibility and scalability, while nearly doubling top-line performance. This is a game changer, a true enabler for mass AI adoption in embedded devices.”
To cater for a rapidly developing market, new PowerVR tooling extensions optimally map emerging network models, offering an ideal mix of flexibility and performance.
With Imagination’s dedicated DNN (Deep Neural Network) API, developers can easily write AI applications targeting the Series3NX architecture as well as existing PowerVR GPUs. The API works across multiple SoC configurations, enabling easy prototyping on existing devices.
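The release does not name the DNN API's actual calls, so the C sketch below is purely hypothetical: the dnn_* types and functions are invented stand-ins, with local stubs so the example is self-contained, used only to convey the load-model, bind-buffers, run flow such an API typically exposes and the way the same application code could target either the NNA or a PowerVR GPU.

    /* Hypothetical sketch only: the dnn_* names are invented for
     * illustration and are NOT Imagination's DNN API. Local stubs keep
     * the example self-contained and runnable. */
    #include <stdio.h>

    typedef struct { const char *model_path; } dnn_network;  /* invented handle  */
    typedef struct { int use_nna; } dnn_context;              /* invented context */

    static dnn_context dnn_open(int prefer_nna) { dnn_context c = { prefer_nna }; return c; }
    static dnn_network dnn_load(const char *model) { dnn_network n = { model }; return n; }
    static void dnn_run(const dnn_context *c, const dnn_network *n,
                        const float *in, float *out, int out_len)
    {
        (void)in; (void)out; (void)out_len;  /* a real runtime would run inference here */
        printf("Running %s on the %s\n", n->model_path, c->use_nna ? "NNA" : "GPU");
    }

    static float input[224 * 224 * 3];  /* static to keep large buffers off the stack */
    static float output[1000];

    int main(void)
    {
        dnn_context ctx = dnn_open(1);                 /* prefer the NNA if present */
        dnn_network net = dnn_load("mobilenet.bin");   /* hypothetical model file   */
        dnn_run(&ctx, &net, input, output, 1000);      /* same call on NNA or GPU   */
        return 0;
    }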
Imagination launched the previous generation of its NNA, the PowerVR Series2NX, in 2017. To date, it has been licensed by multiple customers, predominantly in the mobile and automotive markets.
Availability
The PowerVR Series3NX is available for licensing now and PowerVR Series3NX-F will be available in Q1 2019.
Related News
- UNISOC and Imagination carry out strategic cooperation on AI based on IMG Series3NX neural network accelerator
- Dolphin Design wins an Embedded Award for Tiny Raptor, its Energy-Efficient Neural Network AI Accelerator
- Imagination's Series3NX neural network accelerator helps UNISOC to create 5G smartphone platform
- Imagination launches multi-core IMG Series4 NNA - the ultimate AI accelerator delivering industry-disruptive performance for ADAS and autonomous driving
- PowerVR Series2NX neural network accelerator cores set the standard for performance and cost-efficiency