ArterisIP Drives Artificial Intelligence & Machine Learning Innovation for 15 Chip Companies
Interconnect IP enables fast and efficient integration of tens or hundreds of heterogeneous neural network hardware accelerators
CAMPBELL, Calif. — November 14, 2017 — ArterisIP, the innovative supplier of silicon-proven commercial system-on-chip (SoC) interconnect IP, today announced that in the past two years, 15 companies have licensed ArterisIP’s FlexNoC Interconnect or Ncore Cache Coherent Interconnect IP as critical components in new artificial intelligence (AI) and machine learning SoCs.
These nine (9) publicly announced ArterisIP customers have created or are developing machine learning and AI SoCs for data center, automotive, consumer and mobile applications:
- Movidius (Intel) – Myriad™ ultra-low power machine learning vision processing units (VPUs)
- Mobileye (Intel) – Since 2010; EyeQ®3, EyeQ®4 and EyeQ®5 advanced driver assistance systems (ADAS) using multiple heterogeneous processing elements for vision processing and machine learning
- NXP – Multiple ADAS and autonomous driving SoCs implementing machine learning, based on cache coherency and functional safety mechanisms
- Toshiba – Automotive ADAS SoC using cache coherence and functional safety mechanisms
- HiSilicon (Huawei) – Since 2013; new Kirin 970 Mobile AI Processor with Neural Processing Unit (NPU)
- Cambricon – Neural network processor with multiple processing elements
- Dream Chip Technologies – ADAS image sensor processor with multiple digital signal processor (DSP) and single instruction multiple data (SIMD) hardware accelerators
- Nextchip – Vision ADAS SoC with multiple processing elements
- Intellifusion – Machine learning visual intelligence SoC with multiple heterogeneous on-chip hardware engines
In addition to the nine publicly announced customers listed above, the following six (6) companies are also using ArterisIP to implement new AI and machine learning hardware architectures:
- Two (2) major semiconductor and systems vendors targeting autonomous driving
- A major semiconductor vendor targeting consumer electronics
- A major autonomous flying vehicle vendor
- A leader in new automotive sensor technologies
- An innovator in data center analytics
All of these innovation leaders create SoCs that accelerate machine learning and neural network algorithms using multiple instances of heterogeneous processing elements. Each SoC architecture is tailored to its target market's requirements, built on an on-chip interconnect configured specifically for the task. They have all licensed ArterisIP interconnect technology because it:
- Eases the on-chip integration of these different processing engines while allowing design teams to finely tune power management and quality-of-service (QoS) characteristics, like path latency and bandwidth;
- Simplifies software development and enables customized dataflow processing by supporting cache coherence in key parts of a system. This allows the system to take advantage of data reuse and local accumulation in shared caches, which reduces die area and can increase memory bandwidth while reducing processing latency and power consumption;
- Protects data in transit and at rest to increase functional safety diagnostic coverage, allowing large supercomputer-like SoCs to meet the stringent requirements of the automotive ISO 26262 specification.
“Efficiently implementing machine learning and visual computing in commercially viable systems requires hardware teams to accelerate neural network functions using many types of hardware accelerators, with the types and number of accelerators based on performance, power and area/cost requirements,” said Ty Garibay, Chief Technology Officer at ArterisIP. “ArterisIP technology gives these teams the means to integrate these processing elements into their systems quickly and efficiently, ensuring that they meet their schedule and functional safety requirements.”
“Machine learning has become the ‘killer app’ for our advanced interconnect IP, with a perfect match between the QoS, power consumption and performance required by AI and what the FlexNoC and Ncore interconnects deliver,” said K. Charles Janac, President and CEO of ArterisIP. “Our team is excited to be such a critical enabler to the new generation of neural network, machine learning and artificial intelligence chips.”
Presentation Download
For more information, please download the presentation titled "Implementing Machine Learning and Neural Network Chip Architectures using Network-on-Chip Interconnect IP."
About ArterisIP
ArterisIP provides system-on-chip (SoC) interconnect IP to accelerate SoC semiconductor assembly for a wide range of applications from automobiles to mobile phones, IoT, cameras, SSD controllers, and servers for customers such as Samsung, Huawei / HiSilicon, Mobileye (Intel), Altera (Intel), and Texas Instruments. ArterisIP products include the Ncore cache coherent and FlexNoC non-coherent interconnect IP, as well as optional Resilience Package (ISO 26262 functional safety) and PIANO automated timing closure capabilities. Customer results obtained by using the ArterisIP product line include lower power, higher performance, more efficient design reuse and faster SoC development, leading to lower development and production costs. For more information, visit www.arteris.com or find us on LinkedIn at www.linkedin.com/company/arteris.