Arteris IP and Wave Computing Collaborate on Reference Architecture for Enterprise Dataflow Platform
The Arteris FlexNoC Artificial Intelligence (AI) Package Coupled with Wave Computing’s AI Systems and IP Technology Creates a Unified Platform Optimized for AI Data Processing
CAMPBELL, Calif. – May 21, 2019 – Arteris IP, the world’s leading supplier of innovative silicon-proven network-on-chip (NoC) interconnect intellectual property (IP), and Wave Computing®, the Silicon Valley company accelerating artificial intelligence (AI) from the datacenter to the edge, are collaborating to create a blueprint that can help customers overcome compute-to-memory design challenges. Additionally, Wave Computing is licensing Arteris IP’s Ncore Cache Coherent Interconnect, FlexNoC interconnect IP, and its accompanying FlexNoC AI Package for use in the AI-enabled chips that fuel Wave Computing’s data center systems products. By integrating each other’s technologies, Wave Computing and Arteris IP can ensure the seamless flow of information enterprise-wide, helping speed time-to-insight.
“Wave and Arteris have complementary compute and networking technologies that, when packaged together, address some of the key challenges facing system-on-chip designers today, such as shorter product cycles and rapidly increasing product complexity,” said Steve Brightfield, senior director, Strategic AI IP Marketing, Wave Computing. “The world of AI demands greater compute power. Working with Arteris allows us to design a scalable data platform with blazing-fast performance at a cost-effective price that helps customers accelerate insight from the edge to the data center.”
The key to a successful AI-enabled, system-on-chip (SoC) design is effectively managing the flow of information across the chip. By linking Arteris’ NoC interconnect and AI package IP technology with Wave Computing’s TritonAI 64 dataflow processing elements and cores, customers can successfully reduce latency and optimize the flow of information across their SoC platforms.
“Arteris IP has developed unique on-chip interconnect capabilities that facilitate the rapid assembly of complex machine learning SoCs with cache coherent, non-coherent and regular AI structures to provide a competitive advantage to engineering teams designing the next generation of AI and machine learning chips,” said K. Charles Janac, President and CEO of Arteris IP. “The combination of the TritonAI 64 IP platform and Arteris IP’s portfolio of interconnect technologies helps customers significantly boost performance and enable the seamless flow of data across a wide variety of compute-intensive, AI-enabled automotive, enterprise and networking applications.”
For more information on Wave Computing’s complete portfolio of IP and systems products, visit www.wavecomp.ai. For additional details on Arteris IP’s line of AI-enabled network computing solutions, visit www.arteris.com.
About Wave Computing
Wave Computing, Inc. is revolutionizing artificial intelligence (AI) with its dataflow-based systems and solutions. The company’s vision is to bring deep learning to customers’ data wherever it may be—from the datacenter to the edge—helping accelerate time-to-insight. Wave Computing is powering the next generation of AI by combining its dataflow architecture with its MIPS embedded RISC multithreaded CPU cores and IP. Wave Computing received Frost & Sullivan’s 2018 “Machine Learning Industry Technology Innovation Leader” award and was recognized as one of the “Top 25 Artificial Intelligence Providers” by CIO Applications magazine. More information about Wave Computing can be found at https://wavecomp.ai.
Wave Computing, the Wave Computing logo, MIPS Open, MIPS32, microAptiv, TritonAI 64 and MIPS are trademarks of Wave Computing, Inc. and its applicable affiliates. All other trademarks are used for identification purposes only and are the property of their respective owners.
About Arteris IP
Arteris IP provides network-on-chip (NoC) interconnect IP to accelerate system-on-chip (SoC) semiconductor assembly for a wide range of applications from AI to automobiles, mobile phones, IoT, cameras, SSD controllers, and servers for customers such as Baidu, Mobileye, Samsung, Huawei / HiSilicon, Toshiba and NXP. Arteris IP products include the Ncore® cache coherent and FlexNoC® non-coherent interconnect IP, the CodaCache® standalone last level cache, and optional Resilience Package (ISO 26262 functional safety), FlexNoC AI Package, and PIANO® automated timing closure capabilities. Customer results obtained by using Arteris IP products include lower power, higher performance, more efficient design reuse and faster SoC development, leading to lower development and production costs. For more information, visit www.arteris.com.