Wave Computing Unveils New Licensable 64-Bit AI IP Platform to Enable High-Speed Inferencing and Training in Edge Applications
Wave’s TritonAI™ 64 Platform Provides a Scalable, Programmable Solution for AI System on Chip (SoC) Designers Targeting Automotive, Enterprise and Other High-Growth AI Edge Markets
CAMPBELL, Calif., April 10, 2019 – Wave Computing®, the Silicon Valley company accelerating artificial intelligence (AI) from the datacenter to the edge, today announced its new TritonAI™ 64 platform, which integrates a triad of powerful technologies into a single, future-proof, licensable intellectual property (IP) solution. Wave’s TritonAI 64 platform delivers 8-to-32-bit integer-based support for high-performance AI inferencing at the edge today, with bfloat16 and 32-bit floating-point support for edge training planned for the future.
Wave’s TritonAI 64 platform is an industry-first solution, enabling customers to address a broad range of AI use cases with a single platform. The platform delivers efficient edge inferencing and training performance to support today’s AI algorithms, while giving customers the flexibility to future-proof their investment for emerging AI algorithms. Features of the TritonAI 64 platform include a leading-edge MIPS® 64-bit SIMD engine integrated with Wave’s unique configurable dataflow and tensor-based technologies. Additional features include access to Wave’s MIPS integrated development environment (IDE), as well as a Linux-based TensorFlow programming environment.
The global market for AI products is projected to grow dramatically to over $170B by 2025, according to technology analyst firm Tractica. The total addressable market (TAM) for AI at the edge accounts for over $100B of this figure and is being driven primarily by the need for more efficient inferencing, new AI workloads and use cases, and the emerging need for training at the edge.
“Wave Computing is achieving another industry first by delivering a licensable IP platform that enables both AI inferencing and training at the edge,” said Derek Meyer, Chief Executive Officer of Wave Computing. “The tremendous growth of edge-based AI use cases is exacerbating the challenges of SoC designers who continue to struggle with legacy IP products that were not designed for efficient AI processing. Our TritonAI solution provides them with the investment protection of a programmable platform that can scale to support the AI applications of both today and tomorrow. TritonAI 64 enhances our overall AI offerings that span datacenter to edge and is another company milestone enabled by our acquisition of MIPS last year.”
Details of Wave’s TritonAI 64 Platform:
- MIPS 64-bit + SIMD Technology: The open instruction set architecture (MIPS Open™), coupled with a mature integrated development environment (IDE), provides an ideal software platform for developing AI applications, stacks, and use cases. The MIPS IP subsystem in the TritonAI 64 platform enables SoCs to be configured with up to six MIPS 64 CPUs, each with up to four hardware threads. The MIPS subsystem hosts the execution of Google’s TensorFlow framework on a Debian-based Linux operating system, enabling the development of both inferencing and edge learning applications (see the sketch after this list). Additional AI frameworks, such as Caffe2, can be ported to the MIPS subsystem, and a wide variety of AI networks can be supported through ONNX conversion.
- WaveTensor™ Technology: The WaveTensor subsystem can scale up to a PetaOP of 8-bit integer operations in a single core instantiation by combining extensible slices of 4×4 or 8×8 kernel matrix-multiplier engines for highly efficient execution of today’s key Convolutional Neural Network (CNN) algorithms. CNN execution performance can scale up to 8 TOPS/watt and over 10 TOPS/mm² in industry-standard 7nm process nodes, with libraries characterized at typical voltage and process conditions (see the throughput calculation after this list).
- WaveFlow™ Technology: Wave Computing’s highly flexible, linearly scalable fabric is adaptable to a wide range of complex AI algorithms, as well as conventional signal processing and vision algorithms. The WaveFlow subsystem features low-latency, single-batch-size AI network execution and reconfigurability to address concurrent AI network execution. The patented WaveFlow architecture also supports algorithm execution without intervention or support from the MIPS subsystem.
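Because the MIPS subsystem hosts TensorFlow on a Debian-based Linux operating system, AI workloads can be prototyped with standard, unmodified TensorFlow code. The following is a minimal sketch assuming only a generic TensorFlow installation on the target; it is not Wave-specific, and the toy model is a placeholder for illustration.

```python
# Minimal sketch: a single-batch TensorFlow inference run of the kind the
# TritonAI 64 MIPS subsystem is described as hosting on Debian-based Linux.
# The small CNN below is a placeholder; a real deployment would load a trained model.
import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(8, 3, activation="relu", input_shape=(32, 32, 3)),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(10, activation="softmax"),
])

# Batch size of 1 mirrors the low-latency, single-batch execution profile
# highlighted for the platform.
image = np.random.rand(1, 32, 32, 3).astype("float32")
probabilities = model(image).numpy()
print("Predicted class:", int(probabilities.argmax()))
```

Networks authored in other frameworks would reach the same runtime via the ONNX conversion path mentioned above.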
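For a sense of what the quoted WaveTensor figures imply when taken together, the back-of-the-envelope calculation below simply divides the peak throughput by the quoted efficiency and density. It assumes, purely for illustration, that the peak figures hold at the same operating point, which the release does not state.

```python
# Illustrative arithmetic only, using the peak figures quoted above.
peak_throughput_tops = 1000        # "up to a PetaOP" of INT8 operations = 1,000 TOPS
efficiency_tops_per_watt = 8       # "up to 8 TOPS/watt"
density_tops_per_mm2 = 10          # "over 10 TOPS/mm^2" in 7nm

power_watts = peak_throughput_tops / efficiency_tops_per_watt   # ~125 W
area_mm2 = peak_throughput_tops / density_tops_per_mm2          # ~100 mm^2
print(f"~{power_watts:.0f} W and ~{area_mm2:.0f} mm^2 at 1 PetaOP (illustrative)")
```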
Additional information about Wave Computing’s new TritonAI™ 64 platform, as well as details on Wave’s complete portfolio of IP solutions, can be found at https://wavecomp.ai.
About Wave Computing
Wave Computing, Inc. is revolutionizing artificial intelligence (AI) with its dataflow-based systems and solutions that deliver orders of magnitude performance improvements over legacy architectures. The company’s vision is to bring deep learning to customers’ data wherever it may be—from the datacenter to the edge—helping accelerate time-to-insight. Wave is powering the next generation of AI by combining its dataflow architecture with its MIPS embedded RISC multithreaded CPU cores and IP. Wave Computing was named Frost & Sullivan’s 2018 “Machine Learning Industry Technology Innovation Leader” and is recognized by CIO Applications magazine as one of the “Top 25 Artificial Intelligence Providers.” Wave now has over 400 granted and pending patents and hundreds of customers worldwide. More information about Wave Computing can be found at https://wavecomp.ai.