Tachyum Offers Its TPU Inference IP to Edge and Embedded Markets
LAS VEGAS, September 12, 2023 – Tachyum® today announced that it is expanding the unique value proposition of its Tachyum Prodigy by offering its Tachyum TPU® (Tachyum Processing Unit) intellectual property as a licensable core, allowing developers to take full advantage of intelligent, datacenter-trained AI when building IoT and edge devices.
Tachyum’s Prodigy is the first Universal Processor, combining General Purpose Processors, High Performance Computing (HPC), Artificial Intelligence (AI), Deep Machine Learning, Explainable AI, Bio AI and other AI disciplines in a single chip. With the tremendous growth of the AI chipset market for edge inference, Tachyum is looking to extend its proprietary Tachyum AI data type beyond the datacenter by providing its internationally registered and trademarked IP to outside developers.
Key features of the TPU inference and generative AI/ML IP architecture include architectural, transactional, and cycle-accurate simulators; tool and compiler support; and licensable hardware IP, including RTL in Verilog, a UVM testbench, and synthesis constraints. Tachyum has 4-bit-per-weight operation working for AI training and 2-bit-per-weight operation as part of the proprietary Tachyum AI (TAI) data type, which will be announced later this year.
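The TAI data type itself has not yet been disclosed, but the general idea of representing each weight in only a few bits can be illustrated with ordinary symmetric integer quantization. The sketch below is a generic, hypothetical NumPy example; the function names, the per-tensor scaling scheme, and the rounding choices are assumptions for illustration only and are not Tachyum's format.

```python
import numpy as np

def quantize_weights(weights: np.ndarray, bits: int = 4) -> tuple[np.ndarray, float]:
    """Symmetric per-tensor quantization of float weights onto a signed low-bit grid.

    Returns the integer codes and the scale needed to dequantize them.
    (Illustrative only; not the proprietary TAI data type.)
    """
    qmax = 2 ** (bits - 1) - 1                      # e.g. 7 for 4-bit, 1 for 2-bit
    scale = float(np.max(np.abs(weights))) / qmax   # map the largest weight to the grid edge
    codes = np.clip(np.round(weights / scale), -qmax - 1, qmax).astype(np.int8)
    return codes, scale

def dequantize_weights(codes: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate float weights from their low-bit codes."""
    return codes.astype(np.float32) * scale

# Example: quantize a small weight tensor to 4 bits and inspect the reconstruction error.
w = np.random.randn(8).astype(np.float32)
codes, scale = quantize_weights(w, bits=4)
w_hat = dequantize_weights(codes, scale)
print("max abs error:", np.max(np.abs(w - w_hat)))
```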
“Inference and generative AI is coming to almost every consumer product, and we believe that licensing TPU is a key avenue for Tachyum to proliferate our world-leading AI into this marketplace for models trained on Tachyum’s Prodigy Universal Processor chip,” said Dr. Radoslav Danilak, founder and CEO of Tachyum. “As Tachyum is the only owner of the TPU trademark within the AI space, it is a valuable corporate asset not only to Tachyum but to all the vendors who respect that trademark and ensure that they properly license its use as part of their products.”
Because Prodigy is a Universal Processor offering utility for all workloads, Prodigy-powered data center servers can seamlessly and dynamically switch between computational domains (such as AI/ML, HPC, and cloud) on a single architecture. By eliminating the need for expensive dedicated AI hardware and dramatically increasing server utilization, Prodigy reduces CAPEX and OPEX significantly while delivering unprecedented data center performance, power, and economics. Prodigy integrates 192 high-performance custom-designed 64-bit compute cores to deliver up to 4.5x the performance of the highest-performing x86 processors for cloud workloads, up to 3x that of the highest-performing GPU for HPC, and 6x for AI applications.
Vendors interested in licensing Tachyum’s TPU IP are invited to contact the company.
About Tachyum
Tachyum is transforming the economics of AI, HPC, public and private cloud workloads with Prodigy, the world’s first Universal Processor. Prodigy unifies the functionality of a CPU, a GPGPU, and a TPU in a single processor that delivers industry-leading performance, cost, and power efficiency for both specialty and general-purpose computing. When hyperscale data centers are provisioned with Prodigy, all AI, HPC, and general-purpose applications can run on the same infrastructure, saving companies billions of dollars in hardware, footprint, and operational expenses. As global data centers contribute to a changing climate and consume more than four percent of the world’s electricity—projected to be 10 percent by 2030—the ultra-low-power Prodigy Universal Processor is a potential breakthrough for satisfying the world’s appetite for computing at a lower environmental cost. Prodigy, now in its final stages of testing and integration before volume manufacturing, is being adopted in prototype form by a rapidly growing customer base, and robust purchase orders signal a likely IPO in late 2024. Tachyum has offices in the United States and Slovakia. For more information, visit https://www.tachyum.com/.