UPMEM Puts CPUs Inside Memory to Allow Applications to Run 20 Times Faster
Company CTO to Discuss Processing-in-Memory Approaches at HOT CHIPS Conference
STANFORD, CALIFORNIA – August 19, 2019 – UPMEM announced today a Processing-in-Memory (PIM) acceleration solution that allows big data and AI applications to run 20 times faster with 10 times less energy. Instead of moving massive amounts of data to CPUs, the silicon-based technology from UPMEM puts CPUs right in the middle of the data, saving time and improving efficiency. Because compute takes place directly in the memory chips where the data already resides, data-intensive applications can be substantially accelerated. UPMEM reduces data movement while leveraging existing server architecture and memory technologies.
UPMEM CTO and Co-Founder Fabrice Devaux will discuss this new approach along with user case studies in a session titled “True Processing in Memory with DRAM Accelerator” at the HOT CHIPS Conference in Stanford, Calif., on August 19, 2019.
“Today, applications in the data center and at the edge are becoming increasingly data-intensive and processing them becomes constrained by the energy cost of the data movement between the memory and the processing cores, as well as the limited bandwidth between them,” said Devaux. “In my session, I will explain how PIM technology can address those challenges and bring unprecedented benefits to organizations of all sizes. Here at UPMEM, we think that making in-situ processing a practical reality is a major advance in computing.”
“Offloading most of the processing to the memory chips while leveraging existing computing technologies directly benefits our target customers running critical software applications in data centers,” said Gilles Hamou, CEO and co-founder of UPMEM. “The level of interest we have been experiencing clearly demonstrates the market need, and we look forward to sharing more details about customer adoption in the upcoming months.”
The PIM chip, which combines UPMEM’s proprietary processors (DRAM Processing Units, or DPUs) with main memory (DRAM) on a single memory chip, is the low-cost, ultra-efficient building block of this technology. Delivered on standard DIMM modules together with a Software Development Kit (SDK), the UPMEM PIM solution accelerates data-intensive applications and integrates seamlessly into standard servers.
“Today’s AI- and ML-driven applications are rapidly increasing the volume, velocity and variety of data, while simultaneously increasing the need to process data in real-time,” said Steffen Hellmold, vice president of corporate business development at Western Digital, an investor in UPMEM through the company’s strategic investment fund, Western Digital Capital. “UPMEM’s innovative PIM acceleration solution intelligently integrates processing with DRAM memory, providing the flexibility to create the purpose-built, data-centric compute architectures that will be essential to meet the demands of the zettabyte age.”
Current use cases include genomics, where mapping or comparing DNA fragments against a reference genome involves tens of gigabytes of data. The UPMEM PIM modules are installed in existing servers in place of regular DRAM memory modules, and the UPMEM PIM accelerator then reduces operations from hours to minutes, delivering an unprecedented level of efficiency and performance.
About UPMEM
UPMEM is bringing to market an ultra-efficient, scalable and programmable PIM technology that drastically reduces data movement in the computing node for data-intensive applications in the data center and at the edge. UPMEM was founded in 2015, with headquarters in Grenoble, France, and a network of partners from Asia to the U.S. The team combines entrepreneurial and technical expertise spanning processor architecture, software design, and low-level application workloads. UPMEM investors include Western Digital, Partech, C4 Ventures, Supernova Invest, and the French tech innovation agency.