Industry Expert Blogs
HBM2E targets AI/ML training | Rambus Blog | Mar. 22, 2021
Frank Ferro, Senior Director of Product Management at Rambus, has written a detailed article for Semiconductor Engineering explaining why HBM2E is a perfect fit for Artificial Intelligence/Machine Learning (AI/ML) training. As Ferro points out, AI/ML growth and development are proceeding at a lightning pace. Indeed, AI training capabilities have jumped by a factor of 300,000 (10X annually) over the past 8 years. This trend continues to drive rapid improvements in nearly every area of computing, including memory bandwidth.
HBM: A Need for Speed
Introduced in 2013, High Bandwidth Memory (HBM) is a high-performance 3D-stacked SDRAM architecture.
“Like its predecessor, the second generation HBM2 specifies up to 8 memory die per stack, while doubling pin transfer rates to 2 Gbps,” Ferro explains. “HBM2 achieves 256 GB/s of memory bandwidth per package (DRAM stack), with the HBM2 specification supporting up to 8 GB of capacity per package.”
As Ferro notes, JEDEC announced the HBM2E specification in late 2018 to support increased bandwidth and capacity.
“With transfer rates rising to 3.2 Gbps per pin, HBM2E can achieve 410 GB/s of memory bandwidth per stack,” he explains. “In addition, HBM2E supports 12‑high stacks with memory capacities of up to 24 GB per stack.”
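The per-stack bandwidth figures Ferro cites follow from a simple calculation: per-pin transfer rate multiplied by the interface width, divided by 8 to convert bits to bytes. The sketch below works through the arithmetic; note that the 1024-bit stack interface width is standard for HBM but is not stated in the excerpt above, so it is an assumption here.

def hbm_bandwidth_gbs(pin_rate_gbps: float, interface_bits: int = 1024) -> float:
    """Peak per-stack bandwidth in GB/s: pin rate (Gbps) x interface width (bits) / 8."""
    return pin_rate_gbps * interface_bits / 8

# HBM2 at 2 Gbps per pin over an assumed 1024-bit interface
print(f"HBM2:  {hbm_bandwidth_gbs(2.0):.0f} GB/s")   # 256 GB/s
# HBM2E at 3.2 Gbps per pin over the same assumed interface
print(f"HBM2E: {hbm_bandwidth_gbs(3.2):.1f} GB/s")   # 409.6 GB/s, ~410 GB/s

Both results line up with the quoted figures: 256 GB/s for HBM2 and roughly 410 GB/s for HBM2E per stack.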