Industry Expert Blogs
HBM4 Boosts Memory Performance for AI Training
Cadence Blog - Frank Ferro, Cadence
Apr. 21, 2025
The recent HBM4 specification announced by JEDEC is great news for developers of AI training hardware. HBM4 is the latest generation of the rapidly evolving High Bandwidth Memory (HBM) DRAM standard, delivering 2 TB/s of memory bandwidth per stack and higher density of up to 64 GB per stack (32 Gb dies in a 16-high configuration), according to JEDEC. "The advancements introduced by HBM4 are vital for applications that require efficient handling of large datasets and complex calculations, including generative artificial intelligence (AI), high-performance computing, high-end graphics cards, and servers," said the JEDEC release.
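As a rough check on those headline numbers, the figures follow from the interface parameters in the JEDEC announcement (a 2048-bit interface running at up to 8 Gb/s per pin, details not spelled out in this excerpt):

$$
\text{Bandwidth per stack} = \frac{2048\ \text{pins} \times 8\ \text{Gb/s}}{8\ \text{bits/byte}} = 2048\ \text{GB/s} \approx 2\ \text{TB/s}
$$

$$
\text{Capacity per stack} = 16\ \text{dies} \times 32\ \text{Gb/die} = 512\ \text{Gb} = 64\ \text{GB}
$$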
Large language model (LLM) datasets are growing exponentially, and current CPU and GPU performance is often limited by available memory bandwidth rather than by compute. Because of this "memory wall," HBM has become the memory of choice for generative AI training due to its superior bandwidth, capacity, and memory efficiency.
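A back-of-the-envelope example illustrates the memory wall (the model size and precision below are illustrative assumptions, not figures from the post): a 70-billion-parameter model stored in FP16 occupies roughly 140 GB, so simply streaming the weights once from memory at 2 TB/s takes at least

$$
t_{\min} = \frac{140\ \text{GB}}{2\ \text{TB/s}} = 70\ \text{ms}
$$

no matter how fast the compute units are. Raising per-stack bandwidth directly lowers that floor, which is why HBM generations track so closely with AI accelerator roadmaps.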