Delivering Terabyte-Scale Bandwidth with HBM3-Ready Memory Subsystem
Rambus Blog - Andreas Mouschoulis, Rambus | Aug. 18, 2021
An exponential rise in data volume and the meteoric growth of advanced workloads like AI/ML training require constant innovation across all aspects of computing. Memory bandwidth is critical to unleashing the full power of processors and accelerators, and the High Bandwidth Memory (HBM) standard has evolved rapidly to deliver the performance demanded by the most data-intensive applications.
For the current-generation HBM2E, Rambus introduced the industry's fastest memory subsystem, capable of 4 gigabits per second (Gbps) operation. With a 1024-bit wide interface, 4 Gbps signaling delivers 512 gigabytes per second (GB/s) of bandwidth per device. In accelerator architectures with 4-6 HBM2E DRAM devices (each device being a 3D stack of DRAM chips), that adds up to 2-3 terabytes per second (TB/s) of memory bandwidth. That's enormous, but the appetite for bandwidth is insatiable, so the wheel of innovation needs to keep spinning.
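As a quick sanity check on those figures, here is a back-of-the-envelope sketch of the bandwidth arithmetic in Python. The function and variable names are purely illustrative, not part of any Rambus product or API.

```python
# Back-of-the-envelope HBM2E bandwidth arithmetic from the figures above.
# Names are illustrative only, not from any Rambus interface.

def hbm_bandwidth_gb_s(interface_width_bits: int, data_rate_gbps: float) -> float:
    """Per-device bandwidth in GB/s: interface width (bits) * data rate (Gbps) / 8 bits per byte."""
    return interface_width_bits * data_rate_gbps / 8

per_device = hbm_bandwidth_gb_s(interface_width_bits=1024, data_rate_gbps=4.0)
print(f"Per HBM2E device: {per_device:.0f} GB/s")        # 512 GB/s

for devices in (4, 6):
    total_tb_s = devices * per_device / 1000
    print(f"{devices} devices: ~{total_tb_s:.0f} TB/s")  # ~2 TB/s and ~3 TB/s
```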