
HBM8 to Deliver Up to 64 TB/s of Bandwidth — Memory Roadmap to 2038 Published

Researchers from South Korea's KAIST and the Tera laboratory have presented a detailed HBM memory roadmap through 2038, covering HBM4, HBM5, HBM6, HBM7, and HBM8. Each generation brings significant increases in throughput, density, and power efficiency for data centers and AI clusters.


HBM4, which will debut in 2026, will offer up to 2 TB/s of throughput and up to 48 GB per stack, and will be used in NVIDIA Rubin and AMD Instinct MI400. The HBM4e variant will raise the pin speed to 10 Gbps and capacity to 64 GB per stack at 80 W of power.
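A quick sanity check of these per-stack figures: HBM bandwidth is simply pin rate times interface width. The 2048-bit bus width below is the publicly discussed HBM4 interface width, not a figure from this roadmap, so treat the sketch as an illustrative back-of-the-envelope calculation.

```python
def stack_bandwidth_gb_s(pin_rate_gbps: float, bus_width_bits: int) -> float:
    """Per-stack bandwidth in GB/s: pin rate (Gbit/s per pin) x bus width, over 8 bits per byte."""
    return pin_rate_gbps * bus_width_bits / 8

# Assuming a 2048-bit HBM4 interface, an 8 Gbps pin rate gives the quoted ~2 TB/s:
print(stack_bandwidth_gb_s(8, 2048))   # 2048.0 GB/s, i.e. ~2 TB/s
# HBM4e's 10 Gbps pins on the same width would land at:
print(stack_bandwidth_gb_s(10, 2048))  # 2560.0 GB/s
```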

Next comes HBM5, planned for 2029: 4 TB/s per stack, 80 GB of capacity, and 100 W of power. The first architecture to adopt it will be NVIDIA Feynman. HBM6 follows, doubling throughput to 8 TB/s and raising capacity to 120 GB per stack, with immersion cooling and stacks of up to 20 dies.


HBM7 and HBM8, expected in 2034–2038, push the specifications to the extreme. HBM7 will offer 24 TB/s per stack, 192 GB of memory, and up to 160 W of power, while HBM8 will reach 64 TB/s and 240 GB per stack, using double-sided interposers and coaxial TSVs.

Also presented was HBF (High Bandwidth Flash), a NAND-based stack for LLM workloads that can work in tandem with HBM7, providing up to 1 TB of additional memory per stack. It connects directly to HBM at 2 TB/s of throughput.

HBM8 will use integrated cooling via a glass interposer, paving the way for systems with 5–6 TB of memory per package. The adoption of these solutions will begin with Rubin, MI400, and Feynman, with a peak expected in the mid-2030s.
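The 5–6 TB per-package figure can be cross-checked against the 240 GB per-stack capacity; the implied stack count is our own back-of-the-envelope estimate, not a number from the roadmap.

```python
# How many 240 GB HBM8 stacks would a 5-6 TB package imply?
stack_capacity_gb = 240
for total_tb in (5, 6):
    stacks = total_tb * 1024 / stack_capacity_gb
    print(f"{total_tb} TB/package -> ~{stacks:.0f} stacks of {stack_capacity_gb} GB")
```

This suggests roughly 21 to 26 stacks per package, which is why double-sided interposers and integrated cooling become necessary at this density.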