
Micron Starts Shipping HBM4 — Up to 2TB/s Per Stack and 60% Higher Performance Than HBM3E

Micron has officially begun shipping its next-generation HBM4 memory to key customers developing AI platforms. The new 36GB stacks, built in a 12-layer (12-Hi) configuration, deliver significant gains in throughput and energy efficiency, which is especially relevant for data centers and large-scale AI systems.


Built on Micron's 1β (1-beta) DRAM process node, HBM4 delivers over 2.0 TB/s of throughput per stack thanks to its 2048-bit interface — more than 60% higher than the previous generation, HBM3E. This leap enables efficient training and inference of large language models and supports AI accelerators in critically loaded sectors.
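The headline figures can be sanity-checked with simple arithmetic: peak stack bandwidth is the interface width times the per-pin data rate. A minimal sketch, where the per-pin rates (~8 Gb/s for HBM4, ~9.6 Gb/s for HBM3E) are illustrative assumptions rather than figures stated in the article:

```python
def stack_bandwidth_tbps(interface_width_bits: int, pin_rate_gbps: float) -> float:
    """Peak bandwidth in TB/s = width (bits) * per-pin rate (Gb/s) / 8 bits-per-byte / 1000."""
    return interface_width_bits * pin_rate_gbps / 8 / 1000

# Assumed per-pin rates for illustration only (not Micron-confirmed specs):
hbm4 = stack_bandwidth_tbps(2048, 8.0)   # 2048-bit interface
hbm3e = stack_bandwidth_tbps(1024, 9.6)  # 1024-bit interface

print(f"HBM4:  {hbm4:.2f} TB/s")
print(f"HBM3E: {hbm3e:.2f} TB/s")
print(f"Gain:  {hbm4 / hbm3e - 1:.0%}")
```

Under these assumptions the doubled interface width more than offsets a modest per-pin rate, yielding roughly 2.05 TB/s versus about 1.23 TB/s — consistent with the claimed >60% uplift over HBM3E.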

Micron also announced an energy-efficiency improvement of more than 20%, which reduces overall power consumption in data centers while maintaining high compute density. The company emphasizes that the HBM4 launch is tightly synchronized with customers' readiness schedules, including developers of advanced AI platforms. Among those already named are NVIDIA's Vera Rubin and AMD's Instinct MI400, where the new memory will be used.

Micron reports that the products are already in customers' hands and under active testing, with the first volume shipments to begin as production scales up. This is another step toward strengthening the company's position in the AI memory market. HBM4 is set to be a key component of future AI accelerators, especially in the medical, financial, and transport segments, where both speed and energy efficiency are critical.