Nvidia will be thrilled – Samsung’s archrival announces it has started production of the HBM3E that will be used in Blackwell Ultra GPUs

SK Hynix HBM3E.

Credit: SK Hynix

South Korean memory giant SK Hynix announced that it has started mass production of the world’s first 12-layer HBM3E with a total memory capacity of 36GB, a huge increase from the previous capacity of 24GB in an 8-layer configuration.

This new design was made possible by reducing the thickness of each DRAM chip by 40%, allowing more layers to be stacked while maintaining the same overall size. The company plans to begin volume shipments by the end of 2024.

The new HBM3E memory runs at 9,600 MT/s per pin, which across a stack’s 1,024-bit interface works out to roughly 1.22 TB/s of bandwidth per stack. The improvement makes it ideal for handling LLMs and AI workloads that require both speed and high capacity, since the ability to move more data at faster rates allows AI models to run more efficiently.
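As a back-of-the-envelope check of those figures, the per-stack bandwidth follows directly from the pin rate and the bus width. This is a sketch: the 1,024-bit per-stack interface is the standard HBM bus width, assumed here rather than stated in the article.

```python
# Sketch: deriving per-stack HBM3E bandwidth from the quoted pin rate.
# The 1,024-bit bus width is the standard HBM interface (an assumption
# not stated in the article).
BUS_WIDTH_BITS = 1024      # bits per HBM stack interface
RATE_MTS = 9600            # transfers per second per pin, in millions

# bits/s across the bus -> bytes/s -> GB/s
per_stack_gbs = BUS_WIDTH_BITS * RATE_MTS / 8 / 1000
print(f"{per_stack_gbs} GB/s per stack")  # -> 1228.8 GB/s, i.e. about 1.22 TB/s
```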

Nvidia and AMD hardware

For advanced memory stacking, SK Hynix employs innovative packaging technologies, including the Through Silicon Via (TSV) process and the Mass Reflow Molded Underfill (MR-MUF) process. These methods are essential to maintaining the structural integrity and heat dissipation necessary for stable, high-performance operation in the new HBM3E. Improvements in heat dissipation performance are particularly important for maintaining reliability during intensive AI processing tasks.

In addition to its increased speed and capacity, the HBM3E is designed to offer greater stability, with SK Hynix’s proprietary packaging processes ensuring minimal warping during stacking. The company’s MR-MUF technology allows for better management of internal pressure, reducing the chances of mechanical failures and ensuring long-term durability.

Initial sampling of this 12-layer HBM3E product began in March 2024, with Nvidia’s Blackwell Ultra GPUs and AMD’s Instinct MI325X accelerators expected to be among the first to use the improved memory, leveraging up to 288GB of HBM3E to support AI computations. SK Hynix recently rejected a $374 million upfront payment from an unnamed company to ensure it could supply Nvidia with enough HBM for its highly sought-after AI hardware.

“SK Hynix has once again broken technological boundaries, demonstrating our industry leadership in AI memory,” said Justin Kim, President (Head of AI Infrastructure) at SK Hynix. “We will continue our position as the #1 global AI memory provider as we constantly prepare next-generation memory products to meet the challenges of the AI era.”
