Micron Unveils HBM3 Gen2 Memory: 1.2 TB/sec Memory Stacks For HPC and AI Processors

Micron today is introducing its first HBM3 memory products, becoming the latest of the major memory manufacturers to start building the high bandwidth memory that’s widely used in server-grade GPUs and other high-end processors. Aiming to make up for lost time against its Korean rivals, Micron intends to essentially skip “vanilla” HBM3 and move straight on to even higher-bandwidth versions of the memory it’s dubbing “HBM3 Gen2”, developing 24 GB stacks that run at over 9 GigaTransfers-per-second. These new HBM3 memory stacks will primarily target AI and HPC datacenter processors, with mass production kicking off in early 2024.

Micron’s 24 GB HBM3 Gen2 modules are based on stacking eight 24 Gbit memory dies made using the company’s 1β (1-beta) fabrication process. Notably, Micron is the first of the memory vendors to announce plans to build HBM3 memory with these higher-density dies; while SK hynix offers its own 24 GB stacks, that company uses a 12-Hi configuration of 16 Gbit dies. So Micron is on track to be the first vendor to offer 24 GB HBM3 modules in the more typical 8-Hi configuration. And Micron is not going to stop at 8-Hi 24 Gbit-based HBM3 Gen2 modules, either, with the company saying that it plans to introduce even higher-capacity, class-leading 36 GB 12-Hi HBM3 Gen2 stacks next year.
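For readers keeping score, the capacity math works out the same either way; here is a quick sanity check in Python (the die and stack figures come from the vendors’ announcements, the helper function is just for illustration):

```python
# Quick sanity check of the stack capacities quoted above: gigabits per die
# times dies per stack, divided by 8 bits per byte.
def stack_capacity_GB(die_Gbit: int, dies_per_stack: int) -> float:
    return die_Gbit * dies_per_stack / 8

print(stack_capacity_GB(24, 8))   # Micron 8-Hi of 24 Gbit dies     -> 24.0 GB
print(stack_capacity_GB(16, 12))  # SK hynix 12-Hi of 16 Gbit dies  -> 24.0 GB
print(stack_capacity_GB(24, 12))  # planned 12-Hi of 24 Gbit dies   -> 36.0 GB
```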

Besides taking the lead in density, Micron is also looking to take the lead in speed. The company expects its HBM3 Gen2 parts to hit data rates as high as 9.2 GT/second, 44% higher than the top speed grade of the base HBM3 specification, and 15% faster than the 8 GT/second target for SK hynix’s rival HBM3E memory. The increased data transfer rate enables each 24 GB memory module to offer peak bandwidth of 1.2 TB/sec per stack.

Micron says that 24 GB HBM3 Gen2 stacks will enable 4096-bit HBM3 memory subsystems with a bandwidth of 4.8 TB/s and 6144-bit HBM3 memory subsystems with a bandwidth of 7.2 TB/s. To put those numbers into context, Nvidia’s H100 SXM features a peak memory bandwidth of 3.35 TB/s.
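The underlying arithmetic is easy to verify: each HBM3 stack has a 1024-bit interface, so peak bandwidth is simply the per-pin data rate multiplied by the bus width. A minimal sketch (the function is ours, not an official formula):

```python
# Peak bandwidth = data rate (GT/s) x bus width (bits) / 8 bits per byte.
def stack_bandwidth_GBs(data_rate_GTs: float, bus_width_bits: int = 1024) -> float:
    return data_rate_GTs * bus_width_bits / 8

per_stack = stack_bandwidth_GBs(9.2)  # 1177.6 GB/s, which Micron rounds to 1.2 TB/s
print(per_stack)
# Micron's multi-stack figures round each stack up to 1.2 TB/s first:
print(4 * 1.2)  # four stacks (4096-bit) -> 4.8 TB/s
print(6 * 1.2)  # six stacks (6144-bit)  -> 7.2 TB/s
```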

HBM Memory Comparison

|                              | "HBM3 Gen2" | HBM3       | HBM2E      | HBM2     |
|------------------------------|-------------|------------|------------|----------|
| Max Capacity                 | 24 GB       | 24 GB      | 16 GB      | 8 GB     |
| Max Bandwidth Per Pin        | 9.2 GT/s    | 6.4 GT/s   | 3.6 GT/s   | 2.0 GT/s |
| Number of DRAM ICs per Stack | 8           | 12         | 8          | 8        |
| Effective Bus Width          | 1024-bit    | 1024-bit   | 1024-bit   | 1024-bit |
| Voltage                      | 1.1 V?      | 1.1 V      | 1.2 V      | 1.2 V    |
| Bandwidth per Stack          | 1.2 TB/s    | 819.2 GB/s | 460.8 GB/s | 256 GB/s |

High frequencies aside, Micron’s HBM3 Gen2 stacks are otherwise drop-in compatible with current HBM3-compliant applications (e.g., compute GPUs, CPUs, FPGAs, accelerators). So device manufacturers will finally have the option of tapping Micron as an HBM3 memory supplier as well, pending the usual qualification checks.

Under the hood, Micron’s goal of jumping into an immediate performance leadership position within the HBM3 market means that it needs to one-up its competition at a technical level. Among other changes and innovations to accomplish that, the company doubled the number of through-silicon vias (TSVs) compared to shipping HBM3 products. In addition, Micron shrunk the distance between DRAM devices in its HBM3 Gen2 stacks. Together, these two packaging changes reduce the thermal impedance of the memory modules, making them easier to cool. And the increased TSV count brings other advantages, too.

Given that Micron uses 24 Gb memory devices (rather than 16 Gb memory devices) for its HBM3 Gen2 stacks, it is inevitable that it had to increase the number of TSVs to ensure proper connectivity. Yet doubling the number of TSVs in an HBM stack can also improve overall bandwidth, latency, power efficiency, and scalability by facilitating more parallel data transfers. It also improves reliability by mitigating the impact of individual TSV failures through data rerouting. However, these benefits come with challenges such as increased manufacturing complexity and the potential for higher defect rates (already an ongoing concern for HBM), which can translate to higher costs.
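To illustrate the rerouting idea in the abstract: TSV redundancy schemes generally provision spare vias so that logical data lanes can be remapped around vias that fail. Below is a conceptual sketch, not Micron’s actual design; the lane and spare counts are made up for the example.

```python
# Conceptual TSV redundancy: assign each logical data lane to the next
# working physical TSV, skipping any vias flagged as failed.
def build_lane_map(num_lanes: int, num_tsvs: int, failed: set) -> list:
    working = [t for t in range(num_tsvs) if t not in failed]
    if len(working) < num_lanes:
        raise RuntimeError("not enough working TSVs to carry all lanes")
    return working[:num_lanes]

# Example: 1024 data lanes carried over 1088 TSVs (64 hypothetical spares);
# two failed vias are simply skipped when the lanes are mapped.
lane_map = build_lane_map(num_lanes=1024, num_tsvs=1088, failed={17, 905})
assert 17 not in lane_map and 905 not in lane_map
```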

Just like other HBM3 memory modules, Micron’s HBM3 Gen2 stacks feature Reed-Solomon on-die ECC, soft repair of memory cells, and hard repair of memory cells, as well as auto error check and scrub (ECS) support.
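For the curious, Reed-Solomon codes are the same family used in everything from CDs to QR codes. The sketch below demonstrates the general detect-and-correct behavior using the third-party reedsolo package; the codeword sizes here are arbitrary and bear no relation to Micron’s on-die implementation.

```python
# Reed-Solomon ECC demo with `reedsolo` (pip install reedsolo); illustration
# only, not Micron's on-die code word layout.
from reedsolo import RSCodec

rsc = RSCodec(4)                     # 4 parity bytes -> corrects up to 2 byte errors
codeword = rsc.encode(b"HBM3 data")  # payload plus Reed-Solomon parity

corrupted = bytearray(codeword)
corrupted[1] ^= 0xFF                 # flip one byte, as a DRAM bit error might
decoded, _, errata = rsc.decode(bytes(corrupted))
assert decoded == b"HBM3 data"       # the error is detected and corrected
print(f"corrected {len(errata)} byte error(s)")
```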

Micron says it will mass produce its 24 GB HBM3 Gen2 modules starting in Q1 2024, and will start sampling its 12-Hi 36 GB HBM3 stacks around this time as well. The latter will enter high-volume production in the second half of 2024.

To date, JEDEC has yet to approve a post-6.4 GT/second HBM3 standard. So Micron’s HBM3 Gen2 memory, as well as SK hynix’s rival HBM3E memory, are both off-roadmap standards for the moment. Given the interest in higher-bandwidth HBM memory and the need for standardization, we’d be surprised if the group didn’t eventually release an updated version of the HBM3 standard that Micron’s devices will conform to. Though as the group tends to shy away from naming battles (“HBM2E” was never a canonical product name for faster HBM2, despite its wide use), it’s anyone’s guess how this latest kerfuffle over naming will play out.

Beyond their forthcoming HBM3 Gen2 products, Micron is also making it known that the company is already working on HBMNext (HBM4?) memory. That iteration of HBM will provide 1.5 TB/s to 2+ TB/s of bandwidth per stack, with capacities ranging from 36 GB to 64 GB.

