


Memory Modules



Memory modules deliver capacity and bandwidth to processors. A memory die alone — a DRAM chip, a NAND flash chip — cannot provide the aggregate capacity or bandwidth that modern CPUs and AI accelerators require. Modules assemble multiple dies into a unified package or PCB form factor that matches the host processor's memory controller. The module level is where the memory product is defined from the system perspective: an HBM3 stack, a DDR5 DIMM, an LPDDR5X package, a 3D NAND package. These modules then plug into subsystem boards alongside the compute they serve.

Memory module supply concentrates at three companies for DRAM — Samsung, SK hynix, and Micron — which between them produce essentially all of the world's DRAM. NAND flash supply is slightly wider, with Samsung, SK hynix (including the acquired Intel NAND business), Kioxia, Western Digital (Kioxia partnership), and Micron. Module assembly for DRAM DIMMs and SSDs runs through a broader base of assemblers and memory system integrators — Kingston, ADATA, Corsair, and many others — but the die supply that feeds those assemblers traces back to the DRAM and NAND concentration. HBM is the most concentrated portion of the memory module landscape and the one most under supply pressure in the current AI buildout.


Module Types

Memory modules split by the type of memory they deliver. Each category has its own form factor, performance envelope, and application segment.

| Module Type | Construction | Primary Applications |
| --- | --- | --- |
| High-Bandwidth Memory (HBM) | 8 to 16 stacked DRAM dies connected by through-silicon vias (TSVs), sitting on a logic base die; integrated with GPUs or AI accelerators on a silicon interposer | AI training and inference accelerators (NVIDIA H100/H200/Blackwell, AMD MI300/MI350, Intel Gaudi); HPC GPUs |
| DDR5 DIMMs (RDIMM / LRDIMM) | Discrete DRAM packages mounted on a printed circuit board in a DIMM form factor; buffered variants for higher capacity | Server and workstation main memory; enterprise and datacenter |
| DDR5 UDIMMs / SODIMMs | Unbuffered DRAM packages on a DIMM PCB (desktop) or compact SODIMM (laptop) | Consumer PCs, laptops, workstations |
| LPDDR packages (LPDDR5 / LPDDR5X) | Stacked low-power DRAM in a compact package, often package-on-package (PoP) with the host SoC | Mobile SoCs, automotive ADAS compute, edge AI devices |
| GDDR packages (GDDR6 / GDDR6X / GDDR7) | High-speed DRAM packages soldered directly to GPU PCBs | Consumer and workstation GPUs; game consoles |
| CXL memory modules | DDR5 DRAM on a CXL-native module (E3.S, EDSFF); speaks the CXL protocol rather than native DDR | Memory pooling and expansion in the datacenter; emerging deployment |
| 3D NAND packages | Stacked 3D NAND dies (currently 200-to-400+ layer) in multi-die packages | Solid-state drives; datacenter storage modules; embedded storage (eMMC, UFS) |
| SSD modules | NAND packages + controller + DRAM buffer on SSD form factors (M.2, U.2, E1.S, E3.S) | Datacenter and consumer storage; enterprise NVMe deployments |

HBM

High-bandwidth memory is the most strategic memory module in current semiconductor manufacturing because it is the rate-limiting component for AI accelerators. Every NVIDIA H100 and H200, every AMD MI300 and MI350, every major AI accelerator ships with HBM. An HBM3 stack delivers roughly 800 GB/s of memory bandwidth, and HBM3E pushes past 1.2 TB/s per stack; the stacks integrate directly with the compute die on a silicon interposer through CoWoS packaging. The combination of HBM supply and CoWoS capacity is the primary supply bottleneck on the AI buildout through 2026 and 2027.
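The per-stack bandwidth figures follow directly from interface width and per-pin data rate. A minimal sketch, using representative per-generation pin rates rather than any specific product's spec (the HBM4 pin rate here is an assumption):

```python
# Per-stack HBM bandwidth = interface width (bits) x per-pin data rate (Gbit/s) / 8.
# Pin rates are representative per-generation figures, not a specific product spec.
def hbm_stack_bandwidth_gbps(bus_width_bits: int, pin_rate_gbps: float) -> float:
    """Peak bandwidth of one HBM stack, in GB/s."""
    return bus_width_bits * pin_rate_gbps / 8

generations = {
    "HBM3":  (1024, 6.4),   # 1024-bit interface, 6.4 Gb/s per pin
    "HBM3E": (1024, 9.2),   # vendor bins run roughly 9.2-9.8 Gb/s per pin
    "HBM4":  (2048, 8.0),   # interface doubles to 2048 bits (pin rate assumed)
}

for name, (width, rate) in generations.items():
    print(f"{name}: {hbm_stack_bandwidth_gbps(width, rate):.0f} GB/s per stack")
```

HBM4's jump comes mainly from doubling the interface width, which is why it lifts bandwidth even at a similar per-pin rate.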

HBM supply concentrates at three companies. SK hynix leads production and has the largest share of NVIDIA's HBM supply, particularly for HBM3 and HBM3E. Samsung has qualified HBM capacity but has faced repeated qualification delays at NVIDIA for the latest generations. Micron is the third supplier, producing HBM from US-based manufacturing and supplying NVIDIA and others. Among the three, SK hynix's strong position has been reinforced by the tight coupling between its HBM roadmap and NVIDIA's GPU roadmap; multi-year supply agreements effectively lock a large fraction of SK hynix's advanced HBM capacity to NVIDIA.

HBM module assembly combines multiple process stages: the base logic die is manufactured in its own process, DRAM dies are manufactured in the DRAM fab, dies are stacked using TSVs (and increasingly hybrid bonding for HBM4 and beyond), and the stack is integrated with the host processor die on a silicon interposer at the foundry (TSMC runs most HBM-plus-compute integration through CoWoS). This is why HBM supply is not simply a memory manufacturer capacity question — it is a question about the full CoWoS-plus-HBM pipeline, where any of the stages can be the bottleneck. See HBM for the device-level view and CoWoS for the packaging integration.
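The coupled-pipeline point can be made concrete with a toy model: when stages are serially dependent, deliverable output is simply the minimum of the per-stage capacities. All stage names and numbers below are illustrative placeholders, not actual industry capacity figures:

```python
# Toy model of the HBM-plus-CoWoS pipeline: output is gated by the slowest stage.
# Stage capacities are illustrative placeholders, not real industry figures.
pipeline = {
    "DRAM wafer starts (stack-equivalents/month)": 120_000,
    "TSV stacking and test":                        95_000,
    "CoWoS interposer integration":                 80_000,
    "Final package test":                          110_000,
}

# The binding stage is the one with the smallest capacity.
bottleneck = min(pipeline, key=pipeline.get)
output = pipeline[bottleneck]
print(f"Bottleneck stage: {bottleneck} -> {output:,} units/month")

# Expanding any non-bottleneck stage changes nothing; only the minimum matters.
```

This is why adding DRAM fab capacity alone does not relieve HBM supply if CoWoS integration is the smallest term.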


DRAM DIMMs

DDR5 DIMMs are the primary memory module for servers, workstations, and PCs. A DIMM is a printed circuit board carrying multiple DRAM packages, plus supporting components (a registering clock driver for RDIMMs, additional data buffers for LRDIMMs) that let multiple ranks and DIMMs share a single memory channel without signal integrity problems. DRAM packages themselves are assembled by the DRAM manufacturers; final DIMM assembly runs through either the DRAM makers' captive DIMM lines or a broader base of memory module specialists.

The DDR5 generation has seen substantial complexity growth at the module level. Per-DIMM speeds have risen to DDR5-6400 and beyond, requiring better signal integrity engineering on the DIMM PCB. Power management has moved onto the DIMM itself through integrated power management ICs (PMICs) — a change from prior generations where power was delivered by the motherboard. These PMICs have themselves been a supply constraint; early DDR5 PMIC shortages delayed DDR5 ramp in 2021-2022. CXL memory modules are an emerging alternative form factor that replaces the native DDR interface with the CXL protocol, enabling capacity expansion beyond what a processor's native memory controller can address.
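The signal-integrity pressure comes from raw channel bandwidth, which is straightforward arithmetic: transfer rate times bus width. A quick sketch, assuming the standard 64 data bits per channel (ECC bits excluded):

```python
# Peak per-channel DDR5 bandwidth = transfer rate (MT/s) x bus width (bytes).
# DDR5 splits each DIMM into two 32-bit subchannels; 64 data bits total, ECC excluded.
def ddr5_channel_bandwidth_gbps(transfer_rate_mts: int, bus_bytes: int = 8) -> float:
    """Peak bandwidth of one DDR5 channel, in GB/s."""
    return transfer_rate_mts * bus_bytes / 1000

for speed in (4800, 5600, 6400):
    print(f"DDR5-{speed}: {ddr5_channel_bandwidth_gbps(speed):.1f} GB/s per channel")
```

At DDR5-6400 that is 51.2 GB/s per channel, which is the traffic the DIMM PCB and buffers have to carry cleanly.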


NAND / SSD Modules

3D NAND packages assemble stacked NAND dies into multi-die packages, which then combine with a controller and a DRAM buffer on an SSD form factor. SSD form factor diversity has expanded substantially over the past five years: M.2 2280 for consumer and client systems, U.2 and U.3 for traditional datacenter drives, and the EDSFF family (E1.S, E1.L, E3.S, E3.L) for newer datacenter deployments optimized for density and cooling. Enterprise SSDs routinely pack over 30 TB of NAND per module, with the largest announced modules exceeding 100 TB.
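Module capacity is roughly the product of package count, dies per package, and per-die density. A back-of-envelope sketch with an illustrative configuration, not any specific product's bill of materials:

```python
# Raw SSD capacity = packages x dies per package x per-die density.
# The configuration below is illustrative, not a real product's BOM.
def raw_capacity_tb(packages: int, dies_per_package: int, die_tbit: float) -> float:
    """Raw NAND capacity in TB (8 Tbit per TB; ignores over-provisioning/formatting)."""
    return packages * dies_per_package * die_tbit / 8

# e.g. 16 packages, each a 16-high stack of 1 Tbit 3D NAND dies:
print(f"{raw_capacity_tb(16, 16, 1.0):.0f} TB raw")  # -> 32 TB raw
```

Usable capacity lands below the raw figure once over-provisioning and formatting overhead are subtracted.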

NAND supply is broader than DRAM but still concentrated. Samsung and SK hynix (including the acquired Solidigm / Intel NAND business) lead; Kioxia and its manufacturing partner Western Digital are the combined second-largest position; Micron is the third major supplier. The NAND industry has gone through a long period of oversupply and pricing pressure that intensified through 2022-2023 before tightening substantially through 2024-2025 as demand from AI storage grew. Module-level SSD assembly runs at the NAND makers (captive SSD lines), at dedicated SSD suppliers (Solidigm, Kioxia, Samsung enterprise SSD lines), and at a broader base of third-party assemblers.


Representative Modules

| Module | Suppliers | Applications |
| --- | --- | --- |
| HBM3 / HBM3E stacks | SK hynix (leader), Samsung, Micron | NVIDIA H100, H200, Blackwell; AMD MI300/MI350; Intel Gaudi |
| HBM4 (ramping) | SK hynix, Samsung, Micron (qualification) | Next-generation AI accelerators from late 2026 |
| DDR5 RDIMMs / LRDIMMs | Samsung, Micron, SK hynix; assembly through OEM and third-party module makers | Server main memory; Intel Xeon, AMD EPYC platforms |
| LPDDR5X packages | Micron, Samsung, SK hynix | Mobile SoCs (Qualcomm Snapdragon, Apple A- and M-series), automotive ADAS, edge AI |
| GDDR6X / GDDR7 | Micron (GDDR6X partnership with NVIDIA); Samsung and SK hynix for GDDR7 | Consumer and workstation GPUs; gaming |
| Enterprise NVMe SSDs | Samsung, Solidigm (SK hynix), Kioxia, Micron, Western Digital | Datacenter storage; enterprise compute platforms |
| Consumer SSDs | Samsung, SK hynix, Kioxia, Western Digital, Micron, plus third-party assemblers | Client laptops, desktops, external storage |

Module-Level Supply Chain Observations

Two observations shape how memory module supply behaves at the system level. First, the module level is not typically where supply chokepoints occur; the chokepoints sit earlier, at the DRAM fab, at the HBM stacking and CoWoS integration line, and at the NAND fab. Module assembly capacity is rarely the binding constraint, so memory supply analysis typically reduces to die-level supply analysis. Second, HBM is the exception: HBM module supply genuinely is tight because integration with compute through CoWoS creates a coupled pipeline in which any stage's capacity limits the whole. HBM3E and HBM4 allocations are managed at the executive level between memory suppliers and AI accelerator customers.

DIMM and SSD module supply elasticity is high; the underlying die supply is where capacity decisions get made. Supply allocation across market segments — enterprise versus consumer, HBM versus standard DRAM, datacenter versus mobile — is a more interesting strategic question than assembly capacity. The three big DRAM makers allocate wafer starts across HBM, DDR5, LPDDR, and GDDR based on pricing and long-term contracts. Through 2024-2025, the industry has allocated increasing wafer share to HBM at the expense of standard DRAM, tightening DDR5 supply and lifting DRAM average prices broadly.
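The allocation tradeoff can be sketched with a toy model. HBM is widely reported to consume roughly three times the wafer capacity per bit of standard DRAM (TSV area overhead, stacking yield), so shifting wafer starts toward HBM removes standard-DRAM bits faster than one-for-one. The trade ratio and all other numbers below are illustrative:

```python
# Toy wafer-allocation model: moving starts to HBM cuts standard-DRAM bit output
# by more than the wafers moved, because HBM yields fewer bits per wafer.
# The ~3:1 trade ratio is a commonly cited rough figure; other numbers are illustrative.
def bit_output(total_wafers: float, hbm_share: float,
               bits_per_wafer_std: float = 1.0, hbm_trade_ratio: float = 3.0):
    """Return (hbm_bits, std_bits) in arbitrary units for a given HBM wafer share."""
    hbm_wafers = total_wafers * hbm_share
    std_wafers = total_wafers - hbm_wafers
    return (hbm_wafers * bits_per_wafer_std / hbm_trade_ratio,
            std_wafers * bits_per_wafer_std)

for share in (0.10, 0.20, 0.30):
    hbm, std = bit_output(100, share)
    print(f"HBM share {share:.0%}: HBM bits {hbm:.1f}, standard DRAM bits {std:.1f}")
```

The model shows why rising HBM share tightens DDR5 supply even with flat total wafer starts: total bit output falls as share shifts toward the lower-bits-per-wafer product.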


Related Coverage

Parent: Module Integration

Sibling modules: Multi-Chip Modules (MCMs) · CPU/GPU Boards

Memory device types: Memory & Storage · HBM

Enabling packaging: CoWoS · 3D IC · Advanced Interconnects (hybrid bonding for HBM4)