SemiconductorX > Fab & Assembly > Fab Facilities > Wafer Fabs > DRAM
DRAM Fabs
DRAM (Dynamic Random Access Memory) fabrication is one of the most commercially concentrated archetypes in semiconductors. The global industry consolidated through three decades of cyclical shakeouts, from approximately twenty producers in the 1980s to three incumbent operators today: Samsung Memory (dominant by volume), SK hynix (second, HBM leader), and Micron Technology (third, US-headquartered). ChangXin Memory Technologies (CXMT) is the scaling Chinese entrant, having moved from DDR4 production (begun in 2019) to DDR5 and LPDDR5 at increasing scale with substantial state industrial policy support. A handful of specialty Taiwanese producers (Nanya, Winbond, PSMC legacy lines) serve specific market segments at mature nodes but do not compete at leading-edge DRAM within the three-plus-one global structure.
The DRAM industry has been restructured over the past three years by the emergence of HBM (High-Bandwidth Memory) as the high-value growth segment within DRAM. HBM stacks 8, 12, or 16 DRAM dies on a base die to deliver the memory bandwidth that AI accelerators require, and HBM demand has been doubling annually from 2023 onward as the AI accelerator market has scaled. The result is that DRAM revenue growth and capacity allocation are being shaped primarily by HBM dynamics rather than by commodity DRAM cyclicality, creating a structural shift in an industry that had been characterized by commodity memory economics for decades.
What Makes DRAM Fabrication Distinctive
DRAM fabrication uses a process flow that is recognizably related to logic manufacturing but uses distinctive architectures and specialty process steps that logic fabs do not require. The DRAM cell consists of one transistor and one capacitor (1T1C) that together store a single bit. The transistor uses a buried wordline (BWL) architecture — the gate is recessed into the silicon substrate rather than sitting above it as in planar or FinFET logic — to maximize cell density at the smallest possible footprint. The capacitor uses a capacitor-over-bitline (COB) architecture, with the storage capacitor formed above the bitline wiring in a tall vertical structure with aspect ratios now exceeding 50:1 at leading DRAM nodes. These are specialty process flows: the buried wordline requires precision etch and fill of deep trenches; the capacitor requires high-aspect-ratio etch plus specialty dielectric deposition in the trenches.
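The 50:1 figure implies structures far taller than they are wide. A back-of-envelope sketch (the 50:1 aspect ratio is from the text above; the ~30 nm opening is an assumed, illustrative dimension, not a disclosed process parameter):

```python
# Back-of-envelope capacitor geometry. The 50:1 aspect ratio comes from
# the text; the 30 nm hole opening is an assumed illustrative value.
def capacitor_depth_nm(opening_nm: float, aspect_ratio: float = 50.0) -> float:
    """Etch depth implied by a hole opening and its aspect ratio."""
    return opening_nm * aspect_ratio

depth = capacitor_depth_nm(30.0)
print(f"A 50:1 capacitor at a 30 nm opening is ~{depth:.0f} nm (~{depth / 1000:.1f} um) deep")
```

Holes this deep and narrow are why high-aspect-ratio etch and conformal dielectric deposition are the signature process challenges of the DRAM cell array.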
The DRAM periphery — the address decoders, sense amplifiers, I/O interface, and control logic surrounding the cell array — uses logic-like processes on the same wafer. A modern DRAM wafer therefore has two distinct process zones: the cell array with specialty memory processes, and the periphery with logic-like processes. This dual-process-zone structure makes DRAM fabs structurally different from pure logic fabs and contributes to the specialty operator landscape — logic fab operators do not casually enter DRAM production, and DRAM operators do not casually enter leading-edge logic, despite using some shared equipment categories.
DRAM Node Nomenclature
DRAM node naming uses generation-based nomenclature distinct from logic naming, and the generations across operators do not correspond precisely. Understanding the node landscape requires separating the operator-specific generation names from the actual capability level.
| Generation | Approximate Equivalent Feature | Industry Status |
|---|---|---|
| D1x / 1x nm | ~18–19nm class | Legacy DRAM production; phased out at leading operators; still in production at specialty and Chinese operations |
| D1y / 1y nm | ~16–17nm class | Mature DRAM node; last generation before EUV adoption; workhorse for much DDR4 and early DDR5 |
| D1z / 1z nm | ~14nm class | First EUV-enabled DRAM generation at Samsung (2020); SK hynix followed; DDR5 volume production |
| D1a / 1α (alpha) | ~13nm class | Current volume-production leading-edge at the three Western operators; DDR5, LPDDR5, HBM3 generation |
| D1b / 1β (beta) | ~12nm class | Ramping at leading operators; HBM3e and HBM4 base generation; DDR5 density extensions |
| D1c / 1γ (gamma) and beyond | ~10–11nm class projected | Development to ramp timeline 2025–2027; Micron first to sample 1γ; SK hynix and Samsung parallel development |
DRAM scaling has slowed compared to its historical pace. Each new DRAM generation delivers approximately 15–25% bit density improvement — less than the doubling per generation that characterized the industry through the 2000s. The scaling challenges are physical: DRAM cell capacitance must remain large enough to reliably store charge against leakage, which limits how small the cell capacitor can become. Density improvements increasingly come from capacitor engineering (taller capacitors, new dielectric materials) rather than transistor shrink alone. Current buried-wordline DRAM uses a 6F² cell (where F is the minimum feature dimension); the 4F² cell, which requires a vertical-channel transistor, defines the density floor that DRAM approaches asymptotically.
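The gap between modern and historical scaling compounds sharply across generations. A minimal sketch, using the ~20% midpoint of the 15–25% range cited above and an arbitrary five-generation horizon (both choices are illustrative, not forecasts):

```python
# Compare cumulative bit-density gain at ~20% per generation (modern
# DRAM pace, midpoint of the 15-25% range) against the historical
# doubling-per-generation pace. Generation count is illustrative.
def cumulative_density_gain(per_gen_gain: float, generations: int) -> float:
    """Multiplicative density improvement after n generations."""
    return (1 + per_gen_gain) ** generations

modern = cumulative_density_gain(0.20, 5)      # ~20% per generation
historical = cumulative_density_gain(1.00, 5)  # doubling per generation

print(f"5 generations at +20%/gen: {modern:.1f}x density")
print(f"5 generations at doubling: {historical:.0f}x density")
```

Five generations at the modern pace yield roughly a 2.5x density gain versus 32x at the historical pace — the arithmetic behind why capacitor engineering, not shrink, now carries much of the density roadmap.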
EUV in DRAM
EUV lithography reached DRAM production at Samsung Memory's D1z (1z) generation in 2020, making Samsung the first DRAM operator to use EUV in volume production. SK hynix followed with EUV adoption at comparable generations. Micron has been the last of the three incumbent operators to adopt EUV, initially scaling 1α and 1β on DUV multi-patterning before introducing EUV at the 1γ generation. CXMT has not publicly disclosed EUV adoption at production scale and would face substantial barriers to acquiring EUV systems given US and Dutch export restrictions.
EUV in DRAM uses fewer exposure layers than EUV in leading-edge logic — typically 2–5 EUV layers per DRAM wafer compared to 20 or more at advanced logic — but is becoming the defining lithographic technology for the leading DRAM generations. The layers using EUV are typically the most aggressive-pitch patterns in the cell array and select periphery interconnects where multi-patterning costs exceed EUV costs. Because each wafer needs fewer EUV passes, a single scanner supports more DRAM wafer starts than logic wafer starts — which is why a DRAM fab can operate far fewer EUV scanners than a leading-edge logic fab of comparable wafer capacity.
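The scanner-productivity point reduces to simple division. A sketch using the layer counts cited above and an assumed scanner throughput (the 80,000 exposures/month figure is a hypothetical round number, not an ASML or operator specification):

```python
# Fewer EUV layers per wafer -> one scanner covers more wafer starts.
# SCANNER_EXPOSURES_PER_MONTH is an assumed illustrative figure.
SCANNER_EXPOSURES_PER_MONTH = 80_000

def wafer_starts_per_scanner(euv_layers: int) -> float:
    """Wafer starts/month one scanner can support at a given EUV layer count."""
    return SCANNER_EXPOSURES_PER_MONTH / euv_layers

dram = wafer_starts_per_scanner(4)    # mid-range of the 2-5 layers cited
logic = wafer_starts_per_scanner(20)  # leading-edge logic, 20+ layers

print(f"DRAM:  {dram:,.0f} wafer starts/month per scanner")
print(f"Logic: {logic:,.0f} wafer starts/month per scanner")
```

Under these assumptions one scanner covers roughly five times as many DRAM wafer starts as logic wafer starts, whatever the absolute throughput number turns out to be.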
HBM — The High-Value Growth Segment
HBM (High-Bandwidth Memory) has become the structural growth segment within DRAM and the defining industry dynamic since 2023. HBM stacks multiple DRAM dies on top of a base die, interconnected vertically via TSVs (through-silicon vias) and micro-bumps (transitioning to hybrid bonding at HBM4), delivering aggregate bandwidth from hundreds of gigabytes per second to more than a terabyte per second per stack at current generations — far beyond what conventional DRAM memory interfaces can provide. This bandwidth is essential for AI accelerators, where memory bandwidth rather than compute is often the binding constraint on model training and inference throughput.
| HBM Generation | Stack Architecture | Industry Status |
|---|---|---|
| HBM2E | 8-high stacks; 410 GB/s per stack; micro-bump interconnect | Legacy AI accelerator memory; NVIDIA A100 generation; production continuing at reduced volumes as HBM3e dominates |
| HBM3 | 8-high or 12-high stacks; 819 GB/s per stack; micro-bump interconnect | NVIDIA H100 generation; SK hynix established a dominant supply position with NVIDIA; HBM3 volume peaked in 2024 |
| HBM3e | 8-high or 12-high stacks; 1.2 TB/s per stack; micro-bump interconnect; qualified for NVIDIA H200 / B200 generation | Current volume-production HBM; SK hynix leader; Samsung qualification issues at NVIDIA; Micron growing position |
| HBM4 | 12-high or 16-high stacks; 1.5+ TB/s per stack; transition to hybrid bonding from micro-bumps; TSMC-produced base die option | Ramping 2025–2026; generational architecture inflection (hybrid bonding, logic base die); defines AI accelerator bandwidth for NVIDIA Rubin, AMD MI400, and hyperscaler custom programs |
| HBM4E and beyond | Future generations extending bandwidth and stack height | Development toward 2027–2028 deployment; specification undergoing finalization at JEDEC, the industry standards body |
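The per-stack figures in the table compound at the accelerator level, since AI packages carry multiple stacks. A sketch using the table's per-stack numbers and an assumed 8-stack package (stack counts vary by accelerator; 8 is illustrative, not a specific product specification):

```python
# Aggregate package bandwidth = per-stack bandwidth x stack count.
# Per-stack TB/s figures are from the generation table; the 8-stack
# package is an assumed illustrative configuration.
PER_STACK_TBPS = {"HBM3": 0.819, "HBM3e": 1.2, "HBM4": 1.5}

def package_bandwidth(generation: str, stacks: int = 8) -> float:
    """Aggregate memory bandwidth (TB/s) for a package with N HBM stacks."""
    return PER_STACK_TBPS[generation] * stacks

for gen in PER_STACK_TBPS:
    print(f"{gen}: {package_bandwidth(gen):.1f} TB/s across 8 stacks")
```

An 8-stack HBM3e package lands near 10 TB/s of aggregate bandwidth under these assumptions — the scale of memory bandwidth that current AI accelerators are built around.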
The HBM supplier split is the critical industry structural fact. SK hynix established a dominant HBM3 and HBM3e supply position with NVIDIA beginning in 2023, reaching approximately two-thirds or more of HBM supply to NVIDIA's flagship AI accelerators through the H100/H200/B200 generations. Samsung Memory has faced qualification challenges with NVIDIA on HBM3 and HBM3e — a well-documented dynamic in which Samsung HBM did not achieve NVIDIA qualification on expected timelines, ceding market position to SK hynix. Micron has scaled HBM3e production with customer qualifications at multiple AI accelerator programs and has emerged as the third credible HBM supplier. HBM4 qualification at each operator is the current industry focus.
HBM pricing is substantially higher than commodity DRAM pricing — HBM3e sells at a multiple of commodity DRAM on a per-gigabyte basis due to assembly complexity, known-good-stack test requirements, and constrained supply. HBM margins have been structurally higher than commodity DRAM margins, which has driven the three incumbent DRAM operators to aggressively reallocate capacity toward HBM production. HBM requires dedicated DRAM wafer output (HBM uses the same DRAM wafer base as DDR5 but with different post-wafer processing), so HBM capacity expansion constrains commodity DRAM wafer availability.
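The capacity trade-off can be sketched with toy numbers. The key assumption below is that an HBM bit consumes more wafer area than a commodity bit (larger die, TSV overhead, stack yield loss); the 3x penalty factor and the 30% diversion share are both illustrative assumptions, not operator data:

```python
# Toy wafer-allocation model. hbm_area_penalty (assumed 3x) is the
# wafer area an HBM bit consumes relative to a commodity DDR5 bit.
def bit_output(total_wafers: float, hbm_share: float,
               hbm_area_penalty: float = 3.0) -> tuple[float, float]:
    """Relative (commodity_bits, hbm_bits) for a given wafer split."""
    commodity_bits = total_wafers * (1 - hbm_share)
    hbm_bits = total_wafers * hbm_share / hbm_area_penalty
    return commodity_bits, hbm_bits

baseline_commodity, _ = bit_output(100, 0.0)
new_commodity, new_hbm = bit_output(100, 0.30)  # divert 30% of starts to HBM

print(f"Commodity bit output: {new_commodity / baseline_commodity:.0%} of baseline")
print(f"HBM bit output (relative units): {new_hbm:.1f}")
```

Diverting 30% of wafer starts cuts commodity bit output to ~70% of baseline while yielding disproportionately few HBM bits — the mechanism by which HBM expansion tightens commodity DRAM supply.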
HBM Base Die — The Bridge to Leading-Edge Logic
The bottom die in an HBM stack — the "base die" — is a logic die that handles I/O interfacing, test logic, power management, and the boundary between the stacked DRAM dies and the external memory interface. Base dies have historically been fabricated on mature logic processes at the memory IDM's own fabs. A significant industry shift is underway: TSMC is producing HBM4 base dies on advanced logic nodes (N5 or below) for memory IDM customers. This approach uses leading-edge logic capacity to provide higher-performance base dies than the memory operator's own mature logic process could deliver.
The HBM4 base die shift creates a structural bridge between the DRAM archetype and the Leading-Edge Logic archetype. It means HBM4 production capacity is no longer bounded only by memory IDM DRAM and HBM assembly capacity — it is also bounded by TSMC leading-edge logic capacity allocated to base die production. Industry observers have tracked HBM4 base die allocations at TSMC closely because they represent another draw on already-constrained leading-edge logic capacity. The bridge also creates a new competitive dimension: memory operators with strong TSMC partnerships for base die production have a production-capability advantage over operators producing base dies on their own mature logic processes.
Operator Landscape
| Operator (HQ) | DRAM Position | Primary Fabs |
|---|---|---|
| Samsung Memory (Suwon, South Korea) | Largest global DRAM operator by volume; ~40–45% market share; first to EUV in DRAM; faced HBM3/HBM3e qualification challenges with NVIDIA; HBM4 focus | Pyeongtaek P1/P2 (DRAM/NAND); Hwaseong (DRAM); Xi'an China (DRAM legacy); Austin Texas (specialty) |
| SK hynix (Icheon, South Korea) | Second-largest global DRAM operator; ~30% market share; dominant HBM3/HBM3e supplier to NVIDIA; HBM leadership is defining competitive position | Icheon M16 (HBM/DRAM, leading-edge); Cheongju M15 (DRAM); Wuxi China (DRAM); Purdue/Indiana (HBM packaging, emerging) |
| Micron Technology (Boise, ID) | Third-largest global DRAM operator; ~20–22% market share; US-headquartered; growing HBM3e position; CHIPS Act Clay NY expansion | Boise ID (DRAM); Taichung Taiwan (DRAM, largest single Micron DRAM facility); Hiroshima Japan (DRAM, inherited from Elpida); Clay NY (planned megafab expansion); Manassas VA (specialty); Singapore (NAND/DRAM) |
| ChangXin Memory Technologies / CXMT (Hefei, China) | Scaling Chinese DRAM entrant; DDR4 production since 2019; DDR5 and LPDDR5 expansion; capability gap at HBM and leading DRAM generations | Hefei Phase 1 (DDR4); Hefei Phase 2 (LPDDR5); Shenzhen (LPDDR expansion); capacity scaling aggressively with state support |
| Nanya Technology (New Taipei, Taiwan) | Taiwan specialty DRAM; mature-node DRAM and specialty applications; does not compete at leading-edge commodity DRAM or HBM | Taipei operations; mature DRAM for specialty customer base |
| Winbond Electronics (Taichung, Taiwan) | Taiwan specialty memory; niche DRAM and flash; automotive and industrial memory specialty | Taichung Taiwan operations; specialty memory customer base |
Geographic Concentration
South Korea hosts approximately 50–55% of global DRAM production — Samsung Memory's Pyeongtaek and Hwaseong megafab campuses plus SK hynix's Icheon and Cheongju operations together constitute the center of gravity of the global DRAM industry. This concentration is structurally comparable to Taiwan's concentration in leading-edge logic: a sustained disruption to Korean DRAM production would affect global memory supply with no short-term substitution path. Korean DRAM concentration extends into HBM specifically — both Samsung HBM and SK hynix HBM are produced almost entirely at Korean facilities, making HBM supply for AI accelerators effectively Korean-concentrated.
The US hosts Micron DRAM operations at Boise with substantial expansion underway — Micron Clay NY (a multi-phase DRAM megafab planned as the largest US semiconductor manufacturing investment) and Micron Boise expansion together represent the most ambitious US memory reshoring program. The Clay NY facility is expected to reach initial production in the late 2020s with full buildout over multiple phases through the 2030s. Japan hosts Micron Hiroshima (inherited from the Elpida acquisition) and Kioxia memory operations (NAND-focused but adjacent). Taiwan hosts Micron Taichung as Micron's largest single DRAM facility globally.
China hosts CXMT's Hefei operations (Phase 1 and Phase 2) plus the Shenzhen LPDDR expansion, representing a growing Chinese domestic DRAM position. The capability gap between CXMT and the three incumbent operators has narrowed meaningfully at mature DDR nodes but remains substantial at leading-edge DDR5 scaling, LPDDR5X, and particularly at HBM production. HBM represents the largest capability gap — CXMT does not produce HBM at volume, and Chinese HBM capability development is subject to export control constraints on advanced DRAM equipment and on the advanced packaging equipment (hybrid bonders, advanced TSV tools) required for HBM assembly.
The CXMT Capability Trajectory
CXMT's scaling has been one of the most closely watched industry stories over the past three years. The company progressed from DDR4 production in 2019 through DDR5 qualification to LPDDR5 production, closing a significant portion of the technology gap between Chinese DRAM capability and the three incumbent operators. CXMT's production volumes have grown substantially, and the company serves both Chinese domestic DRAM demand (smartphone OEMs, the Chinese server market, consumer electronics) and emerging export positions.
Two structural gaps remain between CXMT and the leading incumbent DRAM operators. First, leading-edge DRAM generation capability: CXMT production is concentrated at generations behind the D1a / D1b leading edge at Samsung, SK hynix, and Micron. Closing this gap requires access to EUV lithography (subject to US/EU export restrictions for Chinese leading-edge DRAM), advanced process equipment, and substantial process development time. Second, HBM capability: CXMT does not produce HBM at volume, and Chinese HBM development is constrained by equipment access barriers for the specialty process flows (advanced TSVs, hybrid bonding, known-good-stack test) that HBM assembly requires.
The net effect is that CXMT is scaling commodity DRAM output aggressively while the leading incumbent operators concentrate on HBM and leading-edge DRAM. This segmentation is uncomfortable for the incumbents on commodity DRAM pricing (where CXMT's growing output creates downward pressure) but advantageous at leading-edge and HBM, where CXMT has not closed the capability gap. How Chinese semiconductor policy, export control evolution, and CXMT's internal investment trajectory resolve over the next 3–5 years will determine whether the Chinese DRAM position remains commodity-focused or extends to leading-edge and HBM.
DRAM Cyclicality and Capacity Decisions
The DRAM market has historically been the most cyclical segment in semiconductors, characterized by multi-year boom-and-bust cycles driven by the lag between capacity investment decisions and production output. Memory fabs take 2–3 years to construct plus 1–2 years to ramp to volume production, meaning capacity investments made during boom periods often come online during subsequent downturns. The 2022–2023 period was a severe DRAM downturn with substantial industry losses; 2024 began a recovery driven primarily by HBM demand; 2025–2026 has been characterized by HBM-led strength coexisting with mixed commodity DRAM conditions.
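The investment lag described above reduces to simple addition, but it is the core of DRAM cyclicality. A toy sketch using durations from the ranges cited (the specific decision year below is hypothetical):

```python
# Toy timing arithmetic: a capacity decision reaches volume production
# after construction plus ramp. Durations are drawn from the 2-3 year
# build and 1-2 year ramp ranges in the text; 2021 is a hypothetical
# boom-era decision year.
BUILD_YEARS = 3  # construction, upper end of the 2-3 year range
RAMP_YEARS = 1   # ramp to volume, lower end of the 1-2 year range

def capacity_online_year(decision_year: int) -> int:
    """Year a greenfield capacity decision reaches volume production."""
    return decision_year + BUILD_YEARS + RAMP_YEARS

print(capacity_online_year(2021))  # a 2021 decision comes online in 2025
```

A four-year decision-to-output lag means supply responds to the price environment of a different cycle phase than the one it lands in — the mechanism behind memory's boom-and-bust pattern.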
The HBM-led structural shift is changing DRAM cyclicality. HBM demand is tied to AI accelerator production and hyperscaler capital investment cycles, which have different cyclical dynamics than traditional PC and smartphone DRAM demand. HBM pricing is higher and less volatile than commodity DRAM pricing. The three incumbent DRAM operators have substantial HBM exposure that partially insulates revenue from commodity DRAM cycles. Whether this structural shift fundamentally reduces DRAM industry cyclicality or simply creates a new HBM-driven cycle remains to be seen — but industry economics through 2026 are materially different from pre-AI-boom DRAM economics.
Cross-Network Connection to AI Compute
DRAM — specifically HBM — is the primary SX connection point to the datacenter intelligence (DX) network via AI accelerator memory. Every NVIDIA, AMD, Google, Amazon, Microsoft, and hyperscaler custom AI accelerator shipped to datacenters incorporates HBM stacks from SK hynix, Samsung Memory, or Micron. HBM capacity growth and pricing dynamics are therefore directly relevant to AI datacenter capital planning, AI model training throughput forecasts, and the broader scaling of AI compute infrastructure. See HBM for the chip-type view and AI Accelerators for the accelerator customer view.
DRAM also connects to other pillars of the SiliconPlans network. Automotive DRAM (for infotainment, ADAS memory, AV compute) is a growing segment connecting to ElectronsX vehicle coverage. Mobile DRAM (LPDDR5/LPDDR5X) serves the smartphone market. Graphics DRAM (GDDR6/GDDR7) serves gaming and workstation GPUs. Each has its own market dynamics, but HBM has become the largest and most strategically important DRAM segment.
Fabs in This Archetype
Notable DRAM fabs include: Samsung Pyeongtaek P1/P2 (DRAM/NAND); Samsung Hwaseong (DRAM); Samsung Xi'an (DRAM legacy); SK hynix Icheon M16 (HBM/DRAM, leading-edge); SK hynix Cheongju M15 (DRAM); SK hynix Wuxi (DRAM China); SK hynix Purdue/Indiana (HBM packaging); Micron Boise (DRAM); Micron Taichung Taiwan (DRAM, largest single Micron DRAM facility); Micron Hiroshima Japan (DRAM, ex-Elpida); Micron Clay NY (planned megafab); Micron Manassas VA (specialty); Micron Singapore (NAND/DRAM); CXMT Hefei Phase 1/Phase 2; CXMT Shenzhen (LPDDR expansion); Nanya Taiwan operations; Winbond Taichung. See Fab Facilities for the full inventory.
Related Coverage
Parent: Wafer Fabs
Peer archetype pages: Leading-Edge Logic · Mature Logic · 3D NAND · SiC Power · GaN Power & RF · Analog & Mixed-Signal · CMOS Image Sensor · MEMS · III-V Compound Semiconductor · Silicon Photonics · Rad-Hard & Rad-Tolerant
Related process and equipment: Process Nodes · Wafer Fab Equipment · Lithography (EUV)
Advanced packaging partners: IDM Captive Packaging (HBM stack assembly) · Foundry Captive Packaging (HBM integration into CoWoS)