DRAM Memory Chips
Dynamic Random Access Memory (DRAM) is the working memory of every computing system — servers, smartphones, AI accelerators, automotive ADAS platforms, and robotics compute nodes all depend on it. DRAM holds the data and instructions a processor is actively using; its bandwidth and capacity set system performance ceilings more directly than any component other than the processor itself. The market is a three-company oligopoly: Samsung, SK Hynix, and Micron together control roughly 95% of global output, and all three are vertically integrated IDMs — there is no foundry model in DRAM. A process generation delay at any one of them, or a capacity reallocation decision driven by HBM demand, propagates into server, smartphone, and automotive supply within months.
DRAM Chip Families — Products, Specs & Sector Deployment
| Family / standard | Flagship products | Peak bandwidth | Voltage | Focus sector deployment | Supply chain notes |
|---|---|---|---|---|---|
| DDR5 | Samsung DDR5-4800/6400; SK Hynix DDR5-5600/6400; Micron DDR5-4800/5600 (RDIMM/LRDIMM for servers; UDIMM for desktop) | ~77–102 GB/s (dual channel, DDR5-4800 to 6400) | 1.1V | Datacenter server main memory (Intel Xeon Sapphire Rapids+, AMD EPYC Genoa+); AI inference server memory; cloud compute node memory | Current leading edge for server and new PC platforms; supply tightening as wafer starts redirect to HBM; DDR6 in early development |
| LPDDR5 / LPDDR5X | Samsung LPDDR5X-8533; SK Hynix LPDDR5X; Micron LPDDR5X (package-on-package for mobile and automotive SoC) | ~17 GB/s per 16-bit channel; ~68 GB/s across a full 64-bit bus (LPDDR5X-8533) | 0.5–1.05V (low-power modes) | ADAS compute platforms (NVIDIA DRIVE AGX, Qualcomm Snapdragon Ride); AV inference SoCs; robotics edge compute; smartphone on-device AI | Dominant mobile standard; automotive LPDDR5X growth driven by ADAS compute density increase; AEC-Q100 automotive qualification creates long design-in cycles |
| GDDR6 / GDDR6X | Micron GDDR6X (NVIDIA GeForce RTX 3090, 4090); Samsung GDDR6 (AMD RX 7900); SK Hynix GDDR6 (workstation and inference GPU frame buffers) | ~500–1000 GB/s (full GPU complement) | 1.35V | Discrete GPU frame buffers (gaming, workstation, some inference-at-scale); GDDR6X on NVIDIA consumer and some inference GPU variants where HBM is not used | AI training GPUs migrating fully to HBM; GDDR6X remains in consumer gaming and mid-range inference; GDDR7 ramping for next-generation gaming GPUs |
| LPDDR4 / LPDDR4X | Samsung LPDDR4X; SK Hynix LPDDR4X; Micron LPDDR4X (broadly produced; AEC-Q100 automotive variants widely qualified) | ~8–34 GB/s | 0.6–1.1V | Legacy ADAS and infotainment platforms; current-generation EV BCM and domain controller memory; IoT and smart infrastructure edge nodes | Automotive LPDDR4X qualifications still dominant for vehicles on current platforms; qualification lock-in extends life well beyond consumer equivalents; transitioning to LPDDR5X on new platforms |
| DDR4 | Samsung DDR4-2666/3200; SK Hynix DDR4; Micron DDR4 (still in volume production for legacy server refresh) | ~43–51 GB/s (dual channel, DDR4-2666 to 3200) | 1.2V | Legacy server installed base; edge inference nodes with older Xeon/EPYC platforms; industrial compute systems | Transitioning to DDR5; large installed base sustains demand; pricing pressure as DDR5 adoption accelerates |
| HBM2E / HBM3 / HBM3E | SK Hynix HBM3E (H200/B200 primary); Samsung HBM3E; Micron HBM3E — see HBM page for full detail | ~460 GB/s per stack (HBM2E) → ~1.2 TB/s per stack (HBM3E) | 1.2V (HBM3) | AI training GPU clusters; HPC accelerators; inference cloud GPU nodes — the gating memory technology for AI infrastructure | Full HBM supply chain coverage on the HBM page |
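The peak-bandwidth figures in the table above reduce to transfer rate times bus width times channel count. A minimal sketch, assuming illustrative bus configurations (the channel widths and effective data rates below are demonstration values, not vendor-guaranteed specifications):

```python
def peak_bandwidth_gbs(transfer_rate_mts, bus_width_bits, channels=1):
    """Peak theoretical bandwidth in GB/s.

    transfer_rate_mts: effective data rate in MT/s (millions of transfers/s)
    bus_width_bits:    data bus width per channel, in bits
    channels:          number of independent channels
    """
    return transfer_rate_mts * (bus_width_bits / 8) * channels / 1000

# Illustrative configurations (assumptions for demonstration):
ddr5 = peak_bandwidth_gbs(4800, 64, channels=2)  # DDR5-4800, dual channel -> 76.8 GB/s
lpddr5x = peak_bandwidth_gbs(8533, 64)           # LPDDR5X-8533, full 64-bit bus -> ~68.3 GB/s
gddr6x = peak_bandwidth_gbs(21000, 384)          # 21 Gbps GDDR6X on a 384-bit GPU bus -> 1008 GB/s
```

The same arithmetic explains why single-channel and dual-channel quotes for the same speed grade differ by exactly 2x, a frequent source of confusion when comparing module datasheets.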
Cell Architecture & Process Node Roadmap
Every DRAM bit is stored in a one-transistor, one-capacitor (1T1C) cell. The transistor controls read/write access; the capacitor holds charge representing a stored value. Scaling DRAM means shrinking this cell while maintaining sufficient capacitance to reliably sense the stored charge — a task that grows harder at each node because capacitance decreases as physical dimensions shrink. The buried wordline (BWL) architecture, now universal at leading nodes, buries the access transistor beneath the cell surface to reduce adjacent-cell interference and enable tighter pitch.
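The scaling difficulty has a simple quantitative core: a read is a charge-sharing event between the small cell capacitor and the much larger bit-line capacitance, and the voltage swing the sense amplifier must detect shrinks as the cell capacitor shrinks. A minimal sketch of that relation, using illustrative capacitance values (the specific femtofarad figures are assumptions for demonstration, not any vendor's cell parameters):

```python
def sense_margin_mv(c_cell_ff, c_bitline_ff, vdd=1.1):
    """Bit-line voltage swing after charge sharing, in mV.

    The bit line is precharged to VDD/2; the cell stores VDD (logic 1) or 0 V.
    Opening the wordline shares charge: dV = (VDD/2) * Ccell / (Ccell + Cbl).
    """
    return 1000 * (vdd / 2) * c_cell_ff / (c_cell_ff + c_bitline_ff)

# Illustrative values: ~10 fF cell against ~30 fF of bit-line load
print(round(sense_margin_mv(10, 30), 1))  # 137.5 mV

# Shrink the cell capacitor to 6 fF without reducing bit-line load:
print(round(sense_margin_mv(6, 30), 1))   # 91.7 mV, harder to sense reliably
```

This is why each node transition pairs a smaller cell with higher-permittivity capacitor dielectrics or taller capacitor structures: the sense margin, not the transistor, is usually the binding constraint.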
At the 1α node (approximately 13–14nm half-pitch), multiple patterning using DUV ArF immersion reaches its practical density limit. The 1β and 1γ nodes require EUV for critical layers. All three IDMs are now on EUV DRAM roadmaps, creating simultaneous demand on ASML scanner delivery that adds supply pressure at each node transition.
| Node | Half-pitch | Lithography | Key structural change | Samsung | SK Hynix | Micron |
|---|---|---|---|---|---|---|
| 1z | ~16–17nm | DUV ArF immersion; SADP | Buried wordline; high-k capacitor dielectric | Transitioning out | Transitioning out | Transitioning out |
| 1α (alpha) | ~13–14nm | DUV SAQP; Samsung introduced EUV at select layers | Samsung EUV at select critical layers; buried wordline refined | Volume production | Volume production | Volume production |
| 1β (beta) | ~12nm | EUV for critical layers at all three vendors | EUV reduces overlay error; tighter cell pitch; improved capacitor aspect ratio | Ramp | Ramp | Ramp |
| 1γ (gamma) | ~10nm class | EUV multi-layer; High-NA EUV evaluation | Novel high-k capacitor materials (ZrO2/TiO2); GAA transistor evaluation for DRAM | Development | Development | Development |
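The roadmap's push toward novel high-k capacitor materials at 1γ follows from the parallel-plate relation C = ε0·εr·A/d: once electrode area and dielectric thickness are pinned by the cell footprint and leakage limits, relative permittivity is the only remaining knob. A sketch with illustrative geometry (the area and thickness values are assumptions; real DRAM capacitors are high-aspect-ratio cylinders whose "area" is the rolled-out electrode surface):

```python
EPS0 = 8.854e-12  # vacuum permittivity, F/m

def cap_ff(eps_r, area_nm2, thickness_nm):
    """Parallel-plate approximation C = eps0 * eps_r * A / d, returned in fF."""
    area_m2 = area_nm2 * 1e-18
    d_m = thickness_nm * 1e-9
    return EPS0 * eps_r * area_m2 / d_m * 1e15

# Same electrode area and dielectric thickness, different materials (illustrative):
area = 100_000  # nm^2 of effective electrode area -- assumed for demonstration
t = 5           # nm dielectric thickness -- assumed
sio2 = cap_ff(3.9, area, t)    # SiO2 baseline, eps_r ~3.9
zro2 = cap_ff(30.0, area, t)   # ZrO2-class high-k, eps_r in the ~25-40 range
```

At identical geometry the high-k stack stores roughly 30/3.9 ≈ 7.7x the charge, which is the headroom that lets the cell keep shrinking while the sense margin stays workable.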
Vendor Competitive Position
Samsung leads by volume and was first to introduce EUV patterning at DRAM nodes, but its HBM3E qualification delays with NVIDIA — reportedly tied to thermal performance under sustained AI workloads — allowed SK Hynix to establish a durable supply lock with the most important AI GPU customer. Samsung's competitive position in commodity DRAM and NAND remains strong; its HBM position is a near-term vulnerability relative to SK Hynix.
SK Hynix's strategy has been to trade commodity DRAM share for a premium position in HBM — accepting near-term margin compression to build the TSV stacking and process capability that no other DRAM supplier had matched when NVIDIA was designing the H100. The result is a structural NVIDIA-SK Hynix coupling that resembles the TSMC-Apple relationship in logic: the customer's product is physically designed around the supplier's technology.
Micron is the only US-headquartered DRAM supplier and has positioned itself as the CHIPS Act beneficiary in memory. Its Boise and Manassas fabs provide geographic diversification that neither Samsung nor SK Hynix can offer to US customers. Micron was first to 232-layer NAND in production and is ramping HBM3E as a third-source diversification option for hyperscalers seeking to reduce SK Hynix concentration.
Automotive DRAM — Qualification Lock-In
Automotive DRAM is a distinct supply chain segment. Vehicles require DRAM qualified to AEC-Q100 Grade 2 (−40°C to +105°C) or Grade 1 (−40°C to +125°C), with long-term availability commitments that commodity DRAM SKUs cannot satisfy. Once an OEM qualifies a specific DRAM device and supplier in a platform, changing it requires full re-qualification — a 12–24 month process. Automotive ADAS platforms running LPDDR5X for neural network inference are the fastest-growing automotive DRAM segment; the growing sensor and compute density of each vehicle generation increases the working memory required to run camera/radar fusion pipelines. This lock-in sustains price premiums and extends product lifetimes far beyond consumer equivalents — the same mechanism that defines the $2 Chip Paradox for automotive MCUs.
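The demand mechanism above can be made concrete with a back-of-envelope sensor data-rate estimate. The camera counts, resolutions, and frame rates below are illustrative assumptions, not any specific OEM platform:

```python
def camera_stream_gbs(n_cameras, width, height, fps, bytes_per_pixel=2):
    """Aggregate raw camera data rate in GB/s (uncompressed frames)."""
    return n_cameras * width * height * fps * bytes_per_pixel / 1e9

# Hypothetical current platform: 8 cameras, 1080p, 30 fps, 16-bit raw
gen1 = camera_stream_gbs(8, 1920, 1080, 30)    # ~1.0 GB/s of raw sensor input

# Hypothetical next generation: 11 cameras at 8 MP, 30 fps
gen2 = camera_stream_gbs(11, 3840, 2160, 30)   # ~5.5 GB/s, >5x the memory traffic
```

Raw frames are only the entry point: each stream is buffered, preprocessed, and fed through fusion and inference stages that read and write working memory several times per frame, so LPDDR5X bandwidth requirements grow faster than the raw sensor rate alone suggests.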
Supply Chain Bottlenecks
| Bottleneck | Mechanism | Severity | Affected products |
|---|---|---|---|
| HBM wafer diversion | HBM requires more wafer area per bit; AI GPU margin premium pulls capacity from DDR5/LPDDR5X | Medium-High | Server DDR5, PC DRAM, smartphone LPDDR5X |
| EUV scanner allocation | All three IDMs transitioning to EUV DRAM simultaneously; ASML delivery cadence is the pacing constraint | Medium | 1β/1γ DDR5, LPDDR5X, HBM base die |
| Korea geographic concentration | Samsung and SK Hynix fabs in Korea produce roughly 70% of global DRAM; no geographic redundancy at scale | Structural | All DRAM categories globally |
| Automotive qualification lock-in | AEC-Q100 re-qualification takes 12–24 months; OEMs locked to specific SKUs per vehicle platform generation | Medium — structural rigidity, not acute shortage | LPDDR5X, LPDDR4X, DDR4 automotive variants |
Related Coverage
Memory & Storage Overview | NAND Flash Supply Chain | HBM Supply Chain | Mature Node MCUs — The $2 Chip Paradox | AI Inference & Edge Compute SoCs | Semiconductor Bottleneck Atlas
Cross-Network — ElectronsX Demand Side
Automotive LPDDR5X demand grows with every additional ADAS camera, radar, or LiDAR channel requiring real-time neural network inference. EV platform DDR5 demand at the compute domain controller level scales with software-defined vehicle feature density. AI training cluster DDR5 demand is the datacenter-side signal for the same infrastructure buildout driving CoWoS and HBM constraints.
EX: ADAS/AV Compute Architecture | EX: EV Semiconductor Dependencies | EX: Supply Chain Convergence Map