
Memory & Storage Chips Overview

Memory semiconductors account for roughly a quarter to a third of global chip revenue, depending on the point in the cycle, and they are the industry's most cyclically volatile segment. Three companies control the market: Samsung, SK Hynix, and Micron together hold roughly 95% of DRAM output and a majority of NAND. Unlike logic, memory is produced almost entirely by vertically integrated IDMs with no foundry equivalent, so capacity decisions made in Seoul or Boise propagate directly into global supply within months. The emergence of HBM as the gating component for AI training infrastructure has added a structural supply constraint on top of the traditional commodity cycle, and it is not one that capital investment alone can resolve: the binding limit is CoWoS packaging capacity at TSMC, not wafer starts.

Memory Device Categories — Chip Families & Supply Chain Character

Category | Flagship families & products | Process node | Leading suppliers | Focus sector relevance | Supply chain stress
--- | --- | --- | --- | --- | ---
DRAM | Samsung DDR5, LPDDR5X; SK Hynix DDR5, LPDDR5X; Micron DDR5, LPDDR5X; Micron GDDR6X (GPU frame buffers) | 1α/1β/1γ DRAM nodes (10–14nm class); EUV at 1β and beyond | Samsung (~43%), SK Hynix (~31%), Micron (~23%) | AI inference server DDR5; ADAS/AV LPDDR5X; robotics edge compute; datacenter server memory | Medium-High — HBM wafer diversion tightens DDR5; Korea concentration; EUV node yield risk
HBM | SK Hynix HBM3E (H200/B200 primary); Samsung HBM3E; Micron HBM3E; SK Hynix HBM4 (2026+ roadmap); Samsung HBM4 | 1α/1β DRAM base die + TSV stacking + CoWoS integration at TSMC | SK Hynix (dominant, >50% HBM3E); Samsung; Micron (third-source) | AI training GPU memory (NVIDIA H100/H200/B200, AMD MI300X); HPC; inference cloud GPU clusters | Critical — SK Hynix concentration; CoWoS is the binding AI GPU shipment constraint; multi-year forward allocation model
NAND Flash | Samsung V-NAND (236-layer); SK Hynix 4D NAND (238-layer); Kioxia BiCS8 (218-layer); Micron 232-layer RG NAND; YMTC Xtacking 3.0 (232-layer) | 3D NAND; 200–238 layers current; 300+ layer multi-deck roadmap; etch-limited, not litho-limited | Samsung (~32%), Kioxia/WDC JV (~35% combined), SK Hynix (~20%), Micron (~18%), YMTC (~6%) | AI training dataset storage (enterprise NVMe SSD); inference model weight storage; AV event logging; EV OTA storage | Medium — pricing cyclicality; YMTC equipment uncertainty at 300+ layers; Kioxia-WDC JV dependency
SRAM | Embedded in every logic die — not a standalone market; standalone: Renesas SRAM, Infineon CY-series, ISSI IS61/IS64 | Co-fabricated on host logic node; standalone at mature node (28–130nm) | Embedded: TSMC/Samsung logic; standalone: Renesas, Infineon (Cypress), ISSI, GSI Technology | CPU/GPU L1–L3 cache; NPU on-chip buffer for inference edge SoCs; automotive safety MCU; networking ASIC packet buffer | Low standalone; TSMC N3/N5 logic constraints are the relevant risk for embedded SRAM, not SRAM itself
NOR Flash | Winbond W25Q series (dominant SPI NOR); Macronix MX25 series; Micron MT25Q; Infineon (Cypress) S25FL series | Mature node (65–130nm); 2D planar; stable — no 3D transition underway | Winbond (~35%), Macronix (~25%), Micron, Infineon (Cypress) | Automotive ECU firmware and boot code (AEC-Q100); EV BMS firmware; IoT boot; smart infrastructure control firmware | Medium — AEC-Q100 qualification lock-in mirrors the $2 Chip Paradox; mature fab capacity pressure
MRAM | Everspin MR4A/MR2A discrete MRAM; GlobalFoundries eMRAM (22nm FDX); Samsung eMRAM (28nm) | Standalone: mature node; embedded: 22–28nm BEOL integration | Everspin (discrete); GlobalFoundries, Samsung (embedded process) | Automotive safety MCU non-volatile working memory; industrial robot state retention; smart grid IED firmware cache | Low volume — niche; foundry eMRAM process availability is primary constraint
ReRAM / PCM | Weebit Nano ReRAM (pilot); Fujitsu MB85 FRAM (ferroelectric, adjacent technology); Intel Optane 3D XPoint (PCM — discontinued 2022) | Pilot/early production; Optane discontinued — no volume PCM supplier active | Weebit Nano, Crossbar (ReRAM); Fujitsu (FRAM); no active volume PCM supplier | Storage-class memory for AI near-memory compute (R&D); inference edge latency (long-term); industrial IoT state storage | Pre-commercial — cost vs. SRAM/NOR is the barrier; Optane discontinuation removed the only volume PCM precedent

Vendor Market Position

Vendor | DRAM share | NAND share | HBM position | Key fabs | Strategic position
--- | --- | --- | --- | --- | ---
Samsung | ~43% | ~32% | HBM3E qualifying with NVIDIA after delays; HBM4 in development; Samsung I-Cube packaging for own customers | Hwaseong, Pyeongtaek (Korea); Xi'an (China, NAND) | Largest memory supplier by volume; V-NAND layer count leader; EUV at DRAM 1α ahead of peers; HBM trailing SK Hynix for AI GPU supply
SK Hynix | ~31% | ~20% | Dominant HBM3/HBM3E supplier; primary NVIDIA AI GPU HBM source; HBM4 sampling 2025–2026 | Icheon, Cheongju (Korea); Wuxi (China); Purdue IN (announced) | Most strategically positioned memory company for AI infrastructure; NVIDIA HBM lock creates structural moat through HBM4
Micron | ~23% | ~18% | HBM3E ramping; third-source diversification for hyperscalers; US domestic fab as differentiator | Boise ID, Manassas VA (US); Hiroshima (Japan); Singapore; Taichung (Taiwan) | Only US-headquartered DRAM/NAND supplier; CHIPS Act beneficiary; first to 232-layer NAND in production
Kioxia / WDC | None | ~35% combined | None | Yokkaichi, Kitakami (Japan) — shared JV production | NAND-only; BiCS architecture; JV creates shared capacity and shared financial risk; enterprise SSD focus
YMTC | None | ~6% (growing) | None | Wuhan (China) | China domestic NAND champion; 232-layer Xtacking in production; Entity List restricts advanced equipment access for 300+ layer transition
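The degree of concentration these shares imply can be summarized with a Herfindahl-Hirschman Index. A minimal sketch using the approximate DRAM shares above; the exact shares shift quarter to quarter, so treat the result as an order of magnitude, not a precise figure:

```python
# HHI = sum of squared market shares (in percentage points).
# Values above ~2500 are conventionally treated as highly concentrated.
dram_shares = {"Samsung": 43, "SK Hynix": 31, "Micron": 23}  # ~97% of market

hhi = sum(share ** 2 for share in dram_shares.values())
print(hhi)  # 1849 + 961 + 529 = 3339 -> well past the "highly concentrated" line
```

For comparison, ten equal suppliers would score 1000; the DRAM market sits more than three times higher even before adjusting for HBM, where SK Hynix's >50% position concentrates it further.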

The HBM Diversion Effect

HBM production consumes significantly more wafer area per bit than standard DDR5. A 12-high HBM3E stack requires twelve individually tested DRAM dies, through-silicon via processing at each layer, and stacking yield loss at each bonding step — all of which reduce effective bit output per wafer start compared to commodity DRAM. When Samsung, SK Hynix, and Micron redirect capacity to HBM to serve AI GPU demand, commodity DDR5 and LPDDR5X supply contracts as a direct consequence. AI infrastructure buildout therefore creates memory tightness across the entire server and PC ecosystem — not because total wafer capacity is insufficient, but because the marginal wafer is worth more as HBM than as DDR5.
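The wafer-economics argument above can be sketched numerically. The 12-high stack matches current HBM3E parts, but the per-bond yield and die-area penalty below are illustrative assumptions for the sketch, not vendor figures:

```python
# Back-of-envelope model of the HBM diversion effect: how many wafer starts
# one good HBM bit consumes relative to a commodity DDR5 bit.
# All parameter values are illustrative assumptions.

def hbm_wafer_cost_multiplier(stack_height: int = 12,
                              per_bond_yield: float = 0.99,
                              die_area_penalty: float = 1.8) -> float:
    """Relative wafer consumption per good bit, HBM vs. commodity DRAM.

    stack_height     -- DRAM dies per stack (12-high for HBM3E).
    per_bond_yield   -- fraction of stacks surviving each TSV bonding step.
    die_area_penalty -- extra wafer area per bit of an HBM die vs. a DDR5
                        die (TSV keep-out regions, wide I/O), as a multiplier.
    """
    # A 12-high stack needs 11 bonding steps; yield loss compounds per step.
    stacking_yield = per_bond_yield ** (stack_height - 1)
    return die_area_penalty / stacking_yield

print(round(hbm_wafer_cost_multiplier(), 2))  # -> 2.01
```

Even with these conservative assumptions, each HBM bit costs roughly twice the wafer area of a DDR5 bit, which is why redirecting capacity to HBM contracts commodity supply one-for-one and more.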

CoWoS packaging at TSMC adds a second constraint layer. HBM stacks cannot reach an AI GPU without CoWoS, and CoWoS capacity is physically separate from wafer fab capacity. During 2023–2024, CoWoS — not HBM production and not TSMC N4 wafer starts — was the gating constraint on NVIDIA GPU shipments. TSMC is expanding CoWoS aggressively, but the lead time means this remains a structural bottleneck through the Blackwell and early Rubin generations.
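The layered-constraint point reduces to a minimum over independent capacities: whichever ceiling is lowest gates shipments, regardless of headroom elsewhere. A toy model with placeholder numbers (none of these are actual TSMC or NVIDIA figures):

```python
# Toy model of stacked supply constraints on AI GPU output.
# All capacities and per-unit ratios below are placeholder assumptions.

def gpu_output(n4_wafer_capacity: int,
               hbm_stack_capacity: int,
               cowos_capacity: int,
               gpus_per_wafer: int = 60,
               stacks_per_gpu: int = 8) -> int:
    """GPUs shippable per month under three independent ceilings."""
    wafer_limited = n4_wafer_capacity * gpus_per_wafer
    hbm_limited = hbm_stack_capacity // stacks_per_gpu
    cowos_limited = cowos_capacity  # one CoWoS interposer per GPU
    return min(wafer_limited, hbm_limited, cowos_limited)

# With ample wafers (600k GPU-equivalents) and HBM (500k), CoWoS binds:
print(gpu_output(n4_wafer_capacity=10_000,
                 hbm_stack_capacity=4_000_000,
                 cowos_capacity=120_000))  # -> 120000
```

This is the 2023–2024 situation in miniature: expanding wafer starts or HBM output moves nothing until the CoWoS term rises, which is why packaging lead times set the shipment curve through Blackwell.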

Related Coverage

DRAM Supply Chain | NAND Flash Supply Chain | HBM Supply Chain | AI Inference & Edge Compute SoCs | Semiconductor Bottleneck Atlas | CoWoS Advanced Packaging | SK Hynix Spotlight | Micron Spotlight

Cross-Network — ElectronsX Demand Side

HBM availability gates AI GPU shipment schedules; GPU shipments determine the pace of AI training cluster buildout; training cluster buildout drives the data center power and cooling demand covered across ElectronsX infrastructure pages. LPDDR5X in ADAS compute platforms and NAND in automotive OTA storage are direct EV and AV supply chain dependencies.

EX: ADAS/AV Compute Architecture | EX: EV Semiconductor Dependencies | EX: Humanoid Robots