CPUs



Central processing units execute instructions across every tier of the computing stack — from cloud server racks to laptop clients to embedded edge nodes. CPUs are the orchestration layer of AI infrastructure: every GPU training cluster requires CPU host processors to manage data pipelines, run distributed training frameworks, and coordinate storage I/O. Server CPUs are among the most complex semiconductor products manufactured, integrating dozens of compute cores, large L3 caches, memory controllers, PCIe and CXL interfaces, and increasingly, dedicated matrix accelerator units — all at leading-edge process nodes where yield and packaging complexity directly constrain supply.

The server CPU market has consolidated to two credible x86 vendors — AMD and Intel — with ARM-based alternatives gaining meaningful datacenter share through hyperscaler custom silicon (AWS Graviton, Microsoft Cobalt, Ampere Altra/AmpereOne) and Apple's captive M-series dominating client compute. RISC-V is emerging at the embedded and microserver tier but remains pre-commercial for at-scale server workloads.

CPU Families — Products & Process

Family / platform | Flagship products | Process node | Supplier / architecture
AMD EPYC (server) | EPYC Genoa (9004 series, Zen 4); EPYC Turin (9005 series, Zen 5); EPYC Bergamo (Zen 4c, cloud-density variant) | TSMC N5 (Genoa); TSMC N3 (Turin); chiplet architecture — compute dies + I/O die on organic substrate | AMD (fabless); TSMC foundry; x86-64 ISA; chiplet-first since Rome (2019)
Intel Xeon (server) | Xeon Sapphire Rapids (4th gen); Xeon 6 Granite Rapids (P-core); Xeon 6 Sierra Forest (E-core density variant) | Intel 7 (Sapphire Rapids); Intel 3 (Granite Rapids, Sierra Forest); EMIB and Foveros packaging for multi-die integration | Intel IDM; x86-64 ISA; AMX matrix extensions for AI workloads; EMIB advanced packaging
Apple M-series (client / workstation) | M3 / M3 Pro / M3 Max; M3 Ultra (2025); M4 / M4 Pro / M4 Max | TSMC N3B (M3); TSMC N3E (M4); UltraFusion die-to-die bridge for Ultra variants (two Max dies joined) | Apple (captive ARM); TSMC foundry; unified memory architecture; tightly integrated CPU + GPU + NPU on single die
AMD Ryzen (client) | Ryzen 9000 series (Zen 5, desktop); Ryzen AI 300 series (Strix Point, laptop + NPU); Ryzen 7000 (Zen 4, previous gen) | TSMC N4 (Zen 5 compute die); TSMC N6 (I/O die); 3D V-Cache variants add SRAM stack on compute die | AMD (fabless); TSMC foundry; x86-64 ISA; integrated NPU in Ryzen AI series for Windows AI PC platform
Intel Core (client) | Core Ultra 200 series (Arrow Lake); Core Ultra 200V series (Lunar Lake, 2024); Core Ultra 100 series (Meteor Lake) | Hybrid — TSMC N3B compute tile (Arrow Lake, Lunar Lake) or Intel 4 compute tile (Meteor Lake), plus an Intel-fabbed base tile; tiled architecture (disaggregated compute, graphics, SoC, I/O tiles) | Intel IDM + TSMC hybrid; x86-64 ISA; Foveros 3D stacking for tile integration; Meteor Lake was the first Intel client CPU with TSMC-fabbed tiles (graphics, SoC, I/O)
AWS Graviton (cloud ARM) | Graviton3 (Arm Neoverse V1 core, 64-core); Graviton3E (HPC / high-bandwidth variant); Graviton4 (Arm Neoverse V2, 96-core) | TSMC N5 (Graviton3); TSMC N4 (Graviton4); custom Arm Neoverse core with AWS-specific memory and I/O design | AWS (captive ARM, fabless); TSMC foundry; deployed exclusively in AWS EC2; cost/perf optimized for cloud-native workloads
Qualcomm Oryon (client ARM) | Snapdragon X Elite / X Plus (Oryon CPU core, Windows on ARM); Snapdragon X Series for Copilot+ PC | TSMC N4P; custom Oryon core derived from the Nuvia acquisition (2021); integrated Hexagon NPU for Windows AI workloads | Qualcomm (fabless); TSMC foundry; ARM ISA; competing directly with Apple M-series on performance-per-watt for thin-and-light laptops
Ampere Altra / AmpereOne (cloud ARM) | Altra Max (128-core, Arm Neoverse N1); AmpereOne (192-core, custom Arm core); AmpereOne-3 (in development) | TSMC N7 (Altra); TSMC N5 (AmpereOne); cloud-native design — single-threaded cores, high core count, optimized for cloud workload density | Ampere Computing (fabless); TSMC foundry; ARM ISA; deployed at Oracle Cloud, Microsoft Azure, Google Cloud
RISC-V (emerging) | SiFive P870 (server-class RISC-V); Ventana Veyron V2 (datacenter RISC-V); SpacemiT X60 (China RISC-V SoC) | Varies — TSMC N5/N7 for high-performance RISC-V; mature nodes for embedded RISC-V cores | SiFive; Ventana; SpacemiT; RISC-V International open ISA; China investing heavily (~$2.1B) as x86/ARM export control hedge

Deployment & Supply Chain Risk

Platform | Focus sector deployment | Primary supply chain risk
AMD EPYC | Hyperscaler server (AWS, Google, Microsoft, Meta); HPC clusters; AI training host CPU | TSMC N3 allocation shared with GPU and AI accelerator; chiplet I/O die at TSMC N6 is a separate supply dependency
Intel Xeon | Enterprise server and datacenter; AI inference host; telco and network infrastructure | Intel process transition execution risk (Intel 3 → Intel 18A); EMIB packaging yield; competitive share loss to AMD EPYC accelerating customer diversification pressure
Apple M-series | MacBook, Mac Studio, Mac Pro (client and workstation compute); ML research and inference on-device | TSMC N3E concentration; UltraFusion bridge yield for Ultra variants; Apple captive — no third-party licensing or foundry alternative
Hyperscaler ARM (Graviton, Cobalt, Ampere) | Cloud-native server workloads; cost-optimized compute instances; containerized microservices | TSMC N5/N7 concentration; ARM architecture license dependency; each hyperscaler's custom silicon is captive — no merchant market
Client CPUs (Ryzen, Core Ultra, Oryon) | AI PC (Windows Copilot+, Apple Intelligence); edge inference host for on-device LLM; enterprise laptop fleet | TSMC N3/N4 shared allocation; Intel hybrid (TSMC + Intel fab) coordination risk; PC market cyclicality
RISC-V | Embedded MCU replacement; China domestic compute (export control hedge); edge IoT and microcontroller | Immature software ecosystem at server scale; toolchain gaps vs x86/ARM; China RISC-V investment driven by export control pressure, not organic performance demand

Chiplet Architecture — The Structural Shift

AMD's chiplet strategy — separating compute dies from I/O dies and assembling them on a common organic substrate — redefined server CPU economics starting with the Rome EPYC generation in 2019. By fabricating compute dies at the leading edge (N5, then N3) while keeping the I/O die at a mature node (N7, then N6), AMD achieves leading-edge compute density without paying leading-edge cost for every transistor on the package. Each compute die is also far smaller than a monolithic die would be, which improves wafer yield. Intel followed with its tiled architecture in Sapphire Rapids (EMIB interconnect) and Meteor Lake (Foveros 3D stacking), and Apple uses its UltraFusion silicon bridge to join two Max dies side by side for its Ultra variants.
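The yield claim above can be made concrete with a simple Poisson defect-density model, Y = exp(-A * D0). The die areas (600 mm^2 monolithic vs 75 mm^2 chiplet) and the defect density below are illustrative assumptions chosen for the sketch, not actual AMD or TSMC figures:

```python
import math

def die_yield(area_mm2: float, d0_per_cm2: float) -> float:
    """Poisson defect-density yield model: Y = exp(-A * D0), A in cm^2."""
    return math.exp(-(area_mm2 / 100.0) * d0_per_cm2)

# Illustrative assumptions only -- not actual die sizes or foundry defect data.
D0 = 0.1                              # defects per cm^2 at a leading-edge node
y_monolithic = die_yield(600.0, D0)   # one large monolithic server die
y_chiplet = die_yield(75.0, D0)       # one small compute chiplet (CCD)

print(f"600 mm^2 monolithic die yield: {y_monolithic:.1%}")  # ~54.9%
print(f" 75 mm^2 compute die yield:    {y_chiplet:.1%}")     # ~92.8%
```

Because chiplets are tested before assembly (known-good-die), package yield is not the product of eight chiplet yields; the economic gain is the much higher fraction of good leading-edge silicon per wafer.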

The supply chain implication of chiplet architecture is that a single CPU product now has multiple supply dependencies at different process nodes. An AMD EPYC Turin has compute dies at TSMC N3 and an I/O die at TSMC N6 — a supply disruption at either node affects the final product. Intel's hybrid approach adds a third dependency: TSMC compute tile plus Intel fab I/O tile on the same package. Chiplet designs also increase packaging complexity and substrate demand, creating ABF substrate and advanced packaging capacity requirements that add to the already-constrained supply pool shared with AI GPU production.
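A minimal sketch of how these node-level dependencies compound, assuming (hypothetically) that each dependency delivers on time independently with some probability; the availability numbers are invented for illustration:

```python
# A chiplet package ships only if every dependency delivers, so under an
# independence assumption the package-level availability is the product
# of the per-dependency availabilities.
def package_availability(deps: dict[str, float]) -> float:
    result = 1.0
    for availability in deps.values():
        result *= availability
    return result

# Hypothetical on-time supply probabilities, per dependency:
epyc_turin_deps = {
    "TSMC N3 compute dies": 0.97,
    "TSMC N6 I/O die":      0.99,
    "ABF substrate":        0.98,
}
print(f"package availability: {package_availability(epyc_turin_deps):.3f}")
```

Even with every individual dependency above 97%, the package-level figure falls below 95% — the structural cost of multi-node sourcing that monolithic designs avoid.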

ARM Server Momentum & x86 Share Dynamics

AWS Graviton has demonstrated that ARM-based server CPUs can deliver competitive performance-per-watt for cloud-native workloads, and AWS has been deploying Graviton instances across its EC2 fleet since 2018. Microsoft's Azure Cobalt 100, Google's Axion, and Ampere's commercial Altra/AmpereOne platforms have extended ARM's server presence beyond a single hyperscaler. The cumulative effect is that the x86 server CPU duopoly — Intel and AMD — now faces credible ARM competition specifically in the high-volume, cost-sensitive cloud instance market where workloads are containerized and ISA portability is achievable.

Intel's competitive position in servers has been pressured by AMD EPYC's core count and memory bandwidth advantages, and by its own process transition delays. The Intel 18A node — expected to be the basis for future Panther Lake client and Clearwater Forest server products — is the critical execution test for Intel's IDM 2.0 strategy. If Intel 18A yields competitively, it restores Intel's process parity with TSMC N2. If it slips, Intel's server market share erosion to AMD accelerates and its foundry business simultaneously faces a credibility risk.

Supply Chain Bottlenecks

Bottleneck | Affects | Severity
TSMC N3/N5 allocation shared with GPU and AI accelerator | AMD EPYC Turin, Apple M4, Graviton4, Qualcomm Oryon — all competing for the same wafer pool | Medium-High — AI GPU demand dominates allocation priority; CPU supply manageable but margin-constrained
Intel process transition execution risk | Intel Xeon server roadmap; Intel client CPU competitiveness; Intel Foundry Services credibility | High — Intel 18A is the critical node; delays would extend AMD EPYC share gains and reduce IFS customer confidence
ABF substrate and advanced packaging capacity | Chiplet-based CPUs (AMD EPYC, Intel Xeon tiled) competing with GPU and AI ASIC packaging demand | Medium — shared constraint with AI GPU CoWoS; CPU packaging less complex than CoWoS but draws from same substrate supply pool
ARM architecture license dependency | All ARM-based CPU designs (Apple, Qualcomm, AWS, Ampere, MediaTek) | Structural — ARM Holdings (SoftBank / partial Nasdaq float) controls ISA licensing; RISC-V is the long-term hedge but not a near-term substitute at server scale

Related Coverage

Compute & Logic Hub | GPUs | AI Accelerators | Mature Node MCUs — The $2 Chip Paradox | AI Inference & Edge Compute SoCs | EDA Supply Chain | Semiconductor Bottleneck Atlas | Apple Silicon Spotlight

Cross-Network — ElectronsX Demand Side

AI training cluster host CPUs (AMD EPYC, Intel Xeon) determine the compute fabric architecture of the datacenters driving electrification intelligence. Automotive-grade ARM SoCs and RISC-V cores are appearing in EV domain controllers and smart infrastructure edge nodes as the embedded compute tier upgrades from legacy MCU architectures to higher-performance application processors.

EX: ADAS/AV Compute Architecture | EX: EV Semiconductor Dependencies