Compute & Logic Chips

Compute and logic semiconductors execute instructions, run inference, switch packets, control actuators, and implement programmable logic across every tier of the digital economy. The cluster spans a wider process node range than any other chip category — from sub-2nm leading-edge CMOS for AI training GPUs to 180nm mature-node processes for automotive safety MCUs — and contains two supply chain populations with almost nothing in common except their silicon substrate.

The leading-edge population (CPUs, GPUs, AI accelerators, inference SoCs, advanced ASICs) is defined by TSMC concentration at N3/N5, CoWoS advanced packaging as a second constraint layer, and EDA lock-in through Synopsys and Cadence. The mature-node population (embedded MCUs, security silicon) is defined by 200mm fab capacity, AEC-Q100 qualification lock-in, and the $2 Chip Paradox dynamic where a low-cost device commands structural supply chain rigidity far beyond its unit price. FPGAs span both populations depending on device tier. Memory and storage — DRAM, NAND, HBM — are a peer cluster with their own supply chain character and are covered separately under Memory & Storage.

Compute & Logic Device Families — Products & Process

| Device type | Flagship families & products | Process node | Leading suppliers |
| --- | --- | --- | --- |
| CPUs | AMD EPYC Genoa/Turin (server); Intel Xeon Sapphire/Granite Rapids; Apple M3/M4 (client); AMD Ryzen 9000; Intel Core Ultra (Meteor Lake) | TSMC N3/N5 (AMD, Apple); Intel 4/3 (Intel); leading-edge required for competitive server and client performance | Intel; AMD; Apple (ARM-based, captive); Ampere (cloud-native ARM); Qualcomm (Oryon, ARM) |
| GPUs | NVIDIA H100/H200 (Hopper) and B200 (Blackwell) for AI training; NVIDIA GeForce RTX 4090/5090 (consumer/inference); AMD Instinct MI300X/MI325X (AI); AMD Radeon RX 7900/9000 series | TSMC N4/N3 (NVIDIA Blackwell, AMD RDNA4); leading-edge required for compute density and memory bandwidth | NVIDIA (~80% AI GPU); AMD; Intel Arc (discrete, niche) |
| AI Accelerators | Google TPU v4/v5p; AWS Trainium2/Inferentia2; Microsoft Maia 100; Meta MTIA v2; Tesla AI5/AI6/AI7 (Dojo + vehicle inference); Groq LPU; Cerebras WSE-3 | TSMC N5/N3 for hyperscaler custom silicon; Tesla AI5 at Samsung Taylor (captive) + TSMC Arizona | Google; AWS; Microsoft; Meta; Tesla (captive); Groq; Cerebras — all fabless, TSMC or Samsung dependent |
| Edge Inference SoCs | NVIDIA DRIVE AGX Orin/Thor (AV); Qualcomm Snapdragon Ride Elite; Mobileye EyeQ6 Ultra; Hailo-8/15 (edge AI module); Kneron KL730 | TSMC N5/N7 (NVIDIA Thor, Qualcomm); Samsung 5nm (Mobileye EyeQ6); 12–16nm for cost-optimized edge inference | NVIDIA; Qualcomm; Mobileye; Hailo; Kneron; Ambarella |
| SoCs | Apple A18 Pro / M4 (mobile + client); Qualcomm Snapdragon 8 Elite; MediaTek Dimensity 9400; Samsung Exynos 2500; NVIDIA Tegra / Jetson Orin (embedded AI) | TSMC N3E (Apple A18 Pro, M4, Snapdragon 8 Elite, Dimensity 9400); Samsung 3GAP (Exynos 2500) | Apple (captive ARM); Qualcomm; MediaTek; Samsung LSI; NVIDIA (Jetson) |
| ASICs | Broadcom Tomahawk 5 / Jericho3-AI (networking); Marvell Prestera / OCTEON 10; Google TPU (custom AI ASIC); Amazon Graviton4 (custom ARM server); Microsoft Azure Cobalt 100 | TSMC N5/N3 for networking and hyperscaler compute ASICs; mature node for embedded control ASICs | Broadcom; Marvell; hyperscaler captive programs (Google, Amazon, Microsoft, Meta) |
| FPGAs | AMD Xilinx Versal ACAP / Virtex UltraScale+; Intel Agilex 7/9; Lattice CrossLink-NX / Certus-NX (low-power edge); Microchip PolarFire (low-power, radiation-tolerant) | TSMC N7 (AMD Versal); Intel 7 (Agilex high-end); 28nm (Lattice, Microchip low-power tier) | AMD (Xilinx acquisition); Intel (Altera); Lattice Semiconductor; Microchip (Microsemi) |
| Embedded MCU / MPUs | Infineon AURIX TC3xx/TC4xx (automotive safety); Renesas RH850 / RA series; NXP S32K / S32G (automotive); STMicro STM32 F/H/U series; TI TMS570 (safety); Microchip PIC32 / SAM series | 28–180nm; 200mm fab dominant; mature node by design — stability and long-term supply over density | Infineon; Renesas; NXP; STMicro; Texas Instruments; Microchip |
| Security Silicon | Infineon SLB9670 TPM 2.0; NXP SE050 secure element; STMicro ST33 secure element; Microchip ATECC608 (IoT); Apple Secure Enclave (captive, embedded in A/M-series) | Mature node (40–90nm); security silicon prioritizes side-channel resistance and certification over process density | Infineon (TPM dominant); NXP; STMicro; Microchip; Apple (captive embedded) |
| Neuromorphic | Intel Loihi 2 (research); IBM NorthPole (inference, no off-chip weight fetch); BrainScaleS (Heidelberg, analog neuromorphic); SpiNNaker 2 (Manchester) | Intel 4 (Loihi 2); IBM 12nm (NorthPole); largely research-fab volumes | Intel (Loihi research platform); IBM; academic consortia — no commercial volume supplier |
| Quantum Compute | IBM Heron / Eagle / Osprey QPU; Google Sycamore / Willow; IonQ Forte (trapped ion); Quantinuum H2 (trapped ion); Microsoft Azure Quantum (topological, R&D) | Not conventional CMOS — superconducting qubits require dilution refrigerator environments (~15 mK); trapped ion on standard fab processes; topological pre-commercial | IBM; Google; IonQ; Quantinuum; Microsoft — all pre-commercial at scale |

Deployment & Supply Chain Risk by Device Type

| Device type | Focus sector deployment | Primary supply chain risk |
| --- | --- | --- |
| CPUs | Datacenter server compute (AMD EPYC, Intel Xeon); AI inference host CPU; AV domain controller; edge server | TSMC N3/N5 allocation shared with GPUs and AI accelerators; Intel process transition execution risk (IDM 2.0) |
| GPUs | AI training clusters (H100/B200); inference cloud (H200, MI300X); AV simulation; robotics sim-to-real | Stacked bottleneck: TSMC N3/N5 + CoWoS packaging + HBM3E supply — three simultaneous constraints on a single product |
| AI Accelerators | Hyperscaler AI training (TPU, Trainium, Maia); inference at scale (Inferentia, MTIA); AV onboard compute (Tesla AI5/AI6) | TSMC N5/N3 allocation; CoWoS for chiplet-based designs; EDA lock-in (Synopsys/Cadence) for custom silicon NRE |
| Edge Inference SoCs | ADAS and AV compute (NVIDIA DRIVE, Mobileye EyeQ); robot central inference; industrial edge AI | NVIDIA ~80% AV program concentration; TSMC N5/N7; AEC-Q100 qualification lock-in for automotive variants — see AI Inference SoC page |
| SoCs | Smartphone compute (Apple A18, Snapdragon 8 Elite); automotive SDV compute; NVIDIA Jetson for robotics edge inference | TSMC N3E concentration for Apple; Samsung 3GAP yield risk for Exynos; automotive SoC AEC-Q100 qualification pipeline |
| ASICs | Datacenter switching fabric (Tomahawk, Jericho); hyperscaler custom compute (Graviton, Cobalt, TPU); storage controllers; baseband | Broadcom ~70–75% merchant networking ASIC share; TSMC N3/N5 allocation shared with GPUs; ABF substrate competition with AI GPU packaging |
| FPGAs | 5G base station acceleration (Versal, Agilex); datacenter SmartNIC offload; aerospace/defense; low-power edge AI (Lattice) | High-end tier: TSMC N7/N5 capacity shared with GPUs, plus ABF substrate and interposer lead times; low-power tier: 28nm mature fab capacity pressure |
| Embedded MCU / MPUs | Automotive ECU (AURIX, RH850, S32K); EV BMS and motor control; robot joint control; smart grid IED; industrial PLC | AEC-Q100 / ISO 26262 qualification lock-in; 200mm fab capacity ceiling; $2 Chip Paradox — see MCU page |
| Security Silicon | Platform root-of-trust (TPM 2.0 in every server and PC); EV secure boot and OTA update authentication; IoT device identity; automotive HSM | Certification-driven lock-in (CC EAL5+, FIPS 140-3) mirrors the automotive qualification paradox; Infineon TPM near-monopoly in server platforms |
| Neuromorphic | Research and pre-commercial; ultra-low-power edge inference potential; robotics sensorimotor processing (R&D horizon) | No commercial supply chain — research-fab volumes only; IBM NorthPole is the closest to a production-relevant architecture |
| Quantum Compute | Cloud-accessed quantum compute (IBM Quantum, Google, IonQ via AWS/Azure); cryptography research; optimization problems (logistics, drug discovery) | Pre-commercial — dilution refrigerator supply (Bluefors, Oxford Instruments) and cryogenic control electronics are the near-term physical constraints, not semiconductor fab capacity |

Two Supply Chain Populations

Leading-edge CMOS population. CPUs, GPUs, AI accelerators, inference SoCs, advanced ASICs, and high-end FPGAs all compete for TSMC N3/N5 wafer allocation. This is a zero-sum pool: a surge in AI GPU demand — as occurred in 2023–2024 — directly compresses wafer availability for server CPUs, networking ASICs, and high-end FPGAs. CoWoS advanced packaging at TSMC adds a second constraint layer that is physically separate from wafer starts; it was the binding bottleneck on NVIDIA GPU shipments during the Hopper generation ramp. EDA lock-in through Synopsys and Cadence is a third structural dependency — every fabless company designing at N3/N5 requires toolchains from both vendors, creating a duopoly chokepoint upstream of the foundry.
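
The zero-sum character of the wafer pool can be made concrete with a toy allocation model. The sketch below is purely illustrative (the pool size and demand figures are hypothetical placeholders, not actual TSMC capacity or allocation data), but it shows how a demand surge in one product line mechanically compresses every other line sharing a fixed pool:

```python
# Toy model of a fixed leading-edge wafer pool shared by competing product
# lines. All numbers are hypothetical placeholders, not real capacity data.

MONTHLY_WAFER_STARTS = 100_000  # assumed fixed near-term N3/N5 pool

def allocate(demand: dict, pool: int) -> dict:
    """Scale each line's wafer demand down proportionally when total
    demand exceeds the fixed pool."""
    total = sum(demand.values())
    if total <= pool:
        return dict(demand)
    return {product: wafers * pool // total for product, wafers in demand.items()}

baseline = {"AI GPU": 40_000, "server CPU": 30_000,
            "networking ASIC": 20_000, "high-end FPGA": 10_000}
surge = {**baseline, "AI GPU": 80_000}  # GPU demand doubles; the pool does not

print(allocate(baseline, MONTHLY_WAFER_STARTS))  # every line fully served
print(allocate(surge, MONTHLY_WAFER_STARTS))     # every non-GPU line compressed
```

In the surge case the CPU, ASIC, and FPGA lines each lose roughly 29% of their wafers despite unchanged demand of their own, which is the compression mechanism described above.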

Mature-node population. Embedded MCUs, security silicon, and low-power FPGAs operate at 28–180nm on 200mm fabs. Their supply chain risk is not process density — it is qualification lock-in. AEC-Q100, ISO 26262, and CC EAL security certifications create 12–24 month re-qualification cycles that make device substitution nearly impossible within a platform generation. The $2 Chip Paradox — where a sub-$5 MCU halts a $55,000 vehicle — is the defining editorial thesis for this population. Mature-node capacity investment has historically lagged leading-edge investment, creating periodic shortage events that expose the qualification lock-in problem acutely.
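
The asymmetry behind the $2 Chip Paradox is worth making explicit. The arithmetic below uses the sub-$5 MCU cost and $55,000 vehicle price cited above; the single-plant build rate is an assumed illustration, not data for any actual plant:

```python
# Worked arithmetic on the $2 Chip Paradox. The MCU cost ceiling and vehicle
# price come from the text above; the daily build rate is an assumption.

mcu_unit_cost = 5.0        # USD, upper bound cited in the text
vehicle_price = 55_000.0   # USD, cited in the text
vehicles_per_day = 1_000   # hypothetical single-plant output

# Revenue exposed per dollar of chip cost when one qualified MCU is missing.
print(f"{vehicle_price / mcu_unit_cost:,.0f}x exposure")        # 11,000x

# Revenue stalled per day of shortage at one plant (one missing MCU per vehicle).
print(f"${vehicles_per_day * vehicle_price:,.0f}/day at risk")  # $55,000,000/day

# With a 12-24 month AEC-Q100 / ISO 26262 re-qualification cycle (from the
# text), a substitute part cannot close the gap within a platform generation,
# which is why the rigidity persists regardless of the chip's unit price.
```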

The AI GPU Stacked Bottleneck

The NVIDIA H100/H200/B200 supply chain is the most structurally constrained in the Compute & Logic cluster — and arguably in all of semiconductors. Three independent constraints apply simultaneously: TSMC N4/N3 wafer starts for the GPU die, CoWoS packaging capacity at TSMC to integrate the GPU die with HBM stacks on a silicon interposer, and SK Hynix HBM3E supply for the memory stacks themselves. Each constraint is on a different physical resource with a different expansion timeline. Relieving one does not relieve the others. This stacked bottleneck structure means that GPU supply cannot be increased by addressing any single constraint — all three must expand in parallel, and all three are on multi-year lead times.
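
Because shippable volume is gated by whichever constraint is currently lowest, the structure reduces to a min() over three independent capacities. A minimal sketch, with hypothetical figures (real capacity numbers are not public at this granularity):

```python
# Stacked-bottleneck model: shippable GPU volume per month is the minimum of
# three independent constraints. All figures are hypothetical placeholders.

def shippable_gpus(wafer_starts: int, good_dies_per_wafer: int,
                   cowos_units: int, hbm_stacks: int,
                   hbm_stacks_per_gpu: int = 8) -> int:
    from_wafers = wafer_starts * good_dies_per_wafer  # die supply
    from_cowos = cowos_units                          # packaging slots
    from_hbm = hbm_stacks // hbm_stacks_per_gpu       # memory-limited builds
    return min(from_wafers, from_cowos, from_hbm)

base = shippable_gpus(10_000, 25, cowos_units=150_000, hbm_stacks=1_120_000)
doubled = shippable_gpus(20_000, 25, cowos_units=150_000, hbm_stacks=1_120_000)
print(base, doubled)  # 140000 140000: doubling wafer starts moves nothing
```

Output only moves when the binding constraint (here, HBM) expands, which is the formal statement of "relieving one does not relieve the others."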

Related Coverage

Memory & Storage — Peer Cluster | AI Inference & Edge Compute SoCs | Mature Node MCUs — The $2 Chip Paradox | RF & Networking | Semiconductor Bottleneck Atlas | CoWoS Advanced Packaging | NVIDIA Spotlight | Apple Silicon Spotlight

Cross-Network — ElectronsX Demand Side

Every electrified vehicle, autonomous platform, and humanoid robot contains devices from both supply chain populations simultaneously — a leading-edge inference SoC for perception and decision-making, and dozens of mature-node MCUs for actuator control, safety monitoring, and sensor interfacing. The AI training infrastructure that generates the models deployed in those vehicles and robots depends entirely on the leading-edge GPU and accelerator supply chain covered here.

EX: ADAS/AV Compute Architecture | EX: EV Semiconductor Dependencies | EX: Humanoid Robots | EX: Supply Chain Convergence Map