Logic semiconductors are the chips that execute instructions and control digital systems. This category covers CPUs, MCUs, FPGAs, and non-AI ASICs: devices that provide general-purpose or application-specific computation. Together they form the core compute substrate across cloud, edge, automotive, industrial, and consumer systems; adjacent categories (GPUs, AI accelerators, SoCs, and security silicon) appear in the comparison below for context.
Compare at a Glance
| Type | Flexibility | Performance / Watt | Latency | Cost Profile | Primary Use Cases |
| --- | --- | --- | --- | --- | --- |
| CPU | Highest (general-purpose) | Moderate | Low–medium | Medium; scales with cores/sockets | OS, orchestration, general compute, databases |
| GPU | Medium (parallel workloads) | High for ML training/inference | Moderate | High; premium pricing, high TDP | AI/ML training, HPC, graphics rendering |
| AI Accelerator | Low–medium (domain-specific) | Very high | Low and deterministic (inference); throughput-oriented (training) | High NRE; efficient at scale | AI training clusters, inference appliances, hyperscaler deployments |
| FPGA | High (post-fab programmable) | Lower than ASICs/GPUs | Low (deterministic pipelines) | High per-unit, no NRE | Prototyping, networking, aerospace/defense, low-volume AI |
| ASIC | None (fixed function) | Highest efficiency | Very low / deterministic | High upfront NRE, lowest at scale | Networking, baseband, vision, storage, security |
| SoC | Medium (integrated subsystems) | Balanced | Moderate | Medium; economies of integration | Mobile devices, automotive, edge compute |
| MCU / MPU | High (embedded control) | Low (optimized for efficiency) | Very low | Low; commodity pricing | Automotive ECUs, industrial control, IoT devices |
| Security Silicon | N/A (dedicated security functions) | Optimized for crypto ops | Deterministic | Low to medium | Root of trust, HSM, TPM, secure elements |
CPU Roadmap
| Vendor | Current Gen | Next Gen | Process Node | Approx. Price Range | Notes |
| --- | --- | --- | --- | --- | --- |
| Intel | Raptor Lake (Core), Sapphire Rapids (Xeon) | Meteor Lake, Granite Rapids | Intel 7 → Intel 4 → 20A/18A | $300–$12,000 | Pivoting to IDM 2.0 foundry model |
| AMD | Zen 4 (Ryzen, EPYC Genoa) | Zen 5 (Ryzen 9000, EPYC Turin) | TSMC 5nm → 3nm | $250–$11,000 | Aggressive datacenter share growth |
| Apple | M3 | M4 (expected 2025) | TSMC N3E → N2 | $200–$400 | Leader in Arm-based client compute |
MCU Roadmap
| Vendor | Current Families | Next-Gen Direction | Process Node | Approx. Price Range | Notes |
| --- | --- | --- | --- | --- | --- |
| STMicro | STM32 F/L/H/G series | STM32N (AI/ML enhanced) | 90nm → 40nm | $1–$10 | Portfolio breadth makes STMicro a top MCU supplier |
| NXP | i.MX RT, S32 Automotive | Automotive safety-focused MCUs | 90nm → 28nm | $2–$15 | Leader in automotive electrification MCUs |
| Renesas | RX, RA, RH850 | Next-gen automotive MCUs | 65nm → 28nm | $2–$12 | Large legacy automotive installed base |
FPGA Roadmap
| Vendor | Current Families | Next Gen | Process Node | Approx. Price Range | Notes |
| --- | --- | --- | --- | --- | --- |
| AMD (Xilinx) | Versal ACAP | Versal Next | TSMC 7nm → 5nm | $50–$10,000 | Hybrid FPGA + AI acceleration |
| Intel (Altera) | Agilex | Agilex 3, Agilex Next | Intel 10nm → Intel 7 | $100–$8,000 | Optimized for cloud + telecom acceleration |
| Lattice | CrossLink-NX, Certus-NX | Next-gen low-power FPGA | 28nm → 22nm | $5–$50 | Focus on low-power edge compute |
ASIC Roadmap (non-AI)
| Vendor | Current Products | Next Gen / Direction | Process Node | Approx. Price Range | Notes |
| --- | --- | --- | --- | --- | --- |
| Broadcom | Trident 4, Tomahawk 5 | Jericho 3, Tomahawk Next | TSMC 7nm → 5nm | $500–$5,000 | Backbone of datacenter networking |
| Marvell | Prestera, Octeon Fusion | Custom hyperscaler ASICs | TSMC 5nm → 3nm | $400–$3,000 | Co-development with hyperscalers |
Supply Chain Bottlenecks
Logic and compute chips face distinct supply chain constraints depending on the category:
- CPUs: Dependence on EUV lithography and advanced packaging (e.g., EMIB, CoWoS) creates capacity bottlenecks at TSMC and Intel Foundry Services.
- MCUs: Produced mostly at mature nodes (28nm–90nm); capacity was severely constrained during 2020–2022 due to limited foundry investment in these nodes.
- FPGAs: Sensitive to shortages and long lead times for high-layer-count organic substrates and interposers.
- ASICs: Custom ASICs for networking/datacenter depend on TSMC advanced-node allocation and advanced packaging resources, both of which are finite and under pressure from AI GPU demand.
Market Outlook
The Logic & Compute segment (excluding SoCs and AI/GPU) was valued at ~$160B in 2023 and is projected to reach ~$240B by 2030 (~5% CAGR). CPUs continue to anchor datacenter and PC markets, MCUs grow with automotive and IoT proliferation, and FPGAs/ASICs sustain specialized infrastructure needs. Scaling pressures beyond 2nm and the shift to chiplet-based architectures will redefine product roadmaps.
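As a quick sanity check on those growth figures, here is a minimal sketch in Python that computes the CAGR implied by the ~$160B (2023) and ~$240B (2030) estimates quoted above; the strict 7-year compounding horizon is an assumption, and a different year-count convention shifts the result slightly.

```python
# CAGR implied by the segment estimates quoted in this section.
# Assumption: a strict 7-year horizon (2023 -> 2030); an 8-year
# convention lands closer to the ~5% figure cited above.
start_value = 160e9          # ~$160B segment value in 2023
end_value = 240e9            # ~$240B projected for 2030
years = 2030 - 2023          # 7 compounding periods

# Compound annual growth rate: (end / start) ** (1 / years) - 1
cagr = (end_value / start_value) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")   # prints "Implied CAGR: 6.0%"
```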