ASICs



Application-Specific Integrated Circuits trade programmability for efficiency. Logic and interconnect are fixed at tape-out — the chip does exactly one thing, does it at maximum transistor efficiency, and cannot be reprogrammed. This makes ASICs the highest-performance, lowest-power option for any workload stable enough, at sufficient volume, to justify the non-recurring engineering (NRE) cost of custom silicon. That NRE runs $300–500 million at 5nm and rises with each node — which concentrates ASIC design activity at hyperscalers, established networking companies, and large OEMs that can amortize it across sufficient volume.

The ASIC market divides cleanly into merchant ASICs (Broadcom and Marvell selling networking silicon to anyone who will buy) and captive ASICs (Google, AWS, Microsoft, Meta, Apple, Tesla designing chips exclusively for their own deployment). Both depend on TSMC at advanced nodes, both use the same Synopsys/Cadence EDA toolchain, and both compete for the same CoWoS packaging capacity. The structural difference is that merchant ASIC supply can be tracked through public revenues; captive ASIC wafer consumption is invisible to the market but draws from the same constrained foundry pool.

ASIC Families — Products & Process

Domain / family | Flagship products | Process node | Supplier & market position
Broadcom Networking ASICs | Tomahawk 5 (51.2 Tbps datacenter switch); Jericho3-AI (AI fabric routing, 10.8 Tbps with AI-optimized traffic patterns); Trident 4 (enterprise switching); Qumran (service provider routing) | TSMC N5 (Tomahawk 5, Jericho3-AI); TSMC N7 (Trident 4); leading edge required for port density and power targets at 51.2 Tbps+ | Broadcom (fabless); TSMC foundry; ~70–75% merchant switch silicon market; Tomahawk is the de facto hyperscale datacenter switch ASIC; Jericho3-AI targeting AI cluster fabric as distinct from general switching
Marvell Networking & Storage ASICs | Prestera (enterprise Ethernet switching); Teralynx 10 (51.2 Tbps, competing with Tomahawk 5); OCTEON 10 DPU (infrastructure processing, Arm cores + network acceleration); Alaska C SerDes (800G) | TSMC N5/N3 (Teralynx 10, OCTEON 10); TSMC N7 (Prestera); competing directly with Broadcom at leading-edge nodes | Marvell (fabless); TSMC foundry; second-largest merchant networking ASIC supplier; strong in custom co-development with hyperscalers (Amazon, Google) for purpose-built network silicon
Intel Tofino (Programmable Switch ASIC) | Tofino 2 (12.8 Tbps, P4-programmable); Tofino 3 (25.6 Tbps); Intel P4 Studio toolchain for packet processing pipeline programming | TSMC N7 (Tofino 2); P4-programmable pipeline differentiates from fixed-function Broadcom/Marvell ASICs — closer to FPGA programmability at ASIC power efficiency | Intel (Barefoot Networks acquisition 2019); TSMC foundry; niche — P4 programmability valued by hyperscalers and telcos for custom packet processing; lower throughput ceiling than Tomahawk 5; Intel announced in 2023 it would wind down Tofino development
Google TPU (Custom AI ASIC) | TPU v5p (training, 8,960-chip pods); TPU v5e (inference-optimized); Trillium / TPU v6 (2024, next-gen); Edge TPU (inference edge, discrete module) | TSMC N5/N4 (TPU v5p, Trillium); Google designs internally (Google Brain / DeepMind silicon team); fabricated at TSMC; captive — not sold externally | Google (captive design); TSMC foundry; largest deployed non-GPU AI training fleet; TPU architecture optimized for TensorFlow/JAX bfloat16 and int8; Google Cloud offers TPU VM access commercially
AWS Custom Silicon (Graviton, Trainium, Inferentia) | Graviton4 (96-core ARM server CPU); Trainium2 (AI training, 16-chip NeuronLink clusters); Inferentia2 (inference, deployed in Inf2 EC2 instances); Nitro (security and network offload ASIC) | TSMC N5 (Graviton4, Trainium2, Inferentia2); AWS Annapurna Labs internal design team; captive — deployed exclusively in AWS infrastructure | AWS (Annapurna Labs); TSMC foundry; AWS has the broadest captive silicon portfolio of any hyperscaler — CPU, AI training, AI inference, security, and network offload all custom; CHIPS Act supply chain strategic relevance
Microsoft / Meta Custom ASICs | Microsoft Maia 100 (AI training ASIC); Microsoft Cobalt 100 (ARM server CPU ASIC); Meta MTIA v2 (recommendation inference ASIC); Meta MSVP (video transcoding ASIC) | TSMC N5 (Maia 100, Cobalt 100, MTIA v2); same design and foundry pipeline as Google and AWS captive programs | Microsoft and Meta (captive); TSMC foundry; Maia targets OpenAI workload cost reduction; MTIA targets Meta's largest inference workload (ranking and recommendation), which is too large for GPU economics at scale
Ambarella CVflow (Vision / ADAS ASICs) | CV3-AD (automotive ADAS domain controller SoC/ASIC); CV52 (robotics vision, edge AI); CV5 (4K video AI, security cameras); S6Lm (low-power surveillance ASIC) | TSMC N6/N7 (CV3-AD, CV52); TSMC N5 targeted for next-gen; CVflow neural processing architecture specialized for computer vision workloads | Ambarella (fabless); TSMC foundry; dominant in security camera AI SoC; growing in ADAS domain controller against NVIDIA DRIVE; CV3-AD targets the same vehicle compute platform market as Orin/Thor
Baseband / Modem ASICs | Qualcomm Snapdragon X75 5G modem (near-monopoly for iOS and Android premium); MediaTek M90 5G modem; Samsung Exynos Modem 5400; Huawei Balong 5000 (China domestic) | TSMC N4/N5 (Snapdragon X75, MediaTek M90); Samsung 4nm (Exynos Modem); modem ASICs are among the most complex mixed-signal designs — PHY, MAC, and RF digital all on one die | Qualcomm (~80% premium 5G modem market, sole supplier for iPhone); MediaTek; Samsung LSI; Huawei (China domestic, HiSilicon design + SMIC fab for restricted export market)

Deployment & Supply Chain Risk

Domain | Focus sector deployment | Primary supply chain risk
Broadcom / Marvell Networking | Hyperscale datacenter spine/leaf switching; AI cluster fabric (Jericho3-AI); cloud compute network; service provider core routing | Broadcom ~70–75% merchant share; TSMC N5 shared with AI GPU; ABF substrate competition; hyperscaler custom ASIC programs partially displacing merchant silicon
Hyperscaler captive AI ASICs | Internal AI training (Google, AWS, Microsoft); inference at scale (Meta MTIA, AWS Inferentia); captive — not externally procurable | TSMC N5/N3 wafer allocation invisible to market but consuming constrained capacity; NRE of $300–500M+ per tape-out (EDA, IP, masks, verification); design team talent concentration risk
Vision / ADAS ASICs (Ambarella) | Security camera AI inference (dominant market); ADAS domain controller (growing vs NVIDIA DRIVE); robotics vision processing | TSMC N6/N7 capacity; automotive AEC-Q100 qualification for CV3-AD; competing against NVIDIA's DRIVE platform, which has a much larger software ecosystem
5G Baseband / Modem | Every 5G smartphone (Qualcomm near-monopoly for premium); 5G CPE; V2X automotive modem; smart infrastructure 5G connectivity | Qualcomm sole-source for iPhone modem (Apple developing internal modem to reduce dependency — multi-year program); TSMC N4/N5 shared with GPU; HiSilicon/Huawei bifurcation in China domestic market

ASIC NRE Economics & the Build vs. Buy Decision

Custom ASIC design at advanced nodes is one of the highest-NRE activities in engineering. A full tape-out at TSMC N5 — including EDA tool licensing, IP core licensing (SerDes, PCIe, DDR, security), mask set costs, and verification engineering time — costs $300–500 million and takes 18–24 months from design start to first silicon. At N3, the cost escalates further. This cost profile concentrates ASIC development at companies with either the volume to amortize NRE (hyperscalers with millions of unit deployments) or the revenue to fund it (Broadcom and Marvell with large merchant ASIC businesses).
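The amortization math above can be sketched in a few lines. This is illustrative only: the $400M NRE is an assumed midpoint of the $300–500M range quoted in the text, and the per-die manufacturing cost is hypothetical, not supplier pricing.

```python
# Illustrative NRE amortization sketch. All figures are assumptions:
# NRE_USD is the midpoint of the $300-500M range cited for a TSMC N5 tape-out;
# UNIT_COST_USD is a hypothetical per-die manufacturing cost at volume.
NRE_USD = 400e6
UNIT_COST_USD = 2_000

def per_unit_cost(volume: int) -> float:
    """Effective per-unit cost once NRE is amortized across production volume."""
    return UNIT_COST_USD + NRE_USD / volume

for volume in (50_000, 500_000, 5_000_000):
    print(f"{volume:>9,} units -> ${per_unit_cost(volume):,.0f}/unit "
          f"(NRE share ${NRE_USD / volume:,.0f})")
```

At 50,000 units the NRE burden is $8,000 per chip; at 5 million units it falls to $80, which is why captive programs only make sense at hyperscale deployment volumes.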

The decision framework is straightforward: ASICs win when workloads are stable, volumes are high, and the performance-per-watt improvement over GPU or FPGA alternatives generates enough TCO savings to justify NRE. They lose when algorithms change rapidly (making fixed-function logic a liability), volumes are modest (making NRE non-recoverable), or time-to-market pressure exceeds the 18–24 month ASIC design cycle. FPGAs prototype the function; ASICs productize it at scale.
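That decision rule can be stated as a small model: build only if projected fleet-level TCO savings exceed NRE and the workload is expected to remain stable beyond the design cycle. This is a sketch under stated assumptions; the input figures are hypothetical and the function name is illustrative, not an industry formula.

```python
def should_build_asic(
    nre_usd: float,
    units: int,
    tco_savings_per_unit_usd: float,   # assumed lifetime power/perf-per-watt saving vs GPU or FPGA
    workload_stable_months: float,     # how long the algorithm is expected to stay fixed
    design_cycle_months: float = 21.0, # midpoint of the 18-24 month cycle cited in the text
) -> bool:
    """Build only if savings repay NRE AND the workload outlives the design cycle."""
    recovers_nre = units * tco_savings_per_unit_usd > nre_usd
    survives_cycle = workload_stable_months > design_cycle_months
    return recovers_nre and survives_cycle

# Hyperscaler-scale inputs (hypothetical): 1M units, $1,500 lifetime saving each.
print(should_build_asic(400e6, 1_000_000, 1_500, workload_stable_months=48))  # True
# Modest volume: NRE is non-recoverable, so the ASIC loses.
print(should_build_asic(400e6, 100_000, 1_500, workload_stable_months=48))    # False
```

The third failure mode in the text, a fast-moving algorithm, shows up as `workload_stable_months` falling below the design cycle: even at recoverable volume, the chip would ship after the workload has moved on.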

Supply Chain Bottlenecks

Bottleneck | Affects | Severity
TSMC N5/N3 wafer pool — merchant + captive + GPU all competing | Broadcom Tomahawk, Marvell Teralynx, all hyperscaler captive ASICs, NVIDIA GPU — same allocation pool | Critical — foundational constraint; captive ASIC demand is underestimated because it is not publicly visible
EDA duopoly NRE — Synopsys and Cadence | Every custom ASIC design at advanced nodes; EDA tool licensing is a fixed cost per tape-out regardless of volume | Structural — EDA is a non-negotiable input; BIS export controls on Synopsys/Cadence advanced-node tools for China created acute disruption for Chinese ASIC programs
ABF substrate and CoWoS for multi-die ASICs | Chiplet-based networking and AI ASICs requiring advanced packaging | Medium — shared constraint with GPU; packaging lead times of 6–12 months at peak demand
Qualcomm modem near-monopoly (iPhone) | Apple iPhone supply chain; Apple internal modem development program is the mitigation path | Medium — Apple modem program ongoing; until qualified, Qualcomm X-series modem is sole source for all premium iPhone models

Related Coverage

Compute & Logic Hub | GPUs | AI Accelerators | FPGAs | RF & Networking | EDA Supply Chain | PDK & Foundry Ecosystem | Semiconductor Bottleneck Atlas

Cross-Network — ElectronsX Demand Side

Ambarella CV3-AD and NVIDIA DRIVE Thor compete for the same automotive ADAS domain controller design win — a win that locks in the ASIC supply chain for the vehicle's production lifetime. Hyperscaler AI training ASICs (TPU, Trainium, Maia) are the infrastructure generating the models deployed in EVs, AVs, and robots. Qualcomm's modem ASIC near-monopoly means every connected vehicle and smart infrastructure node with 5G telematics depends on Qualcomm supply continuity.

EX: ADAS/AV Compute Architecture | EX: EV Semiconductor Dependencies | EX: Supply Chain Convergence Map