

Neuromorphic Chips

Neuromorphic chips implement compute architectures inspired by biological neural systems — using spiking neurons, event-driven communication, and local plasticity rather than the clock-synchronous, von Neumann memory hierarchy of conventional processors. Their defining property is that computation occurs only when neurons fire, rather than continuously on a fixed clock, which can yield orders-of-magnitude lower power consumption on sparse, temporal workloads. For always-on edge sensing, robotics motor control, and adaptive signal processing, this efficiency profile is theoretically compelling. In practice, neuromorphic computing remains in research and early commercialization — the software ecosystem is immature, training methods for spiking neural networks (SNNs) lag far behind standard backpropagation for artificial neural networks (ANNs), and continuing GPU/accelerator efficiency improvements keep narrowing the very gap neuromorphic designs claim as their advantage.
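The event-driven principle above can be sketched with a minimal discrete-time leaky integrate-and-fire (LIF) neuron — a standard textbook model, not any vendor's implementation; the threshold, leak, and input values are purely illustrative:

```python
def lif_step(v, i_in, v_thresh=1.0, v_reset=0.0, leak=0.9):
    """One timestep of a leaky integrate-and-fire neuron.
    Returns (new membrane potential, spike flag)."""
    v = leak * v + i_in          # leaky integration of the input current
    if v >= v_thresh:            # a spike is emitted only on threshold crossing
        return v_reset, True
    return v, False

def run(inputs):
    """Drive one neuron with an input-current trace; return spike times.
    Downstream work happens only at these events, which is where the
    sparse-workload efficiency argument comes from."""
    v, spike_times = 0.0, []
    for t, i_in in enumerate(inputs):
        v, fired = lif_step(v, i_in)
        if fired:
            spike_times.append(t)
    return spike_times

# A mostly-silent input trace: the neuron fires once, at t=6, when the
# accumulated drive crosses threshold; every other timestep emits no event.
trace = [0.0] * 5 + [0.6, 0.6] + [0.0] * 5
print(run(trace))  # → [6]
```

On a clocked processor every timestep costs the same; in an event-driven fabric only the single spike at t=6 triggers downstream communication and synaptic updates.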

The supply chain for neuromorphic devices is not a conventional semiconductor supply chain in the sense that DRAM or GPU supply chains are. Volumes are research-scale. No merchant foundry runs a dedicated neuromorphic process. Devices are fabricated on standard CMOS nodes using conventional process equipment — the neuromorphic character is entirely in the architecture and circuit design, not in the fabrication process. Supply chain risk in this category is primarily about R&D program continuity and commercialization trajectory, not wafer allocation or packaging capacity.

Neuromorphic Platforms — Products & Status

Intel Loihi 2
  Flagship products: Loihi 2 (2021) — 1M neurons, 120M synapses; Intel Neuromorphic Research Community (INRC) cloud access program; Oheo Gulch single-chip board; Kapoho Point board (8× Loihi 2)
  Process node: Intel 4 (7nm-class FinFET), a significant process improvement over Loihi 1's Intel 14nm; standard CMOS — the neuromorphic architecture lives in the digital circuit design, not in a process-specific step
  Status & supply character: Research program — available to INRC members via cloud API, not sold commercially; Intel's most advanced neuromorphic platform; 1,000× improvement in energy per synaptic operation vs Loihi 1; demonstrated SNN workloads include constraint optimization, sparse coding, and sensorimotor control

IBM NorthPole
  Flagship products: NorthPole (2023) — 256-core neural inference chip; eliminates off-chip memory access by storing all weights on-chip; 22× better energy efficiency than a GPU on the ResNet-50 inference benchmark
  Process node: 12nm (GlobalFoundries); 22B transistors; on-chip SRAM eliminates the DRAM bandwidth bottleneck; designed for ANN inference (not SNN) — closer to a specialized inference accelerator than to classical neuromorphic
  Status & supply character: Research prototype — IBM Science paper (October 2023); not commercially available; the most production-relevant neuromorphic-adjacent architecture as of this writing; the ANN inference focus makes software compatibility more tractable than SNN-first designs

BrainChip Akida
  Flagship products: Akida NSoC AKD1000 (2022, commercial edge AI SoC); Akida 2.0 (2023, improved efficiency, on-chip learning); Akida PCIe board; Akida IP core for SoC integration licensing
  Process node: TSMC 28nm (AKD1000); event-driven SNN + ANN hybrid architecture; MetaTF framework converts PyTorch/TensorFlow models to Akida SNN
  Status & supply character: Commercial — available for purchase; ASX-listed company; IP licensing model targeting SoC integration (Renesas and MegaChips licenses announced); earliest commercially available neuromorphic SoC; volume deployments starting in industrial and defense edge AI; the MetaTF conversion toolchain is the key adoption enabler

SynSense (China)
  Flagship products: Speck (ultra-low-power SNN chip for event camera processing); DYNAP-CNN (SNN inference); Xylo (audio SNN processor); Speck2E (2023 update)
  Process node: 28nm and below; event-driven design optimized for dynamic vision sensor (DVS/event camera) input — a natural pairing with the neuromorphic processing paradigm
  Status & supply character: Commercial — available; China-headquartered; strongest position in event camera + neuromorphic co-processing for robotics vision and IoT; Rockpool Python framework for SNN development; collaboration with INI Zürich (academic origin)

SpiNNaker 2 (Manchester / Dresden)
  Flagship products: SpiNNaker 1 (1M ARM cores, Human Brain Project supercomputer); SpiNNaker 2 (22nm, on-chip learning, 5× efficiency improvement over SpiNNaker 1)
  Process node: GlobalFoundries 22FDX (SpiNNaker 2); 152 ARM Cortex-M4F cores per die; designed for large-scale neural simulation, not edge deployment
  Status & supply character: Academic / research — University of Manchester with TU Dresden; European Human Brain Project funding; SpiNNaker 2 sampling to research partners; primary use case is large-scale biological neural network simulation rather than edge AI inference

BrainScaleS (Heidelberg)
  Flagship products: BrainScaleS-2 — analog neuromorphic, mixed-signal CMOS; 512 analog neuron circuits per chip; runs 1,000× faster than biological real time
  Process node: 65nm mixed-signal CMOS (IHP foundry, Germany); analog neuron circuits use continuous analog dynamics rather than digital spiking — a distinct approach from Intel Loihi / IBM NorthPole
  Status & supply character: Academic / research — Heidelberg University; European Human Brain Project; the analog approach enables faster-than-real-time simulation but is harder to program and less noise-tolerant than digital neuromorphic; research tool, not a commercial product

Deployment & Sector Relevance

Intel Loihi 2
  Focus sector deployment: Research — constraint satisfaction, optimization, robotics sensorimotor research; not in production deployment
  Adoption barrier: No commercial availability; the SNN programming model requires bespoke expertise; currently no path to a volume supply chain

IBM NorthPole
  Focus sector deployment: Research prototype for edge inference; potential relevance to robotics edge compute and always-on perception if commercialized
  Adoption barrier: Not commercially available; IBM has not announced a commercialization timeline; the ANN-compatible design reduces the software barrier, but the production path is undefined

BrainChip Akida
  Focus sector deployment: Industrial edge AI (always-on anomaly detection); defense sensor processing; IoT event detection; robotics low-power perception preprocessing
  Adoption barrier: Small volume; the MetaTF conversion toolchain limits model compatibility; competing against mature GPU/DSP inference accelerators with larger ecosystems

SynSense Speck
  Focus sector deployment: Event camera preprocessing for robot vision; IoT always-on gesture/motion detection; wearable biosignal processing
  Adoption barrier: Event camera adoption is itself a niche market; SNN + event camera co-processing requires a full sensor-stack redesign versus the conventional frame-based camera + CNN pipeline

The Commercialization Gap

Neuromorphic computing faces a gap between demonstrated theoretical efficiency advantages and commercially deployable products. The efficiency case is real — Intel has demonstrated that Loihi 2 solves certain constraint optimization and sparse coding problems at orders-of-magnitude lower energy than GPU alternatives on the same workload. IBM NorthPole's on-chip SRAM architecture eliminates the DRAM bandwidth bottleneck that limits inference accelerator efficiency. BrainChip Akida demonstrates that SNN-based event detection can run at sub-milliwatt power levels suitable for always-on edge sensing.
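The shape of that efficiency argument can be made concrete with back-of-envelope arithmetic. The per-operation energies below are illustrative placeholders (not Loihi, NorthPole, or Akida figures); the point is that the event-driven chip's advantage scales with workload sparsity, not with per-op cost alone:

```python
# Illustrative energy comparison: event-driven vs dense inference.
# Both energy constants are assumed values for the sketch, not vendor data.
E_SYNOP_NEURO = 10e-12   # assumed J per synaptic event on an event-driven chip
E_MAC_ACCEL   = 1e-12    # assumed J per MAC on a dense inference accelerator

def energy_joules(n_synapses, activity, e_synop=E_SYNOP_NEURO, e_mac=E_MAC_ACCEL):
    """Energy for one inference pass: the event-driven chip pays only for
    the fraction of synapses that actually receive a spike (activity),
    while the dense accelerator pays for every MAC regardless."""
    neuro = n_synapses * activity * e_synop
    dense = n_synapses * e_mac
    return neuro, dense

# At 1% activity the sparse chip wins despite a 10x worse per-op cost;
# at 50% activity the dense accelerator wins.
for act in (0.01, 0.5):
    n, d = energy_joules(10_000_000, act)
    print(f"activity {act:4.0%}: event-driven {n:.2e} J vs dense {d:.2e} J")
```

This is why the demonstrated wins cluster around sparse, temporal workloads (constraint optimization, always-on event detection) and evaporate on dense inference.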

The barriers are equally real. Standard deep learning — transformer models, convolutional neural networks, diffusion models — is trained with backpropagation on continuous, differentiable activations. Spiking neural networks use discrete spike events whose threshold nonlinearity is non-differentiable, so standard backpropagation does not apply directly; surrogate-gradient training and local plasticity rules are active research workarounds rather than mature tooling. Converting a trained ANN to an SNN preserves approximate behavior but typically incurs an accuracy penalty. The result is that neuromorphic chips cannot simply run existing AI models; they require either purpose-built SNN models or conversion pipelines that introduce accuracy loss. Meanwhile, GPU inference efficiency is improving rapidly — NVIDIA's H100 and B200 are dramatically more power-efficient than the A100 for inference workloads, narrowing the gap that neuromorphic claims to own.
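The conversion penalty has a simple mechanical source: a rate-coded SNN approximates each continuous activation with a spike count over a finite time window, so the activation is effectively quantized. A minimal sketch of the idea (toy layer, illustrative threshold and timestep count; not any production conversion pipeline such as MetaTF):

```python
import numpy as np

def ann_relu(x, w):
    """Reference ANN layer: continuous ReLU activation."""
    return np.maximum(0.0, x @ w)

def snn_rate_layer(x, w, t_steps=50):
    """Rate-coded approximation of the same layer: each analog activation
    is recovered as a firing rate (spike count / t_steps), which quantizes
    the continuous value and introduces conversion error."""
    drive = x @ w                      # constant input current per timestep
    v = np.zeros(w.shape[1])           # membrane potentials
    counts = np.zeros(w.shape[1])      # spike counts
    for _ in range(t_steps):
        v += drive
        fired = v >= 1.0               # illustrative threshold of 1.0
        counts += fired
        v[fired] -= 1.0                # soft reset preserves residual charge
    return counts / t_steps            # firing rate ≈ ReLU activation

# Inputs and weights are scaled small so rates stay below one spike/step
# (rate coding saturates at the threshold-crossing rate).
rng = np.random.default_rng(0)
x, w = rng.random(4) * 0.5, rng.random((4, 3)) * 0.5
print(ann_relu(x, w))
print(snn_rate_layer(x, w))
```

Longer time windows shrink the quantization error but cost proportionally more events and latency — the core trade-off that conversion toolchains tune.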

IBM NorthPole is the most strategically interesting platform in this context because it targets ANN inference rather than SNN — making it software-compatible with existing models while claiming the on-chip memory efficiency advantage. If IBM commercializes NorthPole or licenses the architecture, it represents the most credible near-term neuromorphic-adjacent deployment path.

Related Coverage

Compute & Logic Hub | AI Accelerators | GPUs | Quantum Compute | Semiconductor Bottleneck Atlas

Cross-Network — ElectronsX Demand Side

Neuromorphic computing's most credible near-term deployment in the EX focus sectors is robotics edge perception — always-on, ultra-low-power event-driven sensing for robot proprioception and environmental monitoring. If BrainChip Akida or a successor platform achieves the sub-milliwatt always-on inference needed for humanoid robot peripheral sensing, it would address a genuine power budget constraint at the robot joint and sensor layer that conventional GPU inference cannot serve economically.
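That power-budget claim is easy to sanity-check with simple arithmetic. All figures below are assumptions for illustration (event rate, per-event energy, idle floor), not measured Akida or Speck data:

```python
import math

def always_on_power_mw(events_per_s, joules_per_event, idle_mw):
    """Average power of an event-driven processor: a fixed idle floor
    plus energy spent only on the events that actually arrive."""
    return idle_mw + events_per_s * joules_per_event * 1e3  # W -> mW

# Assumed: 10k events/s, 50 nJ per processed event, 0.2 mW idle floor.
p = always_on_power_mw(10_000, 50e-9, 0.2)
print(f"{p:.2f} mW")
```

Under these assumed numbers the average draw lands below 1 mW, and because the active term scales with event rate, a mostly-quiet sensor stays near the idle floor — the property a clocked GPU pipeline cannot match at this power scale.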

EX: Humanoid Robots | EX: ADAS/AV Compute Architecture