AI Hardware & Chips Companies

AI hardware companies design specialised processors — GPUs, TPUs, NPUs, and custom ASICs — optimised for the matrix multiplication and tensor operations at the heart of deep learning. NVIDIA dominates training, but a wave of startups is targeting inference efficiency, edge deployment, and next-generation training architectures.
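The workload these chips target is easy to state: almost all of the compute in a deep network reduces to large matrix multiplications (GEMMs). A minimal sketch in Python with NumPy, illustrative only; a real accelerator tiles this same operation across thousands of parallel multiply-accumulate units, typically in FP16/BF16 or INT8 rather than FP32:

```python
import numpy as np

# One linear layer applied to a batch of activations: Y = X @ W.
# This single GEMM is the operation AI accelerators are built around.
batch, d_in, d_out = 32, 4096, 4096
X = np.random.rand(batch, d_in).astype(np.float32)
W = np.random.rand(d_in, d_out).astype(np.float32)

Y = X @ W  # shape (32, 4096)

# Each output element needs d_in multiplies and d_in adds, so this
# one layer costs 2 * batch * d_in * d_out floating-point operations.
flops = 2 * batch * d_in * d_out
print(f"{flops / 1e9:.1f} GFLOPs for one {batch}x{d_in} @ {d_in}x{d_out} GEMM")
```

A forward pass through a modern model chains thousands of such GEMMs, which is why throughput on this one primitive dominates chip design.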

44 companies · $11.2B total raised · 9 countries

NVIDIA

Santa Clara, United States

NVIDIA is the world leader in AI computing hardware and software. Creator of CUDA, cuDNN, and the dominant GPU platform for AI training and inference.

enterprise $3000.0B

AMD

Santa Clara, United States

AMD develops high-performance computing and AI hardware including MI300 accelerators, ROCm software stack, and Ryzen AI processors.

enterprise $220.0B

Intel

Santa Clara, United States

Intel develops AI accelerators including Gaudi, Xeon processors with AI acceleration, and neuromorphic chips. Major player in edge AI.

enterprise $110.0B

Vicor

Andover, United States

Vicor designs and manufactures modular power systems, specializing in high-density power conversion for demanding applications. Their core technology focuses on optimized 48V power delivery networks, crucial for efficient operation of AI data centers and GPU computing infrastructure. Vicor targets companies building high-performance computing systems, offering solutions that improve power efficiency and scalability in areas like AI, eMobility, and high-performance computing.

enterprise $3.0B

Horizon Robotics

Beijing, China

Horizon Robotics is a China-based AI hardware company specializing in edge AI processing units. They design and manufacture automotive-grade AI chips – the Journey series – specifically for advanced driver-assistance systems (ADAS) and autonomous driving capabilities. Targeting automotive OEMs and Tier 1 suppliers, Horizon Robotics provides a full-stack solution enabling on-device AI processing for improved safety and efficiency in vehicles.

scaleup $2.0B

SambaNova Systems

Palo Alto, United States

SambaNova Systems develops a full-stack AI platform, including DataScale processors (RDUs) and the Samba-1 model suite, designed to accelerate AI inference and fine-tuning. The company offers both cloud-based (SambaCloud) and on-premise (SambaStack) deployment options, targeting enterprises and governments with demanding data security and performance requirements. SambaNova positions itself as a high-performance, energy-efficient alternative to GPU-based AI infrastructure, particularly for large language models and sovereign AI initiatives.

startup $1.1B

Sila Nanotechnologies

Alameda, United States

Sila Nanotechnologies develops and manufactures silicon anode materials that significantly increase the energy density and performance of lithium-ion batteries. Their core product, Titan Silicon, is a drop-in replacement for traditional graphite anodes, delivering up to 20% more energy density and faster charging capabilities. Sila targets battery manufacturers and device companies across electric vehicle, consumer electronics, and industrial sectors, offering both materials supply and battery engineering services to optimize cell performance.

scaleup $930M

Black Sesame Technologies

Shanghai, China

Black Sesame Technologies is a Chinese fabless semiconductor company specializing in high-performance, low-power AI chips for automotive applications. Their core product is the Huashan series of Systems-on-Chips (SoCs), which leverage advanced image processing and adaptive light control sensing technology to enhance autonomous driving capabilities. Black Sesame targets automotive OEMs and Tier 1 suppliers requiring customized and comprehensive AI-powered vision solutions for ADAS and autonomous vehicles.

scaleup $800M

Cerebras Systems

Sunnyvale, United States

Cerebras Systems develops AI hardware, specifically the Wafer Scale Engine, a large-scale chip designed to accelerate deep learning workloads. Unlike traditional GPUs, Cerebras’ technology aims to significantly reduce the time and cost associated with training complex AI models. Their target market is organizations requiring high-performance computing for demanding AI applications, such as large language models and scientific computing.

startup $720M

Graphcore

Bristol, United Kingdom

Graphcore designs Intelligence Processing Units (IPUs), processors built on a novel architecture optimized for machine intelligence workloads.

startup $682M

DriveNets

Ra'anana, Israel

DriveNets delivers high-performance Ethernet-based network infrastructure solutions optimized for demanding AI workloads and large-scale cloud deployments. Their core technology is a disaggregated networking operating system enabling service providers and cloud builders to create scalable and open routing fabrics. DriveNets specifically targets organizations requiring high-bandwidth, low-latency networking to support AI training and inference, as well as next-generation service provider networks.

scaleup $587M

Cambricon

Beijing, China

Cambricon is a Chinese fabless semiconductor company specializing in the design and development of Neural Processing Units (NPUs) for both cloud and edge computing applications. Their core product is a series of AI chips intended to accelerate machine learning workloads in servers, smart terminals, and robotics. Cambricon targets the growing demand for on-device AI processing and integrated end-to-cloud AI solutions within the Chinese market and beyond.

enterprise $500M

Scaleway

Paris, France

Scaleway is a European cloud infrastructure provider specializing in GPU-accelerated computing for AI and machine learning workloads. Their core offering is the Scaleway AI Supercomputer – currently featuring the Nabu-2023 and expanding to 127 DGX nodes – providing scalable resources for model training and deployment. Scaleway differentiates itself by offering a sovereign European cloud solution with predictable pricing, robust data security, and 24/7 support, targeting organizations prioritizing data residency and cost transparency.

enterprise $500M

Lightmatter

Boston, United States

Lightmatter develops photonic integrated circuits and co-packaged optics (CPO) designed to accelerate AI workloads. Their core innovation lies in the Passage family of products – including the Passage M1000 3D Photonic Superchip and L200 optical engines – which utilize a vertically integrated electronic-photonic architecture to deliver industry-leading bandwidth density and efficiency for data movement between GPUs, TPUs, and data center switches. Targeting large-scale AI model training and inference, Lightmatter recently demonstrated a world-first 16-wavelength bidirectional optical DWDM link on a single fiber, achieving an 8x increase in bidirectional fiber bandwidth density and positioning them as a leader in next-generation AI infrastructure.

startup $420M

Groq

Mountain View, United States

Groq designs the Language Processing Unit (LPU), a novel processor architecture specifically engineered for high-performance, low-latency language model inference. Pioneered in 2016, the LPU fundamentally differs from traditional GPUs by utilizing a deterministic, software-defined dataflow architecture to maximize throughput and predictability for demanding AI workloads. Groq offers its LPU hardware alongside GroqCloud, a fully managed inference platform, serving customers requiring real-time responses from large language models in applications like generative AI and natural language processing.

startup $360M

Celestial AI

Santa Clara, United States

Celestial AI develops photonic fabric interconnects that eliminate memory bottlenecks in AI systems. The company raised a $250M Series C at a $2.5B valuation.

startup $350M

Tenstorrent

Toronto, Canada

Tenstorrent designs high-performance AI processors and ML infrastructure, currently offering the Grayskull™ chip based on a novel, scalable architecture. Their key innovation lies in the combination of this architecture with an open-source, MLIR-based compiler stack, TT-Forge™, enabling optimized workload deployment and broad software compatibility. Tenstorrent is targeting AI inference and training applications, and recently released TT-Forge™ into public beta to foster community development and accelerate model support for their hardware.

startup $335M

Blaize

El Dorado Hills, United States

Blaize develops edge AI computing platforms and silicon utilizing their Graph Streaming Processor (GSP) architecture. Their platforms, including Blaize Pathfinder and Xplorer, offer a code-free software suite to simplify and accelerate AI application deployment from data center to edge devices. Blaize targets industries seeking to implement scalable, energy-efficient AI solutions for applications like computer vision, overcoming the limitations of traditional hardware and software approaches.

scaleup $230M

Rebellions

Seoul, South Korea

Rebellions is a South Korean AI hardware company specializing in high-performance accelerator cards, servers, and rack-scale solutions for data centers and edge computing. Their core product, the REBEL-Quad, utilizes a chiplet architecture and HBM3E memory to deliver industry-leading performance per watt for demanding AI workloads like those built on PyTorch and vLLM. Rebellions targets organizations requiring scalable and efficient AI infrastructure, offering a full-stack hardware and software solution.

scaleup $225M

Mythic

Austin, United States

Mythic is a US-based AI hardware company developing the Mythic AMP™ analog processor for edge AI applications. Their technology stores AI parameters directly within the processor, eliminating memory bottlenecks and delivering significantly improved power efficiency and performance compared to traditional digital architectures. Mythic targets industries like robotics, defense, and security, enabling advanced, real-time inference at the edge with reduced energy consumption.

startup $165M

d-Matrix

Santa Clara, United States

d-Matrix develops in-memory computing chips—specifically their “Corsair” platform—designed to accelerate Generative AI inference workloads. Their technology integrates memory and compute to deliver ultra-low latency and high throughput, addressing the growing energy consumption and cost challenges of AI deployment. d-Matrix targets enterprises and data centers seeking to efficiently scale Generative AI applications without compromising performance or sustainability.

startup $160M

Untether AI

Toronto, Canada

Untether AI develops specialized hardware accelerating AI inference through an innovative at-memory compute architecture. Their core product, the tsunAImi® accelerator card, significantly improves performance and energy efficiency for demanding AI workloads by integrating computation directly within memory. Targeting applications ranging from assisted driving to smart cities, Untether AI enables cost-effective deployment of AI at the edge and in data centers by supporting standard AI frameworks like TensorFlow and PyTorch.

startup $125M

Enfabrica

San Jose, United States

Enfabrica develops high-performance, low-latency Dataflow Architecture networking chips designed to accelerate AI workloads within data center infrastructure. Their innovative approach bypasses traditional von Neumann bottlenecks, enabling significantly faster data transfer and improved energy efficiency for demanding applications like large language models and generative AI. Recognized as a leading AI hardware innovator – featured in TechRadarPro’s “10 Hottest AI Hardware Companies to Follow in 2025” and a member of The Futuriom 50 – Enfabrica recently surpassed $100M in funding (TechCrunch, 2024), signaling strong investor confidence in their technology.

startup $125M

Axelera AI

Eindhoven, Netherlands

Axelera AI develops the Metis AI platform, delivering high-performance, power-efficient processing solutions for AI at the edge.

startup $120M

Etched

Mountain View, United States

Etched is a US-based AI hardware company developing custom silicon focused on accelerating inference workloads. Their core product is a novel, high-performance AI chip architecture designed to deliver leading performance per watt for demanding AI applications. Founded in 2022, Etched targets organizations requiring substantial on-premise AI inference capabilities, positioning themselves for the future of large-scale AI deployment.

startup $120M

Recogni

San Jose, United States

Recogni, now operating as Tensordyne, develops AI inference systems—including custom silicon and optimized math—designed to significantly reduce the computational cost and energy consumption of large AI models. Their technology targets data centers requiring high-throughput, low-latency AI processing for generative AI and other demanding applications. Tensordyne differentiates itself through fundamental mathematical innovation in AI, enabling greater density and efficiency in AI infrastructure.

startup $102M

DEEPX

Seoul, South Korea

DEEPX develops core technology for high-performance AI semiconductors with over 300 patents pending across the US, China, and Korea.

scaleup $100M

Kneron

San Diego, United States

Kneron designs and manufactures neural processing units (NPUs) optimized for edge AI applications. Their SoCs enable on-device AI inference, reducing latency and enhancing data privacy compared to cloud-based solutions. Kneron targets smart home, automotive, and industrial IoT markets requiring low-power, high-performance AI capabilities directly within their devices.

scaleup $100M

Syntiant

Irvine, United States

Syntiant develops neural decision processors that enable on-device AI processing for ultra-low-power applications. Their core technology utilizes at-memory compute to deliver significantly improved efficiency and throughput compared to traditional microcontrollers when running deep learning models. Syntiant targets the mobile, earbud, and IoT markets, providing scalable hardware solutions for applications like voice control, always-on audio recognition, and vibration analysis directly on the edge device.

scaleup $100M

Sapeon

Seoul, South Korea

Sapeon develops neural processing units (NPUs) specifically designed to accelerate AI inference workloads in data centers. Their technology focuses on delivering high performance and energy efficiency for applications like image recognition and natural language processing. Sapeon targets cloud service providers and enterprises seeking to optimize the cost and performance of their AI-powered services.

scaleup $100M

Origin AI

San Francisco, United States

Origin AI develops WiFi-based motion sensing technology for indoor spatial understanding and home monitoring. Their core product utilizes radio frequency (RF) signals to detect and analyze movement within a home without requiring cameras or additional hardware. This technology targets the home security and smart home automation markets, offering privacy-focused activity detection and potential for advanced features like fall detection and occupancy-based energy savings.

scaleup $53M

Lambda Labs

San Francisco, United States

Lambda Labs is a US-based provider of GPU cloud infrastructure and hardware specifically designed for demanding deep learning workflows. They offer on-demand access to high-performance GPUs, including the NVIDIA B200 and H100, through cloud instances, private cloud deployments, and dedicated workstations. Lambda Labs targets AI researchers and organizations requiring scalable and cost-effective solutions for model training and inference.

scaleup $44M

AIStorm

San Jose, United States

AIStorm develops charge-domain computing chips and IP that combine AI, memory, and digital processing directly in silicon, claiming up to 100× greater efficiency than conventional digital approaches.

startup $20M

Huawei AI

Shenzhen, China

Huawei develops the Ascend series of AI chips – including the Ascend 910 and NPU-based modules – designed to accelerate machine learning workloads for edge and cloud deployments. Their AI capabilities are demonstrated through the Pangu large language model series, which includes models like Pangu-α and Pangu-Weather, showcasing advancements in natural language processing and weather forecasting accuracy. Primarily serving the telecommunications, manufacturing, and financial services sectors, Huawei AI has achieved notable deployments of its solutions in smart city initiatives and industrial automation across China and internationally.

enterprise Est. 1987

Ambient Scientific

Palo Alto, United States

Ambient Scientific develops an AI-native compute architecture that combines analog efficiency with digital scalability for low-power, on-device AI.

startup Est. 2021

Blumind

Toronto, Canada

Blumind develops analog AI chips for edge AI applications in AIoT, robotics, and smart mobility, targeting accessible, sustainable, low-power on-device inference.

startup Est. 2020

Supermicro

San Jose, United States

Supermicro designs and manufactures server and storage hardware specifically optimized for demanding artificial intelligence and GPU-accelerated workloads. Their product line focuses on high-performance, modular systems engineered to maximize GPU density and efficiency within data center environments. This positions Supermicro as a key infrastructure provider for organizations deploying AI applications in areas like machine learning, deep learning, and high-performance computing.

enterprise Est. 1993

Synopsys

Sunnyvale, United States

Synopsys develops Electronic Design Automation (EDA) tools and Semiconductor IP used in the creation of complex integrated circuits. Their solutions increasingly leverage AI, particularly through collaborations with companies like NVIDIA, to accelerate chip design, verification, and heterogeneous integration – crucial for next-generation AI hardware. Synopsys targets companies developing advanced semiconductors for diverse markets including automotive, high-performance computing, and mobile applications, enabling faster time-to-market and improved silicon success rates.

enterprise Est. 1986

Wolfspeed

Durham, United States

Wolfspeed is a U.S.-based semiconductor company specializing in silicon carbide (SiC) power devices and materials. They manufacture SiC MOSFETs and modules, including 200mm wafers, designed to improve the efficiency and performance of power electronics systems. Wolfspeed targets industries requiring high-performance power solutions, such as electric vehicles, renewable energy, and industrial power supplies, with a focus on enabling scalable and durable designs.

enterprise Est. 1987

Habana Labs

Caesarea, Israel

Habana Labs, an Intel company, designs high-performance AI processors specifically for deep learning workloads. Their primary product is the Gaudi® series of accelerators, engineered to efficiently handle both training and inference for large-scale generative AI models. Habana Labs targets data center and cloud service providers requiring optimized compute for demanding AI applications like autonomous vehicles and advanced AI deployments.

enterprise Est. 2016

Qualcomm AI

San Diego, United States

Qualcomm AI develops specialized semiconductor solutions that integrate AI processing directly onto mobile devices and edge computing infrastructure. Their core technology centers on AI engines embedded within Snapdragon platforms and dedicated edge AI compute, enabling on-device machine learning capabilities. This positions Qualcomm as a key enabler for applications requiring low-latency, power-efficient AI processing in areas like smartphones, automotive, and IoT devices.

enterprise Est. 1985

BrainChip

Sydney, Australia

BrainChip develops and manufactures Akida, a neuromorphic computing processor IP designed for edge AI applications. This technology mimics the human brain to achieve ultra-low power consumption and efficient on-device processing of sensor data, eliminating reliance on cloud connectivity. BrainChip targets developers of edge AI solutions in markets like automotive, industrial automation, and smart devices where low latency and energy efficiency are critical.

enterprise Est. 2004

Rain AI

San Francisco, United States

Rain AI develops novel AI hardware, specifically neuromorphic computing chips, designed to significantly reduce energy consumption for AI workloads. Their technology focuses on event-driven processing, mimicking biological neural networks to achieve greater efficiency than traditional architectures. The company targets data center operators and AI infrastructure providers seeking to lower operational costs and improve sustainability.

company Est. 2017

Esperanto Technologies

Mountain View, United States

Esperanto Technologies developed high-performance, low-power AI and HPC accelerators based on a massively parallel architecture of over a thousand custom RISC-V cores. Their ET-SoC-1 chip utilizes both high-performance, out-of-order cores and energy-efficient, in-order cores to deliver superior compute efficiency for demanding workloads like generative AI and HPC. While the IP has been acquired by Nekko.ai, Esperanto focused on providing a lower total cost of ownership through RISC-V based solutions for data center applications.

company Est. 2014

Frequently Asked Questions

Why do AI models need special hardware?
Neural network training and inference require trillions of floating-point operations. Specialised AI chips execute these in parallel far more efficiently than general-purpose CPUs, reducing cost and latency by 10–100×.
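A back-of-the-envelope version of the numbers above, using the commonly cited approximation that inference costs roughly 2 FLOPs per parameter per generated token (a rule of thumb, not an exact figure; the model size, token count, and throughput figures below are illustrative assumptions):

```python
# Rough inference cost for a 7B-parameter model generating 1,000 tokens,
# using the ~2 FLOPs per parameter per token rule of thumb.
params = 7e9
tokens = 1_000
flops = 2 * params * tokens  # 1.4e13, i.e. ~14 trillion FLOPs

# At an assumed 1 TFLOP/s of sustained CPU throughput this takes ~14 s;
# an accelerator sustaining an assumed 100 TFLOP/s finishes in ~0.14 s,
# matching the 10-100x speedup range quoted above.
cpu_seconds = flops / 1e12
accel_seconds = flops / 1e14
print(f"{flops:.2e} FLOPs -> CPU ~{cpu_seconds:.1f}s vs accelerator ~{accel_seconds:.2f}s")
```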
Who competes with NVIDIA in AI chips?
AMD (MI300), Intel (Gaudi), Google (TPU), Amazon (Trainium/Inferentia), and startups like Cerebras, Groq, SambaNova, and Tenstorrent are all challenging NVIDIA's dominance.
Which AI chip companies are most notable?
Leading AI hardware companies include the incumbents NVIDIA, AMD, and Intel, alongside well-funded challengers such as Cerebras Systems, Groq, SambaNova Systems, and Tenstorrent.