Google Alumni-Founded MatX Secures $500M to Build the Anti-Nvidia AI Chip
Funding Round Overview
On Tuesday, February 24, 2026 (ET), AI chip startup MatX announced the closing of a $500 million Series B funding round, marking a significant escalation in the race to unseat Nvidia as the dominant supplier of AI accelerators. The round was led by Jane Street and Situational Awareness, an investment firm founded by former OpenAI researcher Leopold Aschenbrenner. Additional investors include semiconductor giant Marvell Technology, venture firms NFDG and Spark Capital, and Stripe co-founders Patrick and John Collison.
MatX declined to disclose its exact post-money valuation, but confirmed it is now valued at several billion dollars. For context, its closest competitor, Etched, raised $500 million last month at a $5 billion valuation.
Who Is MatX?
MatX was founded in 2022 by Reiner Pope and Mike Gunter, both veterans of Google's semiconductor division. Pope previously led AI software development for Google's Tensor Processing Units (TPUs), while Gunter served as a lead hardware designer on the same platform. The two left Google with a singular mission: build a chip purpose-built for large language models (LLMs).
This latest round follows a Series A of approximately $100 million in 2024 — also led by Spark Capital — which valued the company at over $300 million at the time.
The MatX One Chip: Technical Ambitions
The centerpiece of MatX's product roadmap is the MatX One, an LLM-optimized accelerator designed to outperform Nvidia's GPUs by a factor of ten in both training and inference. The chip employs an SRAM-first architecture combined with High Bandwidth Memory (HBM), a hybrid approach the company believes can simultaneously deliver the high throughput of GPU-style designs and the ultra-low latency of SRAM-based accelerators.
Key technical differentiators include:
- A split systolic array promising industry-leading FLOPS per mm²
- Scalability to hundreds of thousands of chips
- Support for LLM pre-training, reinforcement learning, inference prefill, and decode — all within a single chip
- HBM used specifically for key-value (KV) cache storage, not model weights
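Dedicating HBM to the KV cache makes sense once you see how fast that cache grows with context length. As a back-of-envelope sketch (the model shape below is an illustrative assumption in the style of a 70B-parameter open model, not a MatX specification), the standard transformer KV-cache formula shows a single long-context sequence already consuming several gigabytes, far beyond what on-chip SRAM can hold:

```python
# Back-of-envelope KV-cache sizing for a hypothetical 70B-class LLM.
# All shapes here are illustrative assumptions, not MatX specifications.

def kv_cache_bytes(num_layers: int, num_kv_heads: int, head_dim: int,
                   seq_len: int, dtype_bytes: int = 2) -> int:
    """Bytes of KV cache for one sequence: a K and a V vector
    per layer, per KV head, per token, at dtype_bytes precision."""
    return 2 * num_layers * num_kv_heads * head_dim * seq_len * dtype_bytes

# Assumed shape: 80 layers, 8 grouped-query KV heads, head dim 128, fp16.
cache = kv_cache_bytes(num_layers=80, num_kv_heads=8,
                       head_dim=128, seq_len=32_768)
print(f"KV cache at 32k context: {cache / 2**30:.1f} GiB per sequence")
# -> KV cache at 32k context: 10.0 GiB per sequence
```

At those assumed shapes, a batch of a few dozen concurrent 32k-token sequences needs hundreds of gigabytes of cache, while the weights themselves are a fixed ~140 GB at fp16. That asymmetry is why a design might stream the ever-growing KV cache from HBM while keeping latency-critical data paths in SRAM.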
CEO Reiner Pope stated: "Our position is that it is actually possible to do both in the same product and you get a much better product as a result."
Manufacturing and Timeline
MatX plans to manufacture the MatX One in partnership with TSMC (Taiwan Semiconductor Manufacturing Company). The company expects to finalize chip design in 2026 and begin shipping products in 2027. A portion of the $500 million will be used specifically to reserve manufacturing capacity and secure critical components well ahead of production.
Competitive Landscape
MatX enters a crowded but high-stakes battlefield. Nvidia, whose market cap briefly surpassed $5 trillion in October 2025, continues to dominate AI accelerator sales with its H100 and B200 GPU lines. Other challengers — including Groq, Cerebras, SambaNova, and d-Matrix — have largely focused on inference workloads, while Etched has staked its position on transformer-specific silicon.
MatX differentiates itself by targeting the full AI compute stack: training, fine-tuning, and inference. Whether that breadth of ambition proves an advantage or a liability will become clearer as the company moves toward its 2027 product launch.
Analyst Perspective
The $500 million raise underscores how capital-intensive AI chip development has become. Competing with Nvidia requires not just superior engineering, but also the ability to build developer ecosystems, software toolchains, and manufacturing relationships — all of which demand sustained financial firepower. With this round, MatX now has the runway to pursue those goals in earnest.