Qualcomm Enters Data Center Market: Launches AI200 and AI250 Chips to Challenge Nvidia's Dominance
Abstract
Qualcomm announced on Monday, October 27, 2025, the launch of two new artificial intelligence (AI) chips, the AI200 and AI250, officially entering the data center market to compete with Nvidia and AMD. The two chips are scheduled for commercial availability in 2026 and 2027, respectively, and will be sold both as accelerator cards and as full rack-level systems. Following the announcement, Qualcomm's stock surged more than 20% at one point during the day before closing up 11%.
Qualcomm Technologies officially unveiled its next-generation AI inference-optimized data center solutions on Monday, October 27, 2025, including accelerator cards and rack systems based on the Qualcomm AI200 and AI250 chips. The announcement marks the formal entry of the company, best known for its smartphone chips, into the rapidly growing data center AI market.
Product Specifications and Technical Features
The AI200 is a solution designed specifically for rack-level AI inference, aiming to deliver low total cost of ownership (TCO) and optimized performance for large language model (LLM) and multimodal model inference. Each accelerator card supports up to 768 GB of LPDDR memory, offering greater memory capacity at a lower cost than comparable accelerators.
The more advanced AI250 will feature an innovative memory architecture based on near-memory computing, delivering over 10 times the effective memory bandwidth compared to the AI200, while significantly reducing power consumption. Both chips are based on Qualcomm's Hexagon architecture, which is the foundational technology for the neural processing units (NPUs) found in Qualcomm's consumer-grade system-on-chips (SoCs).
Both rack solutions use direct liquid cooling for thermal efficiency, support PCIe for scale-up and Ethernet for scale-out, include confidential computing capabilities to secure AI workloads, and draw up to 160 kilowatts per rack.
Market Positioning and Competitive Landscape
Durga Malladi, general manager of Qualcomm's data center and edge business, framed the move as a natural extension of the Hexagon NPU technology proven in the company's smartphone chips: "We wanted to prove ourselves in other domains first, and once we established our strength there, stepping up to the data center tier became quite easy."
Qualcomm claims its AI chips offer advantages over other accelerators in terms of power efficiency, total cost of ownership, and a novel memory architecture approach. The company's AI accelerator competitors include Nvidia's H100 and H200 chips, AMD's Instinct MI300X accelerator, and Intel's Gaudi accelerator.
Qualcomm will sell both AI chips and other components, particularly targeting hyperscale customers who prefer to design their own racks. Malladi suggested that other AI chip companies like Nvidia or AMD might even become customers for some of Qualcomm's data center components, such as central processing units (CPUs).
First Customer and Commercialization Plan
Saudi AI company Humain will be the first customer for these new chips, planning to deploy 200 megawatts of computing power starting in 2026. Qualcomm plans to begin shipping the AI200 and AI250 in 2026 and 2027, respectively, and stated it would update its data center AI processor product line annually in the future.
Market Reaction
Investors welcomed the news: Qualcomm's stock surged more than 20% at one point during Monday's session before closing up 11%.
Industry Background
Qualcomm's entry into the data center sector marks new competition in one of the tech industry's fastest-growing markets: equipment for new AI server clusters. According to McKinsey estimates, nearly $6.7 trillion in capital expenditure will be invested in data centers by 2030, with the majority allocated to AI chip-based systems.
The industry has been dominated by Nvidia, whose GPUs currently hold over 90% market share, with sales driving the company's market capitalization past $4.5 trillion. Nvidia's chips are used to train OpenAI's GPT large language models, which power ChatGPT.
Notably, this is not Qualcomm's first attempt to enter the data center market. In 2017, the company announced a partnership with Microsoft to develop the Qualcomm Centriq 2400 platform, but the project quickly failed due to fierce competition from Intel and AMD, as well as internal company issues (including a series of lawsuits).
In its return to the data center market, Qualcomm has adopted a different strategy, focusing on AI inference rather than general-purpose computing. By leveraging its decades of expertise in low-power mobile processors, it aims to secure a foothold in this rapidly expanding market.