How Qualcomm Is Building Its Way Back Into Data Centers
Qualcomm's hyperscaler ASIC win, the AI200 inference rack, and the Alphawave deal mark a serious second attempt at the data center.
An Intent Editorial
Most readers still file Qualcomm under "the company that makes phone chips." That mental shortcut is now wrong. The Qualcomm data center chip story has shifted from a footnote into one of the most-watched bets in the AI hardware market, and the trigger was a single sentence from CEO Cristiano Amon on the Q2 FY2026 earnings call confirming a custom silicon win at a major cloud operator.
The market reaction made the stakes clear. Qualcomm shares surged roughly 16% after Amon confirmed data center chip shipments to "a large hyperscaler" within calendar year 2026, according to CNBC's coverage of the earnings report. For a company whose growth narrative has hinged on smartphones for two decades, that single disclosure rewrote the thesis.
The misconception: this is not Qualcomm's first attempt
The easy assumption is that Qualcomm is a newcomer chasing the AI boom. It isn't. Qualcomm withdrew from the data center market in 2018 after its Arm-based server chip effort failed to gain traction, as SDxCentral documented. The Centriq 2400 program was shelved after roughly a year of shipping silicon, undone by weak customer pull and an internal restructuring.
What changed between 2018 and now is the workload. Training megamodels favors Nvidia's tightly integrated GPU stack. Serving them at scale, every day, to billions of queries, favors whoever can bring the cost-per-token down. Amon said the shift is driven by inference workloads focused on "tokens per dollar and tokens per watt" efficiency. That framing matters because it tells you exactly which fight Qualcomm is picking and which it is avoiding.
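The "tokens per dollar" metric Amon invoked can be made concrete with a toy calculation. Every number below is a hypothetical placeholder, not a vendor spec; the point is only that a slower, cheaper, lower-power card can beat a faster one once lifetime hardware and electricity costs are in the denominator:

```python
# Toy tokens-per-dollar comparison. All figures are hypothetical
# placeholders chosen only to show the shape of the metric.

def tokens_per_dollar(tokens_per_sec: float, power_kw: float,
                      card_cost: float,
                      lifetime_hours: float = 3 * 365 * 24,  # ~3-year life
                      elec_per_kwh: float = 0.08) -> float:
    """Lifetime tokens served divided by (hardware + electricity) cost."""
    lifetime_tokens = tokens_per_sec * 3600 * lifetime_hours
    energy_cost = power_kw * lifetime_hours * elec_per_kwh
    return lifetime_tokens / (card_cost + energy_cost)

# A hypothetical fast-but-expensive card vs. a slow-but-cheap one:
fast = tokens_per_dollar(tokens_per_sec=10_000, power_kw=1.0, card_cost=30_000)
slow = tokens_per_dollar(tokens_per_sec=4_000, power_kw=0.6, card_cost=8_000)
print(f"fast card: {fast:,.0f} tokens/$  slow card: {slow:,.0f} tokens/$")
```

Under these made-up inputs the cheaper card wins on tokens per dollar despite 60% lower throughput, which is exactly the fight an inference-focused vendor would choose to pick.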
The hyperscaler ASIC: what was actually confirmed
The headline announcement was deliberately spare on names. CEO Cristiano Amon confirmed a "leading hyperscaler custom silicon engagement" on track for initial shipments later in calendar 2026, with Jon Peddie Research's earnings recap noting that CFO Akash Palkhiwala repeated the point in prepared remarks, specifying initial shipments for Q4 calendar 2026.
Two details from the call deserve more weight than they've been given. First, this is not transactional. Amon described the customer as a large hyperscaler and called it a "multi-generation engagement," not a one-off chip sale. Second, the scope is wider than a single accelerator. Asked about chip type, Amon said Qualcomm is building across CPU, accelerator, memory architecture, and custom ASIC capability. That is the language of a platform supplier, not a component vendor.
Wall Street took it that way. According to 24/7 Wall St., three Wall Street firms - Citi, JPMorgan, and Wells Fargo - raised QCOM price targets to $160 following Q2 FY2026 earnings, while maintaining Neutral or Equal Weight ratings. The split tells you the analyst community sees a real catalyst but wants execution proof before re-rating the stock. Wells Fargo analyst Aaron Rakers said results were "positively overshadowed" by Qualcomm's plan to ship an AI ASIC to a large hyperscaler in Q4 2026.
Alphawave is the piece most coverage glosses over
A Qualcomm ASIC business without high-speed SerDes IP is a non-starter. Hyperscaler chips live or die on the ability to move terabits between dies, packages, and racks without melting. That is what Alphawave does.
Qualcomm completed its $2.4 billion acquisition of Alphawave Semi in December 2025, as CloudNews reported. The deal brought in custom ASIC capability and high-speed connectivity IP that complement Qualcomm's Oryon CPUs and Hexagon NPUs for AI compute and interconnect, and Amon linked the data center move directly to the acquisition. Read that as Amon admitting Qualcomm couldn't build a credible hyperscaler ASIC business without it.
The organizational tells point the same direction. Qualcomm has been hiring from Intel to bolster operations, supply chain, and data center CPU work. You don't poach Intel server veterans to make phone modems.
AI200 and AI250: the inference rack pitch
While the hyperscaler ASIC is custom and confidential, Qualcomm's merchant inference product is fully public. Qualcomm's AI200 and AI250 Arm-based inference engines were announced October 27, 2025, with availability planned for 2026 and 2027. The pitch behind the Qualcomm AI200 inference chip is unusual enough to be worth slowing down on.
Most AI accelerators chase peak FLOPS. The AI200 chases memory capacity. According to Tom's Hardware's breakdown, the AI200 rack-scale systems will be Qualcomm's first data-center-grade inference platform, with 768 GB of LPDDR memory per accelerator card, an unusually large pool for an inference part. The racks use PCIe interconnects for scale-up, Ethernet for scale-out, direct liquid cooling, and a power envelope of 160 kW per rack.
The AI250 is where the architecture gets more interesting. According to NAND Research, the AI250 introduces what Qualcomm characterizes as a "near-memory computing architecture," which the company claims delivers over 10x the effective memory bandwidth of conventional approaches.
The deeper bet is on memory economics. LPDDR is roughly an order of magnitude cheaper per gigabyte than HBM3e. By stacking 768 GB of LPDDR on a card, Qualcomm can hold the FP16 weights of a model in the high-300-billion-parameter range on a single accelerator (at 2 bytes per parameter, 768 GB tops out just short of 400 billion parameters, before accounting for KV cache), where Nvidia's flagship parts need three to five cards working in parallel for the same job. Fewer cards means less inter-card chatter and a simpler serving topology. The trade-off is honest and worth stating plainly: per-card token throughput will trail Nvidia's Blackwell-class hardware. Qualcomm is betting that lower memory cost and simpler topologies offset the throughput gap on a tokens-per-dollar basis. That bet is unproven.
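A back-of-envelope sketch of that capacity math, assuming FP16 weights at 2 bytes per parameter and ignoring KV cache, activations, and parallelism overhead. The per-card memory figures are the public AI200 (768 GB LPDDR), H100 (80 GB HBM3), and B200 (192 GB HBM3e) capacities; the model sizes are arbitrary illustrations:

```python
import math

def weight_bytes(params_billion: float, bytes_per_param: int = 2) -> float:
    """FP16/BF16 weights use 2 bytes per parameter (KV cache ignored)."""
    return params_billion * 1e9 * bytes_per_param

def cards_needed(params_billion: float, card_mem_gb: float) -> int:
    """Minimum cards to hold the weights alone, with no sharding overhead."""
    return math.ceil(weight_bytes(params_billion) / (card_mem_gb * 1e9))

for model_b in (70, 180, 380):
    ai200 = cards_needed(model_b, 768)  # AI200: 768 GB LPDDR per card
    h100 = cards_needed(model_b, 80)    # H100:  80 GB HBM3
    b200 = cards_needed(model_b, 192)   # B200: 192 GB HBM3e
    print(f"{model_b}B params -> AI200: {ai200} card(s), "
          f"H100: {h100}, B200: {b200}")
```

Even this crude count shows the shape of the pitch: a 380-billion-parameter model fits on one AI200 card but needs ten H100s or four B200s just for the weights, before any of those cards does useful work.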
The Humain anchor and why it isn't a vanity deal
No merchant inference product survives without a lighthouse customer. Qualcomm has one. The first announced data center customer is Humain, Saudi Arabia's government-backed AI company, which plans deployments across edge and cloud facilities globally. The scale is not ceremonial. According to DataCenterDynamics, the partnership covers a 200MW deployment of Qualcomm's new hardware offerings. Two hundred megawatts of inference capacity is a real workload, not a press release.
It also gives Qualcomm something it lacked in 2018: a guaranteed revenue floor while it courts the hyperscalers. If the AI200 is operationally functional at Humain by mid-2026, the larger conversations get easier.
Qualcomm vs Nvidia in the data center: the right question is narrower
Framing this as Qualcomm vs Nvidia data center misses the shape of the contest. Nvidia owns training. It also owns the largest share of inference today, but inference is the segment most exposed to competition because the workload is more bounded and the buyers are more cost-sensitive. The relevant rivalry is in custom ASICs.
Broadcom, which has projected its AI chip revenue surpassing $100 billion by 2027, is the company Qualcomm must be measured against in custom ASICs. Broadcom designs the TPU silicon Google ships and the custom parts behind several other hyperscaler programs. That is the benchmark Amon is implicitly setting himself against, not Jensen Huang's keynote slides. The Arm angle adds another wrinkle: in March 2026, Arm announced its AGI CPU, its first data center CPU for AI infrastructure, with Meta as lead partner and OpenAI, Cloudflare, SAP, and SK Telecom as other customers. Arm is now competing with its own licensees in the rack.
Qualcomm's CPU answer leans on what it already owns. The proposed data center CPU offering will build on its 64-bit Arm-based Oryon cores, the same family powering Snapdragon X laptops. Read that as an attempt to amortize the Nuvia acquisition across phones, PCs, and now servers - the Qualcomm Oryon CPU server play is essentially a re-skin of IP Qualcomm has already paid to develop.
Why this matters now: the financial backdrop
Qualcomm needs the data center business to work, and the timing is not a coincidence. The handset franchise is leaking customers. Apple began replacing Qualcomm modems in iPhones with its own in-house chip starting in 2025, removing a major handset customer. Handset revenue in Q2 FY2026 fell to $6.024 billion, down 13% year over year. The next quarter is steeper: Qualcomm Q3 revenue guidance came in at $9.2B–$10B, below analyst consensus of $10.19B, driven by smartphone headwinds.
The rest of the business is still healthy. Qualcomm reported Q2 FY2026 revenue of $10.6 billion, non-GAAP EPS of $2.65, slightly ahead of estimates, and automotive revenue hit a record $1.3 billion in Q2 FY2026, up 38% year over year. The company also signaled confidence with capital returns: Qualcomm authorized a $20 billion share repurchase program alongside the Q2 FY2026 earnings report. For more on why infrastructure has become the more strategic layer of the AI stack, see our analysis of why AI infrastructure is now more strategic than AI models.
The risks the bull case is downplaying
Three problems deserve more attention than they're getting.
First, software. Nvidia's moat is CUDA, not silicon. Qualcomm's Hexagon stack is mature on edge devices, but data center inference at hyperscaler scale is a different software discipline. The company has not yet shipped a serving stack that operators can drop into existing Kubernetes or Slurm-based MLOps pipelines without friction.
Second, memory supply. Amon said the memory shortage has not impacted the data center chip shipments coming this year, but a product whose entire pitch is "more memory per card" is structurally exposed to LPDDR pricing. If DRAM tightness extends into 2027, the AI250 ramp will feel it.
Third, customer concentration risk runs both ways. The bear case analysts cite includes memory pricing, Apple's modem insourcing, and customer vertical integration. The same hyperscalers Qualcomm is courting are also building their own silicon. A multi-generation engagement can be canceled the moment the customer's internal team catches up.
What to watch next
The nearest concrete milestone is the Investor Day. Qualcomm's Investor Day on June 24 is expected to clarify the full data center strategy and customer details. Expect roadmap dates, software disclosures, and possibly the identity of the hyperscaler customer if contractual terms allow.
The deeper question is whether Qualcomm has built the right product for the moment. Large cloud operators are ramping up investment in proprietary chips to reduce dependence on merchant silicon, improve energy efficiency, and tailor hardware to their workloads. That trend cuts two ways: it creates the demand for Qualcomm's ASIC services, and it puts a ceiling on how much standardized silicon Qualcomm can sell into the same accounts. The 2018 attempt failed because Qualcomm tried to sell a generic Arm server chip to customers who didn't want one. The 2026 attempt looks different because the product is custom, the IP stack is broader, and the customer is asking for it. Whether that is enough to build a durable data center business - or just a well-timed cycle play - is the question the next eighteen months will answer.
For adjacent reading, our Explainers archive covers the underlying shifts in AI silicon, and the News index tracks earnings and product launches as they happen.
Frequently Asked Questions
Who is the hyperscaler customer?
Qualcomm has not publicly named the customer. CEO Cristiano Amon described it only as a "leading hyperscaler" in a multi-generation engagement, with initial shipments scheduled for Q4 calendar 2026. Investor Day on June 24 is expected to clarify customer details.
How does the AI200's memory compare with Nvidia's?
The AI200 carries 768 GB of LPDDR memory per card, versus 80 GB of HBM3 on the H100 and 192 GB of HBM3e on the B200. Qualcomm is trading peak bandwidth for capacity, betting that fitting larger models on fewer cards lowers total inference cost.
Why did Qualcomm buy Alphawave?
Qualcomm completed the $2.4 billion Alphawave acquisition in December 2025 to acquire custom ASIC and high-speed SerDes IP. Amon directly linked the data center push to the deal, and Alphawave's interconnect tech complements Qualcomm's Oryon CPUs and Hexagon NPUs.
When does the AI250 arrive, and what is different about it?
The AI250 is scheduled for commercial availability in 2027, one year after the AI200. It introduces a near-memory computing architecture that Qualcomm claims delivers more than 10x the effective memory bandwidth of the AI200 design.
Is Qualcomm competing with Nvidia or Broadcom?
Both, but the more direct rivalry is with Broadcom in custom ASICs for hyperscalers. Broadcom has projected AI chip revenue above $100 billion by 2027, and that is the benchmark Qualcomm's ASIC business is implicitly being measured against.