The Vertical Integration Divide: How Wall Street Is Splitting AI Winners from Laggards

business · technology · economy

The latest round of big tech earnings has revealed something fascinating about how markets are now thinking about artificial intelligence. While four of the largest technology companies could each plausibly be called AI winners, the market response has carved them into two distinct camps. Two stocks moved up, two moved down, and the dividing line tells us a great deal about what investors actually want to see from the companies pouring hundreds of billions of dollars into AI infrastructure.

A Tale of Two Camps

The split in market reaction is not random noise. It reflects a decisive judgment about which companies are actually generating a return on the staggering capital expenditure being deployed across the industry. The market is willing to forgive enormous spending — even spending that exceeds already aggressive expectations — provided the spender can demonstrate that the money is producing tangible returns. Where that proof is missing or murky, the punishment is swift.

The clearest articulation of the split is this: one camp consists of companies that are vertically integrated, owning the chips, the models, the distribution, and the user-facing surfaces. The other camp lacks the chip layer and depends on outside suppliers — most prominently external model partners and Nvidia silicon. That distinction has become the battle line for how AI investments are now valued.

The Full-Stack Champion

Among the names reporting, one company stands out as the strongest illustration of what a fully integrated AI business looks like at scale. Its cloud division grew above 60% — an extraordinary figure given its absolute size. It has reached roughly half the size of the dominant cloud incumbent, a comparison that would have seemed delusional only three years ago. More remarkably, on an incremental basis, this cloud business is now adding more revenue quarter over quarter than the leader.

This performance, combined with 19% year-over-year growth in search, has dispelled the once-fashionable narrative that generative AI would kill search. Instead, the market now sees a fully connected ecosystem: the company owns the chips through its custom silicon, owns the models, owns distribution through search, and owns the user-facing surfaces through its workspace products. As long as AI demand continues, this full-stack position is uniquely defensible.

It's worth recalling that before custom silicon arrived, this company's operating margins were stuck in the mid-teens. They are now in the 33–34% range. The implication is unmistakable: owning your own accelerators is not a marginal advantage. It is a structural reordering of the economics of running an AI business.

The Stuck-in-the-Middle Problem

At the other end of the spectrum sits a company that, despite genuinely impressive numbers, cannot escape an awkward middle ground. Its cloud arm is accelerating, and management called out a $37 billion AI revenue figure. But the disclosure was vague: it is unclear how much of that revenue comes from cloud AI consumption, how much from credits handed out to partners, and how much from other sources.

The deeper problem is that the company has yet to prove its productivity-focused AI assistant is a real, standalone product capable of commanding a software multiple. The numbers are excellent, but the narrative feels unfinished. Markets will not hand out a software multiple on revenue that lacks visibility, so the assistant business continues to carry a "miss" narrative no matter how strong the headline figures appear. The company had first-mover advantage through its early partnership with the leading frontier-model lab, but that edge has steadily eroded, and it is now working its way back into the conversation rather than leading it.

The Surprising Pivot

One of the most dramatic intraday moves came from a name that swung from down 4% to up 4% — an 8% reversal — on the back of a single thread of commentary in its earnings call. Its custom training silicon, in its second generation, is sold out. The third generation is nearly fully subscribed. There is more than $225 billion in committed credits attached to that silicon roadmap.

That announcement was the missing piece in the company's full-stack story. For years, the rival custom-chip incumbent was rewarded for its integration; this company lacked an obvious equivalent. Now, with its custom silicon shipping in greater unit volumes than Nvidia chips inside its own cloud, the structural margin story has shifted. The numbers back this up: four straight quarters of expanding cloud margins, combined with the strongest top-line cloud growth in nearly fifteen quarters. Revenue is reaccelerating while margins are simultaneously expanding — the unmistakable fingerprint of a vertically integrated platform.

The Misunderstood Spender

The most controversial member of the four is a social-media giant that took a $160 billion hit to market capitalization in a single day, largely because it announced roughly $10 billion more in capex than anticipated and forecast negative free cash flow tied to that spend. This reaction looks badly out of proportion.

Unlike many AI stories where the connection between spend and return is speculative, this company offers unusually clear visibility into how AI investment translates into revenue, engagement, and ad performance. As capex rises, ad ranking improves, routing systems get more efficient, and impression quality goes up. The company is becoming better and better at knowing which ad a given user will respond to, without wasting compute on low-profitability impressions. That is precisely why it can grow both impressions and pricing at the same time — a combination that is exceptionally rare in advertising.

The strategic logic is also sound. The company has a distribution surface almost no one else can match, which means it does not need the absolute best frontier model. Its most recent frontier model may be tier-two at best, but a good-enough model deployed inside an unmatched distribution network is worth more than a state-of-the-art model with nowhere to go. Punishing the stock for outspending expectations on infrastructure that is demonstrably improving the core business looks like a failure of trust rather than a fair valuation judgment — a market panicking over a prisoner's-dilemma fear instead of reading the actual returns.

The Custom Silicon Timeline

A sobering reality underpins the current divide. The companies without their own competitive in-house chips are not going to close the gap quickly. It took three generations of iteration before the leading custom AI accelerator became a credible alternative to Nvidia. The second-place custom training chip is now showing similar maturation in its second and third generations. Any peer trying to start that journey today is likely at least five years away from having a meaningfully competitive in-house silicon program.

That timeline matters because it tells us the current market split is not a temporary mood. It is a structural advantage that compounds. While the laggards spend years trying to build their own chip stacks, the leaders capture the margin benefits of vertical integration today, reinvest those margins, and pull further ahead.

What the Market Is Actually Pricing

The deeper takeaway from this earnings cycle is that capital expenditure on AI is no longer evaluated as a single category. The market has stopped asking "are you spending on AI?" and started asking "are you spending on AI you control end-to-end?" Owning the chip layer is the most heavily weighted variable in that question, but distribution, model ownership, and the ability to demonstrate flow-through to revenue and margin all contribute.

For companies that can show the full stack — chips, models, distribution, surfaces — and can point to expanding margins and reaccelerating revenue as evidence, hundreds of billions in capex reads as the widening of a strategic moat. For companies that cannot make that showing, the same capex reads as a tax on the income statement. The split is not about whether AI matters. It is about who has built, or is credibly building, the integrated machine that turns AI spending into durable economic returns.
