From Laggard to Leader in Twelve Months
Barely a year ago, the conventional wisdom held that Google was trailing in the generative AI race. That narrative has collapsed. Through the Gemini family of models, a new generation of custom Tensor Processing Units, and an unrelenting cadence of product announcements, the company has reasserted itself as one of the most formidable players in the field. The signal is no longer theoretical: Anthropic and Meta — both capable of buying any silicon on the market — are turning to Google hardware to power their AI workloads. Even NATO is now purchasing cloud services from Google. That is not a company playing catch-up; that is a company setting the pace.
The Eighth-Generation TPU and the Full-Stack Bet
The centerpiece of Google's current push is its eighth-generation TPU, unveiled as part of a broader strategy to control more of the AI stack than any rival. The ambition here is not incremental. Google is not trying to win on models alone, nor on chips alone, nor on cloud services alone. It is trying to become the platform on which AI actually runs the enterprise — the substrate where business processes execute end to end.
The chip ambitions may stretch further still. Reports indicate Google is in discussions with Marvell about developing new AI silicon, including inference-focused TPUs. That would mark a meaningful expansion beyond its long-running partnership with Broadcom, and it underlines just how strategically central custom silicon has become to the company's plans. These chips are not a recent bet: they are the product of nearly a decade of sustained investment.
What emerges from that investment is something unlike anything else on the market — a closed loop running from silicon through the model layer, up into applications like Workspace, and out to end-user devices such as the Pixel phone. It is a vertically integrated AI story, wrapped in an ecosystem, and one the broader market has arguably not yet fully priced in.
The Agentic Shift: From Copilots to Autonomous Workflows
The more important narrative shift, however, is conceptual. The future of enterprise software is no longer about copilots. It is about autonomous AI agents — systems that do not merely assist human workers but actually execute workflows across the organization. This is the agentic era, and Google is leaning into it explicitly.
The distinction matters. A better chatbot is a productivity enhancement; an agent that can run a business process is a structural change to how work is organized. Positioning the eighth-generation TPUs to support agents that run — not just assist — business processes reframes what a cloud provider is for. Cloud providers are evolving into AI operating systems that execute core business functions, and the next wave of enterprise value will likely be created by whoever wins that layer.
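The copilot-versus-agent distinction above can be made concrete in code. The sketch below is purely illustrative — the step names, the toy invoice workflow, and the `Step`/`copilot_suggest`/`agent_run` interfaces are hypothetical constructions, not any vendor's actual API. The point is structural: a copilot stops at a suggestion for a human to act on, while an agent executes the workflow end to end and owns the resulting state.

```python
# Hypothetical sketch of the copilot-vs-agent distinction.
# None of these names correspond to a real product API.

from dataclasses import dataclass
from typing import Callable

@dataclass
class Step:
    name: str
    execute: Callable[[dict], dict]  # takes workflow state, returns updated state

def copilot_suggest(state: dict, steps: list[Step]) -> str:
    """A copilot stops at advice: it proposes the next step for a human to run."""
    done = state.get("completed", [])
    pending = [s for s in steps if s.name not in done]
    return f"Suggested next step: {pending[0].name}" if pending else "Workflow complete."

def agent_run(state: dict, steps: list[Step]) -> dict:
    """An agent executes the workflow end to end, updating state at each step."""
    for step in steps:
        state = step.execute(state)
        state.setdefault("completed", []).append(step.name)
    return state

# A toy invoice-processing workflow (hypothetical steps and values):
steps = [
    Step("extract_fields", lambda s: {**s, "fields": {"amount": 1200}}),
    Step("validate_po",    lambda s: {**s, "po_ok": True}),
    Step("post_payment",   lambda s: {**s, "paid": s["fields"]["amount"]}),
]

print(copilot_suggest({}, steps))  # advice only; a human still does the work
final = agent_run({}, steps)       # the workflow actually executes
print(final["paid"])
```

The asymmetry is the whole argument: the copilot's output is a string a person must act on, while the agent's output is a changed business state. That is why the shift reframes what a cloud provider is for.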
Multicloud Reality and the Wiz Play
Customers are not going to hand any single vendor a monopoly on that layer, and Google seems to understand this. The recently completed $32 billion acquisition of Wiz is being leveraged not as a lock-in mechanism but as a co-opetition play — an integration strategy that explicitly spans AWS, Microsoft, Databricks, ServiceNow, and others. Enterprises have operated in multicloud and hybrid environments for years, and they do not want platforms that work only inside a single provider's walls.
By embracing that reality rather than fighting it, Google Cloud is positioning itself as an AI operating system that unifies data, security, and applications across environments. It is an acknowledgment that the unit of competition is moving up the stack: the winner will not be whoever has the prettiest walled garden, but whoever runs the most enterprise workloads wherever they happen to live.
Tokenomics, Token Maxing, and the K-Shaped Future
Beneath all of this sits a subtler question: what actually separates durable AI businesses from the rest? The discussion is shifting from tokenomics — the raw cost and pricing of model inference — to token maxing, or how effectively organizations convert AI consumption into outcomes.
The likely result is a K-shaped future. Roughly the top twenty percent of organizations will extract real, sustainable growth from AI, deploying it agentically through full workflows. Many of the standout examples sit in private markets today: firms like Cursor, Anthropic, and OpenAI are already generating enormous volumes of their own code through AI, and Google is squarely in that conversation alongside them. The remaining majority will have AI somewhere in the loop but not genuinely driving their processes, and they will not capture the same compounding advantage.
The cleanest metric for watching this divergence play out is the ratio of AI spend growth to revenue growth. As the transformation matures, the expectation is that AI spending will approach saturation — companies will not need to scale their infrastructure budgets indefinitely — while revenue growth catches up and then outpaces it. The firms that reach that crossover, spending less on the margin while generating more in outcomes, will occupy the upper arm of the K. Those that cannot will be left on the lower arm, standing still while the leaders compound.
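The crossover metric described above is easy to state precisely. The sketch below uses entirely hypothetical annual figures for an imaginary firm — it simply computes year-over-year growth for AI spend and for revenue, then flags the year when revenue growth overtakes spend growth (the ratio dropping below one):

```python
# Illustrative only: the dollar figures below are invented to show the
# crossover dynamic, not drawn from any real company's financials.

def growth(series):
    """Year-over-year growth rates for a list of annual totals."""
    return [(curr - prev) / prev for prev, curr in zip(series, series[1:])]

# Hypothetical annual figures (in $M) for a firm maturing its AI program:
ai_spend = [10, 25, 45, 60, 66]       # spend growth decelerates toward saturation
revenue  = [200, 210, 240, 300, 390]  # revenue growth compounds as AI drives outcomes

spend_g = growth(ai_spend)
rev_g = growth(revenue)

for year, (sg, rg) in enumerate(zip(spend_g, rev_g), start=1):
    ratio = sg / rg
    marker = "  <- crossover" if rg > sg else ""
    print(f"year {year}: spend growth {sg:.0%}, revenue growth {rg:.0%}, "
          f"ratio {ratio:.2f}{marker}")
```

In this toy series the ratio starts far above one (spend growing 150% against 5% revenue growth) and falls below one by the final year, which is the signature of a firm climbing onto the upper arm of the K.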
The Acquisition Wire and What It Signals
A small but telling data point from the surrounding news cycle: Microsoft had reportedly considered an acquisition of Cursor before SpaceX stepped in with a deal. That kind of story — coding-focused AI companies attracting interest from hyperscalers and from adjacent industrial giants alike — is a useful reminder of how broadly AI platform competition has spread. It is no longer contained to the traditional cloud trio. The prize is large enough, and the integration opportunities varied enough, that non-obvious buyers are entering the market.
Conclusion
The through-line of all these developments is the same: AI has moved past the phase of demonstrating capability, and it is now in the phase of building durable platforms. Google's strategy — vertical integration from silicon to applications, an explicit bet on agentic enterprise workflows, and a multicloud posture that trades lock-in for reach — is one of the most coherent expressions of that shift. Whether it ultimately captures the dominant share of the AI operating system layer is an open question, but it is unmistakably one of the companies playing for the upper arm of the K.