A Shift From Conversational AI to Autonomous Agents
Artificial intelligence is entering a new chapter, one defined less by chatbots answering questions and more by autonomous agents executing complex, multi-step workflows on behalf of enterprises. This emerging "agentic economy" is poised to reshape not only how businesses operate but also how value flows across the technology stack. According to recent analysis from Goldman Sachs, enterprise AI agents could drive global token consumption up 24-fold by 2030 and as much as 55-fold by 2040. Those numbers describe a structural transformation, not a passing trend.
Why Compute Economics Are the Real Story
The reason this shift carries such weight for capital markets lies in the underlying economics. We are approaching a key inflection point, one at which compute costs fall faster than token prices. That divergence is critical: it converts AI from a cost-heavy story, where every interaction eats into margins, into a genuine profit story. When the cost of producing each unit of intelligence drops more quickly than the price customers are willing to pay for it, scale becomes a tailwind rather than a burden. This shift opens the door for AI providers and infrastructure operators to expand margins as adoption deepens.
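To make the mechanics concrete, here is a minimal Python sketch of that divergence. All of the inputs are invented for illustration (a starting price of $10 and serving cost of $7 per million tokens, declining 20% and 35% per year, respectively); none of these figures come from the Goldman Sachs analysis.

```python
# Hypothetical sketch: gross margin per million tokens when serving
# costs decline faster than prices. All inputs are assumed figures.

def margin_per_million_tokens(year: int,
                              price0: float = 10.00,        # $/1M tokens at year 0 (assumed)
                              cost0: float = 7.00,          # $/1M tokens at year 0 (assumed)
                              price_decline: float = 0.20,  # 20%/yr price erosion (assumed)
                              cost_decline: float = 0.35):  # 35%/yr cost decline (assumed)
    """Return (price, cost, gross margin %) after `year` years."""
    price = price0 * (1 - price_decline) ** year
    cost = cost0 * (1 - cost_decline) ** year
    return price, cost, (price - cost) / price * 100

for year in range(6):
    price, cost, margin = margin_per_million_tokens(year)
    print(f"year {year}: price ${price:5.2f}  cost ${cost:5.2f}  margin {margin:5.1f}%")
```

Under these assumptions, gross margin expands from 30% at year 0 to roughly 75% by year 5 even as the headline price falls by two-thirds, which is the sense in which scale becomes a tailwind rather than a burden.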
The Semiconductor Beneficiaries
At the foundation of this build-out sits a select group of semiconductor companies whose chips and accelerators are designed to handle the demands of always-on, complex AI agents. Broadcom, Nvidia, and AMD stand out as the architects of the silicon layer that makes agentic workloads feasible. As agents move beyond simple text exchanges and into multimodal workflows—handling images, audio, video, and continuous reasoning—token intensity and compute demand ramp up significantly. Each new modality compounds the workload, and the chipmakers positioned to serve those expanded requirements stand to capture an outsized share of the resulting spend.
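The token arithmetic behind that claim can be sketched with hypothetical per-modality footprints. The figures below (for example, roughly a thousand tokens per image) are assumptions chosen only to show how quickly modalities stack, not published tokenizer rates.

```python
# Hypothetical per-request token footprint as an agent's inputs grow
# from text-only to multimodal. Every count below is an assumption.

MODALITY_TOKENS = {
    "text prompt":  2_000,   # assumed
    "image":        1_000,   # per image (assumed)
    "audio_min":    2_500,   # per minute of audio (assumed)
    "video_min":   15_000,   # per minute of video frames (assumed)
}

text_only = MODALITY_TOKENS["text prompt"]

# A single multimodal request: one prompt, 4 images, 3 minutes of
# audio, 2 minutes of video (all counts assumed for illustration).
multimodal = (MODALITY_TOKENS["text prompt"]
              + 4 * MODALITY_TOKENS["image"]
              + 3 * MODALITY_TOKENS["audio_min"]
              + 2 * MODALITY_TOKENS["video_min"])

print(f"text-only request:  {text_only:>7,} tokens")
print(f"multimodal request: {multimodal:>7,} tokens "
      f"({multimodal / text_only:.0f}x)")
```

On these made-up numbers, a single multimodal request carries roughly 22 times the tokens of a text-only one, before any of the agentic repetition discussed below.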
The Platform Layer: Cloud Providers in the Sweet Spot
On the platform side, Alphabet and Amazon emerge as natural beneficiaries. Both companies operate hyperscale cloud businesses that gain on multiple fronts as agents proliferate. More agents mean more cloud usage, more inference workloads, and stronger unit economics, all converging at a moment when margins are beginning to inflect higher. Cloud providers monetize not just the raw compute but also the orchestration, storage, and ancillary services that agentic workflows require. The combination of rising volume and improving economics positions these platforms to translate the agentic wave into durable earnings power.
Compounding Demand for Infrastructure
The deeper insight is that agentic AI is fundamentally about richer workflows, longer context windows, continuous monitoring, and repeat interactions. Every one of those characteristics compounds demand for underlying infrastructure. A traditional chatbot might generate a single short response and then go quiet; an agent, by contrast, may run persistently in the background, monitor data feeds, retrieve information across vast contexts, and return to the same task again and again. Each of those behaviors multiplies the tokens consumed and the compute cycles required.
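A rough back-of-envelope model makes the compounding visible. Every constant below (steps per task, context size, monitoring cadence, task volume) is an invented assumption, so the output is an illustration of the multiplication, not an estimate of real workloads.

```python
# Hypothetical daily token consumption: one-shot chatbot vs. a
# persistent agent. All constants are assumptions for illustration.

# Chatbot: short prompt + short answer, a handful of times per day.
CHATBOT_TOKENS_PER_QUERY = 1_500   # assumed
CHATBOT_QUERIES_PER_DAY = 20       # assumed

# Agent: multi-step tasks that re-read a long context at each step,
# plus continuous background monitoring of data feeds.
AGENT_STEPS_PER_TASK = 12          # assumed
AGENT_CONTEXT_TOKENS = 30_000      # long context re-processed per step (assumed)
AGENT_OUTPUT_TOKENS = 800          # reasoning/output per step (assumed)
AGENT_TASKS_PER_DAY = 5            # assumed
MONITOR_CHECKS_PER_DAY = 96        # one check every 15 minutes (assumed)
MONITOR_TOKENS_PER_CHECK = 2_000   # assumed

chatbot_daily = CHATBOT_TOKENS_PER_QUERY * CHATBOT_QUERIES_PER_DAY

per_task = AGENT_STEPS_PER_TASK * (AGENT_CONTEXT_TOKENS + AGENT_OUTPUT_TOKENS)
agent_daily = (per_task * AGENT_TASKS_PER_DAY
               + MONITOR_CHECKS_PER_DAY * MONITOR_TOKENS_PER_CHECK)

print(f"chatbot: {chatbot_daily:>10,} tokens/day")
print(f"agent:   {agent_daily:>10,} tokens/day")
print(f"ratio:   {agent_daily / chatbot_daily:>10.0f}x")
```

Even with these deliberately modest assumptions, the agent consumes on the order of 68 times the chatbot's daily tokens, because persistence, long context, and repetition multiply rather than add.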
A Bigger Boat
The implication is straightforward but profound: the infrastructure that powered the first wave of generative AI will not be sufficient for what comes next. As agents become more capable and embedded in everyday enterprise operations, the entire stack—from silicon to cloud platforms—will need to scale accordingly. To borrow a fitting metaphor, when it comes to AI agents, we are going to need a bigger boat. The companies building that boat, and the platforms that operate it, are likely to define the next phase of value creation in the AI era.