Agents Over Bubbles
Ben Thompson argues that AI has evolved through three paradigms: ChatGPT, reasoning models like o1, and now functional agents. This progression, he contends, makes today's massive compute investments justified rather than speculative. The rise of agents, which can autonomously execute complex tasks, reduces the need for widespread human adoption while dramatically increasing compute demand and economic impact.
Summary
Thompson challenges the prevailing narrative that AI represents a bubble, arguing instead that the technology has progressed through three distinct paradigms that justify current investments. The first paradigm began with ChatGPT in November 2022, which democratized access to LLMs but suffered from hallucinations and required constant human oversight. The second emerged with OpenAI's o1 model in September 2024, introducing reasoning capabilities that made LLMs more reliable by allowing them to self-evaluate and iterate on answers before delivering them. The third and current paradigm involves functional agents like Anthropic's Opus 4.5 and OpenAI's GPT-5.2-Codex, which can autonomously complete complex, multi-hour tasks without human intervention by combining models with sophisticated 'harnesses' that verify results and use deterministic tools.

Thompson explains that each paradigm shift has exponentially increased compute demand: from basic inference, to reasoning-heavy processes, to multi-agent workflows requiring both GPU and CPU resources. Critically, agents reduce the need for widespread human adoption, because a small number of people with agency can control multiple agents, creating massive compute demand without mass consumer uptake.

The economic implications are profound for enterprises, which face pressure not just to cut costs but to fundamentally restructure around AI-native operations in order to compete with smaller, more efficient AI-powered rivals.

Thompson also addresses value chain dynamics, arguing that agents require integration between models and harnesses, which makes companies like Anthropic and OpenAI less commoditized and more profitable than previously expected. He cites Microsoft's shift from model-agnostic approaches to integrated solutions as evidence that successful agent deployment requires tight coupling between components, challenging the assumption that models will become commodities.
Key Insights
- Agents dramatically reduce the human adoption threshold needed for massive AI impact because one person with agency can control multiple autonomous agents, creating exponential compute demand without mass consumer buy-in
- The integration between AI models and execution harnesses (not just model performance) creates sustainable competitive advantages, making companies like Anthropic and OpenAI less commoditized than expected
- Enterprises will likely over-cut workforces in anticipation of AI capabilities rather than under-cut, as the economic pressure to compete with AI-native startups forces preemptive restructuring around smaller, agent-augmented teams