Y Combinator
The AI Operating System for Companies
The transcript argues that AI-native companies succeed by making their entire organization 'queryable' — capturing all meetings, tickets, and interactions into a unified AI layer. This transforms companies from open-loop to closed-loop systems, enabling continuous monitoring and adjustment. The speaker sees a major opportunity to build the connective infrastructure that makes this possible by default.
SaaS Challengers
The transcript argues that AI has dramatically reduced software development costs, making legacy SaaS companies vulnerable to disruption. Startups are encouraged to build AI-native challengers targeting even the most entrenched enterprise software markets. The speaker frames this moment as analogous to the cloud transition that created the last generation of great software companies.
Industrial Capabilities in Space
Adi Oltean, co-founder of Star Cloud, advocates for developing industrial capabilities in space, focusing on extracting raw materials and 3D printing structures on the moon. He highlights the efficiency advantages of lunar manufacturing and encourages related startups to apply to Y Combinator.
Software for Agents
The transcript argues that AI agents represent the next trillion users of the internet and require purpose-built software infrastructure. Unlike humans, agents need machine-readable interfaces such as APIs, MCPs, and CLIs rather than visual UIs. The biggest startup opportunity lies not in building agents, but in building the software agents depend on.
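The idea of machine-readable interfaces can be made concrete with a small sketch. Below, a capability is exposed as a JSON-schema tool definition of the kind an agent can discover and call programmatically, rather than as a visual UI. All names here (`get_invoice_status`, the schema shape) are hypothetical illustrations, loosely in the style of MCP-like tool declarations, not an API from the transcript.

```python
import json

# Hypothetical tool definition: a machine-readable description an agent
# can read to learn what the tool does and what arguments it takes.
TOOL_SCHEMA = {
    "name": "get_invoice_status",
    "description": "Return the status of an invoice by ID.",
    "parameters": {
        "type": "object",
        "properties": {"invoice_id": {"type": "string"}},
        "required": ["invoice_id"],
    },
}

def get_invoice_status(invoice_id: str) -> dict:
    # Stubbed lookup; a real service would query a database or upstream API.
    return {"invoice_id": invoice_id, "status": "paid"}

# An agent builds a structured call from the schema and parses JSON back,
# with no screen-scraping of a human UI involved.
call = {"tool": "get_invoice_status", "arguments": {"invoice_id": "inv_42"}}
result = get_invoice_status(**call["arguments"])
print(json.dumps(result))  # {"invoice_id": "inv_42", "status": "paid"}
```

The design point is that the schema, not a rendered page, is the contract: an agent can enumerate such tools, validate arguments against the schema, and consume structured output directly.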
Hardware Supply Chain
The transcript discusses the significant gap in hardware iteration speed between the US and China, particularly Shenzhen's ecosystem. A few US startups are beginning to address this, but the overall infrastructure stack remains incomplete. The speaker signals strong investment interest in startups that can dramatically accelerate hardware development cycles.
Recursion Is The Next Scaling Law In AI
YC visiting partner Francois Shaard discusses two 2025 AI papers—Hierarchical Reasoning Models (HRM) and Tiny Recursive Models (TRM)—that demonstrate recursion at inference time as a powerful alternative to simply scaling model size. A 7-million-parameter TRM outperforms much larger LLMs on reasoning benchmarks such as ARC-AGI by iteratively refining a recursive hidden state rather than generating chain-of-thought tokens. The conversation contrasts these approaches with traditional LLMs and RNNs, exploring why recursion addresses fundamental limitations in transformer reasoning.
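The core mechanic described above can be sketched in a few lines: instead of emitting tokens, a tiny fixed network is applied recursively, updating a latent state for several inner steps and then refining the answer for several outer cycles. This is a minimal numerical illustration of that loop, not the actual TRM architecture; the update functions, dimensions, and step counts are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 16  # hidden width (illustrative only)

# One shared weight matrix reused at every step: the "tiny" network.
W = rng.normal(scale=0.1, size=(3 * D, D))

def update_latent(x, y, z):
    """One recursive update of the latent state z from (input, answer, z)."""
    return np.tanh(np.concatenate([x, y, z]) @ W)

def refine_answer(y, z):
    """Refine the current answer embedding using the latent state."""
    return np.tanh(y + z)

def recursive_solve(x, outer=3, inner=6):
    y = np.zeros(D)  # current answer embedding
    z = np.zeros(D)  # latent reasoning state
    for _ in range(outer):        # each outer cycle improves the answer
        for _ in range(inner):    # recurse on the hidden state, weights fixed
            z = update_latent(x, y, z)
        y = refine_answer(y, z)
    return y

answer = recursive_solve(rng.normal(size=D))
print(answer.shape)  # (16,)
```

The contrast with chain-of-thought is that extra "thinking" here costs only more recursive applications of the same small network over a hidden vector, not more generated tokens from a large model.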