Discussion / Opinion

OpenAI Misses Targets, Codex vs Claude, Elon vs Sam Trial, Big Hyperscaler Beats, Peptide Craze

The All-In podcast hosts discuss OpenAI missing its user and revenue targets, the Musk vs. Altman trial, massive hyperscaler CapEx announcements totaling $725 billion, and emerging peptide drugs like retatrutide. The conversation spans AI competitive dynamics, energy constraints on AI growth, cybersecurity implications of frontier models, and a Supreme Court case on EPA regulatory authority.

Summary

The episode opens with a humorous segment featuring clips from the 'Miss Thing' podcast doing a 'Gay Name or Straightening' bit about the hosts, before transitioning into substantive topics.

On OpenAI, the hosts discuss a Wall Street Journal report revealing that OpenAI missed its target of 1 billion weekly active users before the end of 2025 and also missed revenue targets. David Sacks offers a contrarian view, arguing that while consumer numbers disappointed, the product side has been strong, with GPT-5.5 receiving positive developer reviews while Anthropic's Opus 4.7 has been criticized for compute rationing and quality regression. Sacks argues Sam Altman may prove right for the wrong reasons: he missed consumer targets but stands to benefit from massive compute commitments as enterprise and coding use cases explode. Chamath argues the core constraint is power and energy, not demand, and that hyperscalers will benefit most from this dynamic while OpenAI and Anthropic are squeezed. Polymarket odds of an OpenAI IPO by the end of 2026 have dropped from 60% to 32%.

On the Musk vs. Altman trial, the hosts discuss Elon's accusations of breach of charitable trust and unjust enrichment, with Elon seeking $150 billion in damages and a reversion to nonprofit status. Greg Brockman's diary entries emerged as potentially damaging evidence. The hosts note it's a bench trial with Judge Rogers (an Obama appointee who handled Epic vs. Apple) making the final call. Chamath speculates the likely outcome is some form of settlement.

On hyperscaler earnings, Google Cloud grew 63% YoY, Microsoft Cloud 30%, and AWS 28%. Combined CapEx guidance from Amazon, Microsoft, Google, and Meta totals $725 billion for 2026. Free cash flow is being sacrificed; Amazon's is down 97%. Sacks argues this validates the AI bull thesis and contrasts it with the dark fiber problem of 2000, noting there are 'no dark GPUs.' Chamath warns these companies will increasingly resemble leveraged industrial businesses. Freeberg discusses MIT research on neural network pruning that could reduce inference costs by 10x.
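The pruning idea Freeberg references can be illustrated with simple magnitude pruning, which zeroes out the smallest weights in a network. This is a toy sketch of the general technique, not the specific method in the MIT paper discussed on the show; real pipelines typically interleave pruning with retraining to preserve accuracy.

```python
import numpy as np

def magnitude_prune(weights, sparsity=0.9):
    """Zero out the smallest-magnitude weights, keeping (1 - sparsity) of them.

    A minimal illustration of magnitude pruning; production methods
    prune iteratively and fine-tune between rounds to recover accuracy.
    """
    flat = np.abs(weights).ravel()
    k = int(flat.size * sparsity)
    threshold = np.partition(flat, k)[k]  # magnitude of the k-th smallest weight
    mask = np.abs(weights) >= threshold   # True where the weight survives
    return weights * mask, mask

rng = np.random.default_rng(0)
w = rng.normal(size=(256, 256))           # stand-in for one layer's weight matrix
pruned, mask = magnitude_prune(w, sparsity=0.9)
print(f"kept {mask.mean():.1%} of weights")
```

With 90% sparsity, roughly 10% of the weights survive; the hoped-for efficiency gain comes from skipping the zeroed entries at inference time, which is where the "10x more output per unit of energy" framing originates.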

On cybersecurity, the hosts discuss OpenAI releasing GPT-5.5 Cyber, which reportedly matches Anthropic's Mythos in end-to-end cyber attack simulation capability. Sacks argues Mythos is not a doomsday weapon but rather an automation of existing cyber activities, similar to how AI automates coding, and that the real opportunity is using these tools defensively to harden infrastructure before black hat actors get access.

On vibe coding risks, the hosts discuss a founder who lost his entire production database and backups when an AI agent (Opus 4.6 via Cursor) deleted a Railway volume without confirmation. Sacks argues this represents AI not knowing what it doesn't know, and that agents require human supervision — pushing back on narratives of full job automation.

On retatrutide (Eli Lilly's triple-agonist peptide), Freeberg summarizes phase 3 data showing 37-pound average weight loss in 40 weeks, 80% reduction in liver fat, A1C dropping from 7.9% to 6%, and anti-inflammatory effects. The drug is projected for FDA approval around mid-2027 and is generating significant excitement in fitness and health communities.

The episode closes with Freeberg describing his experience attending a Supreme Court oral argument for the Monsanto/Roundup case, where Bayer is arguing federal preemption (EPA label authority) against state failure-to-warn laws, complicated by the court's own prior overturning of the Chevron Doctrine.

Key Insights

  • Sacks argues OpenAI may prove Sam Altman right for the wrong reason — he missed consumer targets but massive compute commitments may give OpenAI a supply advantage over token-constrained Anthropic in the now-dominant coding market.
  • Chamath argues the primary constraint on AI growth is not demand but power/energy supply, and that less than half of announced data center projects are actually being built due to supply chain delays and red tape.
  • Chamath contends the energy and compute bottleneck will disproportionately benefit hyperscalers (Oracle, Amazon, Meta, Microsoft, Google) while hurting OpenAI and Anthropic, who must give up equity or control to secure compute access.
  • Sacks argues the $725B hyperscaler CapEx boom is fundamentally different from the 2000 dark fiber overbuild because current infrastructure is being pulled forward by voracious, existing demand — 'there are no dark GPUs today.'
  • Freeberg cites an MIT paper showing neural network pruning techniques can reduce model size by 90% with no accuracy loss, potentially enabling 10x more inference output per unit of energy — which could substantially change the compute economics.
  • Sacks claims AI-generated code now accounts for approximately 75% of US GDP growth in the last quarter, framing AI as now synonymous with American economic growth.
  • Sacks argues that Mythos and GPT-5.5 Cyber don't create vulnerabilities — they discover pre-existing bugs — and that a one-time upgrade cycle using these tools defensively could actually harden global infrastructure before reaching a new equilibrium.
  • Sacks contends the 'agents will eliminate all software developers' thesis has hit peak inflated expectations, citing the production database deletion incident as evidence that agents still require human supervision and accountability.
  • Chamath argues that all operational software running the world will eventually be rewritten — partly for economic efficiency gains and partly because human-written code is fundamentally insecure — and that machines writing code will make it more impregnable.
  • Freeberg reports retatrutide phase 3 data showing 37-pound average weight loss, 80% liver fat reduction, and A1C improvement from 7.9% to 6% in 40 weeks, with the glucagon receptor agonist favoring fat burning over muscle loss compared to prior GLP-1 drugs.
  • Freeberg argues that the Supreme Court's prior overturning of the Chevron Doctrine (federal agency deference) has introduced a 50/50 uncertainty into the Monsanto/Roundup case, where previously a 6-3 Bayer win seemed likely — because states may now claim the right to interpret federal law independently of EPA labels.
  • Chamath argues the therapy and medication industrial complex is built on cultivating rumination, and that the incentive structure of therapists (maintaining long-term paying clients) is fundamentally misaligned with patient recovery.

Topics

  • OpenAI missing user and revenue targets
  • Musk vs. Altman trial
  • Hyperscaler CapEx ($725B) and earnings
  • AI cybersecurity models (Mythos, GPT-5.5 Cyber)
  • Vibe coding risks and AI agent failures
  • Retatrutide peptide drug
  • Supreme Court Monsanto/Roundup case
  • Energy and compute constraints on AI growth

Full transcript available for MurmurCast members
