All-In Podcast
David Sacks on Mythos Threat: We have no choice but to take this seriously
David Sacks argues that Anthropic's warnings about AI cyber threats should be taken seriously: increasingly capable coding models will become better at finding vulnerabilities and creating exploits. He recommends that organizations use the next few months to patch known vulnerabilities, believing that proper preparation can head off a doomsday scenario.
Chamath: Anthropic's Warning Is Pure Theater
Chamath argues that Anthropic's AI safety warnings about their Claude model are mostly theatrical marketing tactics. He compares it to OpenAI's similar warnings about GPT-2 in 2019, which he claims were overblown, and suggests this is a pattern among AI companies to generate attention and adoption.
Anthropic is kicking OpenAI’s ass: Insights from the largest revenue explosion in tech history
Anthropic has seen massive revenue growth over the past 90 days, surpassing OpenAI despite previously being counted out. The hosts call it the largest revenue explosion in tech history, driven by AI capabilities crossing a threshold where customers treat it as essential labor augmentation rather than just an IT expense.
Why they are trying to KILL OpenClaw
The speaker argues that there is a coordinated effort to kill the open source product OpenClaw because it threatens the dominance of large language model companies. They believe open source models, particularly smaller verticalized ones, will eventually capture 90% of token usage and undercut the entire frontier-model business.
Anthropic’s $30B Ramp, Mythos Doomsday, OpenClaw Ankled, Iran War Ceasefire, Israel's Influence
The hosts discuss Anthropic's decision to withhold its powerful Mythos model over cybersecurity risks, the company's unprecedented $30B revenue ramp, and current geopolitical tensions, including the Iran-Israel ceasefire negotiations.