AI News: Impressive New Model From Unexpected Company
This AI news roundup covers Thinking Machines Labs' impressive new interaction model with real-time translation and context-aware interruption, OpenAI's Codex mobile app, and Google's upcoming IO announcements. Additional topics include Anthropic's controversial subscription changes, the new Crea 2 image model, and various smaller updates from Meta, Notion, and robotics.
Summary
The video opens with coverage of Thinking Machines Labs, founded by former OpenAI interim CEO Mira Murati, which debuted demos of a new AI interaction model. The model stands out for real-time language translation, context-aware interruption (including user-defined triggers like animal names), posture monitoring via camera, proactive safety interventions, simultaneous tool calls while conversing, and built-in timekeeping — capabilities the host argues feel genuinely novel compared to incremental benchmark improvements seen elsewhere.
OpenAI received coverage for two updates: the launch of Codex mobile, which lets users remotely access and steer their Codex coding sessions from a phone, and the introduction of 'Daybreak,' a cybersecurity scanning tool positioned as an alternative to Anthropic's own security model. Unlike Anthropic's approach of providing direct model access, OpenAI offers to run scans on users' behalf.
Anthropic featured prominently with mixed news. Claude Code received a 50% increase in weekly limits through July 13th, and an 'agent view' UI was added to consolidate multiple terminal agents. However, a controversial subscription change announced for June 15th drew community backlash: third-party API usage (via tools like OpenClaw or Hermes) will now draw from a credit pool billed at API rates, which users calculated would deplete far faster than usage allowed under the previous effectively unlimited plans. Despite this, Anthropic surpassed OpenAI in business adoption for the first time according to Ramp data, and continued its industry-by-industry expansion with new Claude offerings for the legal sector and small businesses. A viral story also highlighted a user recovering 5 Bitcoin from a wallet locked for 11 years by using Claude to analyze their old hard drive.
Google's Android event previewed several AI-driven features including Gemini integration in Chrome for Android, AI-enhanced voice transcription, form auto-fill, and a reimagined AI-aware mouse pointer with head/eye tracking. The 'Google Book' was introduced as an AI-first evolution of the Chromebook. Ahead of Google IO the following week, rumors pointed to a new Gemini 3.2 Flash model and a 'Gemini Spark' always-on agent.
Additional rapid-fire updates covered Crea 2's image model with mood board-driven style control, Meta adding incognito chat to WhatsApp and rolling out Llama-based Llama Spark across its products, Notion's new developer platform, the relaunch of Digg (di.gg) as an AI-curated trend tracker, World Labs' open-source image-to-3D-environment tool, Rivian's new in-vehicle AI assistant, and Figure Robotics' 34+ hour live-streamed robot package-sorting demonstration.
Key Insights
- Thinking Machines Labs' new model can proactively interrupt a user mid-sentence when it detects dangerous advice is being followed — such as taking elderly parents mountain biking or visiting an active volcano — rather than waiting for the user to finish speaking.
- The host argues that Thinking Machines Labs' demos represent the first genuinely novel AI advancement since GPT-4's launch, contrasting them with the 'marginal benchmark improvements' that have characterized most model releases since then.
- Anthropic's new subscription credit system, launching June 15th, is characterized by community members as a 'massive nerf' because credits for third-party tools like OpenClaw are billed at expensive API rates, meaning heavy users could exhaust them within hours.
- According to Ramp data cited in the video, Anthropic surpassed OpenAI in business adoption for the first time in April, with Anthropic at 34.4% versus OpenAI at 32.3%, reversing a prior gap.
- Google's demo of an AI-aware pointer using head/eye tracking — where a user directs actions by looking and speaking without touching keyboard or mouse — leads the host to suggest this represents a near-future paradigm shift toward gesture-and-voice-only computing reminiscent of Iron Man's Jarvis.