Stop using Claude. Start using Codex?
Riley Brown joins Greg Eisenberg to demonstrate why Codex by OpenAI is his preferred AI super-app, covering its ability to combine vibe coding, document creation, browser control, and automation in a single interface. The episode walks through key features including skills, plugins, Remotion video generation, computer use, and the new Chronicle memory feature. Riley argues Codex outperforms alternatives like Claude Code and Cursor by unifying knowledge work and coding in one platform.
Summary
Greg Eisenberg invites Riley Brown back on the podcast to make the case for switching from Claude to Codex by OpenAI. Riley begins by explaining that Codex is a state-of-the-art AI agent interface that combines vibe coding, document creation, browser control, and automation in a single platform. He contrasts it with competitors like Claude Code and Cursor, arguing that Anthropic made a mistake by splitting their coding and knowledge work tools into separate products (Claude Code and Claude Co-work), whereas Codex handles both seamlessly.
Riley walks through the Codex interface, showing how projects can be organized into folders with threaded chats, similar to Manus and the new Claude desktop app. He introduces the concept of 'skills' — user-created instruction sets stored as markdown files — versus 'plugins,' which are official third-party integrations approved by OpenAI. Examples of plugins include Slack, email, Notion, Canva, Remotion, and Sora. Riley demonstrates a YouTube researcher skill that pulls transcripts and generates critical analysis reports, and shows how automations can be scheduled (e.g., a weekly Friday report on content weaknesses).
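The episode describes skills as plain markdown instruction files but never shows one on screen. A hypothetical sketch of the YouTube-researcher skill might read something like the following; the filename, headings, and wording are all assumptions for illustration, not Codex's documented format:

```markdown
# Skill: YouTube Researcher

When this skill is invoked with a channel or video URL:

1. Pull the transcript for each video in scope.
2. Summarize the argument structure and the key claims made.
3. Write a critical analysis report: what worked, what fell flat,
   and where the content's weaknesses are.
4. Save the report as a dated document in the project folder.
```

Because a skill is just text instructions, the same file can be reused by a scheduled automation, such as the weekly Friday weaknesses report mentioned above.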
A significant portion of the discussion covers Remotion, a tool that converts code into motion graphic videos. Riley explains that Codex has a built-in Remotion plugin, and combined with a custom 'brand assets' skill and GPT Image 2, creators can generate high-quality branded videos with minimal prompting. He mentions that some Remotion videos generated through his workflow have exceeded 800,000 views.
Riley also demonstrates browser use within Codex, showing the AI playing chess against itself using the in-app browser — highlighting that browser automation has become significantly faster than earlier tools like Manus. He notes that Codex's computer use feature can control multiple apps simultaneously, including Canva, and that a full persistent browser (internally linked to OpenAI's Atlas project) is coming, which will allow the AI to stay logged into sites and take unrestricted actions.
The episode touches on GPT 5.5, described as costing more via the API than GPT 5.4 (2x its price) and Opus 4.7 (20% more), but as more token-efficient because it models user intent better and reaches correct outputs faster. Riley also covers the Chronicle feature, which watches the user's screen to build contextual memory, though he cautions that its privacy implications should be investigated before enabling it.
For onboarding advice, Riley recommends: (1) playing with browser use by having AI play a game against itself, (2) doing deep research and converting it into spreadsheets, documents, and presentations, (3) building a 3D simulation or mobile app in Swift, and (4) identifying a repetitive daily task and turning it into an automation. He closes by emphasizing that tinkering — not immediate productivity gains — is how people get the most out of AI tools.
Key Insights
- Riley argues that Anthropic made a strategic mistake by splitting Claude Code and Claude Co-work into separate products with different permissions, whereas Codex unifies coding and knowledge work in a single interface — which he sees as the correct product decision.
- Riley claims that Codex is the only AI interface that handles vibe coding and knowledge work (docs, spreadsheets, presentations) in one platform, and that competitors like Cursor cannot natively create and preview documents the way Codex does.
- Riley reveals that OpenAI had a 'big red alert' internally and decided to consolidate their efforts — including the Atlas browser project — into Codex, with plans to make it a full persistent web browser where the AI stays logged into sites.
- Riley demonstrates that Codex's Remotion plugin, combined with a custom brand assets skill and GPT Image 2, can generate branded motion graphic videos in one shot — some of which have exceeded 800,000 views on his channel.
- Riley observes that browser use in Codex is meaningfully faster than earlier tools like Manus, and predicts that by the end of 2025, browser agents will operate at human speed — marking the first time he genuinely believes in browser automation.
- Riley explains that Claude Code can be run directly inside Codex via the terminal using Command J, allowing users to benefit from both their Claude and ChatGPT subscriptions without switching platforms.
- Riley states that GPT 5.5 is twice as expensive as GPT 5.4 via the API, but argues it is more token-efficient because it better models user intent and reaches correct outputs faster, reframing cost evaluation from tokens to task-completion cost.
- Riley describes Chronicle, a new Codex feature that watches the user's screen to build contextual memory, allowing the AI to understand ongoing work without re-explanation — though he acknowledges significant privacy implications that users should investigate.
- Riley argues that collecting good examples of desired outputs — stored in an organized knowledge base like Notion — is the single most impactful thing a company can do right now to improve AI agent performance.
- Riley claims he heard rumors that future versions of Codex will accept screen recording uploads, allowing the AI to watch how a user performs a task and then replicate it via computer control — prompting him to increase his own screen recording habits now.
- Riley distinguishes skills (user-created markdown instruction files accessible via slash commands) from plugins (officially approved third-party integrations accessible via @ mentions), noting that OpenAI controls plugin approval in a process similar to an app store.
- Riley recommends that new Codex users start by having the AI build a game and use browser use to play it against itself — not for productivity, but to viscerally understand where AI agent capabilities are heading, which he says changes how people think about preparing their workflows.
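The token-efficiency reframing in the insights above can be made concrete with arithmetic: what matters is price per completed task, not price per token. A sketch with made-up numbers (these are illustrative, not real model prices):

```typescript
// Task-completion cost = (price per million tokens) x (tokens used) / 1e6.
// All figures below are hypothetical, chosen only to show the reframing.
function taskCost(pricePerMTok: number, tokensToFinish: number): number {
  return (pricePerMTok * tokensToFinish) / 1_000_000;
}

// A model that costs 2x per token but finishes the task in well under
// half the tokens is cheaper per completed task.
const cheaperPerToken = taskCost(5, 300_000);  // $1.50 for the task
const pricierPerToken = taskCost(10, 120_000); // $1.20 for the task
console.log({ cheaperPerToken, pricierPerToken });
```

Under this framing, a model that is "twice as expensive" can still lower the bill if better intent modeling means fewer retries and shorter conversations.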