Scheduled AI agents: Gemini, Claude Code, and Codex without manual control – Scheduler setup
The video demonstrates how to configure AI agents (Gemini, Claude Code, Codex) to work autonomously on scheduled timers without manual interaction. The creator shows a step-by-step process of training agents to perform recurring tasks like content creation and architecture audits, then automating them to run at specific times. Materials and handout files are provided to help viewers replicate this setup.
Summary
The video opens with the creator's philosophy that the ideal AI agent, like the ideal employee, completes tasks independently without constant supervision. He argues that manually chatting with AI tools like ChatGPT, Gemini, or Claude to get work done is inefficient, and that the better approach is configuring agents to operate autonomously on a schedule.
He showcases two existing autonomous agents as examples. The first is a Codex-based agent that runs every Monday at 5 AM, audits its own architecture and documentation, and sends a report on what changed during the week. The second is a Gemini-based agent that generates social media content for Threads every Monday at 7 AM — producing three versions of each post, based on an analysis of the patterns, hooks, and endings in previously approved posts. The creator's role is reduced to reviewing drafts, selecting what he likes, and deleting what he doesn't.
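The video does not show the underlying scheduler, but a weekly trigger like "every Monday at 5 AM" reduces to computing the next matching timestamp. A minimal sketch using only the standard library (the function name and parameters are illustrative, not from the video):

```python
from datetime import datetime, timedelta

def next_weekly_run(now: datetime, weekday: int, hour: int) -> datetime:
    """Next occurrence of the given weekday/hour (Monday == 0),
    i.e. when a cron rule like '0 5 * * 1' would next fire."""
    candidate = now.replace(hour=hour, minute=0, second=0, microsecond=0)
    candidate += timedelta(days=(weekday - now.weekday()) % 7)
    if candidate <= now:  # already past this week's slot; wait a week
        candidate += timedelta(days=7)
    return candidate
```

A scheduler loop would sleep until `next_weekly_run(datetime.now(), 0, 5)` before launching the Codex audit, and until the 7 AM slot for the Gemini content agent.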
A practical reason for scheduling agents to run at off-peak hours is discussed: Anthropic's Claude has peak load hours (roughly 1 PM to 7 PM) during which token consumption increases faster, making it economically unwise to run heavy tasks at those times. Shifting tasks to night hours avoids this problem.
The creator explains his methodology for setting up a scheduled task. First, he performs the task together with the agent and refines the output through 3–5 iterations until the agent consistently delivers satisfactory results. Only then does he instruct the agent to place the task on a timer. He emphasizes that memory is a prerequisite — agents must be configured with persistent memory (e.g., linked to Obsidian or Telegram) before scheduling can work reliably.
A live demonstration is shown using his Codex agent. He sends it handout files (8 files available via Telegram) and a custom prompt to automate the creation of Instagram Reels scripts from his YouTube video content. The agent pulls the last 3 video notes, analyzes the last 15 Reels for stylistic patterns, and outputs 9 separate Reels scripts into an ideas folder. The agent also reports token usage, the number of files created, and self-validates that all output files exist and are non-empty.
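The self-validation step described above — confirming that all output files exist and are non-empty — can be sketched as a small post-run check. The folder name, `.md` extension, and function name are assumptions for illustration:

```python
from pathlib import Path

def validate_output(ideas_dir: str, expected: int, min_bytes: int = 1) -> bool:
    """Post-run check: True only if at least `expected` non-empty
    script files exist in the output folder."""
    scripts = [p for p in Path(ideas_dir).glob("*.md")
               if p.is_file() and p.stat().st_size >= min_bytes]
    return len(scripts) >= expected
```

For the demo in the video, the agent would call the equivalent of `validate_output("ideas", 9)` before reporting success.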
The creator also describes a pre-automation testing phase where the agent runs the task manually and checks against a list of known failure modes — such as the agent starting but going silent, running tasks multiple times in a row, burning excessive tokens overnight, producing incorrectly formatted output, or breaking after running for a week. Once these are verified and corrected, the automation is finalized. The video concludes with a reminder to download the provided materials and a call to action for likes and comments.
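One failure mode on that list — burning a large share of the weekly token budget overnight — lends itself to a pre-run guard. A hypothetical sketch (names and the 10% per-run cap are illustrative, not from the video):

```python
def within_budget(used_this_week: int, weekly_limit: int,
                  run_estimate: int, max_run_fraction: float = 0.10) -> bool:
    """Pre-run guard: reject a scheduled run that alone would burn too
    large a share of the weekly token budget, or that would push total
    usage past the limit."""
    if run_estimate > weekly_limit * max_run_fraction:
        return False  # a single overnight run should not dominate the budget
    return used_this_week + run_estimate <= weekly_limit
```

Combined with the self-reported token usage mentioned earlier, such a check would catch the "20–30% of the budget gone overnight" scenario before the run starts rather than after.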
Key Insights
- The creator argues that Anthropic deliberately increases token consumption during peak hours (1 PM to 7 PM), causing weekly token limits to burn faster — which is a core reason he migrated recurring tasks to scheduled, off-peak autonomous execution.
- The creator describes a Gemini-based agent that, before generating new Threads posts, analyzes previously approved posts to identify patterns, hooks, and endings — using the creator's own editorial choices as training signal for next week's content.
- The creator's methodology requires repeating a task with the agent 3–5 times until it consistently delivers correct results before placing it on a timer, framing this as a quality threshold rather than a one-shot setup.
- The creator asks agents to self-report token consumption per task run so he can evaluate the economic efficiency of each automation — comparing cost to satisfaction with output quality before committing to a recurring schedule.
- The creator describes a dedicated pre-launch test against a known list of failure modes — including agents running silently, executing tasks multiple times, burning 20–30% of the weekly token budget overnight, and tasks breaking after running for a week — before finalizing any automation.