7 Secret Prompts That Make Claude 10x Better
The video presents seven lesser-known Claude commands and prompts designed to improve productivity and efficiency. These include tools for maximizing reasoning effort, reducing token usage, analyzing usage patterns, scheduling tasks, multitasking within sessions, and managing context. The presenter claims most Claude users are unaware these features exist.
Summary
The presenter, who claims to teach AI to millions of people monthly and to have reached 30 million views, shares seven 'secret codes' for Claude that most users allegedly do not know about, framing them as productivity multipliers that are especially useful for users on limited plans.
The first code is 'ultra think,' a keyword that overrides the current reasoning effort setting and forces Claude to apply maximum reasoning for that specific prompt. This is particularly useful for complex or technical tasks where deeper analysis is needed, while allowing users to conserve tokens on routine work by staying in a default low or medium reasoning mode.
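Per the video, the keyword simply goes inline in the prompt itself. A hypothetical example (the task wording here is invented for illustration):

```
ultra think: review this authentication module for race conditions and
explain the tradeoff behind each fix you propose
```

The rest of the session stays at the default reasoning level; only this one prompt gets the maximum-effort treatment.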
The second code is '/caveman,' described as an open-source skill that significantly reduces output token count. Users install it by asking Claude to install the skill, then invoke it by typing /caveman in prompts. This is positioned as helpful for users on lower-tier plans who need to manage usage limits carefully.
The third code is '/insights,' which instructs Claude to analyze the user's sessions from the past 30 days and generate a detailed HTML report. This report covers what the user is doing well, what anti-patterns or inefficiencies exist, and tailored recommendations for features or prompts to try, all based on actual usage rather than generic advice. The presenter recommends running this monthly.
The fourth and fifth codes are '/loop' and '/schedule,' both used for automating recurring tasks. '/loop' runs tasks on a user's local machine at intervals as short as one minute but expires after three days and requires the computer to remain on. '/schedule' runs independently in the cloud at a minimum one-hour interval, does not require the computer to be on, and does not expire, but lacks local file access and has daily run caps depending on the subscription tier (5, 15, or 25 runs per day for pro, next-tier, and team/enterprise plans respectively).
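The tradeoff between the two can be made concrete with a little arithmetic. The sketch below is purely illustrative (`plan_runs` is not a Claude function, just a model of the constraints described above): it counts how many times each approach can fire over the same three-day window.

```python
def plan_runs(interval_minutes, expiry_days, daily_cap=None):
    """Count how often a recurring task fires before expiry.

    Illustrative model of the /loop vs /schedule constraints described
    in the video, not an actual Claude API.
    """
    total_minutes = expiry_days * 24 * 60
    runs = total_minutes // interval_minutes
    if daily_cap is not None:
        # Cloud scheduling caps runs per day by subscription tier.
        runs = min(runs, daily_cap * expiry_days)
    return runs

# /loop-style: 1-minute interval, 3-day expiry, no daily cap, but the
# local machine must stay on the whole time.
loop_runs = plan_runs(interval_minutes=1, expiry_days=3)        # 4320 runs

# /schedule-style: 1-hour minimum interval, capped at e.g. 5 runs/day
# on the lowest tier, but runs in the cloud and never expires.
schedule_runs = plan_runs(interval_minutes=60, expiry_days=3, daily_cap=5)  # 15 runs
```

The orders-of-magnitude gap explains the positioning: /loop suits high-frequency, short-lived monitoring, while /schedule suits sparse, long-lived chores.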
The sixth code is '/btw' (by the way), which allows users to ask Claude a secondary question or run a secondary prompt without interrupting an ongoing long-running task. This enables a form of parallelization within the same session, preserving the existing context while handling additional queries simultaneously.
The seventh and final code is '/clear,' which deletes all previous context from the current session. The presenter cites research suggesting AI accuracy degrades with excessive context, and recommends using /clear when switching between unrelated tasks to both maintain response quality and reduce unnecessary token consumption.
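The token argument for /clear is easy to quantify. The model below is a deliberate simplification (it assumes every turn re-sends the full conversation history, and `tokens_consumed` is an invented helper, not part of Claude), but it shows how context cost compounds and how one clear between unrelated tasks cuts it.

```python
def tokens_consumed(turn_sizes, clear_before=()):
    """Total input tokens when each turn re-sends all prior context.

    Simplified accounting model: clear_before holds the turn indices at
    which context is wiped, as /clear would do between unrelated tasks.
    """
    total = 0
    context = 0
    for i, size in enumerate(turn_sizes):
        if i in clear_before:
            context = 0  # /clear: drop all accumulated history
        context += size
        total += context  # each turn pays for its whole context window
    return total

# Five turns of 1,000 tokens each.
without = tokens_consumed([1000] * 5)                    # 1000+2000+...+5000 = 15000
with_clear = tokens_consumed([1000] * 5, clear_before={2})  # context reset at turn 3
```

Under these assumptions, clearing once mid-session drops total input tokens from 15,000 to 9,000, alongside the claimed accuracy benefit of a smaller context.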
Key Insights
- The presenter argues that typing 'ultra think' in a Claude prompt overrides the default reasoning effort setting and forces maximum reasoning just for that prompt, allowing users to reserve high-quality reasoning for complex tasks while conserving tokens on routine ones.
- The presenter claims the '/insights' command causes Claude to analyze a user's actual session history from the past 30 days and generate a personalized HTML report with specific recommendations, rather than generic advice, based on real usage patterns.
- The presenter distinguishes '/loop' from '/schedule' by explaining that '/loop' runs locally on the user's machine (requiring it to be on, with a 3-day expiration and 1-minute minimum interval), while '/schedule' runs independently in the cloud with no expiration but no local file access and a 1-hour minimum interval.
- The presenter argues that '/btw' allows users to send Claude a secondary prompt or question without interrupting an active long-running task in the same session, effectively enabling parallelization while preserving accumulated conversation context.
- The presenter cites research claiming that AI accuracy decreases when there is excessive context in a session, and recommends using '/clear' between unrelated tasks to both maintain response quality and reduce token consumption for users on limited plans.