Technical · Insightful

7 Months of Claude Code Lessons in 19 Minutes

Sabrina Ramonov ๐Ÿ„

Sabrina Ramonov shares her 7-month workflow for using Claude Code in production software development, centered around a detailed claude.md rules file with named shortcuts (Q new, Q plan, Q code, Q check, Q git) that enforce test-driven development, code quality checklists, and consistent codebase patterns. She demonstrates the full process by implementing a prompt-selection dropdown feature in a live production app, emphasizing the importance of staying engaged while AI codes rather than walking away.

Summary

Sabrina Ramonov, a UC Berkeley computer science and physics graduate who previously sold an AI company for millions of dollars, presents her refined workflow for using Claude Code in real-world production software engineering — explicitly distinguishing this from 'vibe coding' or MVP building.

The foundation of her workflow is a comprehensive claude.md rules file (the equivalent of Cursor's rules files). This file is organized into sections covering implementation best practices, function writing best practices, test writing best practices, code organization, and git conventions. Key rules include: always ask clarifying questions before coding, draft and confirm an approach, follow test-driven development (TDD), prefer composable testable functions over classes, use branded types in TypeScript, rely on self-explanatory code rather than excessive comments, and, critically, avoid extracting new functions unless they will be reused, are the only way to unit test otherwise untestable logic, or dramatically improve readability. Each rule is tagged with a code (C1, C2, T1, T2, BP1, etc.) so Claude can cite specific rule violations in its output.
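As an illustration of the branded-types rule, here is a minimal TypeScript sketch (the type and function names here are hypothetical, not taken from her actual rules file):

```typescript
// Branded types: a phantom property makes structurally identical primitives
// incompatible, so a UserId can't be passed where a PostId is expected.
type Brand<T, B extends string> = T & { readonly __brand: B };

type UserId = Brand<string, "UserId">;
type PostId = Brand<string, "PostId">;

// Constructors are the only sanctioned place for the cast.
const toUserId = (raw: string): UserId => raw as UserId;
const toPostId = (raw: string): PostId => raw as PostId;

function fetchUser(id: UserId): string {
  return `user:${id}`;
}

console.log(fetchUser(toUserId("u_123"))); // prints "user:u_123"
// fetchUser(toPostId("p_456")); // compile-time error: PostId is not a UserId
```

Because the cast lives only in the constructors, the rest of the codebase gets the stronger types for free, at zero runtime cost.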

A major component of her system is a set of named shortcut commands embedded in the claude.md file. 'Q new' clears context and forces Claude to read and internalize the rules file at the start of every session. 'Q plan' instructs Claude to analyze similar parts of the codebase to ensure the proposed plan is consistent with existing patterns and reuses existing code. 'Q code' triggers implementation with TDD enforcement and lint/type checks. 'Q check' runs a quality review against the function and test checklists — she finds it more effective to run 'Q check F' (functions only) and 'Q check T' (tests only) separately. 'Q X' has Claude role-play as a human UX tester and output a prioritized list of test scenarios. 'Q git' stages all changes, creates a conventional commit message, and pushes to the remote branch.
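Based on her description, the shortcut section of such a claude.md might look like the following sketch (illustrative wording only, not her actual file):

```markdown
## Shortcuts

### Q new
Clear context. Read claude.md in full and confirm you will follow every
rule (C*, T*, BP*) before doing anything else.

### Q plan
Analyze similar parts of this codebase. Ensure the proposed plan is
consistent with existing patterns and reuses existing code.

### Q code
Implement the confirmed plan with TDD: failing tests first, then code.
Run lint and type checks before declaring the task done.

### Q check
Review every change against the function and test checklists, citing the
rule code (e.g. C1, T2) for each violation found.

### Q git
Stage all changes, write a conventional commit message
(e.g. `feat: add prompt-selection dropdown`), and push to the remote branch.
```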

She demonstrates the full workflow by implementing a prompt-selection dropdown feature in her live production app — a content repurposing tool where users convert TikTok videos into LinkedIn or Facebook posts. She shows how she iterates on the plan before ever running Q code, pushes back on unnecessary features like search functionality (arguing users have dozens of prompts, not hundreds), and actively monitors file changes in the source control diff view while Claude is coding. She catches and corrects sloppy code patterns by highlighting specific code blocks and asking Claude to clean them up.

After the first working implementation appears within roughly 10 minutes, she runs the quality check shortcuts and finds issues: no TDD was followed, some code was unnecessarily complex, and there were messy multi-line constructs that could be simplified to one line. She walks through iteratively correcting these. She concludes by running Q X for UX test scenarios, testing the feature manually against the generated list, and finally running Q git to commit and push. Her central thesis is that the first draft from Claude will be functionally working but often of poor code quality, and engineers must stay engaged, actively question the AI's decisions, and use structured checklists to prevent accumulation of spaghetti code in complex production codebases.

Key Insights

  • Ramonov argues that switching from Cursor AI and Windsurf to Claude Code gave her significantly more power, and that Claude Code typically does not require manually tagging files for context — it discovers relevant files on its own during the planning phase.
  • Ramonov claims that staying engaged and reading Claude's output in real time — rather than walking away while it runs — can save 10 or more minutes per session by catching and stopping wrong rabbit holes before they compound.
  • Ramonov argues that AI tools are far too trigger-happy about extracting code into separate functions, and she added an explicit rule to her claude.md stating functions should not be extracted unless they will be reused, are the only path to unit test otherwise untestable logic, or drastically improve readability.
  • Ramonov describes her Q plan shortcut as one of the most valuable steps in her process, explaining that it forces Claude to find analogous patterns already in the codebase and redesign the plan to reuse existing code rather than inventing new solutions from scratch.
  • Ramonov states that Claude's first draft will typically produce working code within 10 minutes, but the code quality introduced is often poor — with sloppy constructs, unused elements, and unnecessary complexity — and that failing to actively clean this up leads to accumulated spaghetti code in complex production codebases.
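The function-extraction rule above can be illustrated with a small TypeScript sketch (hypothetical names, not from the video). The discount computation is used once and is readable inline, so per the rule it stays inside the function rather than becoming a one-off helper:

```typescript
interface LineItem {
  price: number;
  quantity: number;
}

function orderTotal(items: LineItem[], discountPct: number): number {
  const subtotal = items.reduce((sum, i) => sum + i.price * i.quantity, 0);
  // Kept inline: extracting an `applyDiscount(subtotal, pct)` helper would
  // add indirection with no reuse, testability, or readability gain.
  return subtotal * (1 - discountPct / 100);
}

console.log(orderTotal([{ price: 10, quantity: 2 }], 10)); // prints 18
```

Under the rule, extraction becomes justified only if a second caller appears, if the logic can't be unit tested any other way, or if inlining it genuinely hurts readability.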

Topics

  • Claude Code setup and workflow
  • claude.md AI coding rules file
  • Named shortcut commands (Q new, Q plan, Q code, Q check, Q git)
  • Test-driven development with AI coding tools
  • Code quality review and iterative correction
  • Live feature implementation walkthrough

Full transcript available for MurmurCast members
