90% of AI Users Are Getting Mediocre Output. Don't Be One of Them (Stop Prompting, Do THIS Instead)
Most AI users get mediocre results because AI models are trained to produce average responses that satisfy the broadest range of users. The video explains four key levers beyond prompting - memory, instructions, tools, and style controls - that allow users to customize AI for their specific needs.
Summary
The speaker argues that 90% of AI users receive mediocre output because they rely on default settings that are optimized for the average user, not their specific needs. AI models undergo reinforcement learning from human feedback (RLHF), where human raters evaluate responses and choose what seems most helpful to most people, creating a statistical median response. This training process, while making models generally helpful, prevents them from being calibrated to individual users' particular constraints and preferences. The speaker introduces four major levers for customization: Memory (AI retaining information about users across conversations), Instructions (persistent context about user preferences and desired AI behavior), Apps and Tools (capabilities like web search and file access), and Style Controls (adjusting communication tone and personality). Each platform - ChatGPT, Claude, and Gemini - implements these features differently. The key to success is being specific rather than vague in customization, continuously capturing corrections when AI responses feel off, and encoding those patterns back into the AI's settings. The speaker emphasizes that users who achieve 10x results maintain discipline in updating their AI configurations based on mistakes and patterns they observe.
Key Insights
- Modern AI assistants learn to be average through reinforcement learning from human feedback, where human raters compare multiple responses and pick whichever seems most helpful to most people, causing the model to aim for the middle of the preference distribution
- The training process that makes AI models helpful in general is exactly what makes them mediocre for specific users, as the same mechanism preventing weird outputs also prevents calibration to particular needs
- Boris Cherny's team practice involves adding a rule to CLAUDE.md whenever Claude does something wrong, treating it as a living document that the whole team contributes to and maintains in Git
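As a concrete picture of that practice, a CLAUDE.md is just a plain Markdown file checked into the repository root that Claude Code reads as persistent instructions. The file name is the real convention; the specific rules below are hypothetical examples of the kind of corrections a team might accumulate, not ones quoted from the video:

```markdown
# CLAUDE.md — team conventions (illustrative example)

## Build and test
- Run the full test suite before committing; never skip failing tests.

## Code style
- Use TypeScript strict mode; no `any` without a comment explaining why.

## Corrections log (add a rule here whenever Claude gets something wrong)
- Do not edit generated files under `dist/`; change the source and rebuild.
- When asked to "fix a bug", write a failing test first, then make it pass.
```

Because the file lives in Git, a correction one person captures becomes a persistent rule for everyone on the team from the next session onward.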
- Being specific in instructions produces dramatically better results: compare 'be more helpful' with 'when I'm stuck on a problem, ask me diagnostic questions rather than immediately giving solutions'
- People getting 10x results capture corrections when they notice patterns and encode them back into the AI through instructions, memory, and style settings, while most people just get frustrated and move on