
Claude Cowork + Ollama is INSANE

Julian Goldie SEO

This video introduces Claude Cowork and Ollama as two powerful AI tools that, when combined, enable users to automate complex workflows. Claude Cowork acts as an agentic AI system that can schedule tasks, organize files, and generate reports autonomously, while Ollama allows open-source AI models to run locally. Together, they offer a privacy-conscious, highly customizable automation setup.

Summary

The video opens by arguing that most people are wasting hours each week on tasks that AI could handle, and promises to show viewers how Claude Cowork and Ollama can change that. The host, presenting as a digital avatar of Julian Goldie, positions himself as someone focused on practical AI tool usage rather than theoretical discussion.

The first major topic is Claude Cowork, which the host distinguishes sharply from standard Claude chat usage. Rather than a back-and-forth chatbot, Cowork is described as Anthropic's agentic AI system for knowledge work: one that receives a goal from the user and autonomously executes multi-step tasks to completion. It runs on the Claude desktop app and connects to local files and applications. Real examples cited directly from Anthropic's product page include: automatically pulling analytics metrics into a weekly report every Friday, scanning a cluttered downloads folder and proposing an organized filing plan, extracting data from receipts and invoices into formatted spreadsheets, and drafting reports from company templates and meeting notes.

The host also addresses concerns about data privacy and user control. Users explicitly choose which folders Claude can access, Claude proposes actions before executing them, users can redirect or stop the process at any point, and conversation history is stored locally rather than in the cloud. A forthcoming feature called 'Dispatch' is mentioned, which would allow Claude to operate directly on the user's computer — opening apps, navigating browsers, and interacting with the screen — with the phone-to-desktop task delegation already in research preview for Pro and Max subscribers.

The second major tool covered is Ollama, described as a platform for running open-source AI models locally. Key attributes include offline capability, no data training on user inputs, broad model support, and full open-source availability on GitHub. A significant development highlighted is that Ollama version 0.4.0 added full compatibility with the Anthropic Messages API, meaning Claude Code (Anthropic's agentic coding tool) can now be run with open-source models through Ollama. Setup is described as simple: one terminal command each to install Claude Code and Ollama, then a single command ('ollama launch claude') to get started. Specific model recommendations are provided for cloud use (Kimi K2.5 Cloud, GLM5 Cloud) and local use (Qwen 3.5, Qwen 3 Coder), with a minimum 64K-token context length recommended.
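As a rough sketch, the setup described above might look like the following. The npm package name and the Ollama install script are the standard published ones; 'ollama launch claude' is quoted directly from the video, and the specific model tag is an assumption (check the Ollama model library for current names):

```shell
# Install Claude Code (Anthropic's published npm package)
npm install -g @anthropic-ai/claude-code

# Install Ollama (official install script for Linux/macOS)
curl -fsSL https://ollama.com/install.sh | sh

# Pull a local model with a large context window (64K+ tokens recommended;
# the exact tag is an assumption -- browse ollama.com/library for options)
ollama pull qwen3:32b

# Launch Claude Code against the local Ollama backend, as shown in the video
ollama launch claude
```

This is a setup fragment, not a tested script; paths and package managers may differ on Windows.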

Ollama's '/loop' command is highlighted as a particularly powerful feature, enabling scheduled automated tasks within Claude Code — for example, checking pull requests every 30 minutes or summarizing AI news every hour. Additional integrations mentioned include headless mode for Docker containers and CI/CD pipelines, and a Telegram bot integration for sending tasks from a mobile device.
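For readers without access to '/loop', a similar effect can be approximated with Claude Code's non-interactive print mode (`claude -p`) in a plain shell loop. This is a hedged sketch, since the exact '/loop' syntax is not shown in the summary, and the prompt text here is purely illustrative:

```shell
# Approximate the "check pull requests every 30 minutes" loop using
# headless mode: `claude -p` runs a single prompt and prints the result.
while true; do
  claude -p "List my open pull requests and flag any awaiting review" \
    >> pr-check.log
  sleep 1800   # 30 minutes
done
```

The same pattern fits the headless Docker/CI use case mentioned above, where an interactive session is unavailable.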

The video closes by discussing Cowork's enterprise features (admin controls, usage tracking, and a plugin system with bundled skills, connectors, and sub-agents) and by framing the combination of these two tools as a meaningful shift toward delegating entire workflows rather than just asking questions. The host promotes his communities, AI Profit Boardroom and AI Success Lab, as resources for practical implementation.

Key Insights

  • The host explains that Claude Cowork is fundamentally different from standard Claude chat because it doesn't just answer questions: it autonomously executes multi-step tasks from start to finish, such as pulling analytics into a report template every Friday with a single instruction.
  • The host notes that Cowork stores conversation history locally on the user's device rather than in the cloud, and requires explicit user permission before accessing any folders or taking significant actions, positioning it as privacy-conscious compared to typical cloud AI tools.
  • The host highlights that Ollama version 0.4.0 added full compatibility with the Anthropic Messages API, meaning Claude Code can now be run against open-source models on local hardware, not just Anthropic's own infrastructure.
  • The host demonstrates Ollama's '/loop' command, which enables scheduled automated tasks inside Claude Code — for example, checking open pull requests every 30 minutes or generating an AI news briefing every hour — without any additional scripting.
  • The host describes a forthcoming Cowork feature called 'Dispatch' that allows Claude to operate directly on the user's computer by opening apps and navigating the browser, with the system prioritizing direct integrations first and only falling back to screen interaction as a last resort.

Topics

  • Claude Cowork agentic AI system
  • Ollama local model hosting and Anthropic API compatibility
  • Workflow automation and task scheduling
  • Privacy and user control in agentic AI
  • Claude Code integration with open-source models
