Insightful · Technical

Why LLM Wiki? 🧠 Future Of Knowledge For Agentic AI & Humans

Wanderloots

Callum, a former IP lawyer, explains the concept of knowledge graphs and introduces the 'LLM Wiki': a separate, AI-maintained structured knowledge base that allows multiple AI tools to share the same persistent, interlinked information. He contrasts standard RAG retrieval with graph RAG, arguing that a structured wiki layer dramatically improves how AI handles complex, multi-source knowledge.

Summary

The video begins with Callum introducing two distinct knowledge systems he has built: one for his own thinking (a personal knowledge graph) and a separate one, the LLM Wiki, designed specifically for his agentic AI tools. He frames the video around the foundational building blocks of a knowledge graph: nodes (things, ideas, people, events), edges (named relationships between nodes), and triples (subject-relationship-object), which he describes as the atomic unit of any knowledge graph.
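The nodes/edges/triples structure described above can be sketched as a tiny data structure. This is a minimal illustration, not anything from the video; the entities and relationships are made up:

```python
from collections import defaultdict

class KnowledgeGraph:
    """A knowledge graph as a collection of (subject, relationship, object) triples."""

    def __init__(self):
        # adjacency map: subject node -> list of (relationship, object) edges
        self.edges = defaultdict(list)

    def add_triple(self, subject, relationship, obj):
        # each triple is the atomic unit: two nodes joined by a named edge
        self.edges[subject].append((relationship, obj))

    def neighbors(self, subject):
        return self.edges[subject]

kg = KnowledgeGraph()
kg.add_triple("Ada Lovelace", "wrote about", "Analytical Engine")
kg.add_triple("Analytical Engine", "designed by", "Charles Babbage")

print(kg.neighbors("Ada Lovelace"))
# -> [('wrote about', 'Analytical Engine')]
```

Because edges are named, a query can follow "wrote about" and then "designed by" to connect Lovelace to Babbage, which is exactly the kind of relational hop a bag of disconnected documents cannot express.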

To make knowledge graphs tangible, Callum uses Google's knowledge panel and Wikipedia as real-world examples, noting that both are essentially large-scale knowledge graphs. He then walks viewers through building a knowledge graph in Obsidian using personal examples like favorite inventions, demonstrating how linking concepts while writing naturally produces a graph structure without deliberate graph-building effort. He emphasizes that the graph compounds over time, allowing users to rediscover and build on prior notes rather than starting from scratch.
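Obsidian builds its graph from the [[wikilinks]] written inside notes, so extracting those links is enough to recover the edge list. A small sketch, with hypothetical note titles and contents:

```python
import re

# Hypothetical vault: note title -> note body, with Obsidian-style [[wikilinks]].
notes = {
    "Favorite Inventions": "The [[Printing Press]] and the [[Transistor]] changed everything.",
    "Printing Press": "Enabled mass literacy, which fed the [[Scientific Revolution]].",
}

WIKILINK = re.compile(r"\[\[([^\]]+)\]\]")

def extract_edges(notes):
    # Every link written while taking notes becomes a (source, target) edge;
    # the graph is a byproduct of writing, not a separate building step.
    return [(title, target)
            for title, body in notes.items()
            for target in WIKILINK.findall(body)]

for edge in extract_edges(notes):
    print(edge)
```

This is the point about compounding: each new note that links to "Printing Press" adds edges to the existing cluster instead of starting from scratch.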

Callum then transitions to discussing how AI currently retrieves information through RAG (Retrieval Augmented Generation), which converts documents into numerical embeddings and fetches the most similar chunks. While effective for simple queries, he argues RAG fails when answers live in the relationships between documents rather than within individual ones. He introduces graph RAG as a more effective alternative for complex, high-volume data, as it allows AI to follow relational paths between sources rather than retrieving thousands of disconnected chunks.
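The contrast can be shown in toy form. In the sketch below, cosine similarity over two-dimensional vectors stands in for a real embedding model, and the document names and citation edge are invented for illustration:

```python
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Toy embedded chunks (a real system would use high-dimensional embeddings).
chunks = {"doc_a": [1.0, 0.0], "doc_b": [0.9, 0.1], "doc_c": [0.0, 1.0]}

def vector_rag(query_vec, k=1):
    # Standard RAG: return the top-k most similar chunks, ignoring relationships.
    return sorted(chunks, key=lambda c: cosine(chunks[c], query_vec), reverse=True)[:k]

# A relational edge: doc_a cites doc_c. The answer lives in this link,
# not inside either document alone.
graph = {"doc_a": ["doc_c"]}

def graph_rag(query_vec):
    # Graph RAG: seed with the best-matching chunk, then follow its edges.
    seed = vector_rag(query_vec)[0]
    return [seed] + graph.get(seed, [])

print(vector_rag([1.0, 0.0]))  # ['doc_a'] -- similar chunk only, misses doc_c
print(graph_rag([1.0, 0.0]))   # ['doc_a', 'doc_c'] -- follows the relationship
```

doc_c is dissimilar to the query, so pure similarity search never surfaces it; the relational hop does, which is the failure mode being described.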

The video's central concept, the LLM Wiki, is introduced as a solution to the problem of AI tools having siloed, incompatible memories. Callum cites Andrej Karpathy's articulation of the idea: rather than indexing raw documents for retrieval at query time, an LLM incrementally builds and maintains a persistent wiki of structured, interlinked markdown files. When new sources are added, the agent reads, extracts, and integrates them into the existing wiki, updating entity pages and flagging contradictions. This knowledge is compiled once and kept current, rather than rederived on every query.
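The compile-once-and-maintain idea reduces to a merge step over per-entity pages, where conflicting values are flagged rather than silently overwritten. A minimal sketch; the entity and fields are illustrative, and a real agent would extract the facts with an LLM:

```python
wiki = {}            # entity name -> {field: value}, the compiled wiki
contradictions = []  # (entity, field, existing_value, new_value) flags

def integrate(entity, facts):
    """Merge facts from a new source into an entity page, flagging conflicts."""
    page = wiki.setdefault(entity, {})
    for field, value in facts.items():
        if field in page and page[field] != value:
            # Never silently overwrite: record the contradiction for review.
            contradictions.append((entity, field, page[field], value))
        else:
            page[field] = value

integrate("Transformer", {"introduced_in": 2017})
integrate("Transformer", {"introduced_in": 2018})  # a conflicting source

print(wiki["Transformer"])   # {'introduced_in': 2017}
print(contradictions)        # [('Transformer', 'introduced_in', 2017, 2018)]
```

The expensive extraction happens once per source at integration time; queries then read the already-compiled page instead of re-deriving the answer from raw documents.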

Callum describes his personal implementation: a separate human vault for his own thinking, and a distinct LLM vault that AI agents build and maintain from raw clipped sources. The LLM Wiki has three layers: raw sources (untouched), a compiled wiki (structured, interlinked pages written by the agent), and ongoing maintenance (contradiction checking, orphan page resolution). He closes by positioning the combination of a human vault and an agentic vault as the closest approximation to a true second brain.
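One of those maintenance passes, orphan resolution, can be sketched by checking which compiled pages are never linked from any other page. The filenames and contents below are hypothetical:

```python
import re

# Hypothetical compiled-wiki layer: filename -> page body with [[wikilinks]].
wiki_pages = {
    "GraphRAG.md": "Builds on [[RAG.md]] by traversing entity links.",
    "RAG.md": "Retrieval over embedded chunks; see [[GraphRAG.md]].",
    "Orphan.md": "Clipped note that was never integrated anywhere.",
}

WIKILINK = re.compile(r"\[\[([^\]]+)\]\]")

def find_orphans(pages):
    # A page is an orphan if no other page links to it; the maintenance
    # agent would then either link it into the wiki or merge it away.
    linked = {target for body in pages.values() for target in WIKILINK.findall(body)}
    return sorted(set(pages) - linked)

print(find_orphans(wiki_pages))  # ['Orphan.md']
```

Keeping this pass separate from the untouched raw-sources layer means the agent can rewrite and reorganize the compiled wiki freely without ever losing the original material.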

Key Insights

  • Callum argues that a knowledge graph is not something you explicitly build; it is the natural byproduct of being specific and relational in how you take notes, stating 'I didn't try to build the graph. I just wrote about the relationship between different concepts.'
  • Callum contends that standard RAG fails not when answers live within individual documents, but when they live between documents, in the connections and dependencies, at which point what is needed is 'a reference librarian, not a chatbot.'
  • Callum cites Andrej Karpathy's framing of the LLM Wiki: rather than indexing raw sources for retrieval at query time, an LLM incrementally builds a persistent, structured wiki (integrating new information, updating entity pages, and noting contradictions) so knowledge is 'compiled once and kept current, not rederived on every query.'
  • Callum explains that AI tools each maintain their own siloed memory, which works within a single tool but breaks down entirely when switching between tools, leading him to propose the LLM Wiki as a shared structured knowledge base that all AI tools can draw from simultaneously.
  • Callum deliberately keeps his human vault and LLM vault separate so he can clearly distinguish what came from his own thinking versus what was generated or compiled by AI, describing this separation as a personal 'firewall' between human and machine knowledge.

Topics

  • Knowledge graphs and their structure (nodes, edges, triples)
  • LLM Wiki as a shared AI knowledge base
  • RAG vs. graph RAG for AI retrieval
  • Obsidian as a personal knowledge management tool
  • Separating human thinking from AI-generated knowledge

Full transcript available for MurmurCast members


MurmurCast summarizes your YouTube channels, podcasts, and newsletters into one daily email digest.