LLM Wiki
A pattern for using an AI agent to maintain a persistent Markdown wiki, so useful synthesis can compound instead of being re-created from raw sources every time.
LLM Wiki treats the knowledge base as an artifact an AI agent can update over time. Sources stay auditable, while a Markdown wiki layer carries the synthesis forward.
The useful shift is from answering again to maintaining what has already been understood.
Compiled knowledge
The durable output is not only an answer. It is an updated knowledge layer future questions can reuse.
Curate and judge
The human chooses sources, asks better questions, reviews emphasis, and decides what is worth preserving.
Maintain structure
The AI agent updates summaries, concepts, links, contradictions, indexes, and related notes across the wiki.
Growing topics
Use it when a topic accumulates sources and relationships over time. Keep one-off captures simple.
Three layers make the pattern work.
The pattern separates evidence, synthesized knowledge, and the rules that let an AI agent maintain the system consistently.
Raw sources stay fixed
Articles, papers, transcripts, datasets, saved pages, screenshots, and other source material remain auditable.
Wiki becomes maintained synthesis
A Markdown wiki layer carries the synthesis forward in linked notes that can be revised as understanding changes.
Instructions govern updates
Schema and agent instructions define where things go, how links work, how provenance is recorded, and when existing notes should change.
Source files stay auditable. The wiki summarizes and connects them, but does not replace them.
Reusable understanding moves into linked notes so future questions start from maintained context.
The schema tells the agent when to create, update, cite, link, log, or leave something alone.
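One way to express such a schema is YAML frontmatter on each note plus routing rules in the agent's instructions. A minimal sketch, with field names and folder names that are illustrative rather than prescribed by the pattern:

```yaml
# Illustrative frontmatter for a knowledge-layer note (all fields hypothetical)
type: concept          # concept | entity | map | source-packet | log
status: maintained     # draft | maintained | stale
sources:
  - sources/2024-06-llm-wiki-article.md   # provenance: the packets this note cites
links:
  - "[[retrieval-augmented-generation]]"
updated: 2024-06-18
updated_by: agent      # human | agent, so changes stay auditable
```

Fields like `sources` and `updated_by` are what let a later maintenance pass decide whether a claim is stale and who last touched it.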
The wiki compounds through repeated passes.
The loop is not just capture, summarize, and forget. Each useful pass should leave the knowledge base easier to query, browse, audit, and maintain.
Ingest
When new source material arrives, the agent decides what it changes: concepts, entities, links, comparisons, and provenance.
Query
Useful answers can become durable notes instead of disappearing into chat history.
Maintain
Periodic checks catch stale claims, contradictions, missing links, orphan notes, and weak navigation.
The output of each pass is not just an answer; it is a better knowledge base.
Best for articles, papers, clips, transcripts, or durable research captures that should affect more than one note.
A good comparison, framework, dashboard, or project note should not disappear into chat history.
Health checks keep the graph useful as the number of notes and source packets grows.
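A health check can be a small script. This sketch assumes `[[wikilink]]` syntax and a folder of `.md` notes keyed by filename, neither of which the pattern mandates; it flags links that point nowhere and notes nothing links to:

```python
import re
from pathlib import Path

# Capture the link target before any | alias or # heading anchor
WIKILINK = re.compile(r"\[\[([^\]|#]+)")

def health_check(wiki_dir):
    """Report broken wikilinks and orphan notes (notes with no inbound links)."""
    notes = {p.stem: p for p in Path(wiki_dir).rglob("*.md")}
    inbound = set()
    broken = []
    for name, path in notes.items():
        for target in WIKILINK.findall(path.read_text(encoding="utf-8")):
            target = target.strip()
            if target in notes:
                inbound.add(target)
            else:
                broken.append((name, target))
    orphans = sorted(set(notes) - inbound)
    return broken, orphans
```

A real check would also look at staleness dates and contradiction markers, but broken links and orphans are the cheapest signals that navigation is decaying.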
The pattern becomes concrete when each layer has a clear home.
This is the generalized version of the map in my own notes: keep evidence, synthesis, entities, entry points, instructions, and logs in separate roles.
Source layer
Durable source packets, PDFs, clips, archives, and binaries that future notes cite.
Knowledge layer
Reusable concept, framework, project, and synthesis notes that become working knowledge.
Entities
People, organizations, products, and recurring references.
Maps
Human-facing entry points for browsing and agent orientation.
System
Routing, YAML, templates, and task instructions.
Logs
Chronological record of meaningful AI-made changes.
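One way these roles might map to folders. The names are illustrative; the pattern only requires that each role has a clear home:

```
wiki/
├── sources/     # durable source packets, PDFs, clips, archives
├── library/     # reusable concept, framework, project, and synthesis notes
├── entities/    # people, organizations, products, recurring references
├── maps/        # human-facing entry points and agent orientation
├── system/      # routing rules, YAML schema, templates, task instructions
└── logs/        # chronological record of meaningful AI-made changes
```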
The useful unit is not a single summary file. A meaningful source can update a source packet, a library note, entity pages, maps, and a change log in one disciplined pass.
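A disciplined pass like that can be pictured as one function touching several homes at once. A minimal sketch, assuming hypothetical folder names (`sources/`, `entities/`, `logs/`) and append-only logging; a real agent pass would also update existing notes and maps rather than only creating stubs:

```python
from datetime import date
from pathlib import Path

def ingest_source(wiki, slug, summary, entities):
    """One pass: write a source packet, stub missing entity pages, log the change."""
    wiki = Path(wiki)

    # 1. Source packet: the auditable record of what arrived
    packet = wiki / "sources" / f"{slug}.md"
    packet.parent.mkdir(parents=True, exist_ok=True)
    packet.write_text(f"# {slug}\n\n{summary}\n", encoding="utf-8")
    touched = [packet]

    # 2. Entity pages: stub anything new; a fuller pass would revise existing pages
    for name in entities:
        page = wiki / "entities" / f"{name}.md"
        page.parent.mkdir(parents=True, exist_ok=True)
        if not page.exists():
            page.write_text(f"# {name}\n\nMentioned in [[{slug}]].\n", encoding="utf-8")
        touched.append(page)

    # 3. Change log: a concise chronological record of what the pass did
    log = wiki / "logs" / "changes.md"
    log.parent.mkdir(parents=True, exist_ok=True)
    with log.open("a", encoding="utf-8") as f:
        f.write(f"- {date.today()}: ingested [[{slug}]], touched {len(touched)} files\n")
    return touched
```

The point of the sketch is the shape, not the code: one ingest event fans out into evidence, entities, and an audit trail in a single pass.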
RAG and LLM Wiki solve different parts of the knowledge problem.
Query-time retrieval: RAG fetches relevant source passages at the moment a question is asked.
Maintained synthesis: LLM Wiki keeps the distilled understanding of those sources current between questions.
Use it deliberately, not for every small note.
Use it when:
- A topic will accumulate sources over weeks or months.
- Repeated synthesis would be wasteful.
- Concepts, entities, and comparisons need to stay connected.
- Future AI agents should inherit maintained context.
Skip it when:
- The note is a small one-off capture.
- A quick answer is enough.
- There is no reason to preserve provenance.
- The topic does not need ongoing maintenance.
Steer the meaning
Pick sources, ask questions, review emphasis, and judge what deserves to become durable.
Do the maintenance
Read sources, update notes, create links, reconcile contradictions, refresh indexes, and leave concise logs of meaningful changes.
Constrain the workflow
Folder roles, YAML, templates, task instructions, and safety rules make updates consistent enough to compound instead of drifting.
The idea is interesting because it makes AI knowledge work cumulative.
What I like about LLM Wiki is that it reframes AI from an answer machine into a maintenance layer for knowledge. The important question becomes: after this source or conversation, is the knowledge base better than before?
That feels especially useful for topics that keep expanding. A good answer should not only help once. If it clarifies a concept, compares ideas, or updates an understanding, it should leave something durable behind.