LLM Wiki

A pattern for using an AI agent to maintain a persistent Markdown wiki, so useful synthesis can compound instead of being re-created from raw sources every time.

LLM Wiki treats the knowledge base as an artifact an AI agent can update over time. Sources stay auditable, while a Markdown wiki layer carries the synthesis forward.

Skim first

The useful shift is from answering again to maintaining what has already been understood.

Core shift

Compiled knowledge

The durable output is not only an answer. It is an updated knowledge layer future questions can reuse.

Human role

Curate and judge

The human chooses sources, asks better questions, reviews emphasis, and decides what is worth preserving.

Agent role

Maintain structure

The AI agent updates summaries, concepts, links, contradictions, indexes, and related notes across the wiki.

Best fit

Growing topics

Use it when a topic accumulates sources and relationships over time. Keep one-off captures simple.

Architecture

Three layers make the pattern work.

The pattern separates evidence, synthesized knowledge, and the rules that let an AI agent maintain the system consistently.

Schema rail: agent instructions define where knowledge goes, how notes are formatted, and what gets updated (folders, YAML, tasks).
Evidence layer

Raw sources stay fixed

Articles, papers, transcripts, datasets, saved pages, screenshots, and other source material remain auditable.

Articles: web clips and references
Papers: PDFs and notes
Transcripts: calls, videos, podcasts
Compiled layer

Wiki becomes maintained synthesis

A maintained Markdown wiki: concepts, entities, comparisons, indexes.
Rule layer

Instructions govern updates

Schema and agent instructions define where things go, how links work, how provenance is recorded, and when existing notes should change.

Routing: which folder and workflow
Provenance: what source backs a claim
Linking: what deserves a wikilink
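As a concrete sketch, a compiled note under such a schema might carry its routing and provenance in YAML front matter. The field names, paths, and date here are illustrative assumptions, not part of the original pattern:

```markdown
---
type: concept                           # routing: which folder and workflow owns this note
sources:
  - sources/karpathy-llm-wiki-gist.md   # provenance: the evidence packet behind the claims
updated: 2025-01-01
---
# Maintained synthesis

Summary text, with [[RAG]] and [[LLM Wiki]] wikilinks to the
related notes the linking rule says deserve pages of their own.
```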
Preserve evidence

Source files stay auditable. The wiki summarizes and connects them, but does not replace them.

Compile synthesis

Reusable understanding moves into linked notes so future questions start from maintained context.

Govern updates

The schema tells the agent when to create, update, cite, link, log, or leave something alone.

Operating loop

The wiki compounds through repeated passes.

The loop is not just capture, summarize, and forget. Each useful pass should leave the knowledge base easier to query, browse, audit, and maintain.

Ingest (source in)

When new source material arrives, the agent decides what it changes: concepts, entities, links, comparisons, and provenance.
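The routing part of that decision can be sketched as a small lookup. The folder names echo the knowledge map later in this piece; the table and the classification input are illustrative assumptions, not a prescribed implementation:

```python
from pathlib import Path

# Illustrative routing table: source type -> destination folder.
# Folder names loosely follow the knowledge map; adjust to your schema.
ROUTES = {
    "article": Path("sources/articles"),
    "paper": Path("sources/papers"),
    "transcript": Path("sources/transcripts"),
}

def route_source(filename: str, source_type: str) -> Path:
    """Decide where a new source packet should be filed."""
    folder = ROUTES.get(source_type, Path("sources/misc"))  # unknown types get a fallback
    return folder / filename
```

In a real schema this lookup would live in the rule layer, so the agent and the human read routing decisions from the same place.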

Query (answer out)

Useful answers can become durable notes instead of disappearing into chat history.

Maintain (health check)

Periodic checks catch stale claims, contradictions, missing links, orphan notes, and weak navigation.

Compounding result

Each pass leaves the wiki easier to query, browse, audit, and maintain. The output is not just an answer; it is a better knowledge base.

Ingest detail: new source becomes structure

Best for articles, papers, clips, transcripts, or durable research captures that should affect more than one note.

Query detail: useful answers get filed back

A good comparison, framework, dashboard, or project note should not disappear into chat history.

Maintain detail: the wiki needs maintenance passes

Health checks keep the graph useful as the number of notes and source packets grows.
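Two of these health checks, broken wikilinks and orphan notes, can be sketched in a few lines. The `[[wikilink]]` syntax and the flat single-folder layout are assumptions made for the sketch:

```python
import re
from pathlib import Path

# Matches [[Note]], [[Note|alias]], and [[Note#section]]; captures the note name.
WIKILINK = re.compile(r"\[\[([^\]|#]+)")

def health_check(wiki_dir: str) -> dict:
    """Report broken wikilinks and orphan notes in a folder of Markdown files."""
    notes = {p.stem: p.read_text(encoding="utf-8")
             for p in Path(wiki_dir).glob("*.md")}
    linked, broken = set(), []
    for name, text in notes.items():
        for target in WIKILINK.findall(text):
            target = target.strip()
            if target in notes:
                linked.add(target)          # the link resolves to an existing note
            else:
                broken.append((name, target))  # the link points at nothing
    orphans = sorted(set(notes) - linked)   # notes nothing links to
    return {"broken_links": broken, "orphans": orphans}
```

A maintenance pass would hand a report like this to the agent, which then decides whether an orphan needs a link from a map note or deletion.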

Knowledge map

The pattern becomes concrete when each layer has a clear home.

This is the generalized version of the map in my own notes: keep evidence, synthesis, entities, entry points, instructions, and logs in separate roles.

sources

Source layer

Durable source packets, PDFs, clips, archives, and binaries that future notes cite.

wiki

Knowledge layer

Reusable concept, framework, project, and synthesis notes that become working knowledge.

entities

People, organizations, products, and recurring references.

maps

Human-facing entry points for browsing and agent orientation.

system

Routing, YAML, templates, and task instructions.

logs

Chronological record of meaningful AI-made changes.

The useful unit is not a single summary file. A meaningful source can update a source packet, a library note, entity pages, maps, and a change log in one disciplined pass.
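Assuming these six roles, scaffolding the map is mechanical. The folder names and the README convention below are illustrative, not prescriptive:

```python
from pathlib import Path

# Folder roles from the knowledge map; rename freely to fit your own schema.
LAYOUT = {
    "sources": "durable source packets, PDFs, clips, archives",
    "wiki": "reusable concept, framework, project, and synthesis notes",
    "entities": "people, organizations, products, recurring references",
    "maps": "entry points for browsing and agent orientation",
    "system": "routing rules, YAML schema, templates, task instructions",
    "logs": "chronological record of meaningful AI-made changes",
}

def scaffold(root: str) -> None:
    """Create one folder per layer, each with a README stating its role."""
    for name, role in LAYOUT.items():
        folder = Path(root) / name
        folder.mkdir(parents=True, exist_ok=True)
        (folder / "README.md").write_text(f"# {name}\n\n{role}\n", encoding="utf-8")
```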

RAG contrast

RAG and LLM Wiki solve different parts of the knowledge problem.

RAG

Query-time retrieval: question → retrieve chunks → synthesize the answer now.

LLM Wiki

Maintained synthesis: read a source once → update the wiki structure → future answers start compiled.
Memory: RAG accesses sources at question time; LLM Wiki preserves synthesis in Markdown notes.
Best for: RAG suits evidence lookup and answering against large source collections; LLM Wiki suits topics where repeated synthesis and evolving context matter.
Failure mode: with RAG, good evidence can still produce repeated one-off answers; with LLM Wiki, an unclear schema can create inconsistent folders, conventions, or workflows.
Together: use retrieval to find and verify source material, and use the wiki to preserve what should compound.
How to use it

Use it deliberately, not for every small note.

Use when
  • A topic will accumulate sources over weeks or months.
  • Repeated synthesis would be wasteful.
  • Concepts, entities, and comparisons need to stay connected.
  • Future AI agents should inherit maintained context.
Avoid when
  • The note is a small one-off capture.
  • A quick answer is enough.
  • There is no reason to preserve provenance.
  • The topic does not need ongoing maintenance.
Human

Steer the meaning

Pick sources, ask questions, review emphasis, and judge what deserves to become durable.

Agent

Do the maintenance

Read sources, update notes, create links, reconcile contradictions, refresh indexes, and leave concise logs of meaningful changes.

System

Constrain the workflow

Folder roles, YAML, templates, task instructions, and safety rules make updates consistent enough to compound instead of drifting.

My note

The idea is interesting because it makes AI knowledge work cumulative.

What I like about LLM Wiki is that it reframes AI from an answer machine into a maintenance layer for knowledge. The important question becomes: after this source or conversation, is the knowledge base better than before?

That feels especially useful for topics that keep expanding. A good answer should not only help once. If it clarifies a concept, compares ideas, or updates an understanding, it should leave something durable behind.

Original concept reference: this note is based on Andrej Karpathy's LLM Wiki gist.