
AI-powered structured knowledge base that ingests your notes and generates a connected Wiki — based on Andrej Karpathy's LLM Wiki concept.
Author: Greener-Dalii | Version: 1.7.13
English | 中文文档 | 日本語 | 한국어 | Deutsch | Français | Español | Português | Official Site | Discussions
You write. AI organizes. You ask. That's it.
The problem. Your notes are a goldmine — people, concepts, ideas, connections. But right now they're just files in folders. Finding what relates to what means searching, tagging, and hoping you remember the thread.
The fix. Andrej Karpathy suggested something elegant: treat your notes as raw material, and let an LLM do the architect work. It reads what you write, pulls out entities and concepts, and weaves them into a structured Wiki — complete with [[bidirectional links]], an auto-generated index, and a chat interface that answers questions from your knowledge.
So you don't have to be the librarian. No deciding what deserves a page. No maintaining cross-links. No wondering if something is out of date. Drop notes into sources/ and the LLM reads, extracts, writes, links, and even flags contradictions — while you stay in flow.
And it's not another chatbot. ChatGPT knows the internet. LLM-Wiki knows you — or rather, what you've taught it. Every answer carries [[wiki-links]] back into your knowledge graph. Every response is a trailhead, not a dead end.
Obsidian is brilliant at linked thinking. But there's a catch: you're the one doing all the linking.
LLM-Wiki flips that. Instead of you building the graph by hand, the AI grows it with you. Add a note about a new concept — it finds the connections you'd miss. Ask a question — it walks your own knowledge graph and brings back answers with citations.
[[wiki-links]] as breadcrumbs. Every answer is a path deeper into your own knowledge.

Recommended — Obsidian Community Plugin Market:
Or from the Community Plugin website — visit community.obsidian.md/plugins/karpathywiki and click Add to Obsidian to install directly.
Manual (alternative):
Download main.js, manifest.json, and styles.css from Releases, create a folder named karpathywiki under your vault's .obsidian/plugins/ directory, and drop the three files inside.

Development: git clone, then pnpm install and pnpm build.
Ollama (local, no API key): Install Ollama, pull a model (ollama pull gemma4), select "Ollama (Local)" in the provider dropdown.
See README_CN.md for provider-specific instructions in Chinese.
| Method | How |
|---|---|
| Ingest from sources/ | Cmd+P → "Ingest Sources" — processes the entire sources/ folder |
| Ingest any folder | Cmd+P → "Ingest from Folder" — pick a folder, generate Wiki from existing notes |
| Query Wiki | Cmd+P → "Query Wiki" — ask questions, get streaming answers with [[wiki-links]] |
| Lint Wiki | Cmd+P → "Lint Wiki" — health scan with duplicate detection, dead links, orphans |
Re-ingesting the same source performs incremental updates on entity/concept pages (new information is merged in). Summary pages are regenerated.
Smart Batch Skip: When ingesting a folder, the plugin automatically detects already-processed files and skips them to save time and API costs. The batch report shows skipped count.
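One plausible way to implement skip detection like this is to record a content hash per source file and skip files whose hash is unchanged since the last ingest. The sketch below is illustrative only — the names and the hashing scheme are assumptions, not the plugin's actual mechanism:

```typescript
import { createHash } from "node:crypto";

// Hypothetical sketch: remember a content hash per source file and
// skip files whose content is unchanged since the last ingest.
type IngestLog = Record<string, string>; // path -> content hash

function contentHash(text: string): string {
  return createHash("sha256").update(text).digest("hex");
}

function planBatch(
  files: { path: string; content: string }[],
  log: IngestLog,
): { toProcess: string[]; skipped: string[] } {
  const toProcess: string[] = [];
  const skipped: string[] = [];
  for (const f of files) {
    const hash = contentHash(f.content);
    if (log[f.path] === hash) {
      skipped.push(f.path); // unchanged since last ingest — no API call
    } else {
      log[f.path] = hash; // record for the next run
      toProcess.push(f.path);
    }
  }
  return { toProcess, skipped };
}
```

With this approach, `skipped.length` is exactly the "skipped count" a batch report would show.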
Upgrading from an earlier version? Run Cmd+P → "Regenerate index" to rebuild your Wiki index with aliases included — this enables alias-aware search in Query (e.g., searching "DSA" will find "DeepSeek-Sparse-Attention").
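Alias-aware lookup of this kind can be sketched as an index that maps every alias (and the canonical title) to the owning page. All names below are illustrative, not the plugin's actual API:

```typescript
// Sketch: alias-aware search. Build a case-insensitive index from every
// title and alias to the page path, so "DSA" resolves to the page whose
// aliases include it. Hypothetical names, for illustration only.
type AliasIndex = Map<string, string>; // lowercased title/alias -> page path

function buildAliasIndex(
  pages: { path: string; title: string; aliases: string[] }[],
): AliasIndex {
  const index: AliasIndex = new Map();
  for (const p of pages) {
    index.set(p.title.toLowerCase(), p.path);
    for (const a of p.aliases) index.set(a.toLowerCase(), p.path);
  }
  return index;
}

function resolve(index: AliasIndex, query: string): string | undefined {
  return index.get(query.toLowerCase());
}
```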
Ingestion Acceleration: For sources with many entities (20+), enable parallel page generation in Settings → Ingestion Acceleration:
Safety: Parallel generation uses Promise.allSettled — if one page fails, the others continue. Failed pages are retried individually with exponential backoff.
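The safety behavior described above can be sketched as follows. This is a minimal illustration — `generatePage` and the retry parameters are hypothetical, not the plugin's actual code:

```typescript
// Sketch: retry a single operation with exponential backoff
// (wait baseMs, 2*baseMs, 4*baseMs, ... between attempts).
async function withBackoff<T>(
  fn: () => Promise<T>,
  retries = 3,
  baseMs = 500,
): Promise<T> {
  let lastErr: unknown;
  for (let attempt = 0; attempt <= retries; attempt++) {
    try {
      return await fn();
    } catch (err) {
      lastErr = err;
      if (attempt < retries) {
        await new Promise((r) => setTimeout(r, baseMs * 2 ** attempt));
      }
    }
  }
  throw lastErr;
}

// Sketch: generate pages in parallel with Promise.allSettled so one
// failure never aborts the batch; failed pages are retried one at a
// time with backoff. All names are illustrative.
async function generateAll(
  names: string[],
  generatePage: (name: string) => Promise<string>,
): Promise<{ succeeded: string[]; failed: string[] }> {
  const results = await Promise.allSettled(names.map((n) => generatePage(n)));
  const succeeded: string[] = [];
  const needRetry: string[] = [];
  results.forEach((res, i) => {
    if (res.status === "fulfilled") succeeded.push(res.value);
    else needRetry.push(names[i]); // retried individually below
  });
  const failed: string[] = [];
  for (const name of needRetry) {
    try {
      succeeded.push(await withBackoff(() => generatePage(name)));
    } catch {
      failed.push(name); // gave up after retries; other pages unaffected
    }
  }
  return { succeeded, failed };
}
```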
- Pages marked reviewed: true are protected from overwrite
- Contradiction lifecycle: detected → review_ok → resolved (AI fix), or detected → pending_fix (manual)
- Conversational query with [[wiki-links]] and multi-turn history
- Three-layer data flow: sources/ (read-only) → wiki/ (LLM-generated) → schema/ (co-evolved config)
- Modular codebase under src/

| Command | Description |
|---|---|
| Ingest single source | Select a note → generate Wiki pages with entities, concepts, and summary |
| Ingest from folder | Select a folder → batch generate Wiki from existing notes |
| Query wiki | Conversational Q&A over your Wiki, streaming responses with [[wiki-links]] |
| Lint wiki | Full health scan: duplicates, dead links, empty pages, orphans, missing aliases, contradictions |
| Regenerate index | Manually rebuild wiki/index.md |
| Suggest schema updates | LLM analyzes Wiki and proposes schema improvements |
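One of the lint checks above — dead-link detection — can be sketched by collecting every [[wiki-link]] target and flagging those with no corresponding page. This is a hypothetical illustration, not the plugin's actual lint code:

```typescript
// Sketch: dead-link detection. Scan every page body for [[target]] or
// [[target|alias]] links and report targets that have no page.
// Illustrative only.
function findDeadLinks(
  pages: Map<string, string>, // page name -> body
): { page: string; target: string }[] {
  const dead: { page: string; target: string }[] = [];
  const linkRe = /\[\[([^\]|]+)(?:\|[^\]]*)?\]\]/g;
  for (const [name, body] of pages) {
    for (const m of body.matchAll(linkRe)) {
      const target = m[1].trim();
      if (!pages.has(target)) dead.push({ page: name, target });
    }
  }
  return dead;
}
```

The other checks (duplicates, orphans, empty pages) would be similar passes over the same page map.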
Input: sources/machine-learning.md

```markdown
# Machine Learning

Machine learning uses algorithms to learn from data.

## Types

- Supervised learning
- Unsupervised learning
- Reinforcement learning
```
Output — Entity page: wiki/entities/supervised-learning.md
```markdown
---
type: entity
created: 2026-05-15
updated: 2026-05-15
sources: ["[[sources/machine-learning]]"]
tags: [method]
aliases: ["监督学习", "Supervised Learning"]
---

# Supervised Learning

## Basic Information

- Type: method
- Source: [[sources/machine-learning]]

## Description

Supervised learning is a machine learning paradigm where models learn
from labeled training data to make predictions on unseen data...

## Related Concepts

- [[concepts/Machine-Learning|Machine Learning]]
- [[concepts/Unsupervised-Learning|Unsupervised Learning]]

## Related Entities

- [[entities/Arthur-Samuel|Arthur Samuel]]

## Mentions in Source

- "Supervised learning uses labeled data to train predictive models..."
```
This plugin follows Karpathy's philosophy: feed the LLM full Wiki context, not chunked RAG retrieval. Long-context models are strongly recommended — the larger your Wiki grows, the more context the LLM needs.
Why not RAG? Karpathy's original critique argues that RAG fragments knowledge and breaks the LLM's ability to reason across the full knowledge graph.
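As a rough illustration of the full-context approach (hypothetical helper names — not the plugin's actual query engine), every Wiki page is concatenated into the prompt, rather than retrieving a few similar chunks as RAG would:

```typescript
// Sketch: full-context querying — the entire Wiki goes into the prompt,
// so the model can reason across the whole knowledge graph. Contrast
// with RAG, which would retrieve only the top-k similar chunks.
// All names here are illustrative.
interface WikiPage {
  path: string;
  body: string;
}

function buildFullContextPrompt(pages: WikiPage[], question: string): string {
  const context = pages
    .map((p) => `## ${p.path}\n${p.body}`)
    .join("\n\n");
  return [
    "You are answering from the user's personal Wiki.",
    "Cite pages with [[wiki-links]].",
    "",
    context,
    "",
    `Question: ${question}`,
  ].join("\n");
}
```

This is also why the context-window column in the table below matters: the prompt grows linearly with the size of the Wiki.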
Top recommendations:
| Model | Context Window | Why |
|---|---|---|
| DeepSeek V4 | 1M tokens | Best value — ultra-low pricing, strong Chinese support |
| Gemini 3.1 Pro | 1M+ tokens | Largest context window, strong reasoning |
| Claude Opus 4.7 | 1M tokens | Strongest agentic coding and reasoning |
| GPT-5.5 | 1M tokens | Latest OpenAI flagship, top AI intelligence index |
| Claude Sonnet 4.6 | 1M tokens | Great balance of speed, cost, and quality |
For local models (Ollama): context windows are typically smaller (8K–128K). Consider using a cloud provider for ingestion + local model for query.
Anthropic Compatible (Coding Plan): If your provider offers an Anthropic-compatible API endpoint, select "Anthropic Compatible" and enter your provider's Base URL and API Key.
Karpathy's three-layer separation design:
```
sources/   # Your source documents (read-only)
   ↓ ingest
wiki/      # LLM-generated Wiki pages
   ↓ query / maintain
schema/    # Wiki structure configuration (naming, templates, categories)
```
Codebase (src/):
```
wiki/                   # Wiki engine modules
  wiki-engine.ts        # Orchestrator
  query-engine.ts       # Conversational query
  source-analyzer.ts    # Iterative batch extraction
  page-factory.ts       # Entity/concept CRUD + merge
  lint-controller.ts    # Lint orchestration
  lint-fixes.ts         # Fix logic + duplicate candidate generation
  contradictions.ts     # Contradiction detection
  system-prompts.ts     # Language directive + section labels
schema/                 # Schema co-evolution
  schema-manager.ts     # Schema CRUD + suggestions
  auto-maintain.ts      # File watcher + periodic lint
ui/                     # User interface
  settings.ts           # Settings panel
  modals.ts             # Lint/Ingest/Query modals
```
+ shared modules: llm-client.ts, prompts.ts, texts.ts, utils.ts, types.ts
Generated pages:
- wiki/sources/filename.md — Source summary
- wiki/entities/entity-name.md — Entity pages (people, orgs, projects, etc.)
- wiki/concepts/concept-name.md — Concept pages (theories, methods, terms, etc.)
- wiki/index.md — Auto-generated index
- wiki/log.md — Operation log

MIT License — see LICENSE.