🚀 A seamless integration of ChatGPT, OpenRouter.ai and local LLMs via Ollama/LM Studio into Obsidian.

Create reusable AI personas with custom system prompts, models, and temperature settings.
Define agents as Markdown files with frontmatter and a system prompt body. Apply an agent to any note with the Choose Agent command, or create new agents with the AI Wizard—describe what you want, and AI generates the name, temperature, and a comprehensive system prompt for you.
Your AI assistant can now actively search your vault, read files, and query the web—with a human-in-the-loop architecture that keeps you in control.
Tool calling built on privacy-first principles. When your AI needs information, it requests permission to use tools—you approve execution, review results, and control exactly what gets shared back to the model. Nothing leaves your vault without explicit consent.
Get started in just a few simple steps:
1. Go to Settings > Community Plugins > Browse, search for ChatGPT MD, and click Install.
2. Run the ChatGPT MD: Chat command (cmd + p or ctrl + p) to start a conversation from any note.

💡 Pro tip: Set up a hotkey for the best experience! Go to Settings > Hotkeys, search for ChatGPT MD: Chat and add your preferred keybinding (e.g., cmd + j).
Start chatting, and don't worry too much about the more advanced features. They will come naturally :-)
Want to keep your conversations private and avoid API costs? Use local LLMs with ChatGPT MD!
ollama pull llama3.2 # or any model of your choice
ollama pull qwen2.5 # another popular option
Ollama setup:

1. Go to Settings > ChatGPT MD > Ollama Defaults.
2. Set the Ollama URL (http://localhost:11434) and your default model (e.g. ollama@llama3.2).
3. Use the ChatGPT MD: Chat command to start conversations with your configured default model, or override it in individual notes:

---
model: ollama@llama3.2 # Override default if needed
temperature: 0.7
---
LM Studio setup:

1. Go to Settings > ChatGPT MD > LM Studio Defaults.
2. Set the LM Studio URL (http://localhost:1234) and your default model (e.g. lmstudio@your-model-name).
3. Use the ChatGPT MD: Chat command to start conversations with your configured default model, or override it in individual notes:

---
model: lmstudio@your-model-name # Override default if needed
temperature: 0.7
---
💡 Tips:

- Run ollama list in your terminal to see your installed Ollama models, or check the LM Studio interface to find the model names available for configuration.
- For web search, you can use OpenAI's gpt-4o-search-preview or Perplexity's openrouter@perplexity/llama-3.1-sonar-small-128k-online (via openrouter.ai).

ChatGPT MD comes with a well-balanced pre-configuration to get you started immediately.
You can change the global settings or use the local parameters in any note via frontmatter
(start typing --- in the first line of your note to add properties)
---
system_commands: ['You are a helpful assistant.']
temperature: 0.3
top_p: 1
max_tokens: 300
presence_penalty: 0.5
frequency_penalty: 0.5
stream: true
stop: null
n: 1
model: gpt-5-mini
# Service-specific URLs (optional, will use global settings if not specified)
openaiUrl: https://api.openai.com
# openrouterUrl: https://openrouter.ai
# ollamaUrl: http://localhost:11434
---
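Conceptually, per-note frontmatter overrides the global plugin settings key by key. The sketch below illustrates that merge behavior; the names (`ChatSettings`, `DEFAULTS`, `resolveSettings`) are assumptions for illustration, not the plugin's actual internals.

```typescript
// Hypothetical sketch: per-note frontmatter overriding global defaults.
interface ChatSettings {
  model: string;
  temperature: number;
  max_tokens: number;
  stream: boolean;
}

// Defaults mirroring the pre-configuration shown above.
const DEFAULTS: ChatSettings = {
  model: "gpt-5-mini",
  temperature: 0.3,
  max_tokens: 300,
  stream: true,
};

// Merge the note-level frontmatter over the global defaults;
// any key the note omits falls back to the default value.
function resolveSettings(frontmatter: Partial<ChatSettings>): ChatSettings {
  return { ...DEFAULTS, ...frontmatter };
}

// A note that only overrides the model and token budget:
const settings = resolveSettings({ model: "gpt-5", max_tokens: 4096 });
```

Keys you leave out of the frontmatter (here, temperature and stream) keep their global values.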
💡 Pro tip: Increase max_tokens to a higher value (e.g. 4096) for more complex tasks like reasoning, coding, or text creation.
The default model gpt-5-mini is optimized for speed and efficiency. Upgrade to gpt-5 for enhanced reasoning capabilities or use gpt-5-nano for ultra-lightweight responses.
Run the ChatGPT MD: Chat command; the AI will request tool use when needed. The implementation follows a three-layer approval pattern:
Vault Search (vault_search)
File Read (file_read)
Web Search (web_search)
Enable tool calling in Settings → ChatGPT MD → Tool Calling:
Research Assistant: "Search my vault for notes about quantum computing algorithms and recent papers on the topic"
→ AI discovers relevant notes → You approve which files to share → AI synthesizes information with proper attribution
Knowledge Synthesis: "Find all my Q3 meeting notes and summarize key decisions about product roadmap"
→ Vault search returns meeting files → You select the relevant ones → AI extracts and summarizes decisions
Web-Enhanced Writing: "Search the web for latest climate change statistics and incorporate them into my article"
→ Web search fetches current data → You filter reliable sources → AI integrates citations into your draft
Cross-Reference Discovery: "Find notes that mention both machine learning and productivity techniques"
→ Multi-word OR search finds intersections → You approve interesting connections → AI highlights patterns you might have missed
⚠️ Note: Tool support depends on model capabilities. Older models may not support function calling. You can check tool capabilities in the tool selection list after enabling tool support in the settings.
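The three-layer approval pattern described above can be sketched as follows. This is a conceptual illustration of the human-in-the-loop flow, not the plugin's actual code; every name here (`ApprovalGate`, `runToolWithConsent`, the example file names) is hypothetical.

```typescript
// Conceptual sketch of the three-layer approval flow.
type ToolName = "vault_search" | "file_read" | "web_search";

interface ToolRequest {
  tool: ToolName;
  query: string;
}

// The user sits behind each layer:
interface ApprovalGate {
  approveExecution(req: ToolRequest): boolean; // layer 1: may the tool run?
  reviewResults(results: string[]): string[];  // layer 2: which results to keep?
  approveSharing(selected: string[]): boolean; // layer 3: send back to the model?
}

function runToolWithConsent(
  req: ToolRequest,
  execute: (req: ToolRequest) => string[],
  gate: ApprovalGate
): string[] {
  if (!gate.approveExecution(req)) return []; // user declined execution
  const results = execute(req);
  const selected = gate.reviewResults(results); // user filters the results
  return gate.approveSharing(selected) ? selected : [];
}

// Example: a gate that approves everything except one private note.
const gate: ApprovalGate = {
  approveExecution: () => true,
  reviewResults: (r) => r.filter((f) => f !== "secret.md"),
  approveSharing: () => true,
};

const shared = runToolWithConsent(
  { tool: "vault_search", query: "quantum computing" },
  () => ["algorithms.md", "secret.md", "papers.md"],
  gate
);
```

The key design point is that the model never sees raw tool output: everything passes through the user's review layer first.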
You can set and change the model for each request in your note.
Specify the model property via frontmatter:
For OpenAI models (including the latest GPT-5 family):
---
model: gpt-5 # or gpt-5-mini, gpt-5-nano, gpt-5-chat-latest
system_commands: [act as a senior javascript developer]
---
Prefix it with ollama@ for Ollama models or lmstudio@ for LM Studio models:
---
model: ollama@gemma2:27b
temperature: 1
---
The AI responses keep the model name used in the response title for future reference.
You can find the list of your installed Ollama model names from your terminal via ollama list, or the available OpenAI model names online on the OpenAI models page.
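A `provider@model` string like the ones above could be split into a service prefix and a model name roughly as sketched below. The function and type names are assumptions for illustration, not the plugin's internals.

```typescript
// Illustrative sketch: splitting "provider@model" frontmatter values.
interface ModelRef {
  provider: "openai" | "ollama" | "lmstudio" | "openrouter";
  model: string;
}

function parseModel(value: string): ModelRef {
  const at = value.indexOf("@");
  if (at === -1) {
    // No prefix: plain OpenAI model names like "gpt-5-mini".
    return { provider: "openai", model: value };
  }
  // Split only on the first "@" so model names containing "/" or ":"
  // (e.g. "gemma2:27b" or "perplexity/llama-3.1-...") pass through intact.
  const provider = value.slice(0, at) as ModelRef["provider"];
  return { provider, model: value.slice(at + 1) };
}
```

So `ollama@gemma2:27b` routes to the Ollama service with model `gemma2:27b`, while a bare `gpt-5` defaults to OpenAI.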
Each AI service has its own dedicated URL parameter that can be configured globally in settings or per-note via frontmatter:
---
# For OpenAI
openaiUrl: https://api.openai.com
# For OpenRouter
openrouterUrl: https://openrouter.ai
# For Ollama
ollamaUrl: http://localhost:11434
---
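Resolution of these URLs can be thought of as a simple two-level lookup: a per-note frontmatter value wins, otherwise the documented default applies. A minimal sketch, with names (`resolveUrl`, `DEFAULT_URLS`) that are illustrative assumptions:

```typescript
// Hypothetical sketch: per-note URL override with documented defaults.
const DEFAULT_URLS: Record<string, string> = {
  openaiUrl: "https://api.openai.com",
  openrouterUrl: "https://openrouter.ai",
  ollamaUrl: "http://localhost:11434",
};

// Prefer the frontmatter value; fall back to the global default.
function resolveUrl(key: string, frontmatter: Record<string, string>): string {
  return frontmatter[key] ?? DEFAULT_URLS[key];
}
```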
The default URLs are:
- OpenAI: https://api.openai.com
- OpenRouter: https://openrouter.ai
- Ollama: http://localhost:11434

Note: Previous versions used a single url parameter, which is now deprecated. Please update your templates and notes to use the service-specific URL parameters.
Run commands from Obsidian's command palette via cmd + p or ctrl + p and start typing chatgpt, or set hotkeys
(a chat command hotkey is highly recommended for effortless chats; I use cmd + j, which works great because your index finger is already resting on that key).
In the plugin settings you can also configure the Chat Folder and the Chat Template Folder.

Want to try the latest features before they're officially released? You can beta test ChatGPT MD using the BRAT (Beta Reviewer's Auto-update Tool) community plugin:
Add bramses/chatgpt-md as a beta plugin. This gives you early access to new features while they're still being developed and tested.
⚠️ WARNING: Beta testing is dangerous and happens at your own risk. Always test beta versions on a new empty vault, not on your main vault. Beta features can break and possibly lead to data loss.
Use the ChatGPT MD: Chat command from the Obsidian command Palette (cmd + p or ctrl + p) to start a conversation from any note.
Should I set up a hotkey for the ChatGPT MD: Chat command? Yes, you should! Go to Settings > Hotkeys, search for ChatGPT MD: Chat and add your preferred keybinding (e.g., cmd + j).
You can use OpenAI's GPT models (including the GPT-5 family), various models through OpenRouter.ai (like Claude, Gemini, DeepSeek, Llama, Perplexity), or any model you have installed via Ollama. DeepSeek-r1:7b works great for local reasoning via Ollama.
Custom endpoints must adhere to OpenAI's API specification (Azure's hosted endpoints do, for example). Consult your provider for API key management details.
In the plugin settings, add your OpenAI API key and/or install Ollama and local LLMs of your choice.
The single 'url' parameter is now deprecated. In v2.2.0 and higher, we've introduced service-specific URL parameters: openaiUrl, openrouterUrl, and ollamaUrl. This allows for more flexibility and clarity when configuring different services. Please update your templates and notes accordingly.
🤖 Enjoy exploring the power of ChatGPT MD in your Obsidian vault!🚀
Pull requests, bug reports, and all other forms of contribution are welcomed and highly encouraged! :octocat:
# Clone the repository
git clone https://github.com/bramses/chatgpt-md.git
cd chatgpt-md
# Install dependencies
yarn install
# Development mode (watch for changes)
yarn dev
# Run tests
yarn test
# Run tests with coverage
yarn test:coverage
# Lint code
yarn lint
yarn lint:fix
# Production build
yarn build
The project uses Jest for testing with 104 tests covering utility functions. Before submitting a PR:
1. Run yarn test to ensure all tests pass
2. Run yarn lint to check code quality
3. Run yarn build to verify the build succeeds
4. Avoid any usage; prefer explicit types
5. Add tests for new utility functions in src/Utilities/*.test.ts

Pre-commit hooks automatically run on git commit:
GitHub Actions automatically runs on pull requests:
src/
├── Commands/ # Obsidian command handlers
├── Services/ # Business logic & AI adapters
├── Views/ # UI components & modals
├── Utilities/ # Pure helper functions (well-tested)
├── Models/ # TypeScript interfaces
├── Types/ # Type definitions
└── core/ # Dependency injection container
For detailed development documentation, see CLAUDE.md and docs/development.md.
1. Create a feature branch (git checkout -b feature/amazing-feature)
2. Commit your changes (git commit -m 'Add amazing feature')
3. Push the branch (git push origin feature/amazing-feature)

Your PR will be automatically checked by CI. Please ensure all checks pass before requesting review!
Bram created ChatGPT MD in March 2023. He lives in NYC and is building Your Commonbase (A Self Organizing Scrapbook with Zero Stress Storing, Searching, and Sharing). His personal website and newsletter are located at bramadams.dev.
Deniz joined Bram in 2024 to continue development. He works at a gaming company in Germany and uses AI heavily in his work and private life. Say "hi" on Bluesky: Deniz
Happy writing with ChatGPT MD! 💻 🎉