Prompt Flow

Generate AI content in Obsidian using local LLMs (Ollama) or OpenAI-compatible APIs. Features custom prompts, content filtering, expansion of [[wikilinks]] in your prompts, continuous conversations, and an extensible filter API (window.promptFlow.filters).
Important Notes
- Privacy: Supports local processing with Ollama or external OpenAI-compatible APIs
- Network Use: Plugin only communicates with your configured LLM provider(s)
- Mobile Support: Works on both desktop and mobile devices
Setup

Choose one or more:

- Ollama, for local processing: install Ollama and pull a model (e.g., ollama pull llama3.1)
- An OpenAI-compatible API: note its base URL (e.g., https://api.openai.com, https://openrouter.ai/api, http://localhost:8080) and a model name (e.g., gpt-4o, gpt-4o-mini, meta-llama/llama-3.1-8b-instruct)

Installation

Manual: copy the release files into the .obsidian/plugins/prompt-flow/ directory of your vault.

BRAT: Assuming you have the BRAT plugin installed and enabled, add https://github.com/ebullient/obsidian-prompt-flow/ as the URL, and install the plugin.

See CONTRIBUTING.md for development setup instructions.

Defining prompts

Configure your prompts in Settings → Prompt Flow → Prompts. For each prompt, a command is automatically created: Generate <prompt name> (e.g., "Generate reflection question").
You can override the prompt file on a per-note basis using frontmatter:
---
prompt-file: "prompts/creative-writing-coach.md"
---
Available override:

- prompt-file: Path to a custom prompt file

This override allows different notes to use different prompt files with the same command. Without it, the plugin uses the prompt file configured in Settings for that command.
To override the connection or model, specify these in the prompt file's frontmatter instead (see Prompt File Configuration below).
Prompt files can include frontmatter to customize behavior:
---
connection: openrouter
model: meta-llama/llama-3.1-8b-instruct
num_ctx: 4096
temperature: 0.7
top_p: 0.9
isContinuous: true
includeLinks: true
excludeCalloutTypes: ["todo", "warning"]
wrapInBlockquote: true
---
You are a reflective companion. Ask concise questions that help summarize the
day.
Available options:
- connection: Connection name to use (overrides the default)
- model: Specific model to use
- num_ctx: Context window size in tokens (max_tokens for OpenAI-compatible connections)
- temperature: Randomness (0.0-2.0, default: 0.8)
- top_p: Nucleus sampling threshold (0.0-1.0)
- top_k: Top-k sampling limit (Ollama only)
- repeat_penalty: Penalty for repetition (>0, default: 1.1, Ollama only)
- isContinuous: Keep conversation context between requests (default: false)
- includeLinks: Auto-expand [[wikilinks]] to include linked content (default: false)
- excludePatterns: Array of regex patterns to exclude links
- excludeCalloutTypes: Array of callout types to filter from content
- filters: Array of filter function names from window.promptFlow.filters
- wrapInBlockquote: Format output as a blockquote (default: true)
- calloutHeading: Heading text for callout-style formatting

When isContinuous is true, the plugin maintains conversation context for each prompt/note combination, so follow-up prompts build on previous exchanges. Context is automatically cleared after 30 minutes of inactivity.
When includeLinks is enabled, the plugin automatically includes content from [[wikilinks]] and embedded files in your note. This provides the AI with broader context.
Link filtering:
Configure global exclude patterns in Settings → Link filtering, or use excludePatterns in prompt frontmatter to filter specific links.
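For example, a prompt file might expand links while skipping daily notes and external URLs (the patterns below are purely illustrative, not defaults):

---
includeLinks: true
excludePatterns:
  - "^daily/"
  - "^https?://"
---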
The plugin exposes window.promptFlow.filters for external scripts (CustomJS, other plugins) to register content transformation functions.
Example filter registration:
// In a CustomJS script or another plugin
window.promptFlow = window.promptFlow || {};
window.promptFlow.filters = window.promptFlow.filters || {};
window.promptFlow.filters.redactSecrets = (content) => {
  return content.replace(/password:\s*\S+/gi, "password: ***");
};
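The removeEmojis filter referenced in the next example is not defined above; a minimal sketch of one (only the name comes from the example below, the implementation here is an assumption):

// Hypothetical companion filter: strip emoji before content is sent to the LLM
window.promptFlow.filters.removeEmojis = (content) =>
  content.replace(/\p{Extended_Pictographic}/gu, "");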
Using filters in prompt files:
---
filters:
- redactSecrets
- removeEmojis
---
Generate a thoughtful reflection question.
Filters are applied sequentially in the order specified before sending content to the LLM.
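Conceptually, the pipeline behaves like a left-to-right reduce; a sketch of the idea (not the plugin's actual implementation):

// Apply named filters in order; unknown names are skipped
function applyFilters(content, filterNames) {
  const registry = (window.promptFlow && window.promptFlow.filters) || {};
  return filterNames.reduce(
    (text, name) => (typeof registry[name] === "function" ? registry[name](text) : text),
    content
  );
}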
Configure one or more LLM provider connections in Settings → Prompt Flow → Connections.
Ollama connection settings:
- Base URL (e.g., http://localhost:11434)
- Model (e.g., llama3.1, mistral)
- Keep-alive (e.g., 10m, 1h, -1 for always)

OpenAI-compatible connection settings:
The plugin auto-detects the correct API path structure for different OpenAI-compatible services (standard /v1 or OpenWebUI /api/v1).
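In practice, that means chat requests for the same base URL can land on one of two paths; for illustration (the endpoint suffix is an assumption, this is not the plugin's code):

// Two OpenAI-compatible path styles for the same configured base URL
const baseUrl = "http://localhost:8080";
const standardPath = baseUrl + "/v1/chat/completions";      // most services
const openWebUIPath = baseUrl + "/api/v1/chat/completions"; // OpenWebUI-style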
Troubleshooting

Cannot connect to LLM provider: make sure the server is running (for Ollama, ollama serve) and check the base URL.

No models found: pull a model (ollama pull llama3.1) and verify with ollama list.

Generated content not appearing:
See CONTRIBUTING.md for development setup, build commands, and architecture details. AI assistants should review CLAUDE.md for working guidelines.
This plugin is based on "Build an LLM Journaling Reflection Plugin for Obsidian" by Thomas Chang; see his implementation. Additional implementation ideas come from the Canvas Conversation plugin by André Baltazar.
License: MIT