
Smart Second Brain

your-papa · 59k downloads

Interact with your privacy-focused assistant by leveraging Ollama or OpenAI, making your second brain even smarter.


Your Smart Second Brain

Your Smart Second Brain is a free and open-source Obsidian plugin that improves your overall knowledge management. It serves as your personal assistant, powered by large language models such as ChatGPT or Llama 2. It can directly access and process your notes, eliminating the need for manual prompt editing, and it can operate completely offline, ensuring your data remains private and secure.

S2B Chat (screenshot)

🌟 Features

📝 Chat with your Notes

  • RAG pipeline: All your notes are embedded into vectors and retrieved based on their similarity to your query; the retrieved notes are then used to generate an answer
  • Get reference links to notes: Because answers are generated from your retrieved notes, we can trace where the information comes from and reference the origin of the knowledge in the answers as Obsidian links
  • Chat with LLM: You can disable answering based on your notes, in which case all answers are generated from the chosen LLM’s training knowledge
  • Save chats: You can save your chats and continue the conversation at a later time
  • Different chat views: You can choose between two chat views: the ‘comfy’ and the ‘compact’ view
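The retrieval step of the RAG pipeline can be sketched roughly as follows. This is a minimal TypeScript illustration with fabricated 3-dimensional vectors, not the plugin's actual implementation (which uses a real embedding model and the Orama vector store): notes are ranked by cosine similarity to the query embedding and the top matches are returned.

```typescript
// Hypothetical shape of an embedded note; names are illustrative only.
interface EmbeddedNote {
  path: string;      // vault path, later usable as an Obsidian link
  vector: number[];  // embedding of the note's content
}

// Cosine similarity between two vectors of equal length.
function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Return the k notes most similar to the query vector.
function retrieve(query: number[], notes: EmbeddedNote[], k: number): EmbeddedNote[] {
  return [...notes]
    .sort((x, y) => cosine(query, y.vector) - cosine(query, x.vector))
    .slice(0, k);
}

// Fabricated example data for illustration.
const notes: EmbeddedNote[] = [
  { path: "Physics/Entropy.md", vector: [0.9, 0.1, 0.0] },
  { path: "Cooking/Pasta.md",   vector: [0.0, 0.2, 0.9] },
];
const top = retrieve([1, 0, 0], notes, 1);
console.log(top[0].path); // the most similar note
```

The retrieved notes' contents are then passed to the LLM as context, and their paths become the reference links in the answer.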

🤖 Choose ANY preferred Large Language Model (LLM)

  • Ollama to integrate LLMs: Ollama is a tool for running LLMs locally. Its usage is similar to Docker, but it is designed specifically for LLMs. You can use it as an interactive shell, through its REST API, or via its Python library.
  • Quickly switch between LLMs: Comfortably change between different LLMs for different purposes, for example switching from one suited to scientific writing to one suited to persuasive writing.
  • Use ChatGPT: Although our focus is on a privacy-focused AI assistant, you can still leverage OpenAI’s models and their advanced capabilities.
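To illustrate the REST API route, here is a small TypeScript sketch that posts a prompt to a locally running Ollama instance. It assumes Ollama is listening on its default port (11434) and uses the model name "llama2" purely as an example; the request shape follows Ollama's /api/generate endpoint.

```typescript
// Request body for Ollama's /api/generate endpoint.
interface GenerateRequest {
  model: string;   // e.g. "llama2" (example model name)
  prompt: string;
  stream: boolean; // false → a single JSON response instead of a stream
}

function buildRequest(model: string, prompt: string): GenerateRequest {
  return { model, prompt, stream: false };
}

// Send the prompt and return the generated text.
async function generate(req: GenerateRequest): Promise<string> {
  const res = await fetch("http://localhost:11434/api/generate", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(req),
  });
  const data = await res.json();
  return data.response; // field containing the generated text
}

// Usage (requires a running Ollama instance):
// generate(buildRequest("llama2", "Summarize my note on entropy.")).then(console.log);
```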

⚠️ Limitations

  • Performance depends on the chosen LLM: Because LLMs are trained for different tasks, they perform better or worse at embedding notes or generating answers. You can go with our recommendations or find your own best fit.
  • Quality depends on knowledge structure and organization: Responses improve when your vault has a clear structure and you do not mix unrelated information or connect unrelated notes. We therefore recommend a well-structured vault and notes.
  • The AI assistant might generate incorrect or irrelevant answers: Due to a lack of relevant notes or the limits of AI understanding, the assistant may produce unsatisfying answers. In those cases, we recommend rephrasing your query or describing the context in more detail.

🔧 Getting started

[!NOTE]
If you use Obsidian Sync the vector store binaries might take up a lot of space due to the version history.
Exclude the .obsidian/plugins/smart-second-brain/vectorstores folder in the Obsidian Sync settings to avoid this.

Follow the onboarding instructions provided on initial plugin startup in Obsidian.

⚙️ Under the hood

Check out our Architecture Wiki page.

🎯 Roadmap

  • Support Gemini and Claude models and integrate KoboldCpp
  • Similar note connections view
  • Chat Threads
  • Hybrid Vector Search
  • Predictive Note Placement
  • Agent with Obsidian tooling
  • Multimodality

🧑‍💻 About us

We initially built this plugin as part of a university project, which is now complete. However, we are still fully committed to developing and improving the assistant in our spare time. This repo and the papa-ts (backend) repo serve as an experimental playground, allowing us to explore state-of-the-art AI topics further and to enrich the Obsidian experience we’re so passionate about. If you have any suggestions or wish to contribute, we would greatly appreciate it.

📢 You want to support?

  • Report issues or open a feature request here
  • Open a PR for code contributions (Development setup instructions TBD)

❓ FAQ

Don't hesitate to ask your question in the Q&A.

Are any queries sent to the cloud?

Queries are sent to the cloud only if you choose to use OpenAI's models. If you choose Ollama instead, your models run locally; in that case, your data is never sent to any cloud service and stays on your machine.

How does it differ from the SmartConnections plugin?

Our plugin is quite similar to Smart Connections. However, we improve on it based on our own experience and our university research.

For now, these are the main differences:

  • We are completely open-source
  • We support Ollama/local models without needing a license
  • We place more value on UI/UX
  • We use a different tech stack leveraging Langchain and Orama as our vector store
  • Under the hood, our RAG pipeline uses other techniques to process your notes like hierarchical tree summarization
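As a rough illustration of the hierarchical tree summarization idea (a sketch, not the plugin's actual implementation): chunks are summarized in small groups, and the group summaries are summarized again, level by level, until a single root summary remains. Here `summarize` is a placeholder for an LLM call and simply joins its inputs so the tree structure is visible.

```typescript
// Stand-in for an LLM summarization call; a real implementation would
// prompt a model with the texts and return its summary.
function summarize(texts: string[]): string {
  return texts.join(" | ");
}

// Recursively collapse chunks into group summaries until one root remains.
function treeSummarize(chunks: string[], fanout = 2): string {
  if (chunks.length <= 1) return chunks[0] ?? "";
  const next: string[] = [];
  for (let i = 0; i < chunks.length; i += fanout) {
    next.push(summarize(chunks.slice(i, i + fanout))); // one group per call
  }
  return treeSummarize(next, fanout); // summarize the summaries
}

console.log(treeSummarize(["a", "b", "c", "d"]));
// level 1: ["a | b", "c | d"] → root: "a | b | c | d"
```

The benefit of the tree shape is that each LLM call only ever sees a small group of texts, so arbitrarily large note collections can be condensed without exceeding the model's context window.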

What models do you recommend?

OpenAI's models are still the most capable, especially "GPT-4" and "text-embedding-3-large". The best-performing local embedding model we have tested so far is "mxbai-embed-large".

Does it support multi-language vaults?

It’s supported, although response quality may vary depending on which prompt language is used internally (we will support more translations in the future) and which models you use. It should work best with OpenAI's "text-embedding-3-large" model.

Details

  • Current version: 1.3.0
  • Last updated: 2 years ago
  • Created: 3 years ago
  • Updates: 30 releases
  • Downloads: 59k
  • Compatible with: Obsidian 1.5.0+
  • License: MIT
Author

your-papa (github.com/nicobrauchtgit)
