A highly configurable Copilot-like auto-completion plugin using the ChatGPT API.
This plugin adds a Copilot-like auto-completion to Obsidian.
It uses large language models (LLMs) to generate text based on the n characters before and after your cursor.
It will show the suggested completion in transparent text next to your cursor.
You can then press Tab to insert the entire suggestion, or the right arrow key to insert part of the suggestion.
Additionally, you can press Escape or move the cursor to ignore the suggestion.
The plugin supports multiple API providers, such as OpenAI, Azure OpenAI and Ollama.

The plugin offers the following features:
- Context-aware suggestions: the plugin sends the n characters before and after your cursor to the model and displays the proposed text as a transparent overlay near your cursor, tailored to fit the current context. Want to know how it works? See the "How does the model work in detail?" documentation.
- `.gitignore`-style ignore patterns: the plugin automatically disables itself when you open files that match certain patterns. This helps prevent unintended triggers in sensitive documents. For more information, click here.
- The `Obsidian Copilot: Disable` command, which lets you disable the plugin when working on sensitive documents or when you currently have no need for suggestions.

To install the plugin, please follow these steps:
The following GIF demonstrates what a successful connection test should look like:

The plugin is now ready for use.
It monitors the text you type for specific triggers, such as end-of-sentence punctuation, a new line, or a list item.
Upon detecting a trigger, it presents context-specific suggestions.
For instance, try typing the following:
```markdown
# A Tale of Two Cities
The most famous quote from this book is:
```
Once you type a space after the `:`, the plugin should display a suggestion like this:

Note: To minimize API calls and costs, this plugin is not as sensitive to triggers as the original Copilot. It only activates after specific triggers, such as end-of-sentence punctuation, a new line, or a list item. For more information, see the triggers documentation.
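As a rough illustration of how such trigger matching could work, here is a minimal sketch. The regexes, the `shouldTrigger` name, and the exact trigger set are assumptions for illustration; the plugin's actual triggers are configurable and may differ.

```typescript
// Hypothetical sketch of trigger detection; the actual plugin's
// trigger patterns are configurable and may differ.
const TRIGGERS: RegExp[] = [
  /[.!?]\s$/,      // end-of-sentence punctuation followed by a space
  /\n$/,           // a new line
  /^\s*[-*+]\s$/m, // a list-item marker
];

function shouldTrigger(textBeforeCursor: string): boolean {
  return TRIGGERS.some((re) => re.test(textBeforeCursor));
}
```

For example, `shouldTrigger("Hello. ")` matches the end-of-sentence pattern, while `shouldTrigger("Hello")` matches nothing, so no API call would be made.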
This plugin utilizes large language models (LLMs) to perform fill-in-the-middle auto-completion. By default, most LLMs are not trained for this specific task. However, through prompt engineering, we can adapt LLMs to facilitate fill-in-the-middle auto-completion. The system prompt we use looks roughly like this:
Your job is to predict the most logical text that should be written at the location of the <mask/>.
Your answer can be either code, a single word, or multiple sentences.
Your answer must be in the same language as the existing text.
...
We then supply the model with the truncated text before and after the cursor, formatted as <truncated_text_before_cursor> <mask/> <truncated_text_after_cursor>.
The model responds with the text it predicts should fill the <mask/>.
After some post-processing, we present this prediction to the user as a suggestion.
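The prompt assembly described above can be sketched as follows. The function name and the truncation window size are assumptions for illustration, not the plugin's actual implementation.

```typescript
// Hypothetical sketch of fill-in-the-middle prompt assembly:
// truncate the text on each side of the cursor and join it
// around a <mask/> token.
function buildUserPrompt(
  before: string,
  after: string,
  windowSize = 2000, // assumed truncation window, in characters
): string {
  const truncatedBefore = before.slice(-windowSize);
  const truncatedAfter = after.slice(0, windowSize);
  return `${truncatedBefore}<mask/>${truncatedAfter}`;
}
```

For instance, `buildUserPrompt("Hello ", "world")` yields `"Hello <mask/>world"`, which the model is asked to complete at the mask position.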
In addition to the system prompt, we provide the model with context-specific examples to enhance its performance and make it more context-aware. For instance, if the cursor is within a code block, we supply code-related examples to the model. Conversely, if the cursor is in a title, we offer title-related examples. This approach informs the model of our expectations for the response in the given context. The plugin accommodates a wide array of contexts, including code blocks, math blocks, lists, headings, paragraphs, and more. These few-shot examples are customizable, allowing you to tailor them to your writing style or language preferences (see Custom Few-Shot Examples for more information).
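A minimal sketch of how context-specific few-shot examples might be spliced into the chat messages is shown below. The context names, message types, and example contents are assumptions for illustration only.

```typescript
// Hypothetical sketch: pick few-shot examples by context and place
// them between the system prompt and the user prompt.
type ChatMessage = { role: "system" | "user" | "assistant"; content: string };

const FEW_SHOT: Record<string, ChatMessage[]> = {
  heading: [
    { role: "user", content: "# <mask/>\nA summary of the meeting." },
    { role: "assistant", content: "Meeting notes" },
  ],
  mathBlock: [
    { role: "user", content: "$$\n<mask/>\n$$" },
    { role: "assistant", content: "e = mc^2" },
  ],
};

function buildMessages(
  context: string,
  systemPrompt: string,
  userPrompt: string,
): ChatMessage[] {
  const examples = FEW_SHOT[context] ?? []; // fall back to no examples
  return [
    { role: "system", content: systemPrompt },
    ...examples,
    { role: "user", content: userPrompt },
  ];
}
```

The design idea is that the examples demonstrate the expected answer format for the current context, so the model imitates it instead of answering in a generic style.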

This overview gives you a high-level understanding of how the plugin functions. Interested in more details? Explore the following pages:
The plugin is designed to be highly customizable, allowing you to tailor the following aspects:
For detailed guidance on customizing these settings, please visit the Personalization and Settings page.
The plugin supports the following keyboard shortcuts:
| Key | State | Action |
|---|---|---|
| Tab | Suggesting | Accept the entire suggestion |
| Right Arrow | Suggesting | Accept the next word of the suggestion |
| Escape | Suggesting | Reject the suggestion and clear the suggestion cache |
| Escape | Predicting | Cancel the prediction request. This prevents the suggestion from showing up, but the cost has already been incurred. |
| Escape | Queued | Cancel the prediction request. This prevents the API call, so no additional costs are incurred. |
Note that the keyboard shortcuts have different effects depending on the state of the plugin. If the plugin is in a state not listed in the table above, the keys will function normally. The current state of the plugin is always displayed in the plugin's status bar at the bottom of the screen. Click here for more information about the plugin's states.
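The state-dependent behavior of Escape can be sketched as a small state machine. The state names follow the table above; the `onEscape` function and the transitions are assumptions for illustration, not the plugin's actual code.

```typescript
// Hypothetical sketch of the plugin's states and the effect of Escape.
type State = "Idle" | "Queued" | "Predicting" | "Suggesting";

function onEscape(state: State): State {
  switch (state) {
    case "Queued":     // cancel before the API call: no cost incurred
    case "Predicting": // cancel the in-flight request: cost already incurred
    case "Suggesting": // reject the suggestion and clear the cache
      return "Idle";
    default:
      return state;    // in other states, Escape behaves normally
  }
}
```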
When dealing with privacy-sensitive documents, you may prefer not to share their contents with API providers such as OpenAI or Azure OpenAI. These providers could potentially store your data and use it to enhance their models, depending on their current terms and conditions. Always read the terms and conditions of your chosen API provider before using it with this plugin.
To safeguard your privacy, you can take the following measures:
- Use the ignore functionality within the plugin. In the settings, you can define a list of patterns similar to `.gitignore` glob patterns. If you open a file matching one of these patterns, the plugin automatically deactivates for that file and reactivates when you switch to a non-matching file. By default, the settings are configured to ignore all files within any parent folder named `secret`. For instructions on setting this up, [click here](docs/how-to/ignore files.md).
- Use the `Obsidian Copilot: Disable` command.

As you write, the plugin monitors the text preceding your cursor to see if it matches any predefined triggers. Unlike Copilot, this plugin does not activate after each character you type; it only activates on specific triggers, such as end-of-sentence punctuation, a new line, a list item, a math block, or a code block. This approach minimizes the number of API calls and, as a result, the associated costs. The GIF below demonstrates how the plugin is automatically triggered after adding a new line inside a math block.

You can tailor these triggers in the plugin's settings according to your preferences. However, please note that more sensitive triggers may increase API calls and thus incur higher expenses. See the Personalization and Settings page to learn how to customize these triggers to your liking.
In addition to automatic triggers, you can force the plugin to make a prediction by using the command palette (with CMD + P on Mac or CTRL + P on Windows) and typing Obsidian Copilot: Predict.
This command enables you to request a prediction from the plugin at any time, independent of automatic triggers.
Obsidian allows you to assign this command to any hotkey of your choice.
To do so, search for Copilot in the hotkey settings and assign a hotkey to the Obsidian Copilot: Predict command.

Want to contribute? Great! Please read CONTRIBUTING.md for details on our code of conduct, and the process for submitting pull requests.
This plugin serves as a connection to the API provider. We do not access or retain your data, but it is possible that the API provider does. Therefore, it is important to review and understand their terms and conditions and privacy policy. Please note that we are not liable for any information you provide to the API provider. You have the discretion to enable or disable the plugin based on the content of your documents; however, doing so is your own responsibility. Please exercise caution when sharing sensitive information such as secrets or personal data with your API provider.
If you find this plugin useful and would like to support its development, you can buy me a coffee.