First-class Sublime Text AI assistant with gpt-5, Opus 4.6, Gemini 3 and ollama support!
Cursor level of AI assistance for Sublime Text. I mean it.
Works with OpenAI Responses, Anthropic Claude, Google Gemini and the whole zoo of OpenAI-compatible APIs: llama.cpp server, ollama or whatever third party LLM hosting you decided to trust today.
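All of these backends speak some variant of the chat-completions protocol. For orientation, here is a minimal sketch (not plugin code; the endpoint and model name are placeholders) of the request body an OpenAI-compatible server such as llama.cpp server or ollama accepts:

```python
import json

# Sketch only: the endpoint and model name below are placeholders, not
# plugin defaults. It illustrates the chat-completions request shape that
# OpenAI-compatible servers (llama.cpp server, ollama, etc.) accept.
endpoint = "http://localhost:8080/v1/chat/completions"
payload = {
    "model": "some-local-model",
    "messages": [
        {"role": "system", "content": "You are a coding assistant."},
        {"role": "user", "content": "Explain this error message."},
    ],
    "stream": True,  # stream tokens back instead of waiting for the full reply
}
body = json.dumps(payload)  # this JSON is what gets POSTed to the endpoint
```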

## Installation

### Via Package Control

1. Open the Command Palette and run `Package Control: Install Package`.
2. Type `OpenAI` and press Enter.

### Via Git Clone

1. Run `Preferences: Browse Packages` from the Command Palette.
2. In the folder that Sublime opened, run `git clone https://github.com/yaroslavyaroslav/OpenAI-sublime-text.git "OpenAI completion"`.

> [!NOTE]
> Highly recommended complementary packages:
> - https://github.com/SublimeText-Markdown/MarkdownCodeExporter
> - https://sublimetext-markdown.github.io/MarkdownEditing
## Usage

You can interact with the AI in several ways, primarily through commands available in the Sublime Text Command Palette:
- `OpenAI: Chat Model Select`: This is the most flexible command. It opens a panel letting you pick the assistant/model and the output mode (a Phantom inline overlay, or a chat view in a panel/new tab). It automatically includes any files you've marked for context (see "Additional Request Context Management" below).
- `OpenAI: New Message`: This command sends your input directly using the assistant and output mode that were last selected or are currently active. It's quicker if you're consistently using the same settings, and it also includes any files marked for context.
- If Phantom was chosen as the output mode (via `OpenAI: Chat Model Select`), the response will appear as an inline overlay; a chat view can be opened with the `OpenAI: Open in Tab` command.

**Including Build/LSP Output**: For more specific contexts, especially when coding, you can use commands that automatically include output from Sublime Text's diagnostic panels:

- `OpenAI: New Message With Build Output`
- `OpenAI: Chat Model Select With Build Output`
- `OpenAI: New Message With LSP Output`
- `OpenAI: Chat Model Select With LSP Output`

These commands will append recent lines from the respective output panels (Build results or LSP diagnostics) to your request. The number of lines included can be configured with the `build_output_limit` setting in `openAI.sublime-settings`. This is useful for asking the AI to explain errors, debug code, or summarize diagnostics.
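For example, to cap the captured panel output at 80 lines (the value here is purely illustrative), you could add to `openAI.sublime-settings`:

```json
{
    "build_output_limit": 80
}
```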
**Managing Chat Sessions**:

- `OpenAI: Refresh Chat`: Reloads the chat history into the output panel or tab.
- `OpenAI: Reset Chat History`: Clears the chat history for the current context (project-specific or global).

You can keep a separate chat history and assistant settings for a given project by appending the following snippet to its project settings:
```json
{
    "settings": {
        "ai_assistant": {
            "cache_prefix": "/absolute/path/to/project/"
        }
    }
}
```
### Additional Request Context Management

You can include the content of specific files as context for the AI. Files marked for context will have their content sent along with your prompt. There are several ways to manage this:
- Run the `OpenAI: Add Sheets to Context` command while one or more tabs are selected (e.g., using Ctrl+Click or Cmd+Click on tabs, or by selecting files in the sidebar that get focused as tabs) to toggle their inclusion in the AI context.
- Right-click a tab and choose `OpenAI: Add File to Context` from the context menu to toggle its inclusion.
- Select files in the sidebar and choose `OpenAI: Add File to Context` from the context menu to toggle their inclusion.

Once files are added to the context:

- A hint appears in the status bar (see the `status_hint` setting).
- The `OpenAI: Chat Model Select` command's preview panel will also list the files currently included.
- The `OpenAI: Show All Selected Sheets` command from the Command Palette will select these files in their respective views/groups.

Files can be deselected using the same methods (the commands effectively toggle the inclusion status).
### Image Handling

Image handling can be invoked with the `OpenAI: Handle Image` command. It expects an absolute path to an image (e.g. `/Users/username/Documents/Project/image.png`) to be selected in a buffer or stored in the clipboard when the command is called. In addition, an instruction can be passed via the input panel to process the image with special treatment. Only `png` and `jpg` images are supported.

> [!NOTE]
> Currently the plugin expects the link, or a list of links separated by new lines, to be selected in the buffer or stored in the clipboard.
### Phantom Mode

Phantom is an overlay UI placed inline in the editor view (see the picture below). It doesn't affect the content of the view.

- Select `Phantom` as the output mode in the `OpenAI: Chat Model Select` quick panel.
- The response (or only its code, if `phantom_integrate_code_only` is `true`) can be copied to the clipboard.
- Press `ctrl+c` to stop prompting, the same as in panel mode.
"url" setting of a given model to point to whatever host you're server running on (e.g.http://localhost:8080/v1/chat/completions)."token" if your provider required one."api_type": "plain_text" for older OpenAI-compatible hosts or "api_type": "open_ai" for modern chat-completions implementations."chat_model" to a model of your choice and you're set."url" to the Gemini API root: https://generativelanguage.googleapis.com/v1beta."api_type": "google"."token" if your provider required one."chat_model" to a model from the list of supported models."url" to https://api.openai.com/v1/responses."api_type": "open_ai_responses".gpt-5."url" to https://api.anthropic.com/v1/messages."api_type": "anthropic"."token"."chat_model" to the Claude model you want to use.[!NOTE] You can set both
urlandtokeneither global or on per assistant instance basis, thus being capable to freely switching between closed source and open sourced models within a single session.
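Putting this together, a sketch of two per-assistant entries, one local and one Anthropic, might look like the following. This assumes assistants are configured as a list in the settings file; the `name` and `chat_model` values are placeholders, and the exact field set should be checked against the plugin's default settings:

```json
{
    "assistants": [
        {
            "name": "local (llama.cpp)",
            "url": "http://localhost:8080/v1/chat/completions",
            "api_type": "open_ai",
            "chat_model": "my-local-model"
        },
        {
            "name": "claude",
            "url": "https://api.anthropic.com/v1/messages",
            "api_type": "anthropic",
            "token": "your-anthropic-key",
            "chat_model": "your-chosen-claude-model"
        }
    ]
}
```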
### Settings

The OpenAI Completion plugin has a settings file where you can set your OpenAI API key. This is required for most providers to work. To set your API key, open the settings with the `Preferences: OpenAI Settings` command and paste your API key into the `token` property, as follows. You can also access these settings and the default keybindings via the main menu: `Preferences` -> `Package Settings` -> `OpenAI completion`.
```json
{
    "token": "sk-your-token"
}
```