A JupyterLab extension supporting Claude Code, Copilot, Ollama, and OpenAI-compatible LLMs, with MCP, skills, plugins, and notebook agents.
Notebook Intelligence (NBI) is an AI coding assistant and extensible AI framework for JupyterLab. It adds chat, inline edit, auto-complete, and an agent that can drive notebooks — backed by GitHub Copilot, an OpenAI-compatible or LiteLLM-compatible endpoint, local Ollama models, or Anthropic's Claude Code CLI.
NBI is free and open-source. Connect it to a free or paid LLM provider of your choice — GitHub Copilot, any OpenAI- or LiteLLM-compatible endpoint, Ollama (local), or Anthropic Claude (via the Claude Code CLI). Provider charges, when applicable, are paid directly to the provider.
pip install notebook-intelligence
jupyter lab # restart JupyterLab if it was already running
After restart:
If the panel stays empty or login does nothing, see Troubleshooting.
A short glossary you'll see referenced throughout these docs.
- Chat participant — an @mention-able persona inside the chat panel (@workspace, @mcp, …). Participants route the request to a specific tool surface.
- Anthropic API (api.anthropic.com) — the HTTPS endpoint NBI calls directly for inline chat and auto-complete in Claude mode.
- Claude Code — Anthropic's local CLI agent (found on PATH) that NBI shells out to for the chat panel; it talks to Anthropic itself.
- Rules — files under ~/.jupyter/nbi/rules/ that get injected into the system prompt to enforce conventions, coding standards, or domain rules.

NBI provides a dedicated mode for Claude Code integration. In Claude mode, NBI uses the Claude Code CLI for the chat panel, and Claude models (via the Anthropic API) for inline chat and auto-complete suggestions. This brings Claude Code's tools, skills, MCP servers, and custom commands into JupyterLab.
Configure via the NBI Settings dialog (gear icon in the chat panel, or Settings → Notebook Intelligence Settings). Toggle Enable Claude mode, then:
If the Claude Code CLI is on PATH, NBI launches it automatically. To override the location, set the NBI_CLAUDE_CLI_PATH environment variable before starting JupyterLab.
When Claude mode is on, the chat sidebar shows a history icon next to the gear. Click it to list the Claude Code sessions recorded for the current working directory (the same transcripts the Claude Code CLI stores under ~/.claude/projects/). Selecting a session reconnects via resume, so the next message you send continues that transcript with full prior context.
When Claude mode is enabled and the Claude CLI is available, the JupyterLab launcher (the panel that opens with new tabs) shows a Claude Code tile alongside the standard kernel launchers. Clicking it opens a session picker — search across past transcripts and resume one in a fresh terminal, or start a new session in the file browser's active subdirectory. Session IDs can be copied from the picker for pasting into a claude --resume <id> command.
In Agent mode, the built-in AI agent creates, edits, and executes notebooks for you interactively. It can detect issues in cells and fix them.

Use the sparkle icon on the cell toolbar or the keyboard shortcut to show the inline chat popover.
Ctrl+G / Cmd+G opens the popover. Ctrl+Enter / Cmd+Enter accepts the suggestion. Esc closes it. The accept shortcut overrides JupyterLab's default run cell binding only while the popover is open — outside the popover, Ctrl+Enter / Cmd+Enter still runs the active cell.

Auto-complete suggestions are shown as you type. Tab accepts. NBI provides auto-complete in code cells and Python file editors.
You can paste or attach images alongside a chat prompt — the image goes to the model as input when the active model supports vision.
Right-click a cell output (or hover for the toolbar) to send it straight into the chat as context:
Each can be toggled per user from Settings (saved as enable_explain_error, enable_output_followup, enable_output_toolbar in config.json, default on) and locked by administrators via NBI_EXPLAIN_ERROR_POLICY / NBI_OUTPUT_FOLLOWUP_POLICY / NBI_OUTPUT_TOOLBAR_POLICY.
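A minimal sketch of how one of these boolean policies could be resolved. The env-var names and the user-choice / force-on / force-off values are the documented ones; the resolve_toggle helper itself is hypothetical:

```python
import os


def resolve_toggle(env_name: str, user_value: bool) -> bool:
    """Sketch of boolean *_POLICY resolution: 'force-on'/'force-off' lock
    the setting; 'user-choice' (the default) honors the user's
    config.json value."""
    policy = os.environ.get(env_name, "user-choice")
    if policy == "force-on":
        return True
    if policy == "force-off":
        return False
    return user_value  # user-choice (or anything unrecognized)
```

For example, an admin exporting `NBI_EXPLAIN_ERROR_POLICY=force-off` before starting JupyterLab would disable Explain Error regardless of the user's toggle.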
Active notebooks show a sparkle icon on the toolbar. Click it to open a popover that scopes the generation request to that specific notebook — handy for multi-notebook sessions where you don't want the chat sidebar to compete for context.
Configure your provider, model, and API key from NBI Settings — the gear icon in the chat panel, the /settings chat command, or the JupyterLab command palette. For background, see the provider blog post.
NBI saves configuration at ~/.jupyter/nbi/config.json. It also supports an environment-wide base configuration at <env-prefix>/share/jupyter/nbi/config.json — organizations can ship default configuration there, and user changes save as overrides on top.
These config files store provider, model, and MCP configuration. API keys for custom LLM providers are also stored here in plaintext — never commit ~/.jupyter/nbi/config.json to git, share it, or sync it across users. If a key leaks, rotate it at the provider immediately.
Manual edits to config.json require a JupyterLab restart to take effect. Edits via the Settings dialog are picked up live.
Most settings panel toggles can be locked by org administrators. Two shapes:
Boolean policies use the *_POLICY suffix and accept three values: user-choice (default — user toggles freely), force-on (locked enabled), force-off (locked disabled). When forced, the panel control is disabled with a "Locked by your administrator" tooltip and any client-side wri