by jonigl
A text-based user interface (TUI) client for interacting with MCP servers using Ollama. Features include agent mode, multi-server, model switching, streaming responses, tool management, human-in-the-loop, thinking mode, model params config, MCP prompts, custom system prompt and saved preferences. Built for developers working with local LLMs.
# Add to your Claude Code skills

```shell
git clone https://github.com/jonigl/mcp-client-for-ollama
```
MCP Client for Ollama (ollmcp) is a modern, interactive terminal application (TUI) for connecting local Ollama LLMs to one or more Model Context Protocol (MCP) servers, enabling advanced tool use and workflow automation. With a rich, user-friendly interface, it lets you manage tools, models, and server connections in real time—no coding required. Whether you're building, testing, or just exploring LLM tool use, this client streamlines your workflow with features like fuzzy autocomplete, advanced model configuration, MCP servers hot-reloading for development, and Human-in-the-Loop safety controls.
Option 1: Install with pip and run

```shell
pip install --upgrade ollmcp
ollmcp
```

Option 2: One-step install and run

```shell
uvx ollmcp
```

Option 3: Install from source and run in a virtual environment

```shell
git clone https://github.com/jonigl/mcp-client-for-ollama.git
cd mcp-client-for-ollama
uv venv && source .venv/bin/activate
uv pip install .
uv run -m mcp_client_for_ollama
```
Run with default settings:

```shell
ollmcp
```

If you don't provide any options, the client will use auto-discovery mode to find MCP servers from Claude's configuration.
> [!TIP]
> The CLI now uses Typer for a modern experience: grouped options, rich help, and built-in shell autocompletion. Advanced users can use short flags for faster commands. To enable autocompletion, run `ollmcp --install-completion`, then restart your shell or follow the printed instructions.
- `--mcp-server, -s`: Path to one or more MCP server scripts (`.py` or `.js`). Can be specified multiple times.
- `--mcp-server-url, -u`: URL to one or more SSE or Streamable HTTP MCP servers. Can be specified multiple times. See Common MCP endpoint paths for typical endpoints.
- `--servers-json, -j`: Path to a JSON file with server configurations. See Server Configuration Format for details.
- `--auto-discovery, -a`: Auto-discover servers from Claude's default config file (default behavior if no other options are provided).

> [!TIP]
> Claude's configuration file is typically located at:
> `~/Library/Application Support/Claude/claude_desktop_config.json`
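That configuration file follows Claude's `mcpServers` layout, which `--servers-json` also accepts. As a rough sketch (the server names, script path, and URL below are illustrative assumptions, not values shipped with the project):

```json
{
  "mcpServers": {
    "weather": {
      "command": "python",
      "args": ["/path/to/weather.py"]
    },
    "remote-server": {
      "url": "http://localhost:8000/mcp"
    }
  }
}
```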
- `--model, -m MODEL`: Ollama model to use. Default: `qwen2.5:7b`
- `--host, -H HOST`: Ollama host URL. Default: `http://localhost:11434`
- `--version, -v`: Show version and exit
- `--help, -h`: Show help message and exit
- `--install-completion`: Install shell autocompletion scripts for the client
- `--show-completion`: Show available shell completion options

Simplest way to run the client:

```shell
ollmcp
```
> [!TIP]
> This will automatically discover and connect to any MCP servers configured in Claude's settings and use the default model `qwen2.5:7b`, or the model specified in your configuration file.
Connect to a single server:

```shell
ollmcp --mcp-server /path/to/weather.py --model llama3.2:3b

# Or using short flags:
ollmcp -s /path/to/weather.py -m llama3.2:3b
```

Connect to multiple servers:

```shell
ollmcp --mcp-server /path/to/weather.py --mcp-server /path/to/filesystem.js

# Or using short flags:
ollmcp -s /path/to/weather.py -s /path/to/filesystem.js
```
> [!TIP]
> If no model is specified, the default model `qwen2.5:7b` will be used, or the model specified in your configuration file.
Use a JSON configuration file:

```shell
ollmcp --servers-json /path/to/servers.json --model llama3.2:1b

# Or using short flags:
ollmcp -j /path/to/servers.json -m llama3.2:1b
```

> [!TIP]
> See the Server Configuration Format section for details on how to structure the JSON file.
Use a custom Ollama host:

```shell
ollmcp --host http://localhost:22545 --servers-json /path/to/servers
```
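Host values like the one above need a scheme (`http://`) to be unambiguous. As a small illustrative sketch, here is how a client might normalize a `--host` value before connecting; the `normalize_host` helper is hypothetical and not part of ollmcp:

```python
def normalize_host(host: str = "http://localhost:11434") -> str:
    """Hypothetical helper: ensure an Ollama host URL has a scheme
    and no trailing slash. The default mirrors the --host default."""
    if "://" not in host:
        # Ollama serves plain HTTP locally, so assume http for bare host:port
        host = "http://" + host
    return host.rstrip("/")

print(normalize_host("localhost:22545"))  # -> http://localhost:22545
print(normalize_host())                   # -> http://localhost:11434
```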