by shinpr
Local-first RAG server for developers. Semantic + keyword search for code and technical docs. Works with MCP or CLI. Fully private, zero setup.
```bash
# Add to your Claude Code skills
git clone https://github.com/shinpr/mcp-local-rag
```
**Semantic search with keyword boost**
Vector search first, then keyword matching boosts exact matches. Terms like useEffect, error codes, and class names rank higher—not just semantically guessed.
**Smart semantic chunking**
Chunks documents by meaning, not character count. Uses embedding similarity to find natural topic boundaries—keeping related content together and splitting where topics change.
**Quality-first result filtering**
Groups results by relevance gaps instead of arbitrary top-K cutoffs. Get fewer but more trustworthy chunks.
**Runs entirely locally**
No API keys, no cloud, no data leaving your machine. Works fully offline after the first model download.
**Zero-friction setup**
One npx command. No Docker, no Python, no servers to manage.
Use via MCP, CLI, or both. Optional Agent Skills help AI assistants form better queries and interpret results.
Set BASE_DIR to the folder you want to search. Documents must live under it.
Add the MCP server to your AI coding tool:
For Cursor — Add to ~/.cursor/mcp.json:
```json
{
  "mcpServers": {
    "local-rag": {
      "command": "npx",
      "args": ["-y", "mcp-local-rag"],
      "env": {
        "BASE_DIR": "/path/to/your/documents"
      }
    }
  }
}
```
For Codex — Add to ~/.codex/config.toml:
```toml
[mcp_servers.local-rag]
command = "npx"
args = ["-y", "mcp-local-rag"]

[mcp_servers.local-rag.env]
BASE_DIR = "/path/to/your/documents"
```
For Claude Code — Run this command:
```bash
claude mcp add local-rag --scope user --env BASE_DIR=/path/to/your/documents -- npx -y mcp-local-rag
```
Restart your tool, then start using it:
```
You: "Ingest api-spec.pdf"
Assistant: Successfully ingested api-spec.pdf (47 chunks created)

You: "What does the API documentation say about authentication?"
Assistant: Based on the documentation, authentication uses OAuth 2.0 with JWT tokens.
           The flow is described in section 3.2...
```
Or use directly as CLI — no MCP server needed:
```bash
npx mcp-local-rag ingest ./docs/
npx mcp-local-rag query "authentication API"
```
That's it. No Docker, no Python, no server setup.
You want AI to search your documents—technical specs, research papers, internal docs. But most solutions send your files to external APIs.
**Privacy.** Your documents might contain sensitive data. This runs entirely locally.

**Cost.** External embedding APIs charge per use. This is free after the initial model download.

**Offline.** Works without internet after setup.

**Code search.** Pure semantic search misses exact terms like `useEffect` or `ERR_CONNECTION_REFUSED`. Keyword boost catches both meaning and exact matches.

**Agent reality.** In practice, many AI environments mainly use tool calling. CLI support and Agent Skills make the same workflows available even without full MCP integration.
mcp-local-rag provides two interfaces: an MCP server for AI coding tools and a CLI for direct use from the terminal.
The MCP server provides 6 tools: `ingest_file`, `ingest_data`, `query_documents`, `list_files`, `delete_file`, `status`.
"Ingest the document at /Users/me/docs/api-spec.pdf"
Supports PDF, DOCX, TXT, and Markdown. The server extracts text, splits it into chunks, generates embeddings locally, and stores everything in a local vector database.
Re-ingesting the same file replaces the old version automatically.
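The ingest flow described above can be sketched roughly as follows. All names here are illustrative, not the package's real API; fixed-size chunks stand in for the semantic chunker, and a `Map` stands in for the local vector database. The key point is that chunks are stored keyed by file path, so re-ingesting a file replaces its old chunks:

```typescript
// Illustrative ingest flow: extract text, chunk, embed, store.
type Chunk = { file: string; text: string; vector: number[] };

const store = new Map<string, Chunk[]>(); // stand-in for the vector DB

function fakeEmbed(text: string): number[] {
  // Placeholder embedding; the real server runs a local embedding model.
  return [text.length, text.split(/\s+/).length];
}

function ingest(file: string, fullText: string, chunkSize = 500): number {
  const chunks: Chunk[] = [];
  for (let i = 0; i < fullText.length; i += chunkSize) {
    const text = fullText.slice(i, i + chunkSize);
    chunks.push({ file, text, vector: fakeEmbed(text) });
  }
  store.set(file, chunks); // overwrites any previous version of this file
  return chunks.length;
}

const n1 = ingest("api-spec.pdf", "x".repeat(1200)); // 3 chunks
const n2 = ingest("api-spec.pdf", "y".repeat(400));  // re-ingest replaces them
```

Keying storage by file path is what makes re-ingestion idempotent: the second `ingest` call leaves exactly one chunk for `api-spec.pdf`, not four.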
Use ingest_data to ingest HTML content retrieved by your AI assistant (via web fetch, curl, browser tools, etc.):
"Fetch https://example.com/docs and ingest the HTML"
The server extracts the main content using Readability (removing navigation, ads, and other page chrome), converts it to Markdown, and indexes it.
HTML is automatically cleaned—you get the article content, not the boilerplate.
Note: The RAG server itself doesn't fetch web content—your AI assistant retrieves it and passes the HTML to `ingest_data`. This keeps the server fully local while letting you index any content your assistant can access. Please respect website terms of service and copyright when ingesting external content.
"What does the API documentation say about authentication?"
"Find information about rate limiting"
"Search for error handling best practices"
Search uses semantic similarity with keyword boost. This means useEffect finds documents containing that exact term, not just semantically similar React concepts.
Results include text content, source file, document title, and relevance score. The document title provides context for each chunk, helping identify which document a result belongs to. Adjust result count with limit (1-20, default 10).
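As a rough illustration of the ranking described above: start from a semantic similarity score, then boost results whose text contains the exact query terms. The scoring function and the boost weight here are hypothetical, not the package's actual internals:

```typescript
// Hypothetical hybrid ranking: semantic score plus an exact-term boost.
interface Hit { text: string; semanticScore: number } // base score in [0, 1]

function hybridRank(hits: Hit[], query: string, boostWeight = 0.6): Hit[] {
  const terms = query.split(/\s+/).filter(Boolean);
  return hits
    .map((h) => {
      // Fraction of query terms that appear verbatim in the chunk.
      const matched = terms.filter((t) => h.text.includes(t)).length;
      const keywordScore = terms.length ? matched / terms.length : 0;
      return { ...h, semanticScore: h.semanticScore + boostWeight * keywordScore };
    })
    .sort((a, b) => b.semanticScore - a.semanticScore);
}

// A chunk containing the literal token `useEffect` outranks a merely
// semantically related chunk with a similar base score.
const ranked = hybridRank(
  [
    { text: "React hooks let you use state", semanticScore: 0.82 },
    { text: "useEffect runs after render", semanticScore: 0.80 },
  ],
  "useEffect",
);
```

Because the boost is additive on top of the semantic score, a chunk with no semantic relevance at all still cannot jump to the top on keywords alone.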
```
"List all files in BASE_DIR and their ingested status"  # See what's indexed
"Delete old-spec.pdf from RAG"                          # Remove a file
"Show RAG server status"                                # Check system health
```
All MCP tools are also available as CLI commands — no MCP server needed:
```bash
npx mcp-local-rag ingest ./docs/                  # Bulk ingest files
npx mcp-local-rag query "authentication API"      # Search documents
npx mcp-local-rag list                            # Show ingestion status
npx mcp-local-rag status                          # Database stats
npx mcp-local-rag delete ./docs/old.pdf           # Remove content
npx mcp-local-rag delete --source "https://..."   # Remove by source URL
```
`query`, `list`, `status`, and `delete` output JSON to stdout for piping (e.g., `| jq`). `ingest` outputs progress to stderr. Global options (`--db-path`, `--cache-dir`, `--model-name`) go before the subcommand. Run `npx mcp-local-rag --help` for details.
⚠️ The CLI does not read your MCP client config (`mcp.json`, `config.toml`, etc.). Configure the CLI via flags or environment variables as shown below.
CLI flags — global options go before the subcommand, subcommand options go after:
```bash
npx mcp-local-rag --db-path ./my-db query "auth" --base-dir ./docs
```
Environment variables — set in your shell:
```bash
export DB_PATH=./my-db
export BASE_DIR=./docs
npx mcp-local-rag query "auth"
```
Sharing config between MCP and CLI — if your MCP client inherits shell environment variables, you can set them in your shell profile (e.g., ~/.zshrc) so both use the same values. Otherwise, set them explicitly in your MCP config as well.
```bash
export BASE_DIR=/path/to/your/documents
export DB_PATH=/path/to/lancedb
```
Configuration is resolved in this order: CLI flags, then environment variables, then built-in defaults.
For the full list of CLI flags, environment variables, and defaults, see Configuration.
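This is the conventional CLI precedence pattern (an explicit flag wins over an environment variable, which wins over a default). A minimal sketch of that resolution, with an illustrative default path that is not necessarily the package's real one:

```typescript
// Assumed precedence: CLI flag > environment variable > built-in default.
function resolveSetting(
  flagValue: string | undefined,
  envValue: string | undefined,
  fallback: string,
): string {
  return flagValue ?? envValue ?? fallback;
}

// Simulated environment: DB_PATH is set, BASE_DIR is not.
const env: Record<string, string | undefined> = { DB_PATH: "./my-db" };

const dbPath = resolveSetting(undefined, env.DB_PATH, "./lancedb"); // env wins
const baseDir = resolveSetting("./docs", env.BASE_DIR, ".");        // flag wins
```

Using `??` rather than `||` matters here: an intentionally empty-string flag would still override the environment variable instead of silently falling through.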
For CLI-only setups (no MCP server), install Agent Skills so your AI assistant can form better queries and interpret results consistently.
⚠️ `--model-name` must match your MCP server config. Using a different embedding model against an existing database produces incompatible vectors, silently degrading search quality.
Adjust these for your use case:
| Variable | Default | Description |
|----------|---------|-------------|
| RAG_HYBRID_WEIGHT | 0.6 | Keyword boost factor. 0 = semantic only, higher = stronger keyword boost. |
| RAG_GROUPING | (not set) | similar for top group only, related for top 2 groups. |
| RAG_MAX_DISTANCE | (not set) | Filter out low-relevance results (e.g., 0.5). |
| RAG_MAX_FILES | (not set) | Limit results to top N files (e.g., 1 for single best file). |
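To make the `RAG_MAX_DISTANCE` and `RAG_GROUPING` behavior concrete, here is a hypothetical sketch of gap-based post-filtering (the threshold value and logic are illustrative, not the actual implementation): drop results beyond the distance cutoff, then split the remainder into groups wherever the distance jumps.

```typescript
// Hypothetical post-filtering: apply a max-distance cutoff, then group
// results by relevance gaps instead of taking an arbitrary top-K.
function filterResults(
  distances: number[],  // ascending; lower = more relevant
  maxDistance: number,  // like RAG_MAX_DISTANCE
  gapThreshold = 0.1,   // a jump larger than this starts a new group
): number[][] {
  const kept = distances.filter((d) => d <= maxDistance);
  const groups: number[][] = [];
  for (const d of kept) {
    const last = groups.at(-1);
    if (last && d - last[last.length - 1] <= gapThreshold) last.push(d);
    else groups.push([d]);
  }
  return groups;
}

// "similar" would keep groups[0] only; "related" the top two groups.
const groups = filterResults([0.12, 0.14, 0.31, 0.33, 0.72], 0.5);
```

In this example the 0.72 result is cut by the distance filter, and the remaining four results fall into two groups (0.12/0.14 and 0.31/0.33) because of the 0.17 gap between them.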
For codebases and API specs, increase keyword boost so exact identifiers (useEffect, ERR_*, class names) dominate ranking:
```json
"env": {
  "RAG_HYBRID_WEIGHT": "0.7",
  "RAG_GROUPING": "similar"
}
```
- `0.7` — balanced semantic + keyword
- `1.0` — aggressive; exact matches strongly rerank results

Keyword boost is applied after semantic filtering, so it improves precision without surfacing unrelated matches.
TL;DR:
When you ingest a document, the parser extracts text based on file type (PDF via mupdf, DOCX via mammoth, text files directly).
The semantic chunker splits text into sentences, then groups them using embedding similarity. It finds natural topic boundaries where the meaning shifts—keeping related content together instead of cutting at arbitrary character limits. This produces chunks that are coherent units of meaning, typically 500-1000 characters. Markdown code blocks are kept intact—never split mid-block—preserving copy-pastable code in search results.
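The boundary-finding step can be sketched like this. The embeddings below are toy 2-D vectors and the similarity threshold is made up; the real server embeds sentences with a local model. The idea is the same: compare adjacent sentence embeddings and start a new chunk where similarity drops.

```typescript
// Illustrative semantic chunking: start a new chunk wherever
// adjacent-sentence cosine similarity falls below a threshold.
function cosine(a: number[], b: number[]): number {
  const dot = a[0] * b[0] + a[1] * b[1];
  return dot / (Math.hypot(a[0], a[1]) * Math.hypot(b[0], b[1]));
}

function semanticChunks(
  sentences: string[],
  embed: (s: string) => number[],
  threshold = 0.7,
): string[][] {
  const chunks: string[][] = [[sentences[0]]];
  for (let i = 1; i < sentences.length; i++) {
    const sim = cosine(embed(sentences[i - 1]), embed(sentences[i]));
    if (sim < threshold) chunks.push([sentences[i]]); // topic boundary
    else chunks.at(-1)!.push(sentences[i]);
  }
  return chunks;
}

// Toy embedding: auth-related sentences point one way, billing another.
const toyEmbed = (s: string) => (s.includes("auth") ? [1, 0.1] : [0.1, 1]);
const chunks = semanticChunks(
  ["OAuth handles auth.", "Tokens refresh auth.", "Billing is monthly."],
  toyEmbed,
);
```

With the toy embedding, the two auth sentences stay in one chunk and the billing sentence starts a new one, which is exactly the "split where the topic shifts" behavior a fixed character limit cannot give you.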
Eac