by nkapila6
"primitive" RAG-like web search model context protocol (MCP) server that runs locally. ✨ no APIs ✨
# Add to your Claude Code skills
```shell
git clone https://github.com/nkapila6/mcp-local-rag
```
A RAG-based web search and deep research model context protocol (MCP) server that runs entirely locally. It features multi-engine research across 9+ search backends with semantic-similarity ranking, and requires no API keys.
```mermaid
%%{init: {'theme': 'base'}}%%
flowchart TD
    A[User] -->|1. Submits LLM query| B[Language Model]
    B -->|2. Sends query| C[mcp-local-rag Tool]
    subgraph mcp-local-rag Processing
    C -->|Search DuckDuckGo| D[Fetch 10 search results]
    D -->|Fetch embeddings| E[Embeddings from Google's MediaPipe Text Embedder]
    E -->|Compute similarity| F[Rank entries against query]
    F -->|Select top k results| G[Context extraction from URL]
    end
    G -->|3. Returns Markdown from HTML content| B
    B -->|4. Generates response with context| H[Final LLM Output]
    H -->|5. Presents result to user| A
    classDef default stroke:#333,stroke-width:2px;
    classDef process stroke:#333,stroke-width:2px;
    classDef input stroke:#333,stroke-width:2px;
    classDef output stroke:#333,stroke-width:2px;
    class A input;
    class B,C process;
    class G output;
```
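The ranking stage in the diagram (embed, score against the query, select top k) can be sketched in Python. The real server uses Google's MediaPipe Text Embedder; the toy vectors below are placeholders standing in for its output, to show only the similarity math:

```python
import numpy as np

def rank_results(query_vec, result_vecs, top_k=3):
    """Rank search-result embeddings by cosine similarity to the query."""
    q = query_vec / np.linalg.norm(query_vec)
    R = result_vecs / np.linalg.norm(result_vecs, axis=1, keepdims=True)
    scores = R @ q                    # cosine similarity per result
    order = np.argsort(scores)[::-1]  # best match first
    return order[:top_k], scores[order[:top_k]]

# Toy 4-dim embeddings (assumption: real MediaPipe vectors are much larger)
query = np.array([1.0, 0.0, 0.0, 0.0])
results = np.array([
    [0.9, 0.1, 0.0, 0.0],  # close to the query
    [0.0, 1.0, 0.0, 0.0],  # orthogonal
    [0.7, 0.7, 0.0, 0.0],  # partial match
])
idx, scores = rank_results(query, results, top_k=2)
print(idx)  # indices of the two most similar results
```

The top-k indices then decide which URLs get fetched for context extraction.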
The server supports comprehensive multi-engine research capabilities that go beyond simple single-query searches:

- `deep_research` - comprehensive multi-engine research
- `deep_research_google` - Google-focused deep dive
- `deep_research_ddgs` - privacy-first deep research
- `rag_search_ddgs` & `rag_search_google` - quick single searches
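As a rough sketch, an MCP client invokes one of these tools with a standard `tools/call` JSON-RPC request. The tool and parameter names (`num_results`, `top_k`) come from this README; the exact argument schema is an assumption:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "rag_search_ddgs",
    "arguments": {
      "query": "latest Gemma model release",
      "num_results": 10,
      "top_k": 3
    }
  }
}
```

In practice your MCP client builds this request for you whenever the model decides to call the tool.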
Locate your MCP config path here or check your MCP client settings.
## uvx

This is the easiest and quickest method. You need to install uv for this to work. Add this to your MCP server configuration:
```json
{
  "mcpServers": {
    "mcp-local-rag": {
      "command": "uvx",
      "args": [
        "--python=3.10",
        "--from",
        "git+https://github.com/nkapila6/mcp-local-rag",
        "mcp-local-rag"
      ]
    }
  }
}
```
## Docker

Ensure you have Docker installed. Add this to your MCP server configuration:
```json
{
  "mcpServers": {
    "mcp-local-rag": {
      "command": "docker",
      "args": [
        "run",
        "--rm",
        "-i",
        "--init",
        "-e",
        "DOCKER_CONTAINER=true",
        "ghcr.io/nkapila6/mcp-local-rag:v1.0.2"
      ]
    }
  }
}
```
This repository includes Agent Skills that teach Claude how to effectively use the mcp-local-rag tools for intelligent web searches and deep research. Skills are folders of instructions that Claude loads dynamically to improve performance on specialized tasks.
`local-rag-search` - Teaches Claude best practices for:

- choosing `num_results`, `top_k`, and backend selection for different use cases

The skill enables comprehensive topic research using multiple search terms and engines. It's particularly useful for technical deep dives that leverage Google's documentation coverage, multi-perspective analysis comparing information across different search engines, privacy-focused research using DuckDuckGo or Brave, and factual verification by cross-referencing Wikipedia and other authoritative sources.
In Claude Desktop, load the skill from the `skills/local-rag-search/` folder.

In conversations: once loaded, simply ask Claude to search for information and it will automatically apply the skill's best practices.
Learn more about Agent Skills at the Anthropic Skills Repository.
See the skills/README.md for detailed usage instructions and skill development guidelines.
MseeP runs security audits on every MCP server; you can view this server's audit by clicking here.
The MCP server should work with any MCP client that supports tool calling. It has been tested on the clients below.
When an LLM (like Claude) is asked a question requiring recent web information, it will trigger mcp-local-rag.
When asked to fetch, look up, or search the web, the model prompts you to allow the MCP server for the chat.

In the example, I asked it about Google's latest Gemma models, released the previous day. This is new information that Claude is not aware of.

mcp-local-rag performs a live web search, extracts context, and sends it back to the model, giving it fresh knowledge:
If the software I've built has been helpful to you, please do buy me a coffee. I'd really appreciate it! 😄
Have ideas or want to improve this project? Issues and pull requests are welcome!
This project is licensed under the MIT License.