Kindly Web Search MCP Server: Web search + robust content retrieval for AI coding tools (Claude Code, Codex, Cursor, GitHub Copilot, Gemini, etc.) and AI agents (Claude Desktop, OpenClaw, etc.). Supports Serper, Tavily, and SearXNG.
```shell
# Add to your Claude Code skills
git clone https://github.com/Shelpuk-AI-Technology-Consulting/kindly-web-search-mcp-server
```
Kindly Web Search is a part of the Shelpuk AI Technology Consulting agentic suite – a set of tools that together improve the code quality produced by AI coding agents by 15–20%. Read more on Claude Code generation quality improvement.
Works with Claude Code, Codex, Antigravity, Cursor, Windsurf, and any agent that supports skills or MCP servers.
| Component | Role |
|---|---|
| tdd | Enforces TDD, requirements discipline, and peer review for every coding task |
| Serena | Semantic code navigation + persistent project memory |
| Kindly Web Search ← you are here | Up-to-date API and package documentation via web search |
| Lad MCP Server | Project-aware AI design and code review |
If you like what we're building, please ⭐ star this repo – it's a huge motivation for us to keep going!
1. Install three MCP servers and one skill:
2. Use the skill when requesting a feature:
Prompt your favorite AI coding agent (Claude Code, Codex, Cursor, etc.) as usual, then add `Follow $tdd` at the end.
> Build [your feature description]. Follow $tdd.

Picture this: You're debugging a cryptic error in Google Cloud Batch with GPU instances. Your AI coding assistant searches the web and finds the perfect StackOverflow thread. Great, right? Not quite. Here's what most web search MCP servers give your AI:
```json
{
  "title": "GCP Cloud Batch fails with the GPU instance template",
  "url": "https://stackoverflow.com/questions/76546453/...",
  "snippet": "I am trying to run a GCP Cloud Batch job with K80 GPU. The job runs for ~30 min. and then fails..."
}
```
The question is there, but where are the answers? Where are the solutions that other developers tried? The workarounds? The "this worked for me" comments?
They're not there. Your AI now has to make a second call to scrape the page. Sometimes it does, sometimes it doesn't. And even when it does, most scrapers return either incomplete content or the entire webpage with navigation panels, ads, and other noise that wastes tokens and confuses the AI.
At Shelpuk AI Technology Consulting, we build custom AI products under a fixed-price model. Development efficiency isn't just nice to have - it's the foundation of our business. We've been using AI coding assistants since 2023 (GitHub Copilot, Cursor, Windsurf, Claude Code, Codex), and we noticed something frustrating:
When we developers face a complex bug, we don't just want to find a URL - we want to find the conversation. We want to see what others tried, what worked, what didn't, and why. We want the GitHub Issue with all the comments. We want the StackOverflow thread with upvoted answers and follow-up discussions. We want the arXiv paper content, not just its abstract.
Existing web search MCP servers are basically wrappers around search APIs. They're great at finding content, but terrible at delivering it in a way that's useful for AI coding assistants.
We built Kindly Web Search because we needed our AI assistants to work the way we work. When searching for solutions, Kindly:
✅ Integrates directly with APIs for StackExchange, GitHub Issues, arXiv, and Wikipedia - presenting content in LLM-optimized formats with proper structure
✅ Returns the full conversation in a single call: questions, answers, comments, reactions, and metadata
✅ Parses any webpage in real-time using a headless browser for cutting-edge issues that were literally posted yesterday
✅ Passes all useful content to the LLM immediately - no need for a second scraping call
✅ Supports multiple search providers (Serper and Tavily) with intelligent fallback
Now, when Claude Code or Codex searches for that GPU batch error, it gets the question and the answers. The code snippets. The "this fixed it for me" comments. Everything it needs to help you solve the problem - in one call.
If you give Kindly a try or like the idea, please drop us a star on GitHub - it's always a huge motivation for us to keep improving it! ⭐️
Kindly eliminates the need for:
✅ Generic web search MCP servers
✅ StackOverflow MCP servers
✅ Web scraping MCP servers (Playwright, Puppeteer, etc.)
It also significantly reduces reliance on GitHub MCP servers by providing structured Issue content through intelligent extraction.
Kindly has been our daily companion in production work for months, saving us countless hours and improving the effectiveness of our AI coding assistants. We're excited to share it with the community!
Tools
- `web_search(query, num_results=3)` → top results with title, link, snippet, and `page_content` (Markdown, best-effort).
- `get_content(url)` → `page_content` (Markdown, best-effort).

Search uses Serper (primary, if configured), Tavily, or SearXNG, and page extraction uses a local Chromium-based browser via nodriver.
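For illustration, here is a minimal sketch of the result shape described above. The field names (`title`, `link`, `snippet`, `page_content`) come from the tool description; the sample payload and the validation helper are hypothetical:

```python
# Hypothetical sketch: check that a web_search result carries the
# documented fields. The sample payload below is invented for illustration.
from typing import Any

REQUIRED_FIELDS = ("title", "link", "snippet", "page_content")

def is_valid_result(result: dict[str, Any]) -> bool:
    """Return True if every documented field is present and is a string."""
    return all(isinstance(result.get(field), str) for field in REQUIRED_FIELDS)

sample = {
    "title": "GCP Cloud Batch fails with the GPU instance template",
    "link": "https://stackoverflow.com/questions/76546453/...",
    "snippet": "I am trying to run a GCP Cloud Batch job with K80 GPU...",
    "page_content": "# Question\n...\n# Answers\n...",  # Markdown, best-effort
}

print(is_valid_result(sample))  # → True
```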
- Search provider (set one): `SERPER_API_KEY` (recommended) → `TAVILY_API_KEY` → `SEARXNG_BASE_URL` (self-hosted SearXNG).
- Local browser (recommended): without it, `page_content` extraction may fail for other sites.
- `GITHUB_TOKEN` (optional): renders GitHub Issues in a much more LLM-friendly format: question + answers/comments + reactions/metadata; fewer rate limits. The token can be read-only and limited to public repositories to avoid security/privacy concerns.
- In some environments, onnxruntime wheels may be unavailable.
Install `uvx`.

macOS / Linux:

```shell
curl -LsSf https://astral.sh/uv/install.sh | sh
```
Windows (PowerShell):
```powershell
irm https://astral.sh/uv/install.ps1 | iex
```
Re-open your terminal and verify:
```shell
uvx --version
```
Install a browser (for universal `page_content`).

You need Chrome / Chromium / Edge / Brave installed on the same machine running your MCP client.
Note: If you skip this, specialized sources (StackOverflow/StackExchange, GitHub Issues/Discussions, Wikipedia, arXiv) will still work well. Only universal page_content extraction for arbitrary sites requires the browser.
macOS:
```shell
brew install --cask chromium
```
Windows:
```powershell
Get-Command chrome | Select-Object -ExpandProperty Source
# Common path:
# C:\Program Files\Google\Chrome\Application\chrome.exe
# If `Get-Command chrome` fails, try one of these:
# C:\Program Files (x86)\Google\Chrome\Application\chrome.exe
# C:\Program Files\Microsoft\Edge\Application\msedge.exe
```
Linux (Ubuntu/Debian):
```shell
sudo apt-get update
sudo apt-get install -y chromium
which chromium
```
Other Linux distros: install chromium (or chromium-browser) via your package manager.
Set one of these. Provider selection order is: Serper → Tavily → SearXNG.
macOS / Linux:
```shell
export SERPER_API_KEY="..."
# or:
export TAVILY_API_KEY="..."
# or (self-hosted SearXNG):
export SEARXNG_BASE_URL="https://searx.example.org"
```
Windows (PowerShell):
```powershell
$env:SERPER_API_KEY="..."
# or:
$env:TAVILY_API_KEY="..."
# or (self-hosted SearXNG):
$env:SEARXNG_BASE_URL="https://searx.example.org"
```
Optional (SearXNG): if your instance requires authentication or blocks bots, set:
```shell
export SEARXNG_HEADERS_JSON='{"Authorization":"Bearer ..."}'
export SEARXNG_USER_AGENT="Mozilla/5.0 ..."
```
Windows (PowerShell):
```powershell
$env:SEARXNG_HEADERS_JSON='{"Authorization":"Bearer ..."}'
$env:SEARXNG_USER_AGENT="Mozilla/5.0 ..."
```
Optional (recommended for better GitHub Issue / PR extraction):
```shell
export GITHUB_TOKEN="..."
```
For public repos, a read-only token is enough (classic tokens often use `public_repo`; fine-grained tokens need repo read access).
```shell
uvx --from git+https://github.com/Shelpuk-AI-Technology-Consulting/kindly-web-search-mcp-server \
  kindly-web-search-mcp-server start-mcp-server
```
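Many MCP clients use a Claude Desktop-style `mcpServers` JSON config. An equivalent entry for the command above might look like this (the `"kindly-web-search"` key is an arbitrary name you choose; the `command`/`args` split mirrors the one-line invocation):

```json
{
  "mcpServers": {
    "kindly-web-search": {
      "command": "uvx",
      "args": [
        "--from",
        "git+https://github.com/Shelpuk-AI-Technology-Consulting/kindly-web-search-mcp-server",
        "kindly-web-search-mcp-server",
        "start-mcp-server"
      ]
    }
  }
}
```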
First-run note: the first `uvx` invocation may take 30–60 seconds while it builds the tool environment. If your MCP client times out on first start, run the command once in a terminal to “prewarm” it.