# gptr-mcp

By assafelovic

MCP server for enabling LLM applications to perform deep research via the MCP protocol.
# Add to your Claude Code skills

```bash
git clone https://github.com/assafelovic/gptr-mcp
```

While LLM apps can access web search tools with MCP, GPT Researcher MCP delivers deep research results. Standard search tools return raw results that require manual filtering, often contain irrelevant sources, and waste context window space.

GPT Researcher autonomously explores and validates numerous sources, focusing only on relevant, trusted, and up-to-date information. Though slightly slower than standard search (~30 seconds wait), it delivers:
[Demo video](https://github.com/user-attachments/assets/ef97eea5-a409-42b9-8f6d-b82ab16c52a8)
Want to use this with Claude Desktop right away? Here's the fastest path:
Install dependencies:

```bash
git clone https://github.com/assafelovic/gptr-mcp.git
cd gptr-mcp
pip install -r requirements.txt
```
Set up your Claude Desktop config at `~/Library/Application Support/Claude/claude_desktop_config.json`:
```json
{
  "mcpServers": {
    "gptr-mcp": {
      "command": "python",
      "args": ["/absolute/path/to/gpt-researcher/gptr-mcp/server.py"],
      "env": {
        "OPENAI_API_KEY": "your-openai-key-here",
        "TAVILY_API_KEY": "your-tavily-key-here"
      }
    }
  }
}
```
Restart Claude Desktop and start researching! 🎉
For detailed setup instructions, see the full Claude Desktop Integration section below.
The server exposes the following MCP tools:

- `research_resource`: Get web resources related to a given task via research.
- `deep_research`: Performs deep web research on a topic, finding the most reliable and relevant information.
- `quick_search`: Performs a fast web search optimized for speed over quality, returning search results with snippets. Supports any GPTR supported web retriever such as Tavily, Bing, Google, etc. Learn more here.
- `write_report`: Generate a report based on research results.
- `get_research_sources`: Get the sources used in the research.
- `get_research_context`: Get the full context of the research.
- `research_query`: Create a research query prompt.

Before running the MCP server, make sure you have:

- Python installed
- An OpenAI API key
- A Tavily API key
You can also connect any other web search engines or MCP using GPTR supported retrievers. Check out the docs here
```bash
git clone https://github.com/assafelovic/gpt-researcher.git
cd gpt-researcher
cd gptr-mcp
pip install -r requirements.txt
```
Copy the `.env.example` file to create a new file named `.env`:

```bash
cp .env.example .env
```

Edit the `.env` file to add your API keys and configure other settings:

```bash
OPENAI_API_KEY=your_openai_api_key
TAVILY_API_KEY=your_tavily_api_key
```
You can also add any other env variable for your GPT Researcher configuration.
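For illustration, the `.env` values can be loaded with a few lines of standard-library Python. This is only a sketch (the server itself may rely on a dotenv library); `load_env`, its default path, and the parsing rules are assumptions, not the project's actual loader:

```python
import os
from pathlib import Path

def load_env(path: str = ".env") -> dict[str, str]:
    """Minimal .env parser: KEY=VALUE lines; blank lines and '#' comments ignored."""
    values = {}
    for line in Path(path).read_text().splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        values[key.strip()] = value.strip()
    os.environ.update(values)  # make the keys visible to the current process
    return values
```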
You can run the MCP server in several ways:

```bash
# Run directly with Python
python server.py

# Or via the MCP CLI
mcp run server.py
```
The simplest way to run with Docker:
```bash
# Build and run with docker-compose
docker-compose up -d

# Or manually:
docker build -t gptr-mcp .
docker run -d \
  --name gptr-mcp \
  -p 8000:8000 \
  --env-file .env \
  gptr-mcp
```
If you need to connect to an existing n8n network:
```bash
# First, start the container
docker-compose up -d

# Then connect to your n8n network
docker network connect n8n-mcp-net gptr-mcp

# Or create a shared network first
docker network create n8n-mcp-net
docker network connect n8n-mcp-net gptr-mcp
```
Note: The Docker image uses Python 3.11 to meet the requirements of gpt-researcher >=0.12.16. If you encounter errors during the build, ensure you're using the latest Dockerfile from this repository.
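A quick way to confirm your interpreter meets that floor before building. This is a hedged sketch; `python_ok` is an illustrative helper, not something shipped with this repo:

```python
import sys

def python_ok(minimum: tuple[int, int] = (3, 11)) -> bool:
    """True if the running interpreter meets the version floor from the note above."""
    return sys.version_info[:2] >= minimum

if not python_ok():
    print(f"Python 3.11+ required for gpt-researcher >= 0.12.16, "
          f"found {sys.version.split()[0]}")
```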
Once the server is running, you'll see output indicating that the server is ready to accept connections. You can verify it's working by:
```bash
python test_mcp_server.py
```

Important for Docker/n8n Integration:

- The server binds to `0.0.0.0:8000` to work with Docker containers
- Connect to the `/sse` endpoint first

The GPT Researcher MCP server supports multiple transport protocols and automatically chooses the best one for your environment:
| Transport | Use Case | When to Use |
|-----------|----------|-------------|
| STDIO | Claude Desktop, local MCP clients | Default for local development |
| SSE | Docker, web clients, n8n integration | Auto-enabled in Docker |
| Streamable HTTP | Modern web deployments | Advanced web deployments |
The server automatically detects your environment:
```bash
# Local development (default)
python server.py
# ➜ Uses STDIO transport (Claude Desktop compatible)

# Docker environment
docker run gptr-mcp
# ➜ Auto-detects Docker, uses SSE transport

# Manual override
export MCP_TRANSPORT=sse
python server.py
# ➜ Forces SSE transport
```
| Variable | Description | Default | Example |
|----------|-------------|---------|---------|
| `MCP_TRANSPORT` | Force a specific transport | `stdio` | `sse`, `streamable-http` |
| `DOCKER_CONTAINER` | Force Docker mode | Auto-detected | `true` |
// ~/Library/Application Support/Claude/claude_desktop_config.json
{
"mcpServers": {
"gpt-researcher": {
"command": "python",
"args": ["/absolute/path/to/server.py"],
"env": {
"..."
}
}
}
}
```bash
# Set transport explicitly for web deployment
export MCP_TRANSPORT=sse
python server.py

# Or use Docker (auto-detects)
docker-compose up -d
```
```bash
# Use the container name as hostname
docker run --name gptr-mcp -p 8000:8000 gptr-mcp

# In n8n, connect to: http://gptr-mcp:8000/sse
```
When using SSE or HTTP transports, the following endpoints are available:

- `GET /health`: health check
- `GET /sse`: open the SSE connection (get a session ID)
- `POST /messages/?session_id=YOUR_SESSION_ID`: send messages

You can integrate your MCP server with Claude using:
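A client needs the session ID before posting messages. Assuming the initial SSE event payload carries a messages URL shaped like the `/messages/?session_id=...` path above (an assumption about the wire format, not verified against this server), it can be extracted with standard-library parsing:

```python
from urllib.parse import urlparse, parse_qs

def session_id_from_endpoint(data: str) -> str:
    """Extract session_id from an SSE endpoint-event payload,
    e.g. '/messages/?session_id=abc123' (shape assumed, see lead-in)."""
    query = urlparse(data).query
    return parse_qs(query)["session_id"][0]
```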
Claude Desktop Integration - for use with the Claude desktop application on Mac
For detailed instructions, follow the link above.
To integrate your locally running MCP server with Claude for Mac, you'll need to:
Edit your config file at `~/Library/Application Support/Claude/claude_desktop_config.json`. Claude Desktop launches your MCP server as a separate subprocess, so you must explicitly pass your API keys in the configuration; the server cannot access your shell's environment variables or `.env` file automatically.
```json
{
  "mcpServers": {
    "gptr-mcp": {
      "command": "python",
      "args": ["/absolute/path/to/your/server.py"],
      "env": {
        "OPENAI_API_KEY": "your-actual-openai-key-here",
        "TAVILY_API_KEY": "your-actual-tavily-key-here"
      }
    }
  }
}
```
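Since a relative path or a placeholder key is the most common failure here, a small checker like the following can help. It is illustrative only; `check_claude_config` and its placeholder heuristics are assumptions, not a tool shipped with this repo:

```python
import json
from pathlib import Path

def check_claude_config(path: str) -> list[str]:
    """Return a list of likely problems in a Claude Desktop MCP config."""
    problems = []
    config = json.loads(Path(path).read_text())
    servers = config.get("mcpServers", {})
    if not servers:
        problems.append("no mcpServers defined")
    for name, spec in servers.items():
        # server.py must be referenced by an absolute, existing path
        for arg in spec.get("args", []):
            p = Path(arg)
            if p.suffix == ".py":
                if not p.is_absolute():
                    problems.append(f"{name}: {arg} is not an absolute path")
                elif not p.exists():
                    problems.append(f"{name}: {arg} does not exist")
        # catch keys left at their placeholder values
        for key, value in spec.get("env", {}).items():
            if value.startswith(("your-", "your_")):
                problems.append(f"{name}: {key} still holds a placeholder")
    return problems
```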
🔒 Your Claude Desktop config contains sensitive API keys. Protect it:
```bash
chmod 600 ~/Library/Application\ Support/Claude/claude_desktop_config.json
```
Never commit this file to version control.
For bette