# MCP Chatbot

A chatbot implementation compatible with MCP (terminal / Streamlit supported).
This project demonstrates how to integrate the Model Context Protocol (MCP) with a customized LLM (e.g., Qwen), creating a powerful chatbot that can interact with various tools through MCP servers. The implementation showcases the flexibility of MCP by enabling LLMs to use external tools seamlessly.
> [!TIP]
> For the Chinese version, please refer to [README_ZH.md](README_ZH.md).
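To illustrate the core idea, here is a minimal sketch of how an MCP client discovers a server's tools, using the official `mcp` Python SDK. This is not the repository's own client code, and the `server.py` script is a placeholder for any MCP server (e.g., the tool servers this project connects to):

```python
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client


async def main() -> None:
    # Launch an MCP server as a subprocess; "server.py" is a placeholder
    # for whichever MCP server script you want to connect to.
    params = StdioServerParameters(command="python", args=["server.py"])

    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()

            # List the tools the server exposes. A chatbot forwards these
            # tool schemas to the LLM so it can decide when to call them.
            tools = await session.list_tools()
            for tool in tools.tools:
                print(tool.name, "-", tool.description)


asyncio.run(main())
```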
## Chatbot Streamlit Example

<img src="assets/mcp_chatbot_streamlit_demo_low.gif" width="800">

## Workflow Tracer Example

<img src="assets/single_prompt_demo.png" width="800">

This project includes:

- A chatbot that runs in the terminal or in a Streamlit web UI
- Integration with MCP servers, so the LLM can call external tools
- A workflow tracer for inspecting how a single prompt is processed (see the example above)
## Installation

Clone the repository:
```bash
git clone git@github.com:keli-wen/mcp_chatbot.git
cd mcp_chatbot
```
Set up a virtual environment (recommended):
```bash
# From the repository root (mcp_chatbot)

# Install uv if you don't have it already
pip install uv

# Create a virtual environment and install dependencies
uv venv .venv --python=3.10

# Activate the virtual environment
# For macOS/Linux
source .venv/bin/activate
# For Windows
.venv\Scripts\activate

# Deactivate the virtual environment when you are done
deactivate
```
Install dependencies:
```bash
pip install -r requirements.txt

# or use uv for faster installation
uv pip install -r requirements.txt
```
Configure your environment:
Copy the `.env.example` file to `.env`:

```bash
cp .env.example .env
```
Edit the `.env` file to add your Qwen API key (Qwen is used here only as a demo; any LLM API works, as long as you set `LLM_BASE_URL` and `LLM_API_KEY` accordingly) and set the folder paths:
```bash
LLM_MODEL_NAME=your_llm_model_name_here
LLM_BASE_URL=your_llm_base_url_here
LLM_API_KEY=your_llm_api_key_here
OLLAMA_MODEL_NAME=your_ollama_model_name_here
OLLAMA_BASE_URL=your_ollama_base_url_here
MARKDOWN_FOLDER_PATH=/path/to/your/markdown/folder
RESULT_FOLDER_PATH=/path/to/your/result/folder
```
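After editing `.env`, you can sanity-check the LLM settings with a short snippet like the one below. This is only an illustrative check, assuming an OpenAI-compatible endpoint and the `openai` and `python-dotenv` packages; it is not part of the project itself:

```python
import os

from dotenv import load_dotenv
from openai import OpenAI

load_dotenv()  # reads the LLM_* variables from .env

client = OpenAI(
    base_url=os.environ["LLM_BASE_URL"],
    api_key=os.environ["LLM_API_KEY"],
)

# One round-trip to confirm the model name, base URL, and key are valid.
response = client.chat.completions.create(
    model=os.environ["LLM_MODEL_NAME"],
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
)
print(response.choices[0].message.content)
```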
Before running the application, you need to modify the following: