This project demonstrates how to integrate the Model Context Protocol (MCP) with a customized LLM (e.g. Qwen), creating a powerful chatbot that can interact with various tools through MCP servers. The implementation showcases MCP's flexibility by letting LLMs use external tools seamlessly.
> [!TIP]
> For the Chinese version, please refer to README_ZH.md.
*(Screenshots: Chatbot Streamlit Example · Workflow Tracer Example)*
This project includes single-prompt, terminal-chatbot, and Streamlit web-chatbot examples, each described below.
Clone the repository:

```bash
git clone git@github.com:keli-wen/mcp_chatbot.git
cd mcp_chatbot
```
Set up a virtual environment (recommended):

```bash
# Install uv if you don't have it already
pip install uv

# Create a virtual environment
uv venv .venv --python=3.10

# Activate the virtual environment
# For macOS/Linux
source .venv/bin/activate
# For Windows
.venv\Scripts\activate

# When you are finished, deactivate the virtual environment
deactivate
```
Install dependencies:

```bash
pip install -r requirements.txt
# or use uv for faster installation
uv pip install -r requirements.txt
```
Configure your environment:

Copy the `.env.example` file to `.env`:

```bash
cp .env.example .env
```

Edit the `.env` file to add your Qwen API key (Qwen is just for the demo; you can use any LLM API key, as long as you set the matching `base_url` and `api_key` in the `.env` file) and set the data paths:
```bash
LLM_MODEL_NAME=your_llm_model_name_here
LLM_BASE_URL=your_llm_base_url_here
LLM_API_KEY=your_llm_api_key_here
OLLAMA_MODEL_NAME=your_ollama_model_name_here
OLLAMA_BASE_URL=your_ollama_base_url_here
MARKDOWN_FOLDER_PATH=/path/to/your/markdown/folder
RESULT_FOLDER_PATH=/path/to/your/result/folder
```
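For reference, here is a minimal sketch of how these variables could be loaded in Python, assuming the `python-dotenv` package; the project's actual configuration loader may differ:

```python
# Illustrative only: load the .env values with python-dotenv.
import os

from dotenv import load_dotenv

load_dotenv()  # reads .env from the current directory

llm_config = {
    "model": os.environ["LLM_MODEL_NAME"],
    "base_url": os.environ["LLM_BASE_URL"],
    "api_key": os.environ["LLM_API_KEY"],
}
markdown_folder = os.environ["MARKDOWN_FOLDER_PATH"]
result_folder = os.environ["RESULT_FOLDER_PATH"]
```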
Before running the application, you need to modify the following:

**MCP Server Configuration**: Edit `mcp_servers/servers_config.json` to match your local setup:
```json
{
    "mcpServers": {
        "markdown_processor": {
            "command": "/path/to/your/uv",
            "args": [
                "--directory",
                "/path/to/your/project/mcp_servers",
                "run",
                "markdown_processor.py"
            ]
        }
    }
}
```
Replace `/path/to/your/uv` with the actual path to your `uv` executable (you can run `which uv` to find it), and replace `/path/to/your/project/mcp_servers` with the absolute path to the `mcp_servers` directory in your project.
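To double-check the result, you can use an illustrative snippet like the one below (not part of the project) to verify that each server's `command` path exists:

```python
# Illustrative sanity check: confirm that the command paths in
# servers_config.json point at real executables before launching.
import json
import shutil
from pathlib import Path

config = json.loads(Path("mcp_servers/servers_config.json").read_text())
for name, server in config["mcpServers"].items():
    cmd = server["command"]
    if shutil.which(cmd) or Path(cmd).is_file():
        print(f"[{name}] OK: {cmd}")
    else:
        print(f"[{name}] command not found: {cmd}")
```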
**Environment Variables**: Make sure to set proper paths in your `.env` file:

```bash
MARKDOWN_FOLDER_PATH="/path/to/your/markdown/folder"
RESULT_FOLDER_PATH="/path/to/your/result/folder"
```
The application validates these paths and raises an error if they still contain placeholder values.
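For illustration, the placeholder check could look roughly like the sketch below; the project's actual validation logic and error messages may differ:

```python
# Illustrative only: reject paths that still contain placeholder values.
import os

def require_real_path(var: str) -> str:
    value = os.environ.get(var, "")
    if not value or "/path/to/your" in value:
        raise ValueError(f"{var} is unset or still a placeholder; edit your .env")
    return value

markdown_folder = require_real_path("MARKDOWN_FOLDER_PATH")
result_folder = require_real_path("RESULT_FOLDER_PATH")
```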
You can run the following command to check your configuration:

```bash
bash scripts/check.sh
```

You can run the following command to run the unit tests:

```bash
bash scripts/unittest.sh
```
The project includes two single-prompt examples:

1. **Regular Mode**: Process a single prompt and display the complete response

   ```bash
   python example/single_prompt/single_prompt.py
   ```

2. **Streaming Mode**: Process a single prompt with real-time streaming output

   ```bash
   python example/single_prompt/single_prompt_stream.py
   ```
Both examples accept an optional `--llm` parameter to specify which LLM provider to use:

```bash
python example/single_prompt/single_prompt.py --llm=ollama
```
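For reference, a flag like this is commonly parsed with `argparse`; the sketch below is illustrative, and the provider choices are assumptions rather than the scripts' actual options:

```python
# Illustrative only: parse an --llm provider switch with argparse.
import argparse

parser = argparse.ArgumentParser()
parser.add_argument(
    "--llm",
    choices=["openai", "ollama"],  # assumed provider names
    default="openai",
    help="Which LLM provider to use",
)
args = parser.parse_args()
print(f"Using provider: {args.llm}")
```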
> [!NOTE]
> For more details, see the Single Prompt Example README.
The project includes two interactive terminal chatbot examples:

1. **Regular Mode**: Interactive terminal chat with complete responses

   ```bash
   python example/chatbot_terminal/chatbot_terminal.py
   ```

2. **Streaming Mode**: Interactive terminal chat with streaming responses

   ```bash
   python example/chatbot_terminal/chatbot_terminal_stream.py
   ```
Both examples accept an optional `--llm` parameter to specify which LLM provider to use:

```bash
python example/chatbot_terminal/chatbot_terminal.py --llm=ollama
```
> [!NOTE]
> For more details, see the Terminal Chatbot Example README.
The project includes an interactive web-based chatbot example using Streamlit:

```bash
streamlit run example/chatbot_streamlit/app.py
```

This example features a web chat interface with MCP tool integration.
> [!NOTE]
> For more details, see the Streamlit Chatbot Example README.
Project structure:

- `mcp_chatbot/`: Core library code
  - `chat/`: Chat session management
  - `config/`: Configuration handling
  - `llm/`: LLM client implementation
  - `mcp/`: MCP client and tool integration
  - `utils/`: Utility functions (e.g. `WorkflowTrace` and `StreamPrinter`)
- `mcp_servers/`: Custom MCP server implementations
  - `markdown_processor.py`: Server for processing Markdown files
  - `servers_config.json`: Configuration for MCP servers
- `data-example/`: Example Markdown files for testing
- `example/`: Example scripts for different use cases
  - `single_prompt/`: Single prompt processing examples (regular and streaming)
  - `chatbot_terminal/`: Interactive terminal chatbot examples (regular and streaming)
  - `chatbot_streamlit/`: Interactive web chatbot example using Streamlit

You can extend this project by:

1. Adding new MCP servers to the `mcp_servers/` directory (see the sketch below)
2. Updating `servers_config.json` to include your new servers
3. Adding any settings they need to the `.env` file
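For example, a new server can be a small Python script. The sketch below is illustrative rather than part of the repository; it assumes the official MCP Python SDK's `FastMCP` helper (`pip install mcp`), and the `word_counter` name and its tool are hypothetical:

```python
# mcp_servers/word_counter.py -- hypothetical example server.
# A minimal sketch using the official MCP Python SDK's FastMCP helper.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("word_counter")

@mcp.tool()
def count_words(text: str) -> int:
    """Count whitespace-separated words in the given text."""
    return len(text.split())

if __name__ == "__main__":
    mcp.run()  # serves over stdio, matching the servers_config.json setup
```

You would then register it in `servers_config.json` with an entry alongside `markdown_processor`, pointing `command` and `args` at your `uv` executable and the new script.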