This project provides a Slack bot client that serves as a bridge between Slack and Model Context Protocol (MCP) servers. By leveraging Slack as the user interface, it allows LLM models to interact with multiple MCP servers using standardized MCP tools.
Important distinction: this client is not primarily designed to interact with the Slack API itself. However, it can provide Slack API functionality by connecting to a dedicated Slack MCP server (such as modelcontextprotocol/servers/slack) alongside other MCP servers.
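For example, a Slack MCP server entry could sit alongside the others in `mcp-servers.json` (the format is shown later in this README). This is a sketch only; the `@modelcontextprotocol/server-slack` package name and its `SLACK_TEAM_ID` variable are assumptions used to illustrate the shape, not a verified configuration:

```json
{
  "mcpServers": {
    "slack": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-slack"],
      "env": {
        "SLACK_BOT_TOKEN": "xoxb-your-bot-token",
        "SLACK_TEAM_ID": "T01234567"
      }
    }
  }
}
```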
```mermaid
flowchart LR
    User([User]) --> SlackBot
    subgraph SlackBotService[Slack Bot Service]
        SlackBot[Slack Bot] <--> MCPClient[MCP Client]
    end
    MCPClient <--> MCPServer1[MCP Server 1]
    MCPClient <--> MCPServer2[MCP Server 2]
    MCPClient <--> MCPServer3[MCP Server 3]
    MCPServer1 <--> Tools1[(Tools)]
    MCPServer2 <--> Tools2[(Tools)]
    MCPServer3 <--> Tools3[(Tools)]

    style SlackBotService fill:#F8F9F9,stroke:#333,stroke-width:2px
    style SlackBot fill:#F4D03F,stroke:#333,stroke-width:2px
    style MCPClient fill:#2ECC71,stroke:#333,stroke-width:2px
    style MCPServer1 fill:#E74C3C,stroke:#333,stroke-width:2px
    style MCPServer2 fill:#E74C3C,stroke:#333,stroke-width:2px
    style MCPServer3 fill:#E74C3C,stroke:#333,stroke-width:2px
    style Tools1 fill:#9B59B6,stroke:#333,stroke-width:2px
    style Tools2 fill:#9B59B6,stroke:#333,stroke-width:2px
    style Tools3 fill:#9B59B6,stroke:#333,stroke-width:2px
```
After installing the binary, you can run it locally with the following steps:
```bash
# Using environment variables directly
export SLACK_BOT_TOKEN="xoxb-your-bot-token"
export SLACK_APP_TOKEN="xapp-your-app-token"
export OPENAI_API_KEY="sk-your-openai-key"
export OPENAI_MODEL="gpt-4o"
export LOG_LEVEL="info"

# Or create a .env file and source it
cat > .env << EOL
SLACK_BOT_TOKEN="xoxb-your-bot-token"
SLACK_APP_TOKEN="xapp-your-app-token"
OPENAI_API_KEY="sk-your-openai-key"
OPENAI_MODEL="gpt-4o"
LOG_LEVEL="info"
EOL
source .env
```
```bash
# Create mcp-servers.json in the current directory
cat > mcp-servers.json << EOL
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "$HOME"],
      "env": {}
    }
  }
}
EOL
```
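Because the client can talk to several MCP servers at once (see the diagram above), additional entries go in the same `mcpServers` map. A hedged sketch follows, where the second entry and its `@modelcontextprotocol/server-github` package are illustrative assumptions rather than a tested configuration:

```json
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/home/user"],
      "env": {}
    },
    "github": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-github"],
      "env": {
        "GITHUB_PERSONAL_ACCESS_TOKEN": "ghp-your-token"
      }
    }
  }
}
```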
```bash
# Run with default settings (looks for mcp-servers.json in current directory)
slack-mcp-client

# Or specify a custom config file location
slack-mcp-client --config /path/to/mcp-servers.json

# Enable debug mode
slack-mcp-client --debug

# Specify OpenAI model
slack-mcp-client --openai-model gpt-4o-mini
```
The application will connect to Slack and start listening for messages. You can check the logs for any errors or connection issues.
For deploying to Kubernetes, a Helm chart is available in the helm-chart directory. This chart provides a flexible way to deploy the slack-mcp-client with proper configuration and secret management.
The Helm chart is also available directly from GitHub Container Registry, allowing for easier installation without needing to clone the repository:
```bash
# Log in to GitHub Container Registry (only needed once)
helm registry login ghcr.io -u USERNAME -p GITHUB_TOKEN

# Pull the Helm chart
helm pull oci://ghcr.io/tuannvm/charts/slack-mcp-client --version 0.1.0

# Or install directly
helm install my-slack-bot oci://ghcr.io/tuannvm/charts/slack-mcp-client --version 0.1.0 -f values.yaml
```
You can check available versions by visiting the GitHub Container Registry in your browser.
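With Helm 3.8 or newer you can also inspect the chart from the command line instead of the browser; a quick sketch, using the same OCI path as above:

```bash
# Print chart metadata, including the current version
helm show chart oci://ghcr.io/tuannvm/charts/slack-mcp-client
```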
```bash
# Create a values file with your configuration
cat > values.yaml << EOL
secret:
  create: true
  env:
    SLACK_BOT_TOKEN: "xoxb-your-bot-token"
    SLACK_APP_TOKEN: "xapp-your-app-token"
    OPENAI_API_KEY: "sk-your-openai-key"
    OPENAI_MODEL: "gpt-4o"
    LOG_LEVEL: "info"

# Optional: Configure MCP servers
configMap:
  create: true
EOL
```
```bash
# Install the chart
helm install my-slack-bot ./helm-chart/slack-mcp-client -f values.yaml
```
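After installation, standard kubectl commands can confirm the release came up; a sketch, assuming the conventional Helm label and release-prefixed deployment name (both assumptions, not taken from this chart):

```bash
# Check that the pod is running
kubectl get pods -l app.kubernetes.io/name=slack-mcp-client

# Tail the logs for Slack and MCP connection status
kubectl logs -f deployment/my-slack-bot-slack-mcp-client
```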
The Helm chart supports a range of configuration options; for more details, see the Helm chart README.
The Helm chart uses the Docker image from GitHub Container Registry (GHCR) by default. You can specify a particular version or use the latest tag:
```yaml
# In your values.yaml
image:
  repository: ghcr.io/tuannvm/slack-mcp-client
  tag: "latest"  # Or use a specific version like "1.0.0"
  pullPolicy: IfNotPresent
```
To manually pull the image:
```bash
# Pull the latest image
docker pull ghcr.io/tuannvm/slack-mcp-client:latest

# Or pull a specific version
docker pull ghcr.io/tuannvm/slack-mcp-client:1.0.0
```
If you're using private images, you can configure image pull secrets in your values:
```yaml
imagePullSecrets:
  - name: my-ghcr-secret
```
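For completeness, such a secret can be created with kubectl; the name matches the values snippet above, while USERNAME and GITHUB_TOKEN are placeholders:

```bash
# Create a Docker registry secret for pulling private images from GHCR
kubectl create secret docker-registry my-ghcr-secret \
  --docker-server=ghcr.io \
  --docker-username=USERNAME \
  --docker-password=GITHUB_TOKEN
```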
For local testing and development, you can use Docker Compose to easily run the slack-mcp-client along with additional MCP servers.
1. Create a `.env` file with your credentials:

```bash
# Create .env file from example
cp .env.example .env

# Edit the file with your credentials
nano .env
```

2. Create an `mcp-servers.json` file (or use the example):

```bash
# Create mcp-servers.json from example
cp mcp-servers.json.example mcp-servers.json

# Edit if needed
nano mcp-servers.json
```
3. Start the services:

```bash
# Start services in detached mode
docker-compose up -d

# View logs
docker-compose logs -f

# Stop services
docker-compose down
```
The included `docker-compose.yml` reads credentials from the `.env` file and mounts your MCP server configuration:

```yaml
version: '3.8'
services:
  slack-mcp-client:
    image: ghcr.io/tuannvm/slack-mcp-client:latest
    container_name: slack-mcp-client
    environment:
      - SLACK_BOT_TOKEN=${SLACK_BOT_TOKEN}
      - SLACK_APP_TOKEN=${SLACK_APP_TOKEN}
      - OPENAI_API_KEY=${OPENAI_API_KEY}
      - OPENAI_MODEL=${OPENAI_MODEL:-gpt-4o}
    volumes:
      - ./mcp-servers.json:/app/mcp-servers.json:ro
```
You can easily extend this setup to include additional MCP servers in the same network.
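As a sketch of what that extension might look like, here is a hypothetical sibling service appended to the compose file; the image name, the port, and the assumption that the extra server speaks an HTTP-based transport are all illustrative, not part of this project:

```yaml
# Appended under the existing services: key in docker-compose.yml
  extra-mcp-server:
    image: example/extra-mcp-server:latest   # hypothetical image
    container_name: extra-mcp-server
    expose:
      - "8080"   # reachable from slack-mcp-client as http://extra-mcp-server:8080
```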
The bot requires the following OAuth scopes:

- `app_mentions:read`
- `chat:write`
- `im:history`
- `im:read`
- `im:write`

And it subscribes to these events:

- `app_mention`
- `message.im`
For detailed instructions on Slack app configuration, token setup, required permissions, and troubleshooting common issues, see the Slack Configuration Guide.
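For reference, these scopes and events map onto a Slack app manifest roughly as follows. This is a sketch (the display name and Socket Mode flag are assumptions consistent with the app-level token used above); the Slack Configuration Guide remains the authoritative source:

```yaml
display_information:
  name: slack-mcp-client
features:
  bot_user:
    display_name: slack-mcp-client
oauth_config:
  scopes:
    bot:
      - app_mentions:read
      - chat:write
      - im:history
      - im:read
      - im:write
settings:
  event_subscriptions:
    bot_events:
      - app_mention
      - message.im
  socket_mode_enabled: true
```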
The client can be configured using the following environment variables:
| Variable | Description | Default |
|----------|-------------|---------|
| `SLACK_BOT_TOKEN` | Bot token for Slack API | (required) |
| `SLACK_APP_TOKEN` | App-level token for Socket Mode | (required) |
| `OPENAI_API_KEY` | API key for OpenAI authentication | (required) |
| `OPENAI_MODEL` | OpenAI model to use | `gpt-4o` |
| `LOG_LEVEL` | Logging level (debug, info, warn, error) | `info` |
The client supports three transport modes for communicating with MCP servers.
Contributions are welcome! Please feel free to submit a Pull Request.
This project is licensed under the MIT License - see the LICENSE file for details.
This project uses GitHub Actions for continuous integration and GoReleaser for automated releases.
Our CI pipeline runs checks on all PRs and commits to the main branch; when changes are merged to main, GoReleaser automates the release process.