Connect your DevOps tools with the power of AI
This project evolved from a simple experiment into a powerful bridge between your essential DevOps platforms, providing a unified interface through Claude and other AI assistants. With integrated agent capabilities, it can even delegate work autonomously!
Connect your essential DevOps platforms with a natural language interface:
git clone https://github.com/theapemachine/mcp-server-devops-bridge
cd mcp-server-devops-bridge
go build
export AZURE_DEVOPS_ORG="your-org"
export AZDO_PAT="your-pat-token"
export AZURE_DEVOPS_PROJECT="your-project"
# Optional integrations
export GITHUB_PAT="your-github-pat"
export SLACK_BOT_TOKEN="your-slack-token"
export DEFAULT_SLACK_CHANNEL="some-slack-channel-id"
# AI and Memory integrations
export OPENAI_API_KEY="your-api-key"
export QDRANT_URL="http://localhost:6333"
export QDRANT_API_KEY="your-qdrant-api-key"
export NEO4J_URL="http://localhost:7474"
export NEO4J_USER="neo4j"
export NEO4J_PASSWORD="your-neo4j-password"
{
  "mcpServers": {
    "devops-bridge": {
      "command": "/full/path/to/mcp-server-devops-bridge/mcp-server-devops-bridge",
      "args": [],
      "env": {
        "AZURE_DEVOPS_ORG": "organization",
        "AZDO_PAT": "personal_access_token",
        "AZURE_DEVOPS_PROJECT": "project",
        "DEFAULT_SLACK_CHANNEL": "channel_id",
        "SLACK_BOT_TOKEN": "bot_token",
        "GITHUB_PAT": "personal_access_token",
        "OPENAI_API_KEY": "openaikey",
        "QDRANT_URL": "http://localhost:6333",
        "QDRANT_API_KEY": "yourkey",
        "NEO4J_URL": "yourneo4jinstance",
        "NEO4J_USER": "neo4j",
        "NEO4J_PASSWORD": "neo4jpassword"
      }
    }
  }
}
"Create a user story for the new authentication feature, link it to the existing GitHub PR #123, and notify the team in Slack"
"Find all work items related to authentication, show their linked PRs, and summarize recent code review comments"
"Generate a sprint report including:
- Work item status from Azure Boards
- PR review status from GitHub
- Team discussions from Slack"
"Update the wiki page for authentication and link it to relevant work items and PRs"
"Create an agent to monitor our authentication PRs, summarize code changes, and post daily updates to Slack"
The project includes a powerful agent system built on OpenAI's GPT-4o-mini, enabling Claude to create and coordinate its own long-running agents.
Under the hood, each agent runs inside a dedicated Docker container, isolating it from the host and from the other agents.
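As a rough illustration, here is a minimal sketch of launching one agent per container via the Docker CLI from Go; the image name, resource limits, and helper function are assumptions, not the project's actual implementation.

package main

import (
    "fmt"
    "os/exec"
)

// startAgentContainer launches a single agent in its own container so that a
// misbehaving agent cannot affect the host or its peers. All names and flags
// below are illustrative.
func startAgentContainer(agentID, systemPrompt, task string) error {
    cmd := exec.Command("docker", "run",
        "-d",                       // detached: the agent is long-running
        "--name", "agent-"+agentID, // one container per agent
        "--memory", "512m",         // cap resources per agent
        "--cpus", "0.5",
        "-e", "SYSTEM_PROMPT="+systemPrompt,
        "-e", "TASK="+task,
        "agent-runtime:latest", // hypothetical agent image
    )
    out, err := cmd.CombinedOutput()
    if err != nil {
        return fmt.Errorf("starting agent %s: %w: %s", agentID, err, out)
    }
    return nil
}

func main() {
    err := startAgentContainer("researcher", "You are a research agent...", "Find information about climate change")
    if err != nil {
        fmt.Println(err)
    }
}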
The bridge uses the Model Context Protocol (MCP) to provide Claude with structured, tool-based access to your DevOps platforms.
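To make that concrete, here is a minimal, self-contained sketch of exposing a single DevOps action as an MCP tool, using the mark3labs/mcp-go API that the later snippets in this README rely on; the tool name, parameters, and handler body are illustrative assumptions.

package main

import (
    "context"

    "github.com/mark3labs/mcp-go/mcp"
    "github.com/mark3labs/mcp-go/server"
)

func main() {
    s := server.NewMCPServer("devops-bridge-example", "0.1.0")

    // Describe the tool so the AI assistant knows when and how to call it.
    tool := mcp.NewTool("create_work_item",
        mcp.WithDescription("Create a work item in Azure Boards"),
        mcp.WithString("title", mcp.Required(), mcp.Description("Work item title")),
    )

    s.AddTool(tool, func(ctx context.Context, req mcp.CallToolRequest) (*mcp.CallToolResult, error) {
        // A real handler would call the Azure DevOps API here; this stub only
        // returns a confirmation.
        return mcp.NewToolResultText("work item created (stub)"), nil
    })

    // Serve over stdio so an MCP client such as Claude Desktop can connect.
    _ = server.ServeStdio(s)
}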
If you don't have direct access to modify environment variables, create a start.sh script, make it executable, and point the "command" field of your MCP configuration at the script instead of the binary:
#!/bin/bash
# Azure DevOps Configuration
export AZURE_DEVOPS_ORG="YOUR ORG"
export AZDO_PAT="YOUR PAT"
export AZURE_DEVOPS_PROJECT="YOUR PROJECT"
# GitHub Configuration
export GITHUB_PAT="YOUR PAT"
# Slack Configuration
export SLACK_BOT_TOKEN="YOUR TOKEN"
export DEFAULT_SLACK_CHANNEL="YOUR CHANNEL ID"
# OpenAI Configuration
export OPENAI_API_KEY="YOUR API KEY"
# Qdrant Configuration
export QDRANT_URL="http://localhost:6333"
export QDRANT_API_KEY="your-qdrant-api-key"
# Neo4j Configuration
export NEO4J_URL="http://localhost:7474"
export NEO4J_USER="neo4j"
export NEO4J_PASSWORD="your-neo4j-password"
# Email Configuration (if using email features)
export EMAIL_INBOX_WEBHOOK_URL="YOUR WEBHOOK URL"
export EMAIL_SEARCH_WEBHOOK_URL="YOUR WEBHOOK URL"
export EMAIL_REPLY_WEBHOOK_URL="YOUR WEBHOOK URL"
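# Finally, start the MCP server (adjust the path to your build output)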
/path/to/mcp-server-devops-bridge/mcp-server-devops-bridge
We welcome contributions!
This project is licensed under the MIT License - see the LICENSE file for details.
The bridge implements an intelligent memory system that enables AI assistants to automatically capture context from tool interactions and recall it in later conversations.
The memory system uses a dual-store approach: a vector store (Qdrant) for semantic search and a graph database (Neo4j) for relationships between memories.
The system includes a middleware layer that enhances MCP tools with memory capabilities:
// Apply memory middleware to any tool handler
wrappedHandler := MemoryMiddleware(originalHandler)
This middleware operates in two phases:
- Query Phase: before processing a tool request, relevant memories are retrieved
- Response Phase: after processing a tool request, new information is stored
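A minimal sketch of how such a middleware could be shaped is shown below, matching the MemoryMiddleware(handler) call above and the handler signature used later in this README; queryMemories and storeMemories are hypothetical stand-ins for the Qdrant/Neo4j-backed stores.

package memory

import (
    "context"

    "github.com/mark3labs/mcp-go/mcp"
)

// ToolHandler mirrors the handler signature used by the MCP server.
type ToolHandler func(ctx context.Context, request mcp.CallToolRequest) (*mcp.CallToolResult, error)

// MemoryMiddleware wraps a tool handler with a query phase and a response phase.
func MemoryMiddleware(next ToolHandler) ToolHandler {
    return func(ctx context.Context, request mcp.CallToolRequest) (*mcp.CallToolResult, error) {
        // Query phase: recall memories relevant to this tool call and make
        // them available to the handler via the context.
        if related := queryMemories(ctx, request.Params.Name); related != "" {
            ctx = context.WithValue(ctx, relatedMemoriesKey, related)
        }

        result, err := next(ctx, request)
        if err != nil {
            return nil, err
        }

        // Response phase: persist what the tool did for future recall.
        storeMemories(ctx, request.Params.Name, result)
        return result, nil
    }
}

type ctxKey string

const relatedMemoriesKey ctxKey = "related_memories"

// Hypothetical stand-ins for the vector/graph store integration.
func queryMemories(ctx context.Context, toolName string) string                 { return "" }
func storeMemories(ctx context.Context, toolName string, r *mcp.CallToolResult) {}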
Adding memory capabilities to a specific tool is simple:
// Create your tool
myTool := mcp.NewTool("my_tool", /* ... */)
// Original handler function
func handleMyTool(ctx context.Context, request mcp.CallToolRequest) (*mcp.CallToolResult, error) {
// Tool implementation
}
// Wrap with memory middleware
wrappedHandler := MemoryMiddleware(handleMyTool)
// Register with MCP server
mcpServer.AddTool(myTool, wrappedHandler)
The memory system can be configured through environment variables:
# Vector Store (Qdrant)
export QDRANT_URL="http://localhost:6333"
export QDRANT_API_KEY="your-qdrant-api-key"
# Graph Database (Neo4j)
export NEO4J_URL="http://localhost:7474"
export NEO4J_USER="neo4j"
export NEO4J_PASSWORD="your-neo4j-password"
# OpenAI (for memory extraction)
export OPENAI_API_KEY="your-openai-key"
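For reference, a small sketch of how these variables might be read at startup follows; the struct, function, and validation rule are illustrative assumptions, not the project's actual code.

package main

import (
    "fmt"
    "os"
)

// MemoryConfig collects the environment variables listed above.
type MemoryConfig struct {
    QdrantURL    string
    QdrantAPIKey string
    Neo4jURL     string
    Neo4jUser    string
    Neo4jPass    string
    OpenAIKey    string
}

func loadMemoryConfig() (MemoryConfig, error) {
    cfg := MemoryConfig{
        QdrantURL:    os.Getenv("QDRANT_URL"),
        QdrantAPIKey: os.Getenv("QDRANT_API_KEY"),
        Neo4jURL:     os.Getenv("NEO4J_URL"),
        Neo4jUser:    os.Getenv("NEO4J_USER"),
        Neo4jPass:    os.Getenv("NEO4J_PASSWORD"),
        OpenAIKey:    os.Getenv("OPENAI_API_KEY"),
    }
    // Assumed requirement: memory extraction needs OpenAI plus the vector store.
    if cfg.OpenAIKey == "" || cfg.QdrantURL == "" {
        return cfg, fmt.Errorf("OPENAI_API_KEY and QDRANT_URL must be set to enable the memory system")
    }
    return cfg, nil
}

func main() {
    if _, err := loadMemoryConfig(); err != nil {
        fmt.Println(err)
    }
}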
This repository contains the code for an MCP (Model Context Protocol) server DevOps bridge with a powerful agent system that allows AI models to create, manage, and coordinate long-running agents.
The system allows AI models to create specialized agents, subscribe them to topics, pass messages between them, and send them direct commands; each of these actions is exposed as a dedicated tool (see the walkthrough below). Agents communicate with each other through topic-based messaging, sketched after the walkthrough. The agent tools can also be exported to the OpenAI API, letting a model such as GPT-4o coordinate the agents directly:
// Get all agent-related tools
tools := ai.GetAllToolsAsOpenAI()

// Create an OpenAI client
client := openai.NewClient()

// Use OpenAI to coordinate agents
messages := []openai.ChatCompletionMessageParamUnion{
    openai.SystemMessage(`You are a coordinator of AI agents.`),
    openai.UserMessage("Create two agents and have them work together."),
}

// Call OpenAI with our tools
params := openai.ChatCompletionNewParams{
    Model:    openai.F(openai.ChatModelGPT4o),
    Messages: openai.F(messages),
    Tools:    openai.F(tools),
}

// Process the response and handle tool calls
// ...
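The elided step sends the request and routes the model's tool calls back to the agent tools. A rough continuation is sketched below, assuming the same openai-go client style as above (helper signatures vary between library versions) and a hypothetical executeAgentTool dispatcher.

// Send the request (assumes ctx := context.Background() and the client above).
completion, err := client.Chat.Completions.New(ctx, params)
if err != nil {
    panic(err)
}

// Each tool call names one of the agent tools ("agent", "subscribe_agent",
// "send_agent_message", "send_command") with JSON-encoded arguments.
for _, toolCall := range completion.Choices[0].Message.ToolCalls {
    // executeAgentTool is a hypothetical helper that runs the named tool.
    result := executeAgentTool(toolCall.Function.Name, toolCall.Function.Arguments)

    // Feed the result back so the model can keep coordinating the agents.
    messages = append(messages, openai.ToolMessage(toolCall.ID, result))
}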
Create Agents: Create specialized agents for different tasks
Tool: agent
Arguments: {"id": "researcher", "system_prompt": "You are a research agent...", "task": "Find information about climate change"}
Subscribe to Topics: Have agents listen for relevant messages
Tool: subscribe_agent
Arguments: {"agent_id": "writer", "topic": "research_results"}
Send Messages: Share information between agents
Tool: send_agent_message
Arguments: {"topic": "research_results", "content": "Here is the information I found..."}
Send Commands: Give direct instructions to agents
Tool: send_command
Arguments: {"agent_id": "writer", "command": "Summarize the research in 3 paragraphs"}
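For intuition, here is a minimal in-process sketch of the kind of topic-based message bus that subscribe_agent and send_agent_message imply; the real system may use a different transport, so treat the types and buffer sizes as assumptions.

package bus

import "sync"

// Message is one payload published on a topic.
type Message struct {
    Topic   string
    Content string
}

// Bus fans messages out to every agent subscribed to a topic.
type Bus struct {
    mu          sync.RWMutex
    subscribers map[string][]chan Message // topic -> subscriber inboxes
}

func New() *Bus {
    return &Bus{subscribers: make(map[string][]chan Message)}
}

// Subscribe registers an agent's inbox channel for a topic.
func (b *Bus) Subscribe(topic string) <-chan Message {
    ch := make(chan Message, 16)
    b.mu.Lock()
    b.subscribers[topic] = append(b.subscribers[topic], ch)
    b.mu.Unlock()
    return ch
}

// Publish delivers a message to every subscriber of the topic.
func (b *Bus) Publish(msg Message) {
    b.mu.RLock()
    defer b.mu.RUnlock()
    for _, ch := range b.subscribers[msg.Topic] {
        select {
        case ch <- msg:
        default: // drop rather than block if an agent's inbox is full
        }
    }
}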
The system implements several security measures around agent execution.
See the examples/agent_example.go file for a complete example of how to use the agent system to create and coordinate multiple agents.