The DB MCP Server provides a standardized way for AI models to interact with multiple databases simultaneously. Built on the FreePeak/cortex framework, it enables AI assistants to execute SQL queries, manage transactions, explore schemas, and analyze performance across different database systems through a unified interface.
Unlike traditional database connectors, DB MCP Server can connect to and interact with multiple databases concurrently:
{
"connections": [
{
"id": "mysql1",
"type": "mysql",
"host": "localhost",
"port": 3306,
"name": "db1",
"user": "user1",
"password": "password1"
},
{
"id": "postgres1",
"type": "postgres",
"host": "localhost",
"port": 5432,
"name": "db2",
"user": "user2",
"password": "password2"
}
]
}
For each connected database, the server automatically generates a set of specialized tools:
// For a database with ID "mysql1", these tools are generated:
query_mysql1 // Execute SQL queries
execute_mysql1 // Run data modification statements
transaction_mysql1 // Manage transactions
schema_mysql1 // Explore database schema
performance_mysql1 // Analyze query performance
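To see these generated tools from the client side, you can connect with any MCP client library. Below is a minimal sketch using the official `mcp` Python SDK against a server running in SSE mode on the default port; the package, import paths, and the `mysql1` database ID are assumptions based on the configuration above, not part of this repository.

```python
# Minimal sketch: list the auto-generated tools and run a query.
# Assumes: `pip install mcp`, the server running in SSE mode on localhost:9092,
# and a configured database with ID "mysql1" (adjust to your config.json).
import asyncio

from mcp import ClientSession
from mcp.client.sse import sse_client


async def main():
    async with sse_client("http://localhost:9092/sse") as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()

            # Should include query_mysql1, execute_mysql1, transaction_mysql1, ...
            tools = await session.list_tools()
            print([tool.name for tool in tools.tools])

            # Call one of the generated tools.
            result = await session.call_tool(
                "query_mysql1",
                arguments={"query": "SELECT 1"},
            )
            print(result.content)


asyncio.run(main())
```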
The server follows Clean Architecture principles, organized into distinct layers. The following database systems are currently supported:
| Database | Status | Features |
|---|---|---|
| MySQL | ✅ Full Support | Queries, Transactions, Schema Analysis, Performance Insights |
| PostgreSQL | ✅ Full Support (v9.6-17) | Queries, Transactions, Schema Analysis, Performance Insights |
The quickest way to get started is with Docker:
# Pull the latest image
docker pull freepeak/db-mcp-server:latest
# Option 1: Run with environment variables (recommended)
docker run -p 9092:9092 \
-v $(pwd)/config.json:/app/my-config.json \
-e TRANSPORT_MODE=sse \
-e CONFIG_PATH=/app/my-config.json \
freepeak/db-mcp-server
# Option 2: Override the entrypoint
docker run -p 9092:9092 \
-v $(pwd)/config.json:/app/my-config.json \
--entrypoint /app/server \
freepeak/db-mcp-server \
-t sse -c /app/my-config.json
# Option 3: Use shell to execute the command
docker run -p 9092:9092 \
-v $(pwd)/config.json:/app/my-config.json \
freepeak/db-mcp-server \
/bin/sh -c "/app/server -t sse -c /app/my-config.json"
Note: We mount to /app/my-config.json because the container already has a file at /app/config.json. If you encounter platform mismatch warnings, you can specify the platform with --platform linux/amd64 or --platform linux/arm64.
# Clone the repository
git clone https://github.com/FreePeak/db-mcp-server.git
cd db-mcp-server
# Build the server
make build
# Run the server in SSE mode
./server -t sse -c config.json
The server supports multiple transport modes to fit different use cases:
Ideal for integration with AI coding assistants:
# Run the server in STDIO mode
./server -t stdio -c config.json
Output will be sent as JSON-RPC messages to stdout, while logs go to stderr.
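As a quick check of the STDIO transport outside an editor, the sketch below launches the server as a subprocess using the `mcp` Python SDK and lists its tools. The `mcp` package, import paths, and file paths are assumptions for illustration, not part of this repository.

```python
# Minimal sketch: drive the server over STDIO with the `mcp` Python SDK.
# Assumes: `pip install mcp`; adjust the binary and config paths to your setup.
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

server_params = StdioServerParameters(
    command="/path/to/db-mcp-server/server",
    args=["-t", "stdio", "-c", "/path/to/config.json"],
)


async def main():
    async with stdio_client(server_params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            tools = await session.list_tools()
            print([tool.name for tool in tools.tools])


asyncio.run(main())
```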
For Cursor integration, add this to your .cursor/mcp.json:
{
"mcpServers": {
"stdio-db-mcp-server": {
"command": "/path/to/db-mcp-server/server",
"args": [
"-t", "stdio",
"-c", "/path/to/config.json"
]
}
}
}
For web-based applications and services:
# Run with default host (localhost) and port (9092)
./server -t sse -c config.json
# Specify a custom host and port
./server -t sse -host 0.0.0.0 -port 8080 -c config.json
Connect your client to http://localhost:9092/sse for the event stream.
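If you just want to confirm the SSE endpoint is reachable before wiring up a full client, a streaming HTTP request is enough. This is a minimal sketch using the Python requests library; the host, port, and endpoint path are the defaults described above.

```python
# Minimal sketch: confirm the SSE endpoint is serving an event stream.
# Assumes: `pip install requests` and the server running with default host/port.
import requests

with requests.get("http://localhost:9092/sse", stream=True, timeout=10) as resp:
    resp.raise_for_status()
    print("Content-Type:", resp.headers.get("Content-Type"))  # expect text/event-stream
    # Print the first few lines of the event stream, then stop.
    for i, line in enumerate(resp.iter_lines(decode_unicode=True)):
        print(line)
        if i >= 5:
            break
```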
For development environments with database containers:
# docker-compose.yml
version: '3'
services:
db-mcp-server:
image: freepeak/db-mcp-server:latest
ports:
- "9092:9092"
volumes:
- ./config.json:/app/my-config.json
environment:
- TRANSPORT_MODE=sse
- CONFIG_PATH=/app/my-config.json
# Alternative using entrypoint
# entrypoint: ["/app/server"]
# command: ["-t", "sse", "-c", "/app/my-config.json"]
depends_on:
- mysql
- postgres
mysql:
image: mysql:8
environment:
MYSQL_ROOT_PASSWORD: rootpassword
MYSQL_DATABASE: testdb
MYSQL_USER: user
MYSQL_PASSWORD: password
ports:
- "3306:3306"
postgres:
image: postgres:17
environment:
POSTGRES_DB: testdb
POSTGRES_USER: user
POSTGRES_PASSWORD: password
ports:
- "5432:5432"
Create a config.json file with your database connections:
{
"connections": [
{
"id": "mysql1",
"type": "mysql",
"host": "localhost",
"port": 3306,
"name": "db1",
"user": "user1",
"password": "password1"
},
{
"id": "postgres1",
"type": "postgres",
"host": "localhost",
"port": 5432,
"name": "db2",
"user": "user2",
"password": "password2"
}
]
}
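If you generate this file as part of a setup script, writing it programmatically keeps credentials out of version control. A small sketch follows; the DB1_* environment variable names are illustrative and not read by the server, only the resulting config.json structure matters.

```python
# Minimal sketch: generate config.json from environment variables.
# The DB1_* variable names are illustrative; the server only reads config.json.
import json
import os

config = {
    "connections": [
        {
            "id": "mysql1",
            "type": "mysql",
            "host": os.environ.get("DB1_HOST", "localhost"),
            "port": int(os.environ.get("DB1_PORT", "3306")),
            "name": os.environ.get("DB1_NAME", "db1"),
            "user": os.environ["DB1_USER"],
            "password": os.environ["DB1_PASSWORD"],
        }
    ]
}

with open("config.json", "w") as f:
    json.dump(config, f, indent=2)
```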
The server supports various command-line options:
# Basic options
./server -t <transport> -c <config-file>
# Available transports: stdio, sse
# For SSE transport, additional options:
./server -t sse -host <hostname> -port <port> -c <config-file>
# Direct database configuration:
./server -t stdio -db-config '{"connections":[...]}'
# Environment variable configuration:
export DB_CONFIG='{"connections":[...]}'
./server -t stdio
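For scripted setups, the same inline configuration can be passed through the DB_CONFIG environment variable when spawning the server. A minimal sketch, using only the -t flag and DB_CONFIG variable shown above; the connection values and binary path are placeholders.

```python
# Minimal sketch: start the server with configuration supplied via DB_CONFIG.
import json
import os
import subprocess

db_config = {
    "connections": [
        {
            "id": "mysql1",
            "type": "mysql",
            "host": "localhost",
            "port": 3306,
            "name": "db1",
            "user": "user1",
            "password": "password1",
        }
    ]
}

# Launch the server in SSE mode with the config passed via the environment.
env = dict(os.environ, DB_CONFIG=json.dumps(db_config))
proc = subprocess.Popen(["./server", "-t", "sse"], env=env)
# ... later, shut it down with proc.terminate()
```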
For each connected database (e.g., "mysql1", "mysql2"), the server automatically generates tools with names following this format:
<tool_type>_<database_id>
Where:
- <tool_type>: one of query, execute, transaction, schema, performance
- <database_id>: the ID of the database as defined in your configuration
Example tool names for a database with ID "mysql1":
query_mysql1
execute_mysql1
transaction_mysql1
schema_mysql1
performance_mysql1
query_<dbid>: Execute SQL queries on the specified database
{
"query": "SELECT * FROM users WHERE age > ?",
"params": [30]
}
execute_<dbid>: Execute SQL statements (INSERT, UPDATE, DELETE)
{
"statement": "INSERT INTO users (name, email) VALUES (?, ?)",
"params": ["John Doe", "john@example.com"]
}
transaction_<dbid>: Manage database transactions
// Begin transaction
{
"action": "begin",
"readOnly": false
}
// Execute within transaction
{
"action": "execute",
"transactionId": "<from begin response>",
"statement": "UPDATE users SET active = ? WHERE id = ?",
"params": [true, 42]
}
// Commit transaction
{
"action": "commit",
"transactionId": "<from begin response>"
}
schema_<dbid>: Get database schema information
{
"random_string": "dummy"
}
performance_<dbid>: Analyze query performance
{
"action": "analyzeQuery",
"query": "SELECT * FROM users WHERE name LIKE ?"
}
list_databases: List all configured database connections
{}
// Query the first database
{
"name": "query_mysql1",
"parameters": {
"query": "SELECT * FROM users LIMIT 5"
}
}
// Query the second database
{
"name": "query_mysql2",
"parameters": {
"query": "SELECT * FROM products LIMIT 5"
}
}
// Begin transaction
{
"name": "transaction_mysql1",
"parameters": {
"action": "begin"
}
}
// Response contains transactionId
// Execute within transaction
{
"name": "transaction_mysql1",
"parameters": {
"action": "execute",
"transactionId": "tx_12345",
"statement": "INSERT INTO orders (user_id, product_id) VALUES (?, ?)",
"params": [1, 2]
}
}
// Commit transaction
{
"name": "transaction_mysql1",
"parameters": {
"action": "commit",
"transactionId": "tx_12345"
}
}
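Programmatically, the same begin/execute/commit flow maps to three call_tool invocations, carrying the transactionId from the begin response into the later calls. A minimal sketch, assuming the `mcp` Python SDK and an already-initialized session as in the earlier examples; parsing the begin response as JSON is an illustrative assumption about the server's output format.

```python
# Minimal sketch: run a transaction through the transaction_mysql1 tool.
# Assumes an initialized `session` (see the SSE client sketch above); the
# JSON parsing of the begin response is an illustrative assumption.
import json


async def run_transaction(session):
    # Begin the transaction and pull the transactionId out of the response.
    begin = await session.call_tool("transaction_mysql1", arguments={"action": "begin"})
    tx_id = json.loads(begin.content[0].text)["transactionId"]

    # Execute a statement inside the transaction.
    await session.call_tool(
        "transaction_mysql1",
        arguments={
            "action": "execute",
            "transactionId": tx_id,
            "statement": "INSERT INTO orders (user_id, product_id) VALUES (?, ?)",
            "params": [1, 2],
        },
    )

    # Commit.
    await session.call_tool(
        "transaction_mysql1",
        arguments={"action": "commit", "transactionId": tx_id},
    )
```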
We're committed to expanding DB MCP Server to support a wide range of database systems.
If you see an error like mountpoint for /app/config.json: not a directory when mounting your config.json, it's because the container already has a file at that path. Mount to a different path (e.g., /app/my-config.json) and update your configuration accordingly, using one of these approaches:
- Environment variables: -e TRANSPORT_MODE=sse -e CONFIG_PATH=/app/my-config.json
- Entrypoint override: --entrypoint /app/server freepeak/db-mcp-server -t sse -c /app/my-config.json
- Shell execution: freepeak/db-mcp-server /bin/sh -c "/app/server -t sse -c /app/my-config.json"
The server writes logs to ./logs/db-mcp-server.log. Enable debug logging with the -debug flag:
./server -t sse -debug -c config.json
Contributions are welcome! Here's how you can help:
# Create a feature branch
git checkout -b new-feature
# Commit your changes
git commit -am 'Add new feature'
# Push the branch and open a pull request
git push origin new-feature
Please ensure your code follows our coding standards and includes appropriate tests.
This project is licensed under the MIT License - see the LICENSE file for details.
The MCP server registers tools with names that match the format Cursor expects. The tool names follow this format:
mcp_<servername>_<tooltype>_<dbID>
For example: mcp_mysql1_db_mcp_server_stdio_schema_mysql1_db
The server uses the name mysql1_db_mcp_server_stdio by default, which should match your Cursor configuration in the mcp.json file.
In your Cursor configuration (~/.cursor/mcp.json), you should have a configuration like:
{
"mcpServers": {
"multidb": {
"command": "/path/to/db-mcp-server/server",
"args": [
"-t",
"stdio",
"-c",
"/path/to/database_config.json"
]
}
}
}
The server will automatically register tools with simple names that match the database identifiers in your configuration.
Once your DB MCP Server is running and properly configured in Cursor, you can use the MCP tools in your AI assistant conversations. The tools follow this naming pattern:
mcp_<server_name>_<tool_type>_<database_id>
Where:
- <server_name> is the name defined in your .cursor/mcp.json (e.g., "multidb")
- <tool_type> is one of: query, execute, transaction, schema, performance, list_databases
- <database_id> is the database ID from your configuration (not needed for list_databases)
For a server named "multidb" with a database ID "mysql1":
- mcp_multidb_list_databases: list all configured database connections
- mcp_multidb_query_mysql1: Query: SELECT * FROM users LIMIT 10
- mcp_multidb_schema_mysql1: get database schema information
- mcp_multidb_execute_mysql1: Statement: INSERT INTO users (name, email) VALUES ('John Doe', 'john@example.com')
- mcp_multidb_transaction_mysql1: Action: begin
If the AI assistant can't call the MCP tools, first verify that the server process is running (e.g., with ps aux | grep server).
The DB MCP Server fully supports OpenAI's Agents SDK, allowing you to create AI agents that can interact with databases directly.
pip install openai-agents
Here's how to integrate the DB MCP Server with an OpenAI Agent:
from openai import OpenAI
from agents.agent import Agent, ModelSettings
from agents.tools.mcp_server import MCPServerSse, MCPServerSseParams
# Connect to the MCP server
db_server = MCPServerSse(
params=MCPServerSseParams(
url="http://localhost:9095/sse", # URL to your running DB MCP server
schema={
"params": {
"type": "array",
"items": {
"type": "object",
"properties": {
"name": {"type": "string"},
"description": {"type": "string"},
"parameters": {"type": "object"}
}
}
}
}
),
)
# Create the agent with access to database tools
agent = Agent(
name="Database Agent",
model="gpt-4o",
model_settings=ModelSettings(temperature=0.1),
instructions="""
You are a database helper agent. You can execute SQL queries,
manage database transactions, and explore schema information.
""",
mcp_servers=[db_server],
)
# Now the agent can be used to interact with your databases through the OpenAI API
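To actually run the agent, the Agents SDK exposes a Runner. The snippet below is a hedged usage sketch only: the Runner import path, the Runner.run(...) call, the final_output field, and using db_server as an async context manager are assumptions about the openai-agents package, not something this repository documents.

```python
# Hedged usage sketch (assumed Agents SDK API; adjust to your SDK version).
import asyncio

from agents import Runner  # assumed import path


async def main():
    # Assumption: the MCP server object opens/closes its SSE connection
    # when used as an async context manager.
    async with db_server:
        result = await Runner.run(
            agent,
            "List the configured databases, then describe the schema of mysql1.",
        )
        print(result.final_output)  # assumed result field


asyncio.run(main())
```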
The repository includes a test script to verify compatibility with the OpenAI Agents SDK:
# Run the test script
./test_tools/openai-agent-sdk-test/run_test.sh
The script exercises the integration end to end.
If you encounter issues, check that your Cursor mcp.json configuration matches the expected format:
{
"mcpServers": {
"stdio-db-mcp-server": {
"env": {},
"args": [
"-t",
"stdio",
"-c",
"/path/to/config.json"
],
"command": "/path/to/db-mcp-server/server"
}
}
}