MCPHub is an embeddable Model Context Protocol (MCP) solution for AI services. It enables seamless integration of MCP servers into any AI framework, allowing developers to easily configure, set up, and manage MCP servers within their applications. Whether you're using OpenAI Agents, LangChain, or Autogen, MCPHub provides a unified way to connect your AI services with MCP tools and resources.
Ensure you have the following tools installed:
# Install uv (Python package manager)
curl -LsSf https://astral.sh/uv/install.sh | sh
# Install git (for repository cloning)
sudo apt-get install git # Ubuntu/Debian
brew install git # macOS
# Install npx (comes with Node.js)
npm install -g npx
# Install MCPHub
pip install mcphub # Basic installation
# Optional: Install with framework-specific dependencies
pip install mcphub[openai] # For OpenAI Agents integration
pip install mcphub[langchain] # For LangChain integration
pip install mcphub[autogen] # For Autogen integration
pip install mcphub[all] # Install all optional dependencies
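To confirm the install, you can read the package metadata with the standard library (importlib.metadata ships with Python 3.8+):

import importlib.metadata

# Prints the installed mcphub version; raises PackageNotFoundError if absent
print(importlib.metadata.version("mcphub"))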
Create a .mcphub.json file in your project root:
{
  "mcpServers": {
    "sequential-thinking-mcp": {
      "package_name": "smithery-ai/server-sequential-thinking",
      "command": "npx",
      "args": [
        "-y",
        "@smithery/cli@latest",
        "run",
        "@smithery-ai/server-sequential-thinking"
      ]
    }
  }
}
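Since the file is plain JSON, a quick parse catches syntax errors before MCPHub ever loads it; a minimal standard-library check:

import json
from pathlib import Path

# Parse .mcphub.json and confirm the expected top-level key is present
config = json.loads(Path(".mcphub.json").read_text())
assert "mcpServers" in config, "missing 'mcpServers' key"
print("Configured servers:", list(config["mcpServers"]))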
import asyncio
import json

from agents import Agent, Runner
from mcphub import MCPHub

async def main():
    """
    Example of using MCPHub to integrate MCP servers with OpenAI Agents.

    This example demonstrates:
    1. Initializing MCPHub
    2. Fetching and using an MCP server
    3. Listing available tools
    4. Creating and running an agent with MCP tools
    """
    # Step 1: Initialize MCPHub
    # MCPHub will automatically:
    # - Find .mcphub.json in your project
    # - Load server configurations
    # - Set up servers (clone repos, run setup scripts if needed)
    hub = MCPHub()

    # Step 2: Create an MCP server instance using async context manager
    # Parameters:
    # - mcp_name: The name of the server from your .mcphub.json
    # - cache_tools_list: Cache the tools list for better performance
    async with hub.fetch_openai_mcp_server(
        mcp_name="sequential-thinking-mcp",
        cache_tools_list=True
    ) as server:
        # Step 3: List available tools from the MCP server
        # This shows what capabilities are available to your agent
        tools = await server.list_tools()

        # Pretty print the tools for better readability
        tools_dict = [
            tool.__dict__ if hasattr(tool, "__dict__") else tool for tool in tools
        ]
        print("Available MCP Tools:")
        print(json.dumps(tools_dict, indent=2, default=str))  # default=str guards non-JSON types

        # Step 4: Create an OpenAI Agent with MCP server
        # The agent can now use all tools provided by the MCP server
        agent = Agent(
            name="Assistant",
            instructions="Use the available tools to accomplish the given task",
            mcp_servers=[server]  # Provide the MCP server to the agent
        )

        # Step 5: Run your agent with a complex task
        # The agent will automatically have access to all MCP tools
        complex_task = """Please help me analyze the following complex problem:
        We need to design a new feature for our product that balances user privacy
        with data collection for improving the service. Consider the ethical implications,
        technical feasibility, and business impact. Break down your thinking process
        step by step, and provide a detailed recommendation with clear justification
        for each decision point."""

        # Execute the task and get the result
        result = await Runner.run(agent, complex_task)
        print("\nAgent Response:")
        print(result)

if __name__ == "__main__":
    # Run the async main function
    asyncio.run(main())
.mcphub.json configuration file

Configure your MCP servers in .mcphub.json (the // comments in the example below are for illustration only; strict JSON does not allow comments):
{
  "mcpServers": {
    // TypeScript-based MCP server using NPX
    "sequential-thinking-mcp": {
      "package_name": "smithery-ai/server-sequential-thinking", // NPM package name
      "command": "npx", // Command to run server
      "args": [ // Command arguments
        "-y",
        "@smithery/cli@latest",
        "run",
        "@smithery-ai/server-sequential-thinking"
      ]
    },
    // Python-based MCP server from GitHub
    "azure-storage-mcp": {
      "package_name": "mashriram/azure_mcp_server", // Package identifier
      "repo_url": "https://github.com/mashriram/azure_mcp_server", // GitHub repository
      "command": "uv", // Python package manager
      "args": ["run", "mcp_server_azure_cmd"], // Run command
      "setup_script": "uv pip install -e .", // Installation script
      "env": { // Environment variables
        "AZURE_STORAGE_CONNECTION_STRING": "${AZURE_STORAGE_CONNECTION_STRING}",
        "AZURE_STORAGE_CONTAINER_NAME": "${AZURE_STORAGE_CONTAINER_NAME}",
        "AZURE_STORAGE_BLOB_NAME": "${AZURE_STORAGE_BLOB_NAME}"
      }
    }
  }
}
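The ${...} placeholders follow the usual environment-variable substitution convention, so the referenced variables need to be set before MCPHub starts the server. A minimal sketch (the values below are placeholders, not working credentials):

import os

# Export the variables referenced by the azure-storage-mcp entry.
# In practice, set these in your shell or a secrets manager instead.
os.environ["AZURE_STORAGE_CONNECTION_STRING"] = "DefaultEndpointsProtocol=https;..."
os.environ["AZURE_STORAGE_CONTAINER_NAME"] = "my-container"
os.environ["AZURE_STORAGE_BLOB_NAME"] = "my-blob"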
MCPHub provides adapters for popular AI frameworks:
import asyncio

from agents import Agent  # OpenAI Agents SDK
from mcphub import MCPHub

async def framework_quick_examples():
    hub = MCPHub()

    # 1. OpenAI Agents Integration
    async with hub.fetch_openai_mcp_server(
        mcp_name="sequential-thinking-mcp",
        cache_tools_list=True
    ) as server:
        # Use server with OpenAI agents
        agent = Agent(
            name="Assistant",
            mcp_servers=[server]
        )

    # 2. LangChain Tools Integration
    langchain_tools = await hub.fetch_langchain_mcp_tools(
        mcp_name="sequential-thinking-mcp",
        cache_tools_list=True
    )
    # Use tools with LangChain

    # 3. Autogen Adapters Integration
    autogen_adapters = await hub.fetch_autogen_mcp_adapters(
        mcp_name="sequential-thinking-mcp"
    )
    # Use adapters with Autogen
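As with any coroutine, the function above needs an event loop to run; in a script, asyncio.run is enough:

if __name__ == "__main__":
    asyncio.run(framework_quick_examples())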
Discover and manage MCP server tools:
from mcphub import MCPHub

async def tool_management():
    hub = MCPHub()

    # List all servers
    servers = hub.list_servers()

    # List all tools from a specific MCP server
    tools = await hub.list_tools(mcp_name="sequential-thinking-mcp")

    # Print tool information
    for tool in tools:
        print(f"Tool Name: {tool.name}")
        print(f"Description: {tool.description}")
        print(f"Parameters: {tool.parameters}")
        print("---")

    # Tools can be:
    # - Cached for better performance using cache_tools_list=True
    # - Converted to framework-specific formats automatically
    # - Used directly with AI frameworks through adapters
MCPHub simplifies the integration of Model Context Protocol (MCP) servers into AI applications through four main components:
Params Hub: Reads server configurations from .mcphub.json
MCP Servers Manager: Sets up the configured servers (cloning repositories and running setup scripts where needed)
MCP Client: Communicates with running servers; list_tools discovers available server tools and call_tool executes them
Framework Adapters: Convert MCP tools into formats usable by OpenAI Agents, LangChain, and Autogen

These components cover the three stages of the workflow: Configuration & Setup (Params Hub and Servers Manager), Communication (MCP Client), and Integration (Framework Adapters).
This architecture provides a seamless way to integrate MCP capabilities into any AI application while maintaining clean separation of concerns and framework flexibility.
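To make the flow concrete, the sketch below strings together only the calls shown earlier in this README (MCPHub(), list_servers, list_tools, and the OpenAI adapter); treat it as an illustration of the pipeline rather than an exhaustive API reference:

import asyncio
from mcphub import MCPHub

async def flow_demo():
    hub = MCPHub()  # Params Hub: reads .mcphub.json and sets up servers
    print(hub.list_servers())  # Servers Manager: the configured servers

    # MCP Client: discover the tools a server exposes
    tools = await hub.list_tools(mcp_name="sequential-thinking-mcp")
    print([tool.name for tool in tools])

    # Framework Adapter: hand the same server to an OpenAI Agents workflow
    async with hub.fetch_openai_mcp_server(
        mcp_name="sequential-thinking-mcp"
    ) as server:
        print(await server.list_tools())

asyncio.run(flow_demo())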
Run the unit tests with pytest:
pytest tests/ -v
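A minimal test in the same spirit might assert that a configured server is registered; this is a sketch that assumes the .mcphub.json shown above sits in the working directory and that list_servers reports the configured entries:

# tests/test_config.py
from mcphub import MCPHub

def test_server_is_configured():
    hub = MCPHub()
    servers = hub.list_servers()
    # Assumes list_servers returns the configured server names (or objects
    # whose string form contains them); adjust to the actual return type.
    assert any("sequential-thinking-mcp" in str(s) for s in servers)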
This project uses GitHub Actions for continuous integration and deployment:

Automated Testing: Tests run on Python 3.10, 3.11, and 3.12 for every push to the main and release branches and for every pull request.
Automatic Version Bumping and Tagging: When code is pushed to the release branch, the version in pyproject.toml is bumped automatically and a new tag (e.g., v0.1.2) is created for the release.
PyPI Publishing: When code is pushed to the release branch and tests pass, the package is automatically built and published to PyPI.

To enable automatic PyPI deployment, add a GitHub Secret named PYPI_API_TOKEN with an API token value from PyPI.

We welcome contributions! Please check out our Contributing Guide for guidelines on how to proceed.
This project is licensed under the MIT License - see the LICENSE file for details.