MCPOmni Connect is a powerful, universal command-line interface (CLI) that serves as your gateway to the Model Context Protocol (MCP) ecosystem. It seamlessly integrates multiple MCP servers, AI models, and various transport protocols into a unified, intelligent interface.
MCPOmni Connect
├── Transport Layer
│   ├── Stdio Transport
│   ├── SSE Transport
│   └── Docker Integration
├── Session Management
│   ├── Multi-Server Orchestration
│   └── Connection Lifecycle Management
├── Tool Management
│   ├── Dynamic Tool Discovery
│   ├── Cross-Server Tool Routing
│   └── Tool Execution Engine
└── AI Integration
    ├── LLM Processing
    ├── Context Management
    └── Response Generation
# With uv (recommended)
uv add mcpomni-connect
# Using pip
pip install mcpomni-connect
# Set up environment variables
echo "LLM_API_KEY=your_key_here" > .env
# Optional: Configure Redis connection
echo "REDIS_HOST=localhost" >> .env
echo "REDIS_PORT=6379" >> .env
echo "REDIS_DB=0" >> .env"
# Configure your servers in servers_config.json
# Start the CLI (make sure your API key is exported or set in a .env file)
mcpomni_connect
# Run all tests with verbose output
pytest tests/ -v
# Run specific test file
pytest tests/test_specific_file.py -v
# Run tests with coverage report
pytest tests/ --cov=src --cov-report=term-missing
tests/
├── unit/ # Unit tests for individual components
Installation
# Clone the repository
git clone https://github.com/Abiorh001/mcp_omni_connect.git
cd mcp_omni_connect
# Create and activate virtual environment
uv venv
source .venv/bin/activate
# Install dependencies
uv sync
Configuration
# Set up environment variables
echo "LLM_API_KEY=your_key_here" > .env
# Configure your servers in servers_config.json
**Start Client**
# Start the client
uv run src/main.py
# or
python src/main.py
{
    "LLM": {
        "provider": "openai",         // Supports: "openai", "openrouter", "groq"
        "model": "gpt-4",             // Any model from the supported providers
        "temperature": 0.5,
        "max_tokens": 5000,
        "max_context_length": 30000,  // Maximum context length for the model
        "top_p": 0
    },
    "mcpServers": {
        "filesystem-server": {
            "command": "npx",
            "args": [
                "@modelcontextprotocol/server-filesystem",
                "/path/to/files"
            ]
        },
        "sse-server": {
            "type": "sse",
            "url": "http://localhost:3000/mcp",
            "headers": {
                "Authorization": "Bearer token"
            }
        },
        "docker-server": {
            "command": "docker",
            "args": ["run", "-i", "--rm", "mcp/server"]
        }
    }
}
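Note that the // comments above are for illustration only; strict JSON parsers reject comments, so remove them from your actual servers_config.json.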
/tools - List all available tools across servers
/prompts - View available prompts
/prompt:<name>/<args> - Execute a prompt with arguments
/resources - List available resources
/resource:<uri> - Access and analyze a resource
/debug - Toggle debug mode
/refresh - Update server capabilities
/memory - Toggle Redis memory persistence (on/off)
/mode:auto - Switch to autonomous agentic mode
/mode:chat - Switch back to interactive chat mode
# Enable Redis memory persistence
/memory
# Check memory status
Memory persistence is now ENABLED using Redis
# Disable memory persistence
/memory
# Check memory status
Memory persistence is now DISABLED
# Switch to autonomous mode
/mode:auto
# System confirms mode change
Now operating in AUTONOMOUS mode. I will execute tasks independently.
# Switch back to chat mode
/mode:chat
# System confirms mode change
Now operating in CHAT mode. I will ask for approval before executing tasks.
Chat Mode (Default) - interactive; the client asks for approval before executing tasks
Autonomous Mode - the client plans and executes tasks independently
Orchestrator Mode - the client coordinates complex tasks across multiple connected servers
# List all available prompts
/prompts
# Basic prompt usage
/prompt:weather/location=tokyo
# Prompt with multiple arguments (the required arguments depend on the server's prompt definition)
/prompt:travel-planner/from=london/to=paris/date=2024-03-25
# JSON format for complex arguments
/prompt:analyze-data/{
"dataset": "sales_2024",
"metrics": ["revenue", "growth"],
"filters": {
"region": "europe",
"period": "q1"
}
}
# Nested argument structures
/prompt:market-research/target=smartphones/criteria={
"price_range": {"min": 500, "max": 1000},
"features": ["5G", "wireless-charging"],
"markets": ["US", "EU", "Asia"]
}
The client intelligently chains tools and processes resources across connected servers, as the examples below show:
# Example of automatic tool chaining (when the required tools are available on connected servers)
User: "Find charging stations near Silicon Valley and check their current status"
# Client automatically:
1. Uses Google Maps API to locate Silicon Valley
2. Searches for charging stations in the area
3. Checks station status through EV network API
4. Formats and presents results
# Automatic resource processing
User: "Analyze the contents of /path/to/document.pdf"
# Client automatically:
1. Identifies resource type
2. Extracts content
3. Processes through LLM
4. Provides intelligent summary
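The same analysis can also be requested explicitly with the /resource command listed earlier; the file:// URI form here is an assumption, so use whichever URI scheme your connected server actually exposes:
/resource:file:///path/to/document.pdf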
Connection Issues
Error: Could not connect to MCP server
Check the server's command, args, and URL in servers_config.json, and make sure the server process can actually start.
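A quick sanity check is to run the configured server command by hand and confirm it starts. For the filesystem server from the example config above:
npx @modelcontextprotocol/server-filesystem /path/to/files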
API Key Issues
Error: Invalid API key
Verify that LLM_API_KEY is set correctly in your .env file (or exported in your shell).
Redis Connection
Error: Could not connect to Redis
Check the REDIS_HOST, REDIS_PORT, and REDIS_DB values in your .env file and confirm the Redis server is running.
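To confirm Redis is reachable with those settings, ping it directly (this assumes redis-cli is installed and the default host/port from the Configuration section):
redis-cli -h localhost -p 6379 ping
# A healthy server replies: PONG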
Tool Execution Failures
Error: Tool execution failed
Enable debug mode for detailed logging:
/debug
For additional support, please open an issue on the GitHub repository.
We welcome contributions! See our Contributing Guide for details.
This project is licensed under the MIT License - see the LICENSE file for details.
Built with ❤️ by the MCPOmni Connect Team
Example servers_config.json
{
    "mcpServers": {
        "docker-server": {
            "env": {},
            "args": [
                "run",
                "-i",
                "--rm",
                "mcp/server"
            ],
            "command": "docker"
        },
        "filesystem-server": {
            "env": {},
            "args": [
                "@modelcontextprotocol/server-filesystem",
                "/path/to/files"
            ],
            "command": "npx"
        }
    }
}
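Because servers_config.json must be strict JSON, one quick way to validate it is Python's built-in JSON tool:
python -m json.tool servers_config.json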
Seamless access to top MCP servers powering the future of AI integration.