mcp_connect
Abiorh001/mcp_connect

Tags: Remote · CLI · AI Integration · Server Management
License: MIT · Language: Python

🚀 MCPOmni Connect - Universal Gateway to MCP Servers


MCPOmni Connect is a powerful, universal command-line interface (CLI) that serves as your gateway to the Model Context Protocol (MCP) ecosystem. It seamlessly integrates multiple MCP servers, AI models, and various transport protocols into a unified, intelligent interface.

✨ Key Features

🔌 Universal Connectivity

  • Multi-Protocol Support
    • Native support for stdio transport
    • Server-Sent Events (SSE) for real-time communication
    • Docker container integration
    • NPX package execution
    • Extensible transport layer for future protocols
  • ReAct Agentic Mode
    • Autonomous task execution without human intervention
    • Advanced reasoning and decision-making capabilities
    • Seamless switching between chat and agentic modes
    • Self-guided tool selection and execution
    • Complex task decomposition and handling
  • Orchestrator Agent Mode
    • Advanced planning for complex multi-step tasks
    • Intelligent task delegation across multiple MCP servers
    • Dynamic agent coordination and communication
    • Automated subtask management and execution

🧠 AI-Powered Intelligence

  • Advanced LLM Integration
    • Seamless OpenAI models integration
    • Seamless OpenRouter models integration
    • Seamless Groq models integration
    • Seamless Gemini models integration
    • Seamless DeepSeek models integration
    • Dynamic system prompts based on available capabilities
    • Intelligent context management
    • Automatic tool selection and chaining
    • Universal model support through custom ReAct Agent
      • Handles models without native function calling
      • Dynamic function execution based on user requests
      • Intelligent tool orchestration

🔒 Security & Privacy

  • Explicit User Control
    • All tool executions require explicit user approval in chat mode
    • Clear explanation of tool actions before execution
    • Transparent disclosure of data access and usage
  • Data Protection
    • Strict data access controls
    • Server-specific data isolation
    • No unauthorized data exposure
  • Privacy-First Approach
    • Minimal data collection
    • User data remains on specified servers
    • No cross-server data sharing without consent
  • Secure Communication
    • Encrypted transport protocols
    • Secure API key management
    • Environment variable protection

💾 Memory Management

  • Redis-Powered Persistence
    • Long-term conversation memory storage
    • Session persistence across restarts
    • Configurable memory retention
    • Easy memory toggle with commands
  • Chat History File Storage
    • Save complete chat conversations to files
    • Load previous conversations from saved files
    • Continue conversations from where you left off
    • Persistent chat history across sessions
    • File-based backup and restoration of conversations
  • Intelligent Context Management
    • Automatic context pruning
    • Relevant information retrieval
    • Memory-aware responses
    • Cross-session context maintenance
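
The context-pruning behavior above can be sketched as a simple budget loop. This is an illustrative approximation only, not MCPOmni Connect's actual implementation; the ~4-characters-per-token heuristic is an assumption:

```python
# Illustrative sketch of automatic context pruning (not the client's
# actual implementation): drop the oldest messages until the
# conversation fits an estimated token budget.

def estimate_tokens(text: str) -> int:
    # Rough heuristic: ~4 characters per token (an assumption).
    return max(1, len(text) // 4)

def prune_context(messages: list[dict], max_tokens: int) -> list[dict]:
    """Keep the most recent messages whose estimated total fits the budget."""
    kept: list[dict] = []
    total = 0
    for msg in reversed(messages):  # walk newest first
        cost = estimate_tokens(msg["content"])
        if total + cost > max_tokens:
            break
        kept.append(msg)
        total += cost
    return list(reversed(kept))  # restore chronological order

history = [
    {"role": "user", "content": "a" * 400},
    {"role": "assistant", "content": "b" * 400},
    {"role": "user", "content": "c" * 40},
]
pruned = prune_context(history, max_tokens=130)
print(len(pruned))  # → 2 (the oldest message no longer fits)
```

A real client would also preserve the system prompt and use the model tokenizer for exact counts.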

💬 Prompt Management

  • Advanced Prompt Handling
    • Dynamic prompt discovery across servers
    • Flexible argument parsing (JSON and key-value formats)
    • Cross-server prompt coordination
    • Intelligent prompt validation
    • Context-aware prompt execution
    • Real-time prompt responses
    • Support for complex nested arguments
    • Automatic type conversion and validation
  • Client-Side Sampling Support
    • Dynamic sampling configuration from client
    • Flexible LLM response generation
    • Customizable sampling parameters
    • Real-time sampling adjustments

🛠️ Tool Orchestration

  • Dynamic Tool Discovery & Management
    • Automatic tool capability detection
    • Cross-server tool coordination
    • Intelligent tool selection based on context
    • Real-time tool availability updates

📦 Resource Management

  • Universal Resource Access
    • Cross-server resource discovery
    • Unified resource addressing
    • Automatic resource type detection
    • Smart content summarization

🔄 Server Management

  • Advanced Server Handling
    • Multiple simultaneous server connections
    • Automatic server health monitoring
    • Graceful connection management
    • Dynamic capability updates

🏗️ Architecture

Core Components

MCPOmni Connect
├── Transport Layer
│   ├── Stdio Transport
│   ├── SSE Transport
│   └── Docker Integration
├── Session Management
│   ├── Multi-Server Orchestration
│   └── Connection Lifecycle Management
├── Tool Management
│   ├── Dynamic Tool Discovery
│   ├── Cross-Server Tool Routing
│   └── Tool Execution Engine
└── AI Integration
    ├── LLM Processing
    ├── Context Management
    └── Response Generation

🚀 Getting Started

Prerequisites

  • Python 3.10+
  • LLM API key
  • UV package manager (recommended)
  • Redis server (optional, for persistent memory)

Install using package manager

# with uv recommended
uv add mcpomni-connect
# using pip
pip install mcpomni-connect

Configuration

# Set up environment variables
echo "LLM_API_KEY=your_key_here" > .env
# Optional: Configure Redis connection
echo "REDIS_HOST=localhost" >> .env
echo "REDIS_PORT=6379" >> .env
echo "REDIS_DB=0" >> .env
# Configure your servers in servers_config.json
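
For reference, the `.env` lines written above follow the plain `KEY=value` format. A minimal stdlib-only reader looks like the sketch below; this is illustrative only (real projects, possibly including this one, typically use a library such as python-dotenv):

```python
# Minimal .env reader sketch (illustrative; not the client's actual
# loader). Parses KEY=value lines, ignoring blanks and # comments.

def load_env(text: str) -> dict[str, str]:
    values: dict[str, str] = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        values[key.strip()] = value.strip().strip('"')
    return values

env = load_env("LLM_API_KEY=your_key_here\nREDIS_HOST=localhost\nREDIS_PORT=6379")
print(env["REDIS_PORT"])  # → 6379
```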

Start CLI

# Start the CLI (make sure your API key is exported or set in .env)
mcpomni_connect

🧪 Testing

Running Tests

# Run all tests with verbose output
pytest tests/ -v

# Run specific test file
pytest tests/test_specific_file.py -v

# Run tests with coverage report
pytest tests/ --cov=src --cov-report=term-missing

Test Structure

tests/
├── unit/           # Unit tests for individual components

Development Quick Start

  1. Installation

    # Clone the repository
    git clone https://github.com/Abiorh001/mcp_omni_connect.git
    cd mcp_omni_connect
    
    # Create and activate virtual environment
    uv venv
    source .venv/bin/activate
    
    # Install dependencies
    uv sync
    
  2. Configuration

    # Set up environment variables
    echo "LLM_API_KEY=your_key_here" > .env
    
    # Configure your servers in servers_config.json
    
  3. Start Client

    # Start the client
    uv run src/main.py  # or: python src/main.py
    

Server Configuration Examples

{   
    "LLM": {
    "provider": "openai",  // Supports: "openai", "openrouter", "groq", "gemini", "deepseek"
        "model": "gpt-4",      // Any model from supported providers
        "temperature": 0.5,
        "max_tokens": 5000,
        "max_context_length": 30000, // Maximum context length for the model
        "top_p": 0
    },
    "mcpServers": {
        "filesystem-server": {
            "command": "npx",
            "args": [
                "@modelcontextprotocol/server-filesystem",
                "/path/to/files"
            ]
        },
        "sse-server": {
            "type": "sse",
            "url": "http://localhost:3000/mcp",
            "headers": {
                "Authorization": "Bearer token"
            }
        },
        "docker-server": {
            "command": "docker",
            "args": ["run", "-i", "--rm", "mcp/server"]
        }
    }
}
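
Note that the `//` comments above are for annotation only; standard JSON (and `json.loads`) rejects them, so a real `servers_config.json` must omit them. A sketch of loading and sanity-checking such a config follows; the field names mirror the example above, but the validation rules are illustrative, not the client's actual checks:

```python
# Sketch: load and sanity-check a comment-free servers_config.json.
# Validation here is illustrative, not MCPOmni Connect's real logic.
import json

CONFIG = """
{
    "LLM": {"provider": "openai", "model": "gpt-4", "temperature": 0.5,
            "max_tokens": 5000, "max_context_length": 30000, "top_p": 0},
    "mcpServers": {
        "filesystem-server": {
            "command": "npx",
            "args": ["@modelcontextprotocol/server-filesystem", "/path/to/files"]
        }
    }
}
"""

def load_config(raw: str) -> dict:
    config = json.loads(raw)  # note: json.loads rejects // comments
    for section in ("LLM", "mcpServers"):
        if section not in config:
            raise ValueError(f"missing required section: {section}")
    for name, server in config["mcpServers"].items():
        # stdio servers need a command; SSE servers need a URL
        if "command" not in server and "url" not in server:
            raise ValueError(f"server {name!r} needs a 'command' or 'url'")
    return config

config = load_config(CONFIG)
print(sorted(config["mcpServers"]))  # → ['filesystem-server']
```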

🎯 Usage

Interactive Commands

  • /tools - List all available tools across servers
  • /prompts - View available prompts
  • /prompt:<name>/<args> - Execute a prompt with arguments
  • /resources - List available resources
  • /resource:<uri> - Access and analyze a resource
  • /debug - Toggle debug mode
  • /refresh - Update server capabilities
  • /memory - Toggle Redis memory persistence (on/off)
  • /mode:auto - Switch to autonomous agentic mode
  • /mode:chat - Switch back to interactive chat mode

Memory and Chat History

# Enable Redis memory persistence
/memory

# Check memory status
Memory persistence is now ENABLED using Redis

# Disable memory persistence
/memory

# Check memory status
Memory persistence is now DISABLED
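
The `/memory` toggle behavior can be modeled as a simple state flip. This sketch is illustrative only; the real client backs the enabled state with Redis, while this version only tracks the flag in memory:

```python
# Illustrative sketch of the /memory toggle (the real client persists
# conversations to Redis when enabled; this only tracks the flag).
class MemoryToggle:
    def __init__(self) -> None:
        self.enabled = False

    def toggle(self) -> str:
        self.enabled = not self.enabled
        state = "ENABLED using Redis" if self.enabled else "DISABLED"
        return f"Memory persistence is now {state}"

memory = MemoryToggle()
print(memory.toggle())  # → Memory persistence is now ENABLED using Redis
print(memory.toggle())  # → Memory persistence is now DISABLED
```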

Operation Modes

# Switch to autonomous mode
/mode:auto

# System confirms mode change
Now operating in AUTONOMOUS mode. I will execute tasks independently.

# Switch back to chat mode
/mode:chat

# System confirms mode change
Now operating in CHAT mode. I will ask for approval before executing tasks.

Mode Differences

  • Chat Mode (Default)

    • Requires explicit approval for tool execution
    • Interactive conversation style
    • Step-by-step task execution
    • Detailed explanations of actions
  • Autonomous Mode

    • Independent task execution
    • Self-guided decision making
    • Automatic tool selection and chaining
    • Progress updates and final results
    • Complex task decomposition
    • Error handling and recovery
  • Orchestrator Mode

    • Advanced planning for complex multi-step tasks
    • Strategic delegation across multiple MCP servers
    • Intelligent agent coordination and communication
    • Parallel task execution when possible
    • Dynamic resource allocation
    • Sophisticated workflow management
    • Real-time progress monitoring across agents
    • Adaptive task prioritization

Prompt Management

# List all available prompts
/prompts

# Basic prompt usage
/prompt:weather/location=tokyo

# Prompt with multiple arguments (required arguments depend on the server's prompt definition)
/prompt:travel-planner/from=london/to=paris/date=2024-03-25

# JSON format for complex arguments
/prompt:analyze-data/{
    "dataset": "sales_2024",
    "metrics": ["revenue", "growth"],
    "filters": {
        "region": "europe",
        "period": "q1"
    }
}

# Nested argument structures
/prompt:market-research/target=smartphones/criteria={
    "price_range": {"min": 500, "max": 1000},
    "features": ["5G", "wireless-charging"],
    "markets": ["US", "EU", "Asia"]
}
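
The two `/prompt` argument formats shown above (key=value segments and embedded JSON) can be parsed roughly as follows. This is an illustrative sketch of the command syntax, not the client's actual parser, and it does not handle JSON values that themselves contain `/`:

```python
# Illustrative parser for /prompt:<name>/<args> commands, supporting
# key=value segments and an embedded JSON object (sketch only; not the
# client's real parser).
import json

def parse_prompt_command(command: str) -> tuple[str, dict]:
    body = command.removeprefix("/prompt:")
    if "/{" in body:  # pure JSON form: /prompt:name/{...}
        name, _, payload = body.partition("/")
        return name, json.loads(payload)
    name, *segments = body.split("/")
    args: dict = {}
    for segment in segments:
        key, _, value = segment.partition("=")
        try:
            # Values that look like JSON (e.g. criteria={...}) are decoded.
            args[key] = json.loads(value)
        except json.JSONDecodeError:
            args[key] = value  # plain string value
    return name, args

name, args = parse_prompt_command("/prompt:weather/location=tokyo")
print(name, args)  # → weather {'location': 'tokyo'}
```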

Advanced Prompt Features

  • Argument Validation: Automatic type checking and validation
  • Default Values: Smart handling of optional arguments
  • Context Awareness: Prompts can access previous conversation context
  • Cross-Server Execution: Seamless execution across multiple MCP servers
  • Error Handling: Graceful handling of invalid arguments with helpful messages
  • Dynamic Help: Detailed usage information for each prompt

AI-Powered Interactions

The client intelligently:

  • Chains multiple tools together
  • Provides context-aware responses
  • Automatically selects appropriate tools
  • Handles errors gracefully
  • Maintains conversation context

Model Support

  • OpenAI Models
    • Full support for all OpenAI models
    • Native function calling for compatible models
    • ReAct Agent fallback for older models
  • OpenRouter Models
    • Access to all OpenRouter-hosted models
    • Unified interface for model interaction
    • Automatic capability detection
  • Groq Models
    • Support for all Groq models
    • Ultra-fast inference capabilities
    • Seamless integration with tool system
  • Universal Model Support
    • Custom ReAct Agent for models without function calling
    • Dynamic tool execution based on model capabilities
    • Intelligent fallback mechanisms

🔧 Advanced Features

Tool Orchestration

# Example of automatic tool chaining (when the needed tools are available on connected servers)
User: "Find charging stations near Silicon Valley and check their current status"

# Client automatically:
1. Uses Google Maps API to locate Silicon Valley
2. Searches for charging stations in the area
3. Checks station status through EV network API
4. Formats and presents results
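
The chaining above can be sketched as a minimal loop that feeds each tool's output into the next step. The tool names and fixed plan below are hypothetical stand-ins for illustration, not MCPOmni Connect's real API (in the real client, the LLM chooses the next tool dynamically):

```python
# Minimal tool-chaining loop (illustrative; tool names and the fixed
# plan are hypothetical, not the client's actual API).
from typing import Callable

TOOLS: dict[str, Callable[[str], str]] = {
    "geocode": lambda q: f"coordinates({q})",
    "find_stations": lambda loc: f"stations near {loc}",
    "check_status": lambda s: f"status of {s}: available",
}

def run_plan(steps: list[tuple[str, str]]) -> list[str]:
    """Execute a plan: '{prev}' in an argument is replaced by the
    previous tool's output, chaining results together."""
    observations: list[str] = []
    prev = ""
    for tool_name, arg in steps:
        result = TOOLS[tool_name](arg.replace("{prev}", prev))
        observations.append(result)
        prev = result
    return observations

plan = [
    ("geocode", "Silicon Valley"),
    ("find_stations", "{prev}"),
    ("check_status", "{prev}"),
]
for obs in run_plan(plan):
    print(obs)
```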

Resource Analysis

# Automatic resource processing
User: "Analyze the contents of /path/to/document.pdf"

# Client automatically:
1. Identifies resource type
2. Extracts content
3. Processes through LLM
4. Provides intelligent summary
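
Step 1 above (resource type identification) typically relies on MIME-type inference from the resource URI. A stdlib sketch, illustrative only and not the client's actual detection logic:

```python
# Illustrative resource-type detection using the stdlib mimetypes
# module (a sketch of step 1 above, not the client's actual logic).
import mimetypes

def detect_resource_type(uri: str) -> str:
    mime, _encoding = mimetypes.guess_type(uri)
    return mime or "application/octet-stream"  # fallback for unknown types

print(detect_resource_type("/path/to/document.pdf"))  # → application/pdf
```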

Demo

(Demo GIF embedded in the original README: mcp_client_new1)

🔍 Troubleshooting

Common Issues and Solutions

  1. Connection Issues

    Error: Could not connect to MCP server
    
    • Check if the server is running
    • Verify server configuration in servers_config.json
    • Ensure network connectivity
    • Check server logs for errors
  2. API Key Issues

    Error: Invalid API key
    
    • Verify API key is correctly set in .env
    • Check if API key has required permissions
    • Ensure API key is for correct environment (production/development)
  3. Redis Connection

    Error: Could not connect to Redis
    
    • Verify Redis server is running
    • Check Redis connection settings in .env
    • Ensure Redis password is correct (if configured)
  4. Tool Execution Failures

    Error: Tool execution failed
    
    • Check tool availability on connected servers
    • Verify tool permissions
    • Review tool arguments for correctness

Debug Mode

Enable debug mode for detailed logging:

/debug

For additional support, please:

  1. Check the Issues page
  2. Review closed issues for similar problems
  3. Open a new issue with detailed information if needed

🤝 Contributing

We welcome contributions! See our Contributing Guide for details.

📄 License

This project is licensed under the MIT License - see the LICENSE file for details.

📬 Contact & Support


Built with ❤️ by the MCPOmni Connect Team
