Light MCP Agents is a lightweight framework for building and orchestrating AI agents using the Model Context Protocol (MCP). It enables hierarchical agent systems in which specialized agents can delegate tasks, share capabilities, and collaborate to solve complex problems. With a configuration-driven, composable architecture, you can quickly build sophisticated agent networks without extensive coding.
✅ Create agents connecting to any MCP Server with a simple configuration file.
✅ Create multi-agent workflows with no additional code. Just one config file per agent.
✅ Easily share your agent configurations for others to run.
# Clone the repository
git clone https://github.com/nicozumarraga/light-mcp-agents.git
cd light-mcp-agents
# Create a virtual environment
python -m venv venv
source venv/bin/activate # On Windows: venv\Scripts\activate
# Install dependencies
pip install -r requirements.txt
# Run the base agent example (simple client agent)
python src/agent/agent_runner.py --config examples/base_agent/base_agent_config.json
If successful, you'll be able to chat with the agent and use its tools!
In one terminal, start the research agent in server mode:
python src/agent/agent_runner.py --config examples/orchestrator_researcher/research_agent_config.json --server-mode
In a second terminal, start the orchestrator that connects to the research agent:
python src/agent/agent_runner.py --config examples/orchestrator_researcher/master_orchestrator_config.json
Now you can ask the orchestrator to research topics for you; it will either delegate to the specialized research agent or use its own tools directly.
Claude Desktop supports MCP agents through its configuration system. You can configure and run your agents directly in Claude, enabling it to use your custom capabilities.
Locate your Claude Desktop configuration file:
macOS: ~/Library/Application Support/Claude/claude_desktop_config.json
Windows: %APPDATA%\Claude\claude_desktop_config.json
Linux: ~/.config/Claude/claude_desktop_config.json
Add your agent configuration to the mcpServers section:
{
"mcpServers": {
"research-agent": {
"command": "/bin/bash",
"args": ["-c", "/path/to/your/venv/bin/python /path/to/your/agent_runner.py --config=/path/to/your/agent_config.json --server-mode"],
"env": {
"PYTHONPATH": "/path/to/your/project",
"PATH": "/path/to/your/venv/bin:/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin"
}
}
}
}
The repository includes example agents in the examples/ directory:
- Base Agent (examples/base_agent/)
- Orchestrator-Researcher (examples/orchestrator_researcher/)
🤝 Want to submit your own example architectures for the community? Reach out!
# Run an agent interactively
python src/agent/agent_runner.py --config <config_file_path>
# Run in server mode (makes it available to other agents)
python src/agent/agent_runner.py --config <config_file_path> --server-mode
# Run with a custom server name
python src/agent/agent_runner.py --config <config_file_path> --server-mode --server-name "my-custom-server"
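The runner's command-line interface can be pictured with a minimal argparse sketch. This is illustrative only; the real flag handling lives in src/agent/agent_runner.py and may differ in details:

```python
import argparse

def build_parser() -> argparse.ArgumentParser:
    # Mirrors the flags shown above: --config is required,
    # --server-mode and --server-name are optional.
    parser = argparse.ArgumentParser(description="Run a Light MCP agent")
    parser.add_argument("--config", required=True,
                        help="Path to the agent's JSON configuration file")
    parser.add_argument("--server-mode", action="store_true",
                        help="Expose this agent as an MCP server for other agents")
    parser.add_argument("--server-name", default=None,
                        help="Override the server name from the config file")
    return parser

args = build_parser().parse_args(
    ["--config", "my_agent_config.json", "--server-mode"]
)
print(args.config, args.server_mode)  # my_agent_config.json True
```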
Create a configuration file for your agent (e.g., my_agent_config.json
):
{
"agent_name": "my-agent",
"llm_provider": "groq",
"llm_api_key": "YOUR_API_KEY_HERE",
"server_mode": false,
"servers": {
"tool-server": {
"command": "npx",
"args": ["@anthropic-ai/mcp-tooling", "start", "-q"],
"env": {
"PORT": "3001"
}
}
}
}
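A quick way to sanity-check a config before running it is to load the JSON and verify the top-level keys. The key names below are taken from the example above; the validation helper itself is hypothetical, and treating server_mode as optional with a False default is an assumption:

```python
import json

REQUIRED_KEYS = {"agent_name", "llm_provider", "llm_api_key", "servers"}

def validate_config(raw: str) -> dict:
    # Parse the JSON and fail early on missing top-level keys.
    config = json.loads(raw)
    missing = REQUIRED_KEYS - config.keys()
    if missing:
        raise ValueError(f"config is missing keys: {sorted(missing)}")
    # Assume server_mode defaults to False when omitted.
    config.setdefault("server_mode", False)
    return config

example = """
{
  "agent_name": "my-agent",
  "llm_provider": "groq",
  "llm_api_key": "YOUR_API_KEY_HERE",
  "servers": {}
}
"""
config = validate_config(example)
print(config["agent_name"], config["server_mode"])  # my-agent False
```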
To create an agent with special capabilities:
{
"agent_name": "research-agent",
"llm_provider": "groq",
"llm_api_key": "YOUR_API_KEY_HERE",
"server_mode": true,
"server_name": "research-agent-server",
"capabilities": [
{
"name": "summarize_document",
"description": "Summarize a document in a concise way",
"input_schema": {
"type": "object",
"properties": {
"document_text": {
"type": "string",
"description": "The text of the document to summarize"
},
"max_length": {
"type": "integer",
"description": "Maximum length of the summary in words",
"default": 200
}
},
"required": ["document_text"]
},
"prompt_template": "Summarize the following document in {max_length} words or fewer:\n\n{document_text}"
}
],
"servers": {
"brave-search": {
"command": "docker",
"args": ["run", "-i", "--rm", "-e", "BRAVE_API_KEY", "mcp/brave-search"],
"env": {
"BRAVE_API_KEY": "YOUR_BRAVE_API_KEY"
}
}
}
}
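The prompt_template above is a plain format string over the capability's arguments. Here is a sketch of how a call might be rendered, applying schema defaults for omitted arguments; the render_prompt helper is hypothetical, and the framework's actual dispatch logic may differ:

```python
capability = {
    "name": "summarize_document",
    "input_schema": {
        "properties": {
            "document_text": {"type": "string"},
            "max_length": {"type": "integer", "default": 200},
        },
        "required": ["document_text"],
    },
    "prompt_template": (
        "Summarize the following document in {max_length} words or fewer:"
        "\n\n{document_text}"
    ),
}

def render_prompt(capability: dict, arguments: dict) -> str:
    # Start from schema defaults, then overlay the caller's arguments.
    props = capability["input_schema"]["properties"]
    values = {k: v["default"] for k, v in props.items() if "default" in v}
    values.update(arguments)
    missing = [k for k in capability["input_schema"]["required"] if k not in values]
    if missing:
        raise ValueError(f"missing required arguments: {missing}")
    return capability["prompt_template"].format(**values)

prompt = render_prompt(capability, {"document_text": "MCP lets agents share tools."})
print(prompt.splitlines()[0])
# Summarize the following document in 200 words or fewer:
```

The rendered prompt is what the agent's LLM ultimately sees, which is why defaults like max_length belong in the schema rather than in the template itself.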
Create an orchestrator that delegates to other agents:
{
"agent_name": "master-orchestrator",
"llm_provider": "groq",
"llm_api_key": "YOUR_API_KEY_HERE",
"server_mode": false,
"servers": {
"research-agent": {
"command": "python",
"args": ["src/agent/agent_runner.py", "--config=research_agent_config.json", "--server-mode"],
"env": {}
},
"kanye": {
"command": "npx",
"args": ["-y", "kanye-mcp"]
}
}
}
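Each entry under servers describes how to launch a child MCP server process. The sketch below shows one way such an entry could be turned into a launch command; the helper name and the env-merging behavior are assumptions for illustration, not the framework's actual code:

```python
import os

def build_launch_spec(name: str, spec: dict) -> tuple[list[str], dict]:
    # Combine the command and its args into an argv list, and merge the
    # entry's env on top of the parent process environment (assumed behavior).
    argv = [spec["command"], *spec.get("args", [])]
    env = {**os.environ, **spec.get("env", {})}
    return argv, env

servers = {
    "research-agent": {
        "command": "python",
        "args": ["src/agent/agent_runner.py",
                 "--config=research_agent_config.json", "--server-mode"],
        "env": {},
    },
}
argv, env = build_launch_spec("research-agent", servers["research-agent"])
print(argv[0], argv[-1])  # python --server-mode
```

Note that the child here is itself an agent started with --server-mode, which is what makes the hierarchy recursive: any agent can appear as a server in another agent's config.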
Capabilities are high-level functions that require LLM reasoning. When a capability is called, the agent fills the capability's prompt template with the provided arguments and runs it through its own LLM, which may call any of the agent's tools before returning a final result.
When you ask the orchestrator to research a topic, it invokes the research agent's research_topic capability, and the research agent in turn uses its own tools (such as brave_web_search) to gather results.
Here's an excerpt from the logs showing this in action:
2025-03-19 10:17:46,252 - INFO - agent:master-orchestrator - Executing tool: research_topic
2025-03-19 10:17:46,252 - INFO - agent:master-orchestrator - With arguments: {'topic': 'quantum computing', 'focus_areas': 'recent breakthroughs'}
2025-03-19 10:17:46,261 - INFO - mcp-server-wrapper:research-agent-server - Executing as capability: research_topic
2025-03-19 10:17:46,262 - INFO - agent:research-agent - Executing capability: research_topic with arguments: {'topic': 'quantum computing', 'focus_areas': 'recent breakthroughs'}
2025-03-19 10:17:46,973 - INFO - agent:research-agent - Executing tool: brave_web_search
2025-03-19 10:17:46,973 - INFO - agent:research-agent - With arguments: {'query': 'quantum computing recent breakthroughs', 'count': 10}
2025-03-19 10:17:49,839 - INFO - agent:research-agent - Capability research_topic execution completed
The recursive architecture is built on the Model Context Protocol (MCP) standard: because every agent can act as both an MCP client and an MCP server, agents can delegate tasks, share capabilities, and be composed into hierarchies of arbitrary depth.
The MCP Server resource management practices in this project were inspired by mcp-agents.