A template implementation of the Model Context Protocol (MCP) server for managing tasks and projects. This server provides a comprehensive task management system with support for project organization, task tracking, and PRD parsing.
This project demonstrates how to build an MCP server that enables AI agents to manage tasks, track project progress, and break down Product Requirements Documents (PRDs) into actionable tasks. It serves as a practical template for creating your own MCP servers with task management capabilities.
The implementation follows the best practices laid out by Anthropic for building MCP servers, allowing seamless integration with any MCP-compatible client.
The server provides several essential task management tools:

### Task Management

- `create_task_file`: Create new project task files
- `add_task`: Add tasks to projects with descriptions and subtasks
- `update_task_status`: Update the status of tasks and subtasks
- `get_next_task`: Get the next uncompleted task from a project

### Project Planning

- `parse_prd`: Convert PRDs into structured tasks automatically
- `expand_task`: Break down tasks into smaller, manageable subtasks
- `estimate_task_complexity`: Estimate task complexity and time requirements
- `get_task_dependencies`: Track task dependencies

### Development Support

- `generate_task_file`: Generate file templates based on task descriptions
- `suggest_next_actions`: Get AI-powered suggestions for next steps

Install `uv` if you don't have it:
```bash
pip install uv
```
Clone this repository:

```bash
git clone https://github.com/coleam00/mcp-mem0.git
cd mcp-mem0
```
Install dependencies:

```bash
uv pip install -e .
```
Create a `.env` file based on `.env.example`:

```bash
cp .env.example .env
```

Configure your environment variables in the `.env` file (see the Configuration section).
Build the Docker image:

```bash
docker build -t task-manager-mcp --build-arg PORT=8050 .
```
Create a `.env` file based on `.env.example` and configure your environment variables.
The following environment variables can be configured in your `.env` file:

| Variable | Description | Example |
|----------|-------------|---------|
| TRANSPORT | Transport protocol (`sse` or `stdio`) | `sse` |
| HOST | Host to bind to when using SSE transport | `0.0.0.0` |
| PORT | Port to listen on when using SSE transport | `8050` |
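For example, a minimal `.env` for SSE transport might look like the following (the `LLM_*` values mirror those shown in the stdio client configuration below; substitute your own provider settings):

```
TRANSPORT=sse
HOST=0.0.0.0
PORT=8050
LLM_PROVIDER=openai
LLM_API_KEY=YOUR-API-KEY
LLM_CHOICE=gpt-4
```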
```bash
# Set TRANSPORT=sse in .env, then:
python3 src/main.py
```
The server will start on the configured host and port (default: http://0.0.0.0:8050).
```bash
docker build -t task-manager-mcp .
docker run --env-file .env -p 8050:8050 task-manager-mcp
```
Create a task file for a project:

```python
await mcp.create_task_file(project_name="my-project")
```
Add a task with subtasks:

```python
await mcp.add_task(
    project_name="my-project",
    title="Setup Development Environment",
    description="Configure the development environment with required tools",
    subtasks=[
        "Install dependencies",
        "Configure linters",
        "Set up testing framework"
    ]
)
```
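The on-disk layout the calls above produce depends on the implementation, but as a purely illustrative sketch, a project task file might look like:

```markdown
# my-project

## Setup Development Environment
Configure the development environment with required tools

- [ ] Install dependencies
- [ ] Configure linters
- [ ] Set up testing framework
```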
Parse a PRD into structured tasks:

```python
await mcp.parse_prd(
    project_name="my-project",
    prd_content="# Your PRD content..."
)
```
Mark a subtask as done:

```python
await mcp.update_task_status(
    project_name="my-project",
    task_title="Setup Development Environment",
    subtask_title="Install dependencies",
    status="done"
)
```
Get the next uncompleted task:

```python
next_task = await mcp.get_next_task(project_name="my-project")
```
Break a task into smaller subtasks:

```python
await mcp.expand_task(
    project_name="my-project",
    task_title="Implement Authentication"
)
```
Generate a file template from a task description:

```python
await mcp.generate_task_file(
    project_name="my-project",
    task_title="User Authentication"
)
```
Estimate task complexity:

```python
complexity = await mcp.estimate_task_complexity(
    project_name="my-project",
    task_title="User Authentication"
)
```
Get AI-powered suggestions for next steps:

```python
suggestions = await mcp.suggest_next_actions(
    project_name="my-project",
    task_title="User Authentication"
)
```
To connect to the server using SSE transport, use this configuration:

```json
{
  "mcpServers": {
    "task-manager": {
      "transport": "sse",
      "url": "http://localhost:8050/sse"
    }
  }
}
```
For stdio transport, use this configuration:

```json
{
  "mcpServers": {
    "task-manager": {
      "command": "python3",
      "args": ["src/main.py"],
      "env": {
        "TRANSPORT": "stdio",
        "LLM_PROVIDER": "openai",
        "LLM_API_KEY": "YOUR-API-KEY",
        "LLM_CHOICE": "gpt-4"
      }
    }
  }
}
```
This template provides a foundation for building more complex task management MCP servers. To extend it, register new tools with the `@mcp.tool()` decorator.