This is a Model Context Protocol (MCP) server for interacting with the Meshy AI API. It provides tools for generating 3D models from text and images, applying textures, and remeshing models.
Clone this repository:
git clone https://github.com/pasie15/scenario.com-mcp-server
cd meshy-ai-mcp-server
(Recommended) Set up a virtual environment:
Using venv:
python -m venv .venv
# On Windows
.\.venv\Scripts\activate
# On macOS/Linux
source .venv/bin/activate
Using Conda:
conda create --name meshy-mcp python=3.9 # Or your preferred Python version
conda activate meshy-mcp
Install the MCP package:
pip install mcp
Install dependencies:
pip install -r requirements.txt
Create a .env file with your Meshy AI API key:
cp .env.example .env
# Edit .env and add your API key
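For reference, a minimal .env might look like this (the key value is a placeholder; MCP_PORT and TASK_TIMEOUT are optional and shown with the defaults documented below):
MESHY_API_KEY=your-meshy-api-key
MCP_PORT=8081
TASK_TIMEOUT=300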
You can start the server directly with Python:
python src/server.py
Or using the MCP CLI:
mcp run config.json
Add this MCP server configuration to your Cline/Roo-Cline/Cursor/VS Code settings (e.g., .vscode/settings.json or user settings):
{
  "mcpServers": {
    "meshy-ai": {
      "command": "python",
      "args": [
        "path/to/your/meshy-ai-mcp-server/src/server.py" // <-- Make sure this path is correct!
      ],
      "disabled": false,
      "autoApprove": [],
      "alwaysAllow": []
    }
  }
}
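If you prefer not to keep the key in a .env file, these clients also accept an env block in the server entry; a sketch, with a placeholder key value:
{
  "mcpServers": {
    "meshy-ai": {
      "command": "python",
      "args": [
        "path/to/your/meshy-ai-mcp-server/src/server.py"
      ],
      "env": {
        "MESHY_API_KEY": "your-meshy-api-key"
      }
    }
  }
}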
For development and debugging, run the server using mcp dev:
mcp dev src/server.py
When running with mcp dev, you'll see output like:
Starting MCP inspector...
⚙️ Proxy server listening on port 6277
🔍 MCP Inspector is up and running at http://127.0.0.1:6274 🚀
New SSE connection
You can open the inspector URL in your browser to monitor MCP communication.
The server provides the following tools:
create_text_to_3d_task: Generate a 3D model from a text prompt
create_image_to_3d_task: Generate a 3D model from an image
create_text_to_texture_task: Apply textures to a 3D model using text prompts
create_remesh_task: Remesh and optimize a 3D model
retrieve_text_to_3d_task: Get details of a Text to 3D task
retrieve_image_to_3d_task: Get details of an Image to 3D task
retrieve_text_to_texture_task: Get details of a Text to Texture task
retrieve_remesh_task: Get details of a Remesh task
list_text_to_3d_tasks: List Text to 3D tasks
list_image_to_3d_tasks: List Image to 3D tasks
list_text_to_texture_tasks: List Text to Texture tasks
list_remesh_tasks: List Remesh tasks
stream_text_to_3d_task: Stream updates for a Text to 3D task
stream_image_to_3d_task: Stream updates for an Image to 3D task
stream_text_to_texture_task: Stream updates for a Text to Texture task
stream_remesh_task: Stream updates for a Remesh task
get_balance: Check your Meshy AI account balance
The server also provides the following resources:
health://status: Health check endpoint
task://{task_type}/{task_id}: Access task details by type and ID
The server can be configured using environment variables:
MESHY_API_KEY: Your Meshy AI API key (required)
MCP_PORT: Port for the MCP server to listen on (default: 8081)
TASK_TIMEOUT: Maximum time to wait for a task to complete when streaming (default: 300 seconds)
You can create a Text to 3D task from a client script:
from mcp.client import MCPClient
client = MCPClient()
result = client.use_tool(
    "meshy-ai",
    "create_text_to_3d_task",
    {
        "request": {
            "mode": "preview",
            "prompt": "a monster mask",
            "art_style": "realistic",
            "should_remesh": True
        }
    }
)
print(f"Task ID: {result['id']}")
You can retrieve the status of an existing Text to 3D task:
from mcp.client import MCPClient
client = MCPClient()
task_id = "your-task-id"
result = client.use_tool(
    "meshy-ai",
    "retrieve_text_to_3d_task",
    {
        "task_id": task_id
    }
)
print(f"Status: {result['status']}")
This project is licensed under the MIT License - see the LICENSE file for details.