The ultimate vibe coding tool
An MCP server that generates music based on your coding context. It supports multiple audio generation backends, including the Stable Audio API and Udio, to create short music clips that match what you're working on, with built-in playback and crossfading between tracks.
Clone this repository
Install dependencies:
npm install
Install FFmpeg (required for audio playback):
# macOS
brew install ffmpeg
# Ubuntu/Debian
sudo apt-get install ffmpeg
# Windows (using Chocolatey)
choco install ffmpeg
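FFmpeg supplies both the playback binary (ffplay) and the acrossfade audio filter that smooth transitions between clips can be built on. The snippet below only illustrates that approach and is not the project's actual playback code; the file names and the 3-second fade are placeholders.

```typescript
// Illustrative only: shelling out to the FFmpeg tools for playback and crossfading.
import { spawn } from "node:child_process";

// Play a generated clip with ffplay (installed alongside ffmpeg).
spawn("ffplay", ["-nodisp", "-autoexit", "clip-a.mp3"], { stdio: "ignore" });

// Render a 3-second crossfade between two clips with the acrossfade filter.
spawn("ffmpeg", [
  "-i", "clip-a.mp3",
  "-i", "clip-b.mp3",
  "-filter_complex", "acrossfade=d=3",
  "crossfaded.mp3",
]);
```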
Build the project:
npm run build
Set up your API keys in the .env file:
STABLE_AUDIO_KEY=your_stable_audio_key_here
PIAPI_KEY=your_piapi_key_here
Note: You need at least one of these keys to use the MCP server. If you have both, the server will choose the appropriate one based on the generation mode.
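A minimal startup check for this requirement, assuming dotenv is used to load the .env file (a sketch, not the server's actual source):

```typescript
// Hypothetical startup validation: at least one backend key must be configured.
import "dotenv/config";

const hasStableAudio = Boolean(process.env.STABLE_AUDIO_KEY);
const hasPiapi = Boolean(process.env.PIAPI_KEY);

if (!hasStableAudio && !hasPiapi) {
  throw new Error("Set STABLE_AUDIO_KEY and/or PIAPI_KEY in the .env file");
}
```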
To use this as an MCP server with Cline, you need to add it to your MCP settings file:
Edit your MCP settings file:
# For VSCode
vim ~/Library/Application\ Support/Code/User/globalStorage/saoudrizwan.claude-dev/settings/cline_mcp_settings.json
# For Cursor
vim ~/Library/Application\ Support/Cursor/User/globalStorage/saoudrizwan.claude-dev/settings/cline_mcp_settings.json
# For Claude Desktop
vim ~/Library/Application\ Support/Claude/claude_desktop_config.json
Add the following configuration:
{
  "mcpServers": {
    "vibe-mcp": {
      "autoApprove": [
        "start_vibe_session",
        "generate_more_music",
        "stop_vibe_session"
      ],
      "command": "node",
      "args": ["/path/to/vibe-mcp/build/index.js"],
      "env": {
        "STABLE_AUDIO_KEY": "your_stable_audio_key_here",
        "PIAPI_KEY": "your_piapi_key_here"
      }
    }
  }
}
Optional command-line arguments:
- lyrical to enable lyrical mode by default: ["/path/to/vibe-mcp/build/index.js", "lyrical"]
- diffrhythm to use the DiffRhythm generator: ["/path/to/vibe-mcp/build/index.js", "lyrical diffrhythm"]
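How these flags are read is an implementation detail of the entry point; a tolerant sketch (not the project's actual parser) that accepts both separate tokens and a single "lyrical diffrhythm" string could look like this:

```typescript
// Hypothetical argument handling for the optional "lyrical" and "diffrhythm" flags.
const tokens = process.argv
  .slice(2)                            // drop the node binary and script path
  .flatMap((arg) => arg.split(/\s+/)); // split "lyrical diffrhythm" passed as one string

const lyricalByDefault = tokens.includes("lyrical");
const useDiffRhythm = tokens.includes("diffrhythm");
console.log({ lyricalByDefault, useDiffRhythm });
```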
Restart Cline or Claude Desktop
The MCP server provides three tools:
start_vibe_session
Starts a new music generation session based on the current coding context.
Parameters:
- code (required): The current coding context from the user's session
- genre (optional): Music genre to generate (e.g., "lo-fi house", "synthwave", "ambient")
- mode (optional): The mode to generate music in (e.g., 'instrumental', 'lyrical')
- language (optional): The programming language of the code (e.g., 'javascript', 'python', 'rust')
Returns:
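As an example, a client issues a tools/call request whose arguments object carries these parameters (the values below are purely illustrative):

```typescript
// Illustrative MCP tools/call payload for start_vibe_session; values are placeholders.
const startRequest = {
  jsonrpc: "2.0",
  id: 1,
  method: "tools/call",
  params: {
    name: "start_vibe_session",
    arguments: {
      code: "function fibonacci(n) { /* ... */ }",
      genre: "lo-fi house",
      mode: "instrumental",
      language: "javascript",
    },
  },
};
```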
generate_more_music
Generates more music as the previous chunk is almost finished.
Parameters:
- code (required): The current coding context from the user's session
- genre (optional): Music genre to generate (defaults to the genre used in the session)
- mode (optional): The mode to generate music in (defaults to the mode used in the session)
- language (optional): The programming language of the code (defaults to the language used in the session)
Returns:
stop_vibe_session
Stops the music generation session.
Parameters:
Returns:
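For orientation, an MCP server usually advertises and dispatches tools like these through the official TypeScript SDK's list/call handlers. The sketch below only mirrors that general pattern; it is not the project's actual index.ts, the schemas are trimmed, and the call handler is a stub.

```typescript
import { Server } from "@modelcontextprotocol/sdk/server/index.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { CallToolRequestSchema, ListToolsRequestSchema } from "@modelcontextprotocol/sdk/types.js";

const server = new Server(
  { name: "vibe-mcp", version: "0.0.0" },
  { capabilities: { tools: {} } }
);

// Advertise the three tools to the connected client (schemas trimmed for brevity).
server.setRequestHandler(ListToolsRequestSchema, async () => ({
  tools: [
    {
      name: "start_vibe_session",
      description: "Start a music session from the current coding context",
      inputSchema: { type: "object", properties: { code: { type: "string" } }, required: ["code"] },
    },
    {
      name: "generate_more_music",
      description: "Queue the next clip before the current one finishes",
      inputSchema: { type: "object", properties: { code: { type: "string" } }, required: ["code"] },
    },
    {
      name: "stop_vibe_session",
      description: "Stop playback and end the session",
      inputSchema: { type: "object", properties: {} },
    },
  ],
}));

// Dispatch tool calls to the (stubbed) generation and playback layer.
server.setRequestHandler(CallToolRequestSchema, async (request) => {
  const { name } = request.params;
  // ...generate audio, manage the playback queue, etc...
  return { content: [{ type: "text", text: `Handled ${name}` }] };
});

await server.connect(new StdioServerTransport());
```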
The MCP server supports multiple audio generators:
Uses the Stability AI API to generate instrumental music. Requires a STABLE_AUDIO_KEY.
Uses the PiAPI Udio service to generate both instrumental and lyrical music. Requires a PIAPI_KEY.
Uses the PiAPI DiffRhythm service for specialized music generation. Requires a PIAPI_KEY and the diffrhythm command-line argument.
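One way to picture the relationship between the backends is a shared interface plus a small selection rule driven by mode, configured keys, and the diffrhythm flag. The types and the preference order below are assumptions made for illustration, not the project's actual design.

```typescript
// Hypothetical generator abstraction; names, shapes, and stub paths are illustrative.
interface MusicGenerator {
  /** Generate a short clip for the prompt and resolve to a local audio file path. */
  generate(prompt: string, mode: "instrumental" | "lyrical"): Promise<string>;
}

const stableAudio: MusicGenerator = { generate: async () => "/tmp/stable-audio.mp3" }; // STABLE_AUDIO_KEY
const udio: MusicGenerator = { generate: async () => "/tmp/udio.mp3" };                // PIAPI_KEY
const diffRhythm: MusicGenerator = { generate: async () => "/tmp/diffrhythm.mp3" };    // PIAPI_KEY + flag

function selectGenerator(
  mode: "instrumental" | "lyrical",
  useDiffRhythm: boolean
): MusicGenerator {
  if (useDiffRhythm) return diffRhythm; // opted in via the diffrhythm CLI argument
  if (mode === "lyrical") return udio;  // lyrics are only supported through PiAPI Udio
  // Assumed preference: use Stable Audio for instrumental clips when its key is set.
  return process.env.STABLE_AUDIO_KEY ? stableAudio : udio;
}
```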
The MCP server supports two generation modes:
Generates music without lyrics. This is the default mode and can use either the Stability AI API or the PiAPI service.
Generates music with AI-generated lyrics based on your code context. This mode requires the PiAPI Udio service and can be enabled by:
- Setting mode: "lyrical" in the tool parameters
- Passing the lyrical command-line argument when starting the server
- Passing lyrical diffrhythm to the command-line arguments

License: MIT