A TypeScript-based MCP server that implements a memory system for LLMs. It provides tools for chatting with different LLM providers while maintaining conversation history.

## Tools
- `chat` - Send a message to the current LLM provider
- `get_memory` - Retrieve conversation history
  - Takes a `limit` parameter to specify the number of memories to retrieve
  - Use `limit: null` for unlimited memory retrieval
- `clear_memory` - Clear conversation history
- `use_provider` - Switch between different LLM providers
- `use_model` - Switch to a different model for the current provider
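These tools follow the standard MCP tool-call pattern over stdio. Below is a minimal, illustrative sketch of how the memory tools could be wired up with the official TypeScript SDK; the in-memory store and handler bodies are assumptions for illustration, not this server's actual implementation (a real server would also register a `tools/list` handler):

```typescript
import { Server } from "@modelcontextprotocol/sdk/server/index.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { CallToolRequestSchema } from "@modelcontextprotocol/sdk/types.js";

// Illustrative in-memory store; the real server's storage may differ.
interface Memory {
  role: "user" | "assistant";
  content: string;
}
const memories: Memory[] = [];

const server = new Server(
  { name: "letta-memgpt", version: "0.1.0" },
  { capabilities: { tools: {} } }
);

server.setRequestHandler(CallToolRequestSchema, async (request) => {
  const { name, arguments: args } = request.params;
  if (name === "get_memory") {
    // limit: null returns everything; limit: n returns the n most recent.
    const limit = (args as { limit?: number | null } | undefined)?.limit ?? null;
    const slice = limit === null ? memories : memories.slice(-limit);
    return { content: [{ type: "text", text: JSON.stringify(slice) }] };
  }
  if (name === "clear_memory") {
    memories.length = 0;
    return { content: [{ type: "text", text: "Memory cleared" }] };
  }
  throw new Error(`Unknown tool: ${name}`);
});

await server.connect(new StdioServerTransport());
```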
The `use_model` tool accepts the following Claude models when the Anthropic provider is active:

- `claude-3-haiku`: Fastest response times, ideal for tasks like customer support and content moderation
- `claude-3-sonnet`: Balanced performance for general-purpose use
- `claude-3-opus`: Advanced model for complex reasoning and high-performance tasks
- `claude-3.5-haiku`: Enhanced speed and cost-effectiveness
- `claude-3.5-sonnet`: Superior performance with computer interaction capabilities
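For example, a client can switch providers and models via tool calls. The sketch below uses the MCP TypeScript client SDK; the argument names (`provider`, `model`) and the server path are assumptions for illustration, since the exact tool schemas aren't shown here:

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

const client = new Client({ name: "example-client", version: "0.1.0" });

// Placeholder path: point this at your built server entry point.
await client.connect(
  new StdioClientTransport({
    command: "node",
    args: ["/path/to/memgpt-server/build/index.js"],
  })
);

// Argument names are assumed for illustration.
await client.callTool({ name: "use_provider", arguments: { provider: "anthropic" } });
await client.callTool({ name: "use_model", arguments: { model: "claude-3.5-sonnet" } });
```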
## Development

Install dependencies:

```bash
npm install
```
Build the server:

```bash
npm run build
```
For development with auto-rebuild:

```bash
npm run watch
```
## Installation

To use with Claude Desktop, add the server config:

- On MacOS: `~/Library/Application Support/Claude/claude_desktop_config.json`
- On Windows: `%APPDATA%/Claude/claude_desktop_config.json`
```json
{
  "mcpServers": {
    "letta-memgpt": {
      "command": "/path/to/memgpt-server/build/index.js",
      "env": {
        "OPENAI_API_KEY": "your-openai-key",
        "ANTHROPIC_API_KEY": "your-anthropic-key",
        "OPENROUTER_API_KEY": "your-openrouter-key"
      }
    }
  }
}
```
### Environment Variables

- `OPENAI_API_KEY` - Your OpenAI API key
- `ANTHROPIC_API_KEY` - Your Anthropic API key
- `OPENROUTER_API_KEY` - Your OpenRouter API key
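As an illustration only, a server like this might pick its startup provider based on which keys are set; the actual selection logic may differ:

```typescript
// Hypothetical sketch: default to the first provider whose API key is set.
// This is an assumption for illustration, not the server's documented behavior.
type Provider = "openai" | "anthropic" | "openrouter";

const keys: Record<Provider, string | undefined> = {
  openai: process.env.OPENAI_API_KEY,
  anthropic: process.env.ANTHROPIC_API_KEY,
  openrouter: process.env.OPENROUTER_API_KEY,
};

const available = (Object.keys(keys) as Provider[]).filter((p) => keys[p]);
if (available.length === 0) {
  throw new Error("Set at least one provider API key");
}
const defaultProvider: Provider = available[0];
```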
## Debugging

Since MCP servers communicate over stdio, debugging can be challenging. We recommend using the MCP Inspector:

```bash
npm run inspector
```
The Inspector will provide a URL to access debugging tools in your browser.
{ "limit": null }
with the get_memory
tool to retrieve all stored memories{ "limit": n }
to retrieve the n most recent memoriesSeamless access to top MCP servers powering the future of AI integration.
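Continuing the client sketch from earlier, both retrieval modes look like this (the server path is a placeholder, and the `limit` argument shape follows the conventions above):

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

const client = new Client({ name: "memory-example", version: "0.1.0" });
await client.connect(
  // Placeholder path to the built server.
  new StdioClientTransport({ command: "node", args: ["/path/to/memgpt-server/build/index.js"] })
);

// limit: null -> all stored memories.
const all = await client.callTool({ name: "get_memory", arguments: { limit: null } });

// limit: 5 -> the five most recent memories.
const recent = await client.callTool({ name: "get_memory", arguments: { limit: 5 } });

console.log(all, recent);
```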