The Kafka MCP Server is a Model Context Protocol (MCP) server that integrates with Apache Kafka, enabling LLMs to interact with a Kafka cluster using natural language.
You will need either Docker or Go (Golang) to run the MCP server locally.
You need to have access to a Kafka cluster. Following the quickstart doc:
docker pull apache/kafka:4.0.0
docker run -p 9092:9092 apache/kafka:4.0.0
Kafka is now available on localhost:9092.
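If you want to sanity-check the broker before wiring up the MCP server, the apache/kafka image bundles Kafka's CLI tools under /opt/kafka/bin. A minimal check (assuming the quickstart container above is the only one running from that image):

# List topics on the quickstart broker via the CLI tools bundled in the image.
docker exec -it $(docker ps -q --filter ancestor=apache/kafka:4.0.0) \
  /opt/kafka/bin/kafka-topics.sh --bootstrap-server localhost:9092 --list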
To run the server with Docker, add the following to your MCP client configuration:

{
  "mcpServers": {
    "kafka": {
      "command": "docker",
      "args": [
        "run",
        "-i",
        "--rm",
        "-e",
        "KAFKA_MCP_BOOTSTRAP_SERVERS",
        "ghcr.io/cefboud/kafka-mcp-server"
      ],
      "env": {
        "KAFKA_MCP_BOOTSTRAP_SERVERS": "localhost:9092"
      }
    }
  }
}
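Note that when the server itself runs in Docker, localhost inside its container refers to the container, not your machine. If the broker is not reachable that way, Docker Desktop users would typically point the server at the host instead; the hostname below is Docker Desktop's built-in alias for the host and is an adaptation, not part of the project's documented config:

"env": {
  "KAFKA_MCP_BOOTSTRAP_SERVERS": "host.docker.internal:9092"
}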
Alternatively, build the server from source:

cd <workdir>
git clone https://github.com/CefBoud/kafka-mcp-server.git
cd kafka-mcp-server
go build -o kafka-mcp-server cmd/kafka-mcp-server/main.go
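A quick way to confirm the build is the help output (assuming the CLI follows the common convention of a --help flag; the stdio subcommand is taken from the config below):

# Print usage and available flags for the stdio transport.
./kafka-mcp-server stdio --help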
Then point your MCP client at the local binary:

{
  "mcpServers": {
    "kafka": {
      "command": "<workdir>/kafka-mcp-server/kafka-mcp-server",
      "args": [
        "stdio"
      ],
      "env": {
        "KAFKA_MCP_BOOTSTRAP_SERVERS": "localhost:9092"
      }
    }
  }
}
Options:

  --bootstrap-servers string   Comma-separated list of Kafka servers to connect to.
  --enable-command-logging     When enabled, the server logs all command requests and responses to the log file.
  --log-file string            Path to the log file.
  --read-only                  Restrict the server to read-only operations.

All options can also be passed as environment variables: uppercased, with hyphens replaced by underscores, and prefixed with KAFKA_MCP_. For example, --bootstrap-servers becomes KAFKA_MCP_BOOTSTRAP_SERVERS, as used in the configurations above.
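For example, the following two invocations are equivalent (a sketch, reusing the binary built above and the stdio transport shown earlier):

# Option passed as a command-line flag...
./kafka-mcp-server stdio --bootstrap-servers localhost:9092

# ...or the same option as an environment variable.
KAFKA_MCP_BOOTSTRAP_SERVERS=localhost:9092 ./kafka-mcp-server stdio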
Running many sequential tool calls, especially when each depends on the output of the previous one, can be tedious and time-consuming: it requires multiple round-trips between the client and the server.
MultiplexTool solves this by allowing the client to batch a list of tool calls into a single request, executing them in order. It supports dynamic dependencies between tools by letting you reference earlier outputs using prompt-based placeholders.
If a tool input depends on a previous result, the client uses the PROMPT_ARGUMENT: format to generate that input dynamically via a prompt to an LLM (currently only Gemini is supported).
Example:
"userId": "PROMPT_ARGUMENT: the ID of the created user"
CLI Flags:

  --enable-multiplex          Enables multiplexing of tool calls.
  --multiplex-model string    Specifies the model (e.g. gemini) used to infer PROMPT_ARGUMENTs. Requires the GEMINI_API_KEY environment variable.

For example, with Docker (note the extra -e flag so GEMINI_API_KEY is forwarded into the container):

{
  "mcpServers": {
    "kafka": {
      "command": "docker",
      "args": [
        "run",
        "-i",
        "--rm",
        "-e",
        "KAFKA_MCP_BOOTSTRAP_SERVERS",
        "-e",
        "GEMINI_API_KEY",
        "ghcr.io/cefboud/kafka-mcp-server",
        "--enable-multiplex",
        "--multiplex-model",
        "gemini"
      ],
      "env": {
        "KAFKA_MCP_BOOTSTRAP_SERVERS": "localhost:9092",
        "GEMINI_API_KEY": "....."
      }
    }
  }
}