
Ollama MCP Server
Integrate Ollama's local LLM capabilities into your applications seamlessly. Access a full range of Ollama functionalities through a clean MCP interface, enabling model management and execution with ease. Enjoy the benefits of running AI models locally while maintaining control and privacy.
Installation
Installing for Claude Desktop
Manual Configuration Required
This MCP server requires manual configuration. Run the command below to open your configuration file:
npx mcpbar@latest edit -c claude
This will open your configuration file where you can add the Ollama MCP server manually.
A powerful bridge between Ollama and the Model Context Protocol (MCP), enabling seamless integration of Ollama's local LLM capabilities into your MCP-powered applications.
Features
Complete Ollama Integration
- Full API Coverage: Access all essential Ollama functionality through a clean MCP interface
- OpenAI-Compatible Chat: Drop-in replacement for OpenAI's chat completion API
- Local LLM Power: Run AI models locally with full control and privacy
Core Capabilities
Model Management
- Pull models from registries
- Push models to registries
- List available models
- Create custom models from Modelfiles
- Copy and remove models

Model Execution
- Run models with customizable prompts
- Chat completion API with system/user/assistant roles
- Configurable parameters (temperature, timeout)
- Raw mode support for direct responses

Server Control
- Start and manage the Ollama server
- View detailed model information
- Error handling and timeout management
Getting Started
Prerequisites
- Ollama installed on your system (see the quick check below)
- Node.js and npm/pnpm
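Before wiring up the server, it's worth confirming that Ollama itself is reachable. A quick check, assuming the default local endpoint:

curl http://127.0.0.1:11434/api/tags

This hits Ollama's model-listing endpoint and should return a JSON list of locally installed models; running ollama list from the CLI gives the same information.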
Installation
- Install dependencies:
pnpm install
- Build the server:
pnpm run build
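If the build succeeds, you can smoke-test the server by launching it directly. MCP servers communicate over stdio, so it will start silently and wait for a client to connect (this assumes the compiled entry point is build/index.js, as in the configuration below):

node build/index.js

Press Ctrl+C to stop it; in normal use, Claude Desktop launches this process for you.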
Configuration
Add the server to your MCP configuration:
For Claude Desktop:
macOS: ~/Library/Application Support/Claude/claude_desktop_config.json
Windows: %APPDATA%/Claude/claude_desktop_config.json
{
  "mcpServers": {
    "ollama": {
      "command": "node",
      "args": ["/path/to/ollama-server/build/index.js"],
      "env": {
        "OLLAMA_HOST": "http://127.0.0.1:11434" // Optional: customize Ollama API endpoint
      }
    }
  }
}
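Use an absolute path in args, and restart Claude Desktop after saving the file so the new server is picked up.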
Usage Examples
Pull and Run a Model
// Pull a model
await mcp.use_mcp_tool({
  server_name: "ollama",
  tool_name: "pull",
  arguments: {
    name: "llama2"
  }
});

// Run the model
await mcp.use_mcp_tool({
  server_name: "ollama",
  tool_name: "run",
  arguments: {
    name: "llama2",
    prompt: "Explain quantum computing in simple terms"
  }
});
Chat Completion (OpenAI-compatible)
await mcp.use_mcp_tool({
  server_name: "ollama",
  tool_name: "chat_completion",
  arguments: {
    model: "llama2",
    messages: [
      {
        role: "system",
        content: "You are a helpful assistant."
      },
      {
        role: "user",
        content: "What is the meaning of life?"
      }
    ],
    temperature: 0.7
  }
});
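Because the arguments mirror OpenAI's chat completion request body (a model name, a messages array of role/content pairs, and a temperature), existing OpenAI-style payloads can generally be passed through with little or no change.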
Create Custom Model
await mcp.use_mcp_tool({
  server_name: "ollama",
  tool_name: "create",
  arguments: {
    name: "custom-model",
    modelfile: "./path/to/Modelfile"
  }
});
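For reference, a minimal Modelfile might look like the sketch below. FROM, SYSTEM, and PARAMETER are standard Ollama Modelfile instructions; the base model and values here are placeholders:

FROM llama2
SYSTEM """You are a concise assistant that answers in plain language."""
PARAMETER temperature 0.7

Save this to a file and pass its path as the modelfile argument above.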
Advanced Configuration
- OLLAMA_HOST: Configure a custom Ollama API endpoint (default: http://127.0.0.1:11434)
- Timeout settings for model execution (default: 60 seconds)
- Temperature control for response randomness (0-2 range)
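When the server is launched by Claude Desktop, OLLAMA_HOST is set through the env block in the configuration above. If you run the server by hand, for example against an Ollama instance on another machine, the same override can be supplied on the command line (the address below is a placeholder):

OLLAMA_HOST=http://192.168.1.50:11434 node build/index.js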
Contributing
Contributions are welcome! Feel free to:
- Report bugs
- Suggest new features
- Submit pull requests
License
MIT License - feel free to use in your own projects!
Built with ❤️ for the MCP ecosystem
Stars: 60 · Forks: 11 · Last commit: 6 months ago · Repository age: 6 months
Auto-fetched from GitHub.