# Octopus

by bestruirui

One Hub, All LLMs For You | A Simple, Beautiful, and Elegant LLM API Aggregation & Load Balancing Service Built for Individuals
Run directly with Docker:

```shell
docker run -d --name octopus -v /path/to/data:/app/data -p 8080:8080 bestrui/octopus
```
Or use docker compose:

```shell
wget https://raw.githubusercontent.com/bestruirui/octopus/refs/heads/dev/docker-compose.yml
docker compose up -d
```
Alternatively, download the binary for your platform from the project's releases page, then run:

```shell
./octopus start
```
To build from source (requires Go, Node.js, and pnpm):

```shell
# Clone the repository
git clone https://github.com/bestruirui/octopus.git
cd octopus

# Build frontend
cd web && pnpm install && pnpm run build && cd ..

# Move frontend assets to static directory
mv web/out static/

# Start the backend service
go run main.go start
```
💡 Tip: The frontend build artifacts are embedded into the Go binary, so you must build the frontend before starting the backend.
Development Mode:

```shell
# Start the frontend dev server
cd web && pnpm install && NEXT_PUBLIC_API_BASE_URL="http://127.0.0.1:8080" pnpm run dev

# Open a new terminal, start the backend service
go run main.go start
```

The frontend is then available at http://localhost:3000.
After first launch, visit http://localhost:8080 and log in to the management panel with:

- Username: `admin`
- Password: `admin`

⚠️ Security Notice: Please change the default password immediately after first login.
The configuration file is located at data/config.json by default and is automatically generated on first startup.
Complete Configuration Example:

```json
{
  "server": {
    "host": "0.0.0.0",
    "port": 8080
  },
  "database": {
    "type": "sqlite",
    "path": "data/data.db"
  },
  "log": {
    "level": "info"
  }
}
```
Configuration Options:
| Option | Description | Default |
|--------|-------------|---------|
| server.host | Listen address | 0.0.0.0 |
| server.port | Server port | 8080 |
| database.type | Database type | sqlite |
| database.path | Database file path (SQLite) or connection string (MySQL/PostgreSQL) | data/data.db |
| log.level | Log level | info |
Database Configuration:
Three database types are supported:
| Type | database.type | database.path Format |
|------|-----------------|-----------------------|
| SQLite | sqlite | data/data.db |
| MySQL | mysql | user:password@tcp(host:port)/dbname |
| PostgreSQL | postgres | postgresql://user:password@host:port/dbname?sslmode=disable |
MySQL Configuration Example:
```json
{
  "database": {
    "type": "mysql",
    "path": "root:password@tcp(127.0.0.1:3306)/octopus"
  }
}
```
PostgreSQL Configuration Example:
```json
{
  "database": {
    "type": "postgres",
    "path": "postgresql://user:password@localhost:5432/octopus?sslmode=disable"
  }
}
```
💡 Tip: MySQL and PostgreSQL require manual database creation. The application will automatically create the table structure.
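For example, the database named in the connection-string examples above could be created like this (assuming local servers and the standard client tools; user names are illustrative):

```shell
# Create the "octopus" database manually before first start.
mysql -u root -p -e "CREATE DATABASE octopus;"           # MySQL
psql -U user -h localhost -c "CREATE DATABASE octopus;"  # PostgreSQL
```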
All configuration options can be overridden via environment variables using the format OCTOPUS_ + configuration path (joined with _):
| Environment Variable | Configuration Option / Purpose |
|---------------------|---------------------|
| OCTOPUS_SERVER_PORT | server.port |
| OCTOPUS_SERVER_HOST | server.host |
| OCTOPUS_DATABASE_TYPE | database.type |
| OCTOPUS_DATABASE_PATH | database.path |
| OCTOPUS_LOG_LEVEL | log.level |
| OCTOPUS_GITHUB_PAT | GitHub PAT to avoid rate limiting when fetching the latest version (optional; no config-file equivalent) |
| OCTOPUS_RELAY_MAX_SSE_EVENT_SIZE | Maximum SSE event size (optional) |
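The naming rule above can be sketched with a small helper (the function name is illustrative, not part of Octopus): uppercase the configuration path, replace `.` with `_`, and prefix `OCTOPUS_`.

```shell
# Derive the environment variable name for a config path,
# e.g. "server.port" -> "OCTOPUS_SERVER_PORT".
to_env_var() {
  printf 'OCTOPUS_%s\n' "$(printf '%s' "$1" | tr '.' '_' | tr '[:lower:]' '[:upper:]')"
}

to_env_var server.port     # OCTOPUS_SERVER_PORT
to_env_var database.path   # OCTOPUS_DATABASE_PATH
```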
Channels are the basic configuration units for connecting to LLM providers.
Base URL Guide:
The program automatically appends API paths based on channel type. You only need to provide the base URL:
| Channel Type | Auto-appended Path | Base URL | Full Request URL Example |
|--------------|-------------------|----------|--------------------------|
| OpenAI Chat | /chat/completions | https://api.openai.com/v1 | https://api.openai.com/v1/chat/completions |
| OpenAI Responses | /responses | https://api.openai.com/v1 | https://api.openai.com/v1/responses |
| Anthropic | /messages | https://api.anthropic.com/v1 | https://api.anthropic.com/v1/messages |
| Gemini | /models/:model:generateContent | https://generativelanguage.googleapis.com/v1beta | https://generativelanguage.googleapis.com/v1beta/models/gemini-2.5-flash:generateContent |
💡 Tip: No need to include specific API endpoint paths in the Base URL - the program handles this automatically.
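As a sketch of the table above, the full request URL is simply the configured base URL with the channel type's path appended (the helper name is illustrative):

```shell
# Join a base URL and an auto-appended path; a trailing slash on the
# base URL is stripped so the join never doubles the "/".
full_url() {
  printf '%s%s\n' "${1%/}" "$2"
}

full_url 'https://api.openai.com/v1' '/chat/completions'
full_url 'https://api.anthropic.com/v1/' '/messages'
```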
Groups aggregate multiple channels into a unified external model name.
Core Concepts: each group is exposed externally as a single model name; clients route to the group by setting the request's model parameter to the group name.

Load Balancing Modes:

| Mode | Description |
|------|-------------|
| 🔄 Round Robin | Cycles through channels sequentially for each request |
| 🎲 Random | Randomly selects an available channel for each request |
| 🛡️ Failover | Prioritizes high-priority channels, switches to lower priority only on failure |
| ⚖️ Weighted | Distributes requests based on configured channel weights |
💡 Example: Create a group named `gpt-4o`, add multiple providers' GPT-4o channels to it, then access all of those channels via the unified `model: gpt-4o`.
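A request against such a group might look like the following; the `/v1/chat/completions` route and the API key are assumptions (an OpenAI-compatible route is implied by the channel types above, but check your deployment):

```shell
# The group name goes in the "model" field of an ordinary chat request.
payload='{"model": "gpt-4o", "messages": [{"role": "user", "content": "Hello"}]}'
printf '%s\n' "$payload"

# Send it to the aggregated endpoint (route and key are illustrative):
# curl http://localhost:8080/v1/chat/completions \
#   -H "Authorization: Bearer <your-octopus-key>" \
#   -H "Content-Type: application/json" \
#   -d "$payload"
```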
Manage model pricing information in the system.
Data Sources: default prices are auto-synced from models.dev, and custom prices can be set in the price management page.

Price Priority:

| Priority | Source | Description |
|:--------:|--------|-------------|
| 🥇 High | This Page | Prices set by user in price management page |
| 🥈 Low | models.dev | Auto-synced default prices |
💡 Tip: To override a model's default price, simply set a custom price for it in the price management page.
Global system configuration.
Statistics Save Interval (minutes):
Since the program handles numerous statistics, writing to the database on every request would hurt read/write performance. Instead, statistics are accumulated in memory and flushed to the database at the configured interval.
⚠️ Important: When exiting the program, use a proper shutdown method (such as `Ctrl+C` or sending a `SIGTERM` signal) to ensure in-memory statistics are correctly written to the database. Do NOT use `kill -9` or other forced termination methods, as this may result in statistics data loss.
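The flush-on-shutdown pattern can be illustrated with a tiny shell sketch (the file name and values are made up; this mirrors the idea, not Octopus's implementation). For the Docker deployment above, `docker stop octopus` delivers the required `SIGTERM`.

```shell
# Buffered state is written out by a TERM handler instead of on every update.
stats_buffer="requests=42"
trap 'printf "%s\n" "$stats_buffer" > stats.txt' TERM

kill -TERM $$    # simulate a graceful shutdown signal (e.g. `docker stop`)
cat stats.txt    # the buffered statistics were persisted by the trap
```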