# One API - LLM API Management and Distribution System

## Project Overview
One API is an open-source LLM API management and distribution system that supports major models such as OpenAI, Azure, Anthropic Claude, Google Gemini, DeepSeek, and ChatGLM. It provides unified API adaptation and can be used for key management and secondary distribution. The project offers a single executable file, supports Docker images, and enables one-click deployment for out-of-the-box use.
## Core Features

### 📋 Multi-Model Support

The project supports numerous major large language model providers:
- OpenAI Series: ChatGPT series models (supports Azure OpenAI API)
- Anthropic: Claude series models (supports AWS Claude)
- Google: PaLM2/Gemini series models
- Others: Additional providers such as DeepSeek and ChatGLM

### 🔧 Core Functionality

#### API Management and Distribution
- Supports configuring mirrors and numerous third-party proxy services
- Supports accessing multiple channels through load balancing
- Supports stream mode, enabling typewriter effect through streaming transmission
- Supports multi-machine deployment
- Supports automatic retry on failure
- Supports image generation interfaces
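The load-balancing and retry behavior described above can be sketched roughly as follows. This is an illustrative Python sketch under assumed data shapes, not One API's actual Go implementation; the channel records and the `send` callback are hypothetical.

```python
import random

# Hypothetical channel records; One API stores channels in its database.
CHANNELS = [
    {"id": 1, "name": "openai-main", "weight": 3},
    {"id": 2, "name": "azure-backup", "weight": 1},
]

def pick_channel(channels):
    """Weighted random selection, a common load-balancing strategy."""
    weights = [c["weight"] for c in channels]
    return random.choices(channels, weights=weights, k=1)[0]

def relay_with_retry(send, channels, max_attempts=3):
    """Try channels until one succeeds, mimicking automatic retry on failure."""
    remaining = list(channels)
    for _ in range(max_attempts):
        if not remaining:
            break
        channel = pick_channel(remaining)
        try:
            return send(channel)
        except Exception:
            remaining.remove(channel)  # fail over to a different channel
    raise RuntimeError("all channels failed")
```

A failed channel is excluded from subsequent attempts, so retries automatically fail over to the remaining channels.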

#### User and Permission Management
- Token Management: Set token expiration time, quota, allowed IP ranges, and allowed model access
- Redemption Code Management: Supports batch generation and export of redemption codes, which can be used to recharge accounts
- User Grouping: Supports user grouping and channel grouping, allowing different multipliers to be set for different groups
- Channel Management: Batch creation of channels, supports setting model lists for channels
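The token checks above (expiration time, quota, allowed models) can be illustrated with a minimal sketch. The field names below are assumptions for illustration, not One API's actual database schema.

```python
import time

# Hypothetical token record; field names are illustrative only.
token = {
    "key": "sk-demo",
    "expires_at": time.time() + 3600,   # token expiration time
    "remaining_quota": 5000,            # quota left on this token
    "allowed_models": {"gpt-3.5-turbo", "gpt-4"},
}

def check_token(token, model, now=None):
    """Return True if the token may access `model` right now."""
    now = time.time() if now is None else now
    if now >= token["expires_at"]:
        return False  # token expired
    if token["remaining_quota"] <= 0:
        return False  # quota exhausted
    return model in token["allowed_models"]
```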

#### Monitoring and Statistics
- Supports viewing quota details
- Supports user referral rewards
- Supports displaying quota in US dollars
- Can push alarm information to various Apps in conjunction with Message Pusher

#### Custom Features
- Supports publishing announcements, setting recharge links, and setting initial quota for new users
- Supports model mapping, redirecting user's requested model
- Supports customizing system name, logo, and footer
- Supports customizing homepage and about page
- Supports calling management API through system access tokens
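Model mapping, mentioned above, redirects a requested model name to another before the request is relayed. A minimal sketch of the idea (the mapping table below is a made-up example, not a recommended configuration):

```python
# Hypothetical mapping table; One API configures mappings per channel in its UI.
MODEL_MAPPING = {
    "gpt-4": "gpt-4-turbo",              # redirect requests for gpt-4
    "gpt-3.5-turbo-0301": "gpt-3.5-turbo",
}

def map_model(requested: str, mapping: dict) -> str:
    """Redirect the user's requested model if a mapping entry exists."""
    return mapping.get(requested, requested)
```

Unmapped models pass through unchanged, so a partial mapping table is safe.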
### 🔐 Security and Authentication

#### Multiple Login Methods
- Email login and registration (supports registration email whitelist) and password reset via email
- Feishu authorization login
- GitHub authorization login
- WeChat Official Account authorization (requires additional deployment of WeChat Server)

#### Security Features
- Supports Cloudflare Turnstile user verification
- Supports theme switching
- Supports Cloudflare AI Gateway
## Deployment Methods

### Docker Deployment (Recommended)

**Using SQLite:**

```bash
docker run --name one-api -d --restart always -p 3000:3000 -e TZ=Asia/Shanghai -v /home/ubuntu/data/one-api:/data justsong/one-api
```

**Using MySQL:**

```bash
docker run --name one-api -d --restart always -p 3000:3000 -e SQL_DSN="root:123456@tcp(localhost:3306)/oneapi" -e TZ=Asia/Shanghai -v /home/ubuntu/data/one-api:/data justsong/one-api
```
### Docker Compose Deployment

```bash
# Currently supports MySQL startup; data is stored in the ./data/mysql folder
docker-compose up -d
# View deployment status
docker-compose ps
```
### Manual Deployment

Download the executable file from GitHub Releases, or compile from source:

```bash
git clone https://github.com/songquanpeng/one-api.git

# Build frontend
cd one-api/web/default
npm install
npm run build

# Build backend
cd ../..
go mod download
go build -ldflags "-s -w" -o one-api
```

Run:

```bash
chmod u+x one-api
./one-api --port 3000 --log-dir ./logs
```
### Cloud Platform Deployment

#### Zeabur Deployment
- Fork the code repository
- Create a Project in Zeabur and add a MySQL service
- Configure the `PORT=3000` and `SQL_DSN` environment variables
- Deploy and configure the domain name

#### Render Deployment
- Deploy the Docker image directly
- No need to fork the repository
## Configuration Instructions

### Environment Variable Configuration

#### Database Configuration
- `SQL_DSN`: Database connection string (MySQL or PostgreSQL recommended)
- `LOG_SQL_DSN`: Separate database connection string for the log table

#### Cache Configuration
- `REDIS_CONN_STRING`: Redis connection string, used for caching
- `MEMORY_CACHE_ENABLED`: Enable the in-memory cache
- `SYNC_FREQUENCY`: Database synchronization frequency (seconds)

#### Cluster Configuration
- `SESSION_SECRET`: Fixed session key
- `NODE_TYPE`: Node type (master/slave)
- `FRONTEND_BASE_URL`: Frontend redirect address

#### Security Configuration
- `GLOBAL_API_RATE_LIMIT`: API rate limit
- `GLOBAL_WEB_RATE_LIMIT`: Web rate limit
- `RELAY_TIMEOUT`: Relay timeout setting
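Settings like these are typically read from the process environment with a fallback default. A minimal sketch of that pattern (the default values below are placeholders, not One API's actual defaults):

```python
import os

def env_int(name: str, default: int) -> int:
    """Read an integer setting from the environment, falling back to a default."""
    raw = os.environ.get(name)
    return int(raw) if raw else default

# Default values here are illustrative placeholders only.
config = {
    "GLOBAL_API_RATE_LIMIT": env_int("GLOBAL_API_RATE_LIMIT", 180),
    "SYNC_FREQUENCY": env_int("SYNC_FREQUENCY", 600),
}
```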
### Command Line Parameters
- `--port <port_number>`: Specify the port number (default: 3000)
- `--log-dir <log_dir>`: Specify the log folder
- `--version`: Print the version number
- `--help`: Show help
## Usage Method

1. Initial Login: Use the default account `root` with password `123456`
2. Channel Configuration: Add an API Key on the channel page
3. Token Creation: Create an access token on the token page
4. Client Configuration: Set the API Base to the One API deployment address and the API Key to the generated token
## API Usage Example

```bash
# OpenAI official library configuration
OPENAI_API_KEY="sk-xxxxxx"
OPENAI_API_BASE="https://<HOST>:<PORT>/v1"
```
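Because One API exposes an OpenAI-compatible endpoint, any OpenAI client can talk to it by swapping the base URL and key. A minimal stdlib sketch of constructing such a request; `<HOST>:<PORT>` and the key are placeholders for your own deployment, and `build_chat_request` is a hypothetical helper, not part of any library.

```python
import json
import urllib.request

# Placeholders for your own deployment and a token from the token page.
BASE_URL = "https://<HOST>:<PORT>/v1"
API_KEY = "sk-xxxxxx"

def build_chat_request(base_url, api_key, model, messages):
    """Build an OpenAI-compatible chat completion request against One API."""
    payload = json.dumps({"model": model, "messages": messages}).encode()
    return urllib.request.Request(
        url=f"{base_url}/chat/completions",
        data=payload,
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_chat_request(BASE_URL, API_KEY, "gpt-3.5-turbo",
                         [{"role": "user", "content": "Hello"}])
# Send with urllib.request.urlopen(req) once BASE_URL points at a live deployment.
```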
### Channel Specification

You can pin a request to a specific channel by appending the channel ID to the token:

```
Authorization: Bearer ONE_API_KEY-CHANNEL_ID
```
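Composing that header is a one-liner; a sketch (the helper name is made up for illustration):

```python
from typing import Optional

def channel_auth_header(api_key: str, channel_id: Optional[int] = None) -> str:
    """Build the Authorization header, optionally pinned to one channel."""
    token = api_key if channel_id is None else f"{api_key}-{channel_id}"
    return f"Bearer {token}"
```

Without a channel ID the token behaves normally and One API picks a channel itself.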
## Architecture Design

```
User → One API → OpenAI / Azure / Claude / Gemini and other providers
```

One API acts as an intermediate layer that unifies the API formats of different providers, providing:
- Load balancing
- Request relay and format conversion
- User management and permission control
- Usage statistics and billing
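The format-conversion step can be illustrated with a toy converter that reshapes a simplified Claude-style response into the OpenAI chat-completion shape. The field names below are simplified for illustration and do not mirror One API's actual Go code path or the providers' full schemas.

```python
def to_openai_format(claude_resp: dict, model: str) -> dict:
    """Reshape a simplified Claude-style response into the OpenAI
    chat-completion shape, illustrating the relay's format conversion."""
    # Claude-style responses carry a list of content blocks; join the text ones.
    text = "".join(block["text"] for block in claude_resp.get("content", []))
    return {
        "object": "chat.completion",
        "model": model,
        "choices": [{
            "index": 0,
            "message": {"role": "assistant", "content": text},
            "finish_reason": "stop",
        }],
    }
```

With converters like this per provider, every client sees one uniform response shape regardless of the upstream.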
## Integration Examples

### ChatGPT Next Web

```bash
docker run --name chat-next-web -d -p 3001:3000 yidadaa/chatgpt-next-web
```

Set the interface address and API Key on the settings page.

### ChatGPT Web

```bash
docker run --name chatgpt-web -d -p 3002:3002 -e OPENAI_API_BASE_URL=https://openai.justsong.cn -e OPENAI_API_KEY=sk-xxx chenzhaoyu94/chatgpt-web
```
## License

This project is open source under the MIT license. Attribution and a link to this project must be retained at the bottom of the page.

## Project Address

https://github.com/songquanpeng/one-api