A simple HTTP proxy server designed specifically for forwarding and logging OpenAI API requests.
- HTTP/HTTPS proxy forwarding support
- Automatic logging of all requests and responses
- Optional rewriting of the model name in forwarded requests
- Detailed per-request log files containing complete headers and bodies
- Node.js runtime
- Environment variables for configuration (see the table below)
| Variable | Description | Default Value |
|---|---|---|
| PORT | Proxy server listening port | 8080 |
| LOG_DIR | Directory where log files are written | ./logs |
| MODEL_NAME | Replacement model name applied to forwarded requests (optional) | - |
| API_SERVER | Target API server address (format: https://domain.com, e.g. https://api.openai.com) | Required |
| API_KEY | OpenAI API key that overrides the key in the original request (optional) | - |
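The table above maps onto a small amount of forwarding logic. The sketch below is an illustration only, not the project's actual source: it uses just Node's built-in http, https, fs, and path modules to show how a proxy like this could read these variables, rewrite the model field and Authorization header when MODEL_NAME and API_KEY are set, log traffic to LOG_DIR, and forward everything else to API_SERVER unchanged.

```js
// Sketch only: a minimal forwarding proxy wired to the variables above.
// The real project's source may differ.
const http = require("http");
const https = require("https");
const fs = require("fs");
const path = require("path");

const PORT = process.env.PORT || 8080;
const LOG_DIR = process.env.LOG_DIR || "./logs";
const API_SERVER = process.env.API_SERVER; // required, e.g. https://api.openai.com
const MODEL_NAME = process.env.MODEL_NAME; // optional model override
const API_KEY = process.env.API_KEY;       // optional key override

if (!API_SERVER) throw new Error("API_SERVER is required");
fs.mkdirSync(LOG_DIR, { recursive: true });

const target = new URL(API_SERVER);
let seq = 0;

http.createServer((req, res) => {
  const chunks = [];
  req.on("data", (chunk) => chunks.push(chunk));
  req.on("end", () => {
    let body = Buffer.concat(chunks).toString("utf8");

    // Replace the model name in JSON bodies when MODEL_NAME is set.
    if (MODEL_NAME && body) {
      try {
        const parsed = JSON.parse(body);
        if (parsed.model) {
          parsed.model = MODEL_NAME;
          body = JSON.stringify(parsed);
        }
      } catch {
        // Not JSON; forward unchanged.
      }
    }

    // Rewrite headers for the upstream server.
    const headers = { ...req.headers, host: target.host };
    delete headers["transfer-encoding"];
    if (API_KEY) headers.authorization = `Bearer ${API_KEY}`;
    if (body) headers["content-length"] = String(Buffer.byteLength(body));

    // Log the incoming request; response chunks are appended as they stream back.
    const logFile = path.join(LOG_DIR, `${++seq}_${Date.now()}.log`);
    fs.appendFileSync(
      logFile,
      `${req.method} ${req.url}\n${JSON.stringify(req.headers, null, 2)}\n${body}\n\n`
    );

    const client = target.protocol === "https:" ? https : http;
    const upstream = client.request(
      {
        host: target.hostname,
        port: target.port || undefined,
        path: req.url,
        method: req.method,
        headers,
      },
      (upRes) => {
        res.writeHead(upRes.statusCode, upRes.headers);
        upRes.on("data", (chunk) => {
          fs.appendFileSync(logFile, chunk);
          res.write(chunk);
        });
        upRes.on("end", () => res.end());
      }
    );
    upstream.on("error", (err) => {
      res.writeHead(502);
      res.end(String(err));
    });
    upstream.end(body || undefined);
  });
}).listen(PORT, () => {
  console.log(`Proxy listening on ${PORT}, forwarding to ${API_SERVER}`);
});
```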
- Install dependencies:
npm install
- Configure environment variables:
export API_SERVER="https://api.openai.com"
export PORT=8080
export LOG_DIR="./logs"
export MODEL_NAME="gpt-4" # optional
- Start the server:
npm start
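Once the server is running, clients simply point at the proxy instead of the upstream API. A hypothetical example using Node 18+'s built-in fetch, run as an ES module; the /v1/chat/completions path and request shape are OpenAI's standard chat completions format, and the OPENAI_API_KEY variable name on the client side is only an assumption for illustration:

```js
// Hypothetical client call routed through the proxy on localhost:8080.
const response = await fetch("http://localhost:8080/v1/chat/completions", {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    // If API_KEY is set on the proxy, the proxy overrides this header.
    Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
  },
  body: JSON.stringify({
    model: "gpt-3.5-turbo", // rewritten to MODEL_NAME by the proxy, if configured
    messages: [{ role: "user", content: "Hello!" }],
  }),
});
console.log(await response.json());
```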
- All requests and responses are logged to the directory specified by LOG_DIR
- Log file naming format: `sequence_YYYY-MM-DD_HHMMSS.log` (a sketch of this naming scheme follows this list)
- The request body is logged separately to `sequence_YYYY-MM-DD_HHMMSS_request.json`
- Each log file contains complete request headers, request body, response headers, and response body
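Assuming the sequence is a per-process request counter, a filename in the documented format could be produced by a small helper like the following (illustrative only, not the project's code):

```js
// Builds e.g. "3_2024-05-01_142530.log": sequence number, date, then HHMMSS time.
function logFileName(sequence, date = new Date()) {
  const pad = (n) => String(n).padStart(2, "0");
  const day = `${date.getFullYear()}-${pad(date.getMonth() + 1)}-${pad(date.getDate())}`;
  const time = `${pad(date.getHours())}${pad(date.getMinutes())}${pad(date.getSeconds())}`;
  return `${sequence}_${day}_${time}.log`;
}
```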
MIT License
We welcome contributions! Here's how you can help:
- Fork the repository
- Create a new branch for your feature or bugfix
- Make your changes and commit them with clear messages
- Push your branch to your fork
- Submit a pull request to the main repository
Please ensure your code follows the project's style guidelines and includes appropriate tests.