Geniucker / CoGPT

Hmm Copilot or GPT? Who knows. Get access to gpt-4 via copilot.


CoGPT

For a tutorial in Chinese, see my blog.

Before opening an Issue or Discussion, please make sure you have searched the existing Issues and Discussions; duplicates will be closed directly. For anything that is not a bug report or a feature request, please ask in Discussions.

For network programming learning purposes only.

The previous Python version is archived in the py branch; it was unstable and difficult to maintain (there were many problems with async in Python).

Features

Provides an API that is nearly identical to the OpenAI API (with gpt-3.5-turbo and gpt-4).
The only difference is that you use a GitHub Copilot application token instead of an OpenAI token.
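
For illustration, here is a minimal sketch in Go that sends a chat request to a locally running instance. The base URL http://localhost:8080 and the ghu_... token are placeholders; point them at your own deployment and your own Copilot application token. It assumes the response follows the usual OpenAI chat completions shape.

package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

func main() {
	// Placeholder values: adjust the base URL and use your own Copilot app token.
	baseURL := "http://localhost:8080"
	token := "ghu_xxxxxxxxxxxxxxxx"

	// Request body follows the OpenAI chat completions format.
	body, _ := json.Marshal(map[string]any{
		"model": "gpt-4",
		"messages": []map[string]string{
			{"role": "user", "content": "Hello!"},
		},
	})

	req, err := http.NewRequest("POST", baseURL+"/v1/chat/completions", bytes.NewReader(body))
	if err != nil {
		panic(err)
	}
	req.Header.Set("Content-Type", "application/json")
	req.Header.Set("Authorization", "Bearer "+token)

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	// Decode just enough of the response to print the first choice.
	var out struct {
		Choices []struct {
			Message struct {
				Content string `json:"content"`
			} `json:"message"`
		} `json:"choices"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&out); err != nil {
		panic(err)
	}
	if len(out.Choices) > 0 {
		fmt.Println(out.Choices[0].Message.Content)
	}
}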

Usage

Get token

Download the latest release for your platform. Unzip it and run cogpt-get-apptoken (or cogpt-get-apptoken.exe on Windows).

You can set a proxy through environment variables or command line arguments. To see the help for the command line arguments, run ./cogpt-get-apptoken -h.

API

  • GET /
    • Returns Hi, I'm CoGPT.
  • GET /health
    • Returns {"status":"OK"}
  • GET /v1/models
    • Returns the available models
  • POST /v1/chat/completions
    • Chat completions API
  • POST /v1/embeddings
    • Embeddings API
      Note that this API is not fully compatible with the OpenAI API.
      For the input field, the OpenAI API accepts the following types:

      • string: a string that will be turned into an embedding.
      • array of strings: an array of strings that will be turned into embeddings.
      • array of integers: an array of integers that will be turned into an embedding.
      • array of arrays of integers: an array of integer arrays that will be turned into embeddings.

      This service only accepts the first two types, plus an array of arrays containing strings. An example request is shown below.
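
As an illustration of the accepted input shapes, the following sketch in Go posts a single string and then an array of strings to /v1/embeddings. The model name text-embedding-ada-002 is only an assumption borrowed from the OpenAI API; check GET /v1/models for what your instance actually serves, and replace the placeholder URL and token with your own.

package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"io"
	"net/http"
)

// embed posts one embeddings request; input may be a string or a []string.
func embed(input any) (string, error) {
	body, _ := json.Marshal(map[string]any{
		// Assumed model name; list what your instance serves via GET /v1/models.
		"model": "text-embedding-ada-002",
		"input": input,
	})
	req, err := http.NewRequest("POST", "http://localhost:8080/v1/embeddings", bytes.NewReader(body))
	if err != nil {
		return "", err
	}
	req.Header.Set("Content-Type", "application/json")
	req.Header.Set("Authorization", "Bearer ghu_xxxxxxxxxxxxxxxx") // your Copilot app token
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	raw, err := io.ReadAll(resp.Body)
	return string(raw), err
}

func main() {
	// Supported input shapes: a single string or an array of strings.
	if out, err := embed("hello world"); err == nil {
		fmt.Println(out)
	}
	if out, err := embed([]string{"hello", "world"}); err == nil {
		fmt.Println(out)
	}
}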

Deploy

Warning

This service is not designed to be deployed on a public network.
The best way to use it is to deploy it on your own computer or local network. You can also deploy it on a public network, but only for yourself.
DO NOT share your token with others. If a token is accessed from many different IPs, it will be banned, and if too many tokens are requested from one IP, something bad may happen.
So again, ONLY for yourself.

Best approach

  1. Deploy locally on your own computer
  2. Deploy on a local network for personal use or to share within a small group
  3. Deploy on your own server for personal use

Bad approach

  1. Provide a public interface for everyone to use
    In this way, many tokens will be requested from one IP, which will cause problems.
  2. Provide public integrated (web) apps (such as ChatGPT-Next-Web)
    Making too many requests with one token will cause problems.
  3. Deploy with serverless services (such as Vercel)
    Serverless services change IP frequently and have short lifetimes.
  4. Any abuse of the service

DO NOT try any of these approaches.

Deploy in Docker

mkdir CoGPT && cd CoGPT

Then create a docker-compose.yml file with the following content:

version: '3'

services:
  cogpt-api:
    image: geniucker/cogpt:latest
    environment:
      - HOST=0.0.0.0
    ports:
      - 8080:8080
    volumes:
      - ./db:/app/db
      - ./log:/app/log
    restart: unless-stopped
    container_name: cogpt-api

If you want to use the development version, replace geniucker/cogpt:latest with geniucker/cogpt:dev.

By default, the service listens on port 8080. If you want to change the port, edit docker-compose.yml and change the port in the ports section. For example, to listen on port 80, change 8080:8080 to 80:8080.

Other config options can also be changed in the environment section. More conveniently, you can edit a .env file (copy .env.example to .env and edit it). Note that the paths for db and log should be changed in the volumes section of docker-compose.yml.

All config options are listed in Config.

Then run docker compose up -d to start the service.

Deploy without Docker

Download the latest release for your platform. Unzip it and run cogpt-api (or cogpt-api.exe on Windows).

By default, the service will listen on localhost:8080. For configuration, see Config.

Run as a service

Linux

For Linux distributions based on systemd, you can follow the steps below.

First, download the latest release for Linux. Unzip it, move cogpt-api to /opt/cogpt/, and grant it executable permission.

Then copy the content of cogpt-api.service to /etc/systemd/system/cogpt-api.service.

If you need to change the config, you should edit /opt/cogpt/.env file.

Finally, run the following commands to enable and start the service.

sudo systemctl enable cogpt-api
sudo systemctl start cogpt-api

Run sudo systemctl stop cogpt-api to stop the service.

Run sudo systemctl disable cogpt-api to disable the service.

macOS

For macOS, services are based on launchd.

First, download the latest release for macOS. Unzip it, move cogpt-api to /opt/cogpt/, and grant it executable permission.

Then copy the content of com.cogpt-api.plist to /Library/LaunchDaemons/com.cogpt-api.plist.

If you need to change the config, you should edit /opt/cogpt/.env file.

Finally, run sudo launchctl load /Library/LaunchDaemons/com.cogpt-api.plist to start the service.

Run sudo launchctl unload /Library/LaunchDaemons/com.cogpt-api.plist to stop the service.

Windows

For Windows, you can use scheduled tasks. Follow the steps below.

First, download the latest release for Windows. Unzip it to a directory. Let's say C:\CoGPT\.

Then create a file cogpt-api-service.ps1 in C:\CoGPT\ and copy the content of cogpt-api-service.ps1 into it.

Start PowerShell with administrator permission and run the following commands.

cd C:\CoGPT\
./cogpt-api-service.ps1 enable

Here are all commands you can use. All commands should be run in PowerShell with administrator permission.

./cogpt-api-service.ps1 enable  # enable and start the service
./cogpt-api-service.ps1 disable # stop and disable the service
./cogpt-api-service.ps1 start   # start the service
./cogpt-api-service.ps1 stop    # stop the service
./cogpt-api-service.ps1 restart # restart the service
./cogpt-api-service.ps1 status  # check the status of the service

Share Token

If you want to share this service with your friends, it is not safe to share your GitHub app token directly. This feature is designed for that situation: you can create a mapping from so-called share tokens to real GitHub app tokens.

The first way is to set an environment variable or edit the .env file. Set SHARE_TOKEN to a string like share-xxxxxxx1:ghu_xxxxxxx1,share-xxxxxxx2:ghu_xxxxxxx2. The format is share-token:real-token,share-token:real-token. You can add as many pairs as you want.

The other way is to use a command line argument. You can run ./cogpt-api -share-token share-xxxxxxx1:ghu_xxxxxxx1,share-xxxxxxx2:ghu_xxxxxxx2 to start the service. Again, you can add as many pairs as you want.

When configured this way, a request made with a token that starts with share- will use the real token it is mapped to, while a request made with a token that starts with ghu_ will use that token directly.

Note that share tokens must start with share-. Maps that don't start with share- will be ignored.

To generate a random share token, you can download the latest release for your platform. Unzip it and run ./gen-share-token.
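
If you just need a value with the right prefix, a share token can also be produced with a few lines of Go. This is only a sketch: the section above only requires the share- prefix, and the 16-byte random length here is an arbitrary choice.

package main

import (
	"crypto/rand"
	"encoding/hex"
	"fmt"
)

func main() {
	// Generate 16 random bytes and hex-encode them; the length is an arbitrary choice.
	buf := make([]byte, 16)
	if _, err := rand.Read(buf); err != nil {
		panic(err)
	}
	// Share tokens must start with "share-"; everything after the prefix is up to you.
	fmt.Println("share-" + hex.EncodeToString(buf))
}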

Config

Edit .env, set environment variables, or pass command line arguments.

Here are the config options and their default values (.env or environment variables):

Key          Default            Description
HOST         localhost          Host to listen on
PORT         8080               Port to listen on
CACHE        true               Whether to cache tokens in a SQLite database. If false, tokens are cached in memory
CACHE_PATH   db/cache.sqlite3   Path to the SQLite database. Only used if CACHE is true
DEBUG        false              Whether to enable debug mode. If true, the service prints debug info
LOG_LEVEL    info               Log level
SHARE_TOKEN  ""                 Mappings of share tokens to real tokens, e.g. SHARE_TOKEN=share-xxxxxxx1:ghu_xxxxxxx1,share-xxxxxxx2:ghu_xxxxxxx2

For command line arguments, run ./cogpt-api -h to see the help message.

Precedence: command line arguments > environment variables > .env.

Proxy

Environment variables for proxy are also supported. They are ALL_PROXY, HTTPS_PROXY and HTTP_PROXY. You can also use command line arguments to set proxy. Run ./cogpt-api -h to see the help message. Precedence: command line arguments > environment variables (ALL_PROXY > HTTPS_PROXY > HTTP_PROXY).

Credits

License

MPL-2.0

Star History

Star History Chart
