ChatGPT Telegram Bot


A Telegram bot that integrates with OpenAI's official ChatGPT APIs to provide answers. Ready to use with minimal configuration required.

Screenshots

demo.pdf

Features

  • Support markdown in answers
  • Reset conversation with the /reset command
  • Typing indicator while generating a response
  • Access can be restricted by specifying a list of allowed users
  • Docker support
  • Proxy support
  • (NEW!) Support multiple answers via the n_choices configuration parameter
  • (NEW!) Customizable model parameters (see configuration section)
  • (NEW!) See token usage after each answer
  • (NEW!) Multi-chat support
  • (NEW!) Image generation using DALL·E via the /image command
  • (NEW!) Transcribe audio and video messages using Whisper (may require ffmpeg; see the sketch after this list)
  • (NEW!) Automatic conversation summary to avoid excessive token usage (fixes #34)
  • (NEW!) Group chat support with inline queries
    • To use this feature, enable inline queries for your bot in BotFather via the /setinline command
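
As a rough illustration of the transcription feature, the snippet below shows how a downloaded voice note might be handed to Whisper, assuming the pre-1.0 openai Python package; the file name and the prior audio conversion step (where ffmpeg comes in) are placeholders and not the bot's actual code:

import openai

openai.api_key = "YOUR_OPENAI_API_KEY"

# Telegram voice notes arrive as .ogg files; converting them first
# (e.g. with ffmpeg) may be needed before Whisper will accept them.
with open("voice_note.mp3", "rb") as audio_file:
    transcript = openai.Audio.transcribe("whisper-1", audio_file)

print(transcript["text"])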

Additional features - help needed!

  • Add stream support (#43)
  • Handle responses longer than telegram message limit (#44)
  • Summarise conversation if (prompt_tokens + max_tokens) > 4096 (#45)

PRs are always welcome!

Prerequisites

Getting started

Configuration

Customize the configuration by copying .env.example and renaming it to .env, then editing the parameters as desired:

OPENAI_API_KEY="YOUR_OPENAI_API_KEY"
TELEGRAM_BOT_TOKEN="YOUR_TELEGRAM_BOT_TOKEN"

# Optional parameters
ALLOWED_TELEGRAM_USER_IDS="USER_ID_1,USER_ID_2,..." # Defaults to "*" (everyone)
PROXY="YOUR_PROXY" # e.g. "http://localhost:8080", defaults to none
ASSISTANT_PROMPT="Custom prompt" # Defaults to "You are a helpful assistant."
SHOW_USAGE=true # Defaults to false
MAX_TOKENS=2000 # Defaults to 1200
MAX_HISTORY_SIZE=15 # Defaults to 10
MAX_CONVERSATION_AGE_MINUTES=120 # Defaults to 180 (3h)
VOICE_REPLY_WITH_TRANSCRIPT_ONLY=false # Defaults to true
  • OPENAI_API_KEY: Your OpenAI API key; you can get it from here
  • TELEGRAM_BOT_TOKEN: Your Telegram bot's token, obtained using BotFather (see tutorial)
  • ALLOWED_TELEGRAM_USER_IDS: A comma-separated list of Telegram user IDs that are allowed to interact with the bot (use getidsbot to find your user ID). Note: by default, everyone is allowed (*)
  • PROXY: Proxy to be used for the OpenAI and Telegram bot connections
  • ASSISTANT_PROMPT: A system message that sets the tone and controls the behavior of the assistant
  • SHOW_USAGE: Whether to show OpenAI token usage information after each response
  • MAX_TOKENS: Upper bound on how many tokens the ChatGPT API will return
  • MAX_HISTORY_SIZE: Max number of messages to keep in memory, after which the conversation will be summarised to avoid excessive token usage (#34)
  • MAX_CONVERSATION_AGE_MINUTES: Maximum number of minutes a conversation should live, after which the conversation will be reset to avoid excessive token usage
  • VOICE_REPLY_WITH_TRANSCRIPT_ONLY: Whether to reply to voice messages with the transcript only or with a ChatGPT response to the transcript (#38)
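
For illustration only, here is one way the values above could be read at startup. This is a minimal sketch assuming the python-dotenv package; variable names and defaults mirror the list above, but it is not necessarily how main.py loads them:

import os
from dotenv import load_dotenv  # assumed helper for reading the .env file

load_dotenv()

config = {
    'api_key': os.environ['OPENAI_API_KEY'],
    'telegram_bot_token': os.environ['TELEGRAM_BOT_TOKEN'],
    # "*" keeps the default behaviour of allowing everyone
    'allowed_user_ids': os.environ.get('ALLOWED_TELEGRAM_USER_IDS', '*'),
    'proxy': os.environ.get('PROXY', None),
    'assistant_prompt': os.environ.get('ASSISTANT_PROMPT', 'You are a helpful assistant.'),
    'show_usage': os.environ.get('SHOW_USAGE', 'false').lower() == 'true',
    'max_tokens': int(os.environ.get('MAX_TOKENS', 1200)),
    'max_history_size': int(os.environ.get('MAX_HISTORY_SIZE', 10)),
    'max_conversation_age_minutes': int(os.environ.get('MAX_CONVERSATION_AGE_MINUTES', 180)),
    'voice_reply_transcript': os.environ.get('VOICE_REPLY_WITH_TRANSCRIPT_ONLY', 'true').lower() == 'true',
}

# Access restriction: everyone is allowed when the list is "*",
# otherwise only the listed Telegram user IDs may talk to the bot.
def is_allowed(user_id: int) -> bool:
    allowed = config['allowed_user_ids']
    if allowed.strip() == '*':
        return True
    return user_id in {int(uid) for uid in allowed.split(',') if uid.strip()}
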
Additional model parameters can be configured in the `main.py` file.
{
    # 'gpt-3.5-turbo' or 'gpt-3.5-turbo-0301'
    'model': 'gpt-3.5-turbo',

    # Number between 0 and 2. Higher values like 0.8 will make the output more random,
    # while lower values like 0.2 will make it more focused and deterministic. Defaults to 1
    'temperature': 1,
    
    # How many answers to generate for each input message. Defaults to 1
    'n_choices': 1,

    # Number between -2.0 and 2.0. Positive values penalize new tokens based on whether
    # they appear in the text so far, increasing the model's likelihood to talk about new topics. Defaults to 0
    'presence_penalty': 0,
    
    # Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing
    # frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim. Defaults to 0
    'frequency_penalty': 0,
    
    # The DALL·E generated image size. 256x256, 512x512, or 1024x1024. Defaults to 512x512
    'image_size': '512x512'
}

Check out the official API reference for more details.
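
To make the mapping concrete, the sketch below shows roughly how these parameters would be passed to the chat and image endpoints, assuming the pre-1.0 openai Python package; the prompt text and variable names are illustrative only, not the bot's actual code:

import openai

openai.api_key = 'YOUR_OPENAI_API_KEY'

response = openai.ChatCompletion.create(
    model='gpt-3.5-turbo',
    messages=[
        {'role': 'system', 'content': 'You are a helpful assistant.'},  # ASSISTANT_PROMPT
        {'role': 'user', 'content': 'Hello!'},
    ],
    temperature=1,
    n=1,                    # n_choices
    presence_penalty=0,
    frequency_penalty=0,
    max_tokens=1200,        # MAX_TOKENS
)
answer = response.choices[0].message.content
total_tokens = response.usage.total_tokens  # reported when SHOW_USAGE=true

# /image command: DALL·E image generation with the configured image_size
image = openai.Image.create(prompt='A cute robot reading a book', n=1, size='512x512')
image_url = image.data[0].url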

Installing

  1. Clone the repository and navigate to the project directory:
git clone https://github.com/n3d1117/chatgpt-telegram-bot.git
cd chatgpt-telegram-bot

From Source

  2. Create a new virtual environment with Pipenv and install the required dependencies:
pipenv install
  3. Activate the virtual environment:
pipenv shell
  4. Use the following command to start the bot:
python main.py

Using Docker Compose

  2. Run the following command to build and run the Docker image:
docker-compose up

Credits

Disclaimer

This is a personal project and is not affiliated with OpenAI in any way.

License

This project is released under the terms of the GPL 2.0 license. For more information, see the LICENSE file included in the repository.
