Prompt Engineering with Paragraph Embeddings for Generating Answers (PEPEGA)
This repository contains the code for a Telegram bot that answers ML-related questions through a GPT-3-based service.
The bot is publicly available at https://t.me/PepegaAIBot
If you want to deploy your own instance of the bot, follow these steps:
- Clone the repo
- Create a `.env` file with the following fields:
  - `db_login` - login for the MongoDB admin
  - `db_pass` - password for the MongoDB admin
  - `db_name` - name of the root database in MongoDB
  - `bot_api_token` - Telegram bot API token
  - `openai_api_token` - OpenAI API token
- Run the bot service with `docker-compose up`
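A `.env` file matching those fields might look like this (all values are placeholders, not real credentials):

```
db_login=admin
db_pass=change-me
db_name=pepega
bot_api_token=123456:ABC-your-telegram-token
openai_api_token=sk-your-openai-token
```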
For now, the index contains embedded paragraphs from ~200 articles about classical ML and a bit of DL. If you want to query PEPEGA on your custom data, do the following:
- Replace `paragraphs.json` with your own list of paragraphs, which will act as context for the GPT-3 prompt
- Use the new `paragraphs.json` file to build a pynndescent index with cosine distance, pickle it, and replace the current `index.pkl` file