First, clone the git repository:
cd <targetdir>
git clone https://github.com/ApologiaDev/PersonalChatGPT
Then go into the repository directory:
cd PersonalChatGPT
Create a new conda environment:
conda env create -n <envname> -f environment.yml
Get your own OpenAI API key (instructions can be found here). Then add a file named .env
which contains the environment variable, like this:
OPENAIKEY=<OpenAIAPIKey>
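The scripts presumably read this variable at startup. As an illustration, a minimal stdlib-only .env loader could look like the sketch below (this is not the repo's actual code; a library such as python-dotenv does the same thing in one call):

```python
import os

def load_env(path=".env"):
    """Minimal .env loader: put KEY=VALUE lines into os.environ.
    Sketch only -- the repo may load the file differently."""
    with open(path) as f:
        for line in f:
            line = line.strip()
            # Skip blank lines and comments; split on the first "=".
            if line and not line.startswith("#") and "=" in line:
                key, _, value = line.partition("=")
                os.environ.setdefault(key.strip(), value.strip())

if os.path.exists(".env"):
    load_env()
    api_key = os.environ.get("OPENAIKEY")
```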
To run any of the following scripts, you first have to activate the conda environment:
conda activate <envname>
After you have finished running everything, to deactivate the environment, just enter
conda deactivate
First, activate the conda environment, and go to the directory of the git repository. Then type:
python run_terminal_chatgpt_35turbo.py
This will run the same ChatGPT as in the web version.
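Under the hood, a terminal chat loop like this typically keeps the full message history and resends it on every turn, which is what gives the model its conversational memory. A minimal sketch, assuming the openai Python package (function names here are illustrative, not the repo's actual code):

```python
def build_history(history, role, content):
    """Return a new message list with one more chat turn appended."""
    return history + [{"role": role, "content": content}]

def chat_loop(model="gpt-3.5-turbo"):
    # Illustrative sketch only; requires the openai package, and the
    # client reads the API key from the environment.
    from openai import OpenAI
    client = OpenAI()
    history = []
    while True:
        user_input = input("You: ")
        if user_input.lower() in {"exit", "quit"}:
            break
        history = build_history(history, "user", user_input)
        response = client.chat.completions.create(model=model, messages=history)
        reply = response.choices[0].message.content
        history = build_history(history, "assistant", reply)
        print("Assistant:", reply)
```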
First, put all your training data files (*.txt, *.pdf etc.) in a directory. Then activate your conda environment (if you have not done so). Go to the directory of the git repository. Then type
python train_gpt_index_model.py <trainingdatadir> <modeldir>
After a certain amount of time (depending on the network speed and
the number of documents), the final trained GPT model will
be stored under your specified <modeldir>.
There will be a few JSON files underneath it.
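To check that training produced something, you can list the JSON files in the model directory. A small stdlib sketch (the exact file names vary by index-library version):

```python
import os

def list_model_files(model_dir):
    """Return the JSON files that make up a saved index directory."""
    return sorted(f for f in os.listdir(model_dir) if f.endswith(".json"))
```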
First, activate your conda environment (if you have not done so). Then go to the directory of the git repository. Then type
python run_customized_gpt.py <modeldir>
First, activate your conda environment (if you have not done so). Then go to the directory of the git repository. Then type
python batch_run_benchmark_questions.py <modeldir> <exceloutputpath>
The answers to the benchmark questions are then output as an Excel file.
All the benchmark questions are found under the benchmark_questions
folder.
You can add questions. Each question is stored in a JSON file, and the question
text is in the field "question". All other fields in the JSON files are ignored.
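To illustrate the expected layout, here is a small loader that gathers the "question" field from each JSON file in the folder (a sketch; the repo's batch script may read them differently):

```python
import glob
import json
import os

def load_questions(folder="benchmark_questions"):
    """Collect the "question" field from every JSON file in the folder.
    All other fields are ignored, matching the format described above."""
    questions = []
    for path in sorted(glob.glob(os.path.join(folder, "*.json"))):
        with open(path) as f:
            data = json.load(f)
        if "question" in data:
            questions.append(data["question"])
    return questions
```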