A beautiful C++ libcurl / ChatGPT interface
There are numerous ChatGPT command line programs currently available. Many of them are written in Python. I wanted something a bit quicker and a bit easier to install, so I wrote this program in C++.
Ensure that you have access to a valid OpenAI API key and that this key is set as the following environment variable:
OPENAI_API_KEY="<your-api-key>"
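For example, in a POSIX-compatible shell:
export OPENAI_API_KEY="<your-api-key>"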
Interested in a specific release? To download 1.0.0, for example:
wget https://github.com/dsw7/GPTifier/archive/refs/tags/v1.0.0.tar.gz
Which will yield:
v1.0.0.tar.gz
Then run:
tar -xvf v1.0.0.tar.gz
Which will generate:
GPTifier-1.0.0
Change directories into GPTifier-1.0.0 and proceed with the next steps.
To set up the project, simply run the make target:
make compile
The binary will be installed into whatever install directory is resolved by CMake's install().
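If you need to control that location, a standard CMake workflow honors the CMAKE_INSTALL_PREFIX variable; whether the make compile target forwards such options is an assumption you should verify against this project's Makefile. A generic CMake invocation would look like:
cmake -S . -B build -DCMAKE_INSTALL_PREFIX="$HOME/.local"
cmake --build build
cmake --install build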
This project uses the nlohmann/json library. The compiler must be able to locate the json.hpp header file. If the json.hpp file does not exist anywhere, cmake will print out:
-- Checking if json.hpp exists anywhere
-- Checking directory: /usr/include/c++/10
-- Checking directory: /usr/include/x86_64-linux-gnu/c++/10
-- Checking directory: /usr/include/c++/10/backward
-- Checking directory: /usr/lib/gcc/x86_64-linux-gnu/10/include
-- Checking directory: /usr/local/include
-- Checking directory: /usr/include/x86_64-linux-gnu
-- Checking directory: /usr/include
CMake Error at CMakeLists.txt:<line-number> (message):
Could not find json.hpp in any include directory
To install json.hpp into, say, /usr/include, simply run the convenience script:
./get_dependencies /usr/include/
Running the script may require elevated privileges.
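For example:
sudo ./get_dependencies /usr/include/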
This project uses the TOML++ configuration parser. The compiler must be able to locate the toml.hpp header file. If the toml.hpp file does not exist anywhere, cmake will print out:
-- Checking if toml.hpp exists anywhere
-- Checking directory: /usr/include/c++/10
-- Checking directory: /usr/include/x86_64-linux-gnu/c++/10
-- Checking directory: /usr/include/c++/10/backward
-- Checking directory: /usr/lib/gcc/x86_64-linux-gnu/10/include
-- Checking directory: /usr/local/include
-- Checking directory: /usr/include/x86_64-linux-gnu
-- Checking directory: /usr/include
CMake Error at CMakeLists.txt:<line-number> (message):
Could not find toml.hpp in any include directory
This is identical to the json.hpp case. As before, simply run the convenience script:
./get_dependencies /usr/include/
Running the script may require elevated privileges.
This project uses {fmt} for string formatting. The build will abort if {fmt} cannot be found anywhere. See {fmt}'s Get Started documentation for instructions on installing {fmt}.
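As an alternative to building {fmt} from source, many distributions package it; on Debian-based systems, for example (the package name varies by distribution), it can typically be installed with:
sudo apt install libfmt-dev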
This project makes reference to a "home directory" (~/.gptifier, specifically) that must be set up prior to running the program. To set up ~/.gptifier, run:
./setup
This script will dump a configuration file under ~/.gptifier. Open the file:
~/.gptifier/gptifier.toml
And apply the relevant configurations. Next, drop into the program:
gpt run
The program should start an interactive session if the configuration file was properly set up.
The compilation process will generate many build artifacts. Clean up the build artifacts by running:
make clean
This command works with OpenAI's chat completion models, such as GPT-4 Turbo and GPT-4.
Simply run gpt run! This will begin an interactive session. Type in a prompt:
$ gpt run
------------------------------------------------------------------------------------------
Input: What is 3 + 5?
And hit Enter. The program will dispatch a request and return:
...
Results: 3 + 5 equals 8.
------------------------------------------------------------------------------------------
Export:
> Write reply to file? [y/n]:
In the above example, the user is prompted to export the completion to a file. Entering y will print:
...
> Writing reply to file /home/<your-username>/.gptifier/completions.gpt
------------------------------------------------------------------------------------------
Subsequent requests will append to this file. In some cases, prompting interactively may be undesirable, such as when running automated unit tests. To disable the y/n prompt, run gpt run with the -u or --no-interactive-export flag.
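For example, assuming the export flag can be combined with a raw prompt:
gpt run -u -p "What is 3 + 5?"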
A chat completion can be run against an available model by specifying the model name using the -m or --model option. For example, to create a chat completion via command line using the GPT-4 model, run:
gpt run --model gpt-4 --prompt "What is 3 + 5?"
Tip: A full list of models can be found by running the models command.
Note: See Input selection for more information regarding how to pass a prompt into this command.
This command converts some input text into a vector representation of the text. To use the command, run:
gpt embed
------------------------------------------------------------------------------------------
Input: Convert me to a vector!
And hit Enter. The program will dispatch a request and return:
------------------------------------------------------------------------------------------
Request: {
"input": "Convert me to a vector!",
"model": "text-embedding-ada-002"
}
...
The results will be exported to a JSON file: ~/.gptifier/embeddings.gpt. In a nutshell, the embeddings.gpt file will contain a 1536-dimensional vector, where 1536 is the dimension of the output vector corresponding to the model text-embedding-ada-002. The cosine similarity of a set of such vectors can be used to evaluate the similarity between texts.
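As a minimal illustration of that last point (this sketch is not part of GPTifier, and the vectors below are toy values rather than real embeddings), cosine similarity can be computed as follows:

#include <cmath>
#include <iostream>
#include <numeric>
#include <vector>

// Cosine similarity: dot(a, b) / (|a| * |b|). Values near 1 suggest the
// underlying texts are similar; values near 0 suggest they are unrelated
double cosine_similarity(const std::vector<double> &a, const std::vector<double> &b)
{
    double dot = std::inner_product(a.begin(), a.end(), b.begin(), 0.0);
    double norm_a = std::sqrt(std::inner_product(a.begin(), a.end(), a.begin(), 0.0));
    double norm_b = std::sqrt(std::inner_product(b.begin(), b.end(), b.begin(), 0.0));
    return dot / (norm_a * norm_b);
}

int main()
{
    // Toy 3-dimensional vectors; a real text-embedding-ada-002 vector has 1536 dimensions
    std::vector<double> u = {0.10, 0.20, 0.30};
    std::vector<double> v = {0.12, 0.18, 0.31};
    std::cout << cosine_similarity(u, v) << '\n'; // prints a value close to 1.0
}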
Note: See Input selection for more information regarding how to pass embedding text into this command.
This command returns a list of currently available models. Simply run:
gpt models
Which will return:
------------------------------------------------------------------------------------------
Model ID Owner Creation time
------------------------------------------------------------------------------------------
dall-e-3 system 2023-10-31 20:46:29
whisper-1 openai-internal 2023-02-27 21:13:04
davinci-002 system 2023-08-21 16:11:41
... ... ...
For certain commands, a hierarchy exists for choosing where input text comes from. The hierarchy roughly follows:
1. Check for raw input via command line option:
   - If raw input is provided through a command line option, use this input
   - Example: gpt run -p "What is 3 + 5?" or gpt embed -i "A foo that bars"
2. Check for an input file specified via command line:
   - If a file path is provided as a command line argument, read from this file
   - Example: gpt [run | embed] -r <filename>
3. Check for a default input file in the current working directory:
   - If a file named Inputfile exists in the current directory, read from this file
   - This is analogous to a Makefile or perhaps a Dockerfile
4. Read from stdin:
   - If none of the above conditions are met, read input from standard input (stdin); see the example below
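For example, the stdin fallback suggests that a prompt can be piped in (assuming no higher-priority input source is present):
echo "What is 3 + 5?" | gpt run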
In the Exporting a result section, it was stated that results can be voluntarily exported to ~/.gptifier/completions.gpt. One may be interested in integrating this into a vim workflow. This can be achieved as follows. First, add the following function to ~/.vimrc:
" Open the GPTifier completions file in a vertical split, if it exists
function OpenGPTifierResults()
    let l:results_file = expand('~') . '/.gptifier/completions.gpt'
    if filereadable(l:results_file)
        " The trailing space separates the command from the file name
        execute 'vs ' . l:results_file
    else
        echoerr l:results_file . ' does not exist'
    endif
endfunction
Then add a command to ~/.vimrc:
" Open GPTifier results file
command G :call OpenGPTifierResults()
The command G will open ~/.gptifier/completions.gpt in a separate vertical split, thus allowing for cherry-picking saved OpenAI completions into a source file, for example.
GPTifier's access to OpenAI resources can be managed by setting up a GPTifier project under OpenAI's user platform. Some possibilities include setting usage and model limits. To integrate GPTifier with an OpenAI project, open GPTifier's configuration file:
vim +/project-id ~/.gptifier/gptifier.toml
And set project-id to the project ID associated with the newly created GPTifier project. The ID can be obtained from the General settings page (authentication is required).
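The resulting entry would look something like the following (the ID shown is a placeholder, not a real project ID):
project-id = "proj_abc123XYZ"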
To run unit tests:
make test
This target will compile the current branch, then run pytest unit tests against the branch. The target will also run Valgrind tests in an attempt to detect memory management bugs.
Code in this project is formatted using ClangFormat. This project uses the Microsoft formatting style. To format the code, run:
make format
Code in this project is linted using cppcheck. To run the linter:
make lint
All bash code in this project is subjected to shellcheck static analysis. Run:
make sc
See shellcheck for more information.